Event description:
Example topics: Specific topics of relevance include, but are not limited to:

* Novel assessments of learning, including those drawing on computational techniques for automated, peer, or human-assisted assessment.
* New methods for validating inferences about human learning from established measures, assessments, or proxies.
* Experimental interventions that show evidence of improved learning outcomes, such as:
** Domain-independent interventions inspired by social psychology, behavioural economics, and related fields, including those with the potential to benefit learners from diverse socio-economic and cultural backgrounds
** Domain-specific interventions inspired by discipline-based educational research that may advance teaching and learning of specific ideas or theories within a field, or redress misconceptions
** Heterogeneous treatment effects in large experiments that point the way towards personalized or adaptive interventions
* Methodological papers that address challenges emerging from the “replication crisis” and “new statistics” in the context of Learning at Scale research:
** Best practices in open science, including pre-planning and pre-registration
** Alternatives to conducting and reporting null hypothesis significance testing
** Best practices in the archiving and reuse of learner data in safe, ethical ways
** Advances in differential privacy and other methods that reconcile the opportunities of open science with the challenges of privacy protection
* Tools or techniques for personalization and adaptation, based on log data, user modeling, or choice.
* Approaches to fostering inclusive education at scale, such as:
** The blended use of large-scale learning environments in specific residential or small-scale learning communities, or the use of sub-groups or small communities within large-scale learning environments
** The application of insights from small-scale learning communities to large-scale learning environments
** Learning environments for neurodevelopmental, cultural, and socio-economic diversity
* Usability, efficacy, and effectiveness studies of design elements for students or instructors, such as:
** Status indicators of student progress or instructional effectiveness
** Methods to promote community, support learning, or increase retention at scale
** Tools and pedagogy, such as open learner models, to promote self-efficacy, self-regulation, and motivation
* Log analysis of student behaviour, e.g.:
** Assessing reasons for student outcomes as determined by modifying tool design
** Modelling learners based on responses to variations in tool design
** Evaluation strategies such as quiz or discussion forum design
** Instrumenting systems and data representation to capture relevant indicators of learning
* New tools and techniques for learning at scale, such as:
** Games for learning at scale
** Automated feedback tools, such as for essay writing, programming, and so on
** Automated grading tools
** Tools for interactive tutoring
** Tools for learner modelling
** Tools for increasing learner autonomy in learning and self-assessment
** Tools for representing learner models
** Interfaces for harnessing learning data at scale
** Innovations in platforms for supporting learning at scale
** Tools to support capturing and managing learning data
** Tools and techniques for managing privacy of learning data

The conference is co-located with and immediately precedes the 2019 International Conference on AI in Education in the same city and venue.

The conference organizers are:
* John C. Mitchell, Stanford University, Program Co-Chair
* Kaska Porayska-Pomsta, University College London, Program Co-Chair
* David Joyner, Georgia Institute of Technology, General Chair