In recent years, the digitization of education has made far more education data available. Schools are starting to use technology in classrooms to create personalized learning experiences. Massive open online courses (MOOCs) have attracted millions of learners and present an opportunity to apply and develop machine learning methods that leverage the collected data to improve student learning outcomes.
However, development in student data analysis remains limited, and education today largely follows a one-size-fits-all approach. We have an opportunity to make a significant impact by revolutionizing the way (human) learning works.
The goal of this workshop is to foster discussion and spur research between machine learning experts and education researchers that can solve fundamental problems in education.
For this year’s workshop, we are highlighting the following areas of interest:
- Assessments and grading
Assessments are core to adaptive learning, formative learning, and summative evaluation. However, creating and grading quality assessments remain difficult tasks for instructors. Machine learning methods can be applied to self-, peer-, and auto-grading paradigms to both improve the quality of assessments and reduce the burden on instructors and students. These methods can also leverage the multimodal nature of learner data (e.g., textual/programming/mathematical open-form responses, demographic information, student interactions in discussion forums, video and audio recordings of the class), posing the challenge of how to effectively and efficiently fuse these different forms of data so that we can better understand learners.
- Content augmentation and understanding
Learning content is rich and multimodal (e.g., programming code, video, text, audio). Online educational resources have grown substantially in recent years, and we have an opportunity to leverage them further. Recent advances in natural language understanding can be applied to better understand learning materials and to connect different sources, creating better learning experiences. This can help learners by surfacing more relevant resources and support instructors in creating content.
- Personalized learning and active interventions
Personalized learning through custom feedback and interventions can make learning much more efficient, especially when we cater to the individual's background, goals, state of understanding, and learning context. Methods such as Markov decision processes and multi-armed bandits are applicable in these contexts (a minimal sketch of the bandit view appears after this list).
- Human-interpretability
In education applications, transparency and interpretability are important because they can help learners better understand their own learning state. Interpretability can also provide instructors with insights to better guide their activities with students, and it can help education researchers better understand the foundations of human learning. Interpretability is especially critical when models are deployed in processes that grade students, where evaluation needs to demonstrate a degree of fairness.
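To make the multi-armed bandit framing concrete, the following is a minimal Thompson-sampling sketch for choosing among candidate interventions for a learner and updating beliefs from observed outcomes. The intervention names, the binary success reward, and the simulated success rates are illustrative assumptions, not a system presented at the workshop.

# Minimal sketch (assumptions noted above): a Thompson-sampling bandit that
# picks an intervention (hint, worked example, extra practice) and learns
# from whether the student subsequently answers a follow-up item correctly.
import random

class InterventionBandit:
    def __init__(self, interventions):
        self.interventions = list(interventions)
        # Beta(1, 1) prior over each intervention's success probability.
        self.successes = {a: 1 for a in self.interventions}
        self.failures = {a: 1 for a in self.interventions}

    def choose(self):
        # Sample a plausible success rate per arm and act greedily on the samples.
        sampled = {a: random.betavariate(self.successes[a], self.failures[a])
                   for a in self.interventions}
        return max(sampled, key=sampled.get)

    def update(self, intervention, succeeded):
        # Binary reward: did the learner succeed on the follow-up item?
        if succeeded:
            self.successes[intervention] += 1
        else:
            self.failures[intervention] += 1

if __name__ == "__main__":
    bandit = InterventionBandit(["hint", "worked_example", "extra_practice"])
    # Hypothetical per-intervention success rates for simulated learners.
    true_rates = {"hint": 0.3, "worked_example": 0.6, "extra_practice": 0.45}
    for _ in range(500):
        action = bandit.choose()
        bandit.update(action, random.random() < true_rates[action])
    print(bandit.successes, bandit.failures)

In practice, the reward signal and the candidate interventions would come from the learning platform, and contextual or MDP-based variants would condition the choice on the learner's background and current state of understanding.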
This workshop will lead to new directions in machine learning-driven educational research and inspire the development of novel machine learning algorithms and theories that extend to many other applications that study human data.
ORGANIZERS
- Richard Baraniuk, Rice University
- Christoph Studer, Cornell University
- Jiquan Ngiam, Coursera
- Phillip Grimaldi, OpenStax
- Andrew Lan, Rice University
WORKSHOP SCHEDULE
08:30 AM Opening remarks
08:40 AM Phil Grimaldi, OpenStax/Rice University – BLAh: Boolean Logic Analysis for Graded Student Response Data
09:00 AM Steve Ritter, Carnegie Learning – Eliminating testing through continuous assessment
09:25 AM Pieter Abbeel, UC Berkeley – Gradescope – AI for Grading
09:50 AM Mihaela van der Schaar, UCLA – A Machine Learning Approach to Personalizing Education: Improving Individual Learning through Tracking and Course Recommendation
10:40 AM Zhenghao Chen, Coursera – Machine Learning Challenges and Opportunities in MOOCs
11:00 AM Lise Getoor, UC Santa Cruz – Understanding Engagement and Sentiment in MOOCs using Probabilistic Soft Logic (PSL)
11:25 AM Kangwook Lee, KAIST – Machine Learning Approaches for Learning Analytics: Collaborative Filtering Or Regression With Experts?
11:50 AM Poster spotlight
12:00 PM Lunch break
02:00 PM Poster session
02:30 PM Anna Rafferty, Carleton College – Using Computational Methods to Improve Feedback for Learners
02:55 PM Michael Mozer, CU Boulder – Estimating student proficiency: Deep learning is not the panacea
03:20 PM Yan Karklin, Knewton – Modeling skill interactions with multilayer item response functions
03:45 PM Coffee break
04:10 PM Utkarsh Upadhyay, MPI-SWS – On Crowdlearning: How do People Learn in the Wild?
04:35 PM Christopher Brinton, Zoomi – Beyond Assessment Scores: How Behavior Can Give Insight into Knowledge Transfer
05:00 PM Emma Brunskill, CMU – Using Old Data To Yield Better Personalized Tutoring Systems
05:25 PM Panel discussion and closing remarks
POSTERS
Ciara Pike-Burke and Steffen Grunewalder, Lancaster University – Optimistic planning for question selection
Rianne Conijn, Ad Kleingeld, Uwe Matzat, Chris Snijders, and Menno van Zaanen, Tilburg University and Eindhoven University of Technology – Influence of course characteristics, student characteristics, and behavior in learning management systems on student performance
Jorge Diez, Oscar Luaces, and Antonio Bahamonde, Universidad de Oviedo – Feedback in Peer Assessment for Open-Response Assignments Using a Multitask Factorization Approach
Steve Ritter, Stephen E. Fancsali, Michael Yudelson, Vasile Rus, and Susan Berman, Carnegie Learning, Inc., University of Memphis, and Carnegie Mellon University – Toward Intelligent Instructional Handoffs Between Humans and Machines