Transforming education through AI-powered adaptive experiments: U of T computer scientists' team wins XPRIZE Digital Learning Challenge

A team of researchers that includes University of Toronto computer scientists has been awarded the US$500,000 grand prize in the XPRIZE Digital Learning Challenge, a global competition to modernize, accelerate and improve the ways effective learning tools and processes are identified. 

Recognizing that a one-size-fits-all approach to teaching does not serve every student well, the Adaptive Experimentation Accelerator team has built a tool that uses AI-driven experiments to personalize students’ learning experiences.

The team, led by Assistant Professor Joseph Jay Williams of U of T’s Department of Computer Science in the Faculty of Arts & Science, includes U of T CS graduate students Ilya Musabirov, Mohi Reza, Pan Chen and Harsh Kumar along with collaborators from Carnegie Mellon University and North Carolina State University.

“Think of the toughest times as a student. Maybe you were feeling discouraged because you couldn’t solve a problem, and the way it was explained didn’t work. Or you were shaking with fear before a tough test and nothing you were saying helped you calm down,” says Williams, who is a member of U of T’s Data Sciences Institute and a faculty affiliate at the Vector Institute for Artificial Intelligence.

“Students need help to learn and manage themselves, but every student is different. Teachers also need support to know what to say and how to explain it to such a diverse range of students from different backgrounds, with different life experiences.”

The researchers employ a novel approach that combines machine learning algorithms and statistical models with human decision-making, similar to the techniques technology companies like Google and Meta use to show users content that is relevant to them.

Ultimately, they hope to make education more equitable, efficient and community-oriented.

“I am thrilled to congratulate Professor Williams and his incredible group of students on their grand-prize-winning innovation, a tool that bridges machine learning and human psychology to customize curriculum and maximize student success,” says Melanie Woodin, dean of the Faculty of Arts & Science. “I look forward to students benefitting from the Adaptive Experimentation Accelerator in future classrooms.”

“I want to congratulate Joseph and his team on this outstanding achievement,” says Eyal de Lara, professor and chair of the Department of Computer Science. “Their dedication to improving learning outcomes for students is clear in this significant contribution to the areas of human-computer interaction and educational technology.”

The team’s toolkit combines machine learning with continuous experimentation to provide students with personalized content and improved ways of learning concepts.

It can be integrated into online learning management systems, and can help adjust online homework, emails and text messages. Instructors and researchers can continuously experiment, improve and personalize their content based on feedback from teachers, parents and students.

Through A/B experimentation that integrates AI with human intelligence, the tool uses algorithms to discover which ideas are most effective. This could include explaining esoteric calculus concepts in ways that a student truly grasps, or giving tailored guidance on how to manage anxiety before an exam.
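In spirit, this kind of adaptive A/B experiment resembles a multi-armed bandit. Below is a minimal sketch of the general idea using Thompson sampling with a Beta-Bernoulli model; the explanation variants, their success rates and the binary "did the explanation help?" outcome are illustrative assumptions, not the team's actual implementation.

```python
# Sketch of an adaptive A/B experiment over candidate explanations.
# Variants that help more students are shown more often, while the
# others still get some exploration.
import numpy as np

rng = np.random.default_rng(0)

# Candidate ideas contributed by instructors (hypothetical arm names).
arms = ["concrete_example", "abstract_proof", "video_walkthrough"]
successes = np.zeros(len(arms))   # students helped by each variant
failures = np.zeros(len(arms))    # students not helped

def choose_arm():
    """Sample each arm's success rate from its Beta posterior; pick the best draw."""
    samples = rng.beta(successes + 1, failures + 1)
    return int(np.argmax(samples))

def record_outcome(arm, helped):
    """Update the chosen arm's tallies with one student's binary outcome."""
    if helped:
        successes[arm] += 1
    else:
        failures[arm] += 1

# Simulated run: success rates are unknown in practice and assumed here
# only so the example can be executed end to end.
true_rates = [0.55, 0.40, 0.65]
for _ in range(1000):
    arm = choose_arm()
    record_outcome(arm, rng.random() < true_rates[arm])

for name, s, f in zip(arms, successes, failures):
    shown = int(s + f)
    print(f"{name}: shown {shown} times, estimated success {s / max(shown, 1):.2f}")
```

Over many students, traffic shifts toward whichever explanation appears most effective, which is the mechanism that lets more ideas, and more voices, be tested without penalizing students.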

“Instructors shouldn’t have to come up with every single idea themselves,” says Williams. “Adaptive experiments allow more ideas to be tested, so more voices can be included.”

By using an AI-powered tool, researchers and educators can rapidly test and deploy new lesson plans, explanations of complex concepts and even motivational emails “to give students what’s best in days versus years, a more typical timeline for experimental research,” he says.

“The reality is students come from different backgrounds and not every student has the same level of resources, and the COVID-19 pandemic has exacerbated this. The Adaptive Experimentation Accelerator aims to personalize education to help close the quality of education gap for each student, so that no student is left behind,” says Williams.

Williams says receiving the XPRIZE helps draw attention to the investments needed to provide teachers with access to AI and experimentation.

“Tech companies have spent billions of dollars on AI and experimentation and have made far more money in return,” he says. “Educational A/B experimentation hasn’t had enough investment to help millions of students. XPRIZE is just the start to help us bring the Adaptive Experimentation Accelerator to every classroom.”

Williams is hopeful that the prize will speed up the classroom adoption of field research methods that can simultaneously advance science and provide practical benefits for all learners.

The Adaptive Experimentation Accelerator is underpinned by a software architecture previously developed by Williams and collaborators, known as MOOClet.

“Let’s say you, as an instructor teaching geometry, think that some students might prefer abstract explanations, but others prefer concrete examples. So, you search up explanations for the Pythagorean Theorem that other instructors have tried before, and they have tags like ‘concrete’ and ‘abstract.’ You pick a few, modify them to fit your specific students, and MOOClet uses AI to learn which students learn better from which types of explanations,” explains Williams. “Even better, other instructors see these results and can use them to give their students better explanations. It is worldwide collaboration on improving explanations for students.”

Williams offers the example of a student who answers an online homework problem incorrectly and then receives an explanation of how to solve it.

“We implement that explanation as a MOOClet, which means the back end enables continual addition of different explanations to be tested, and access to AI algorithms that discover what is best for you as a student, and quickly give it more often,” he adds.
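One concrete piece of the quote above is the "continual addition" of explanations: new candidates can enter the rotation at any time and compete on equal footing. The sketch below illustrates that idea only; the class and method names are hypothetical and this is not the actual MOOClet API.

```python
# Illustrative pool of candidate explanations for one homework problem.
# Instructors can keep adding explanations, and an adaptive policy routes
# more students to whatever is currently working best.
import random

class ExplanationPool:
    def __init__(self):
        # explanation text -> [times it helped, times it was shown]
        self.stats = {}

    def add_explanation(self, text):
        """Instructors can contribute a new explanation at any time."""
        self.stats.setdefault(text, [0, 0])

    def choose(self):
        """Thompson-style draw: sample a plausible success rate per explanation."""
        def sample(helped, shown):
            return random.betavariate(helped + 1, (shown - helped) + 1)
        return max(self.stats, key=lambda text: sample(*self.stats[text]))

    def record(self, text, helped):
        """Log whether the student got the follow-up question right."""
        self.stats[text][1] += 1
        if helped:
            self.stats[text][0] += 1

pool = ExplanationPool()
pool.add_explanation("Re-derive the formula step by step.")
pool.add_explanation("Show a worked example with concrete numbers.")
# A new idea added mid-semester enters the rotation immediately:
pool.add_explanation("Link the problem to a short video walkthrough.")

shown = pool.choose()            # explanation delivered after a wrong answer
pool.record(shown, helped=True)  # outcome feeds back into the next choice
```

In a deployment like the one Williams describes, the same tallies could also be kept separately for different groups of students, so the system learns not just which explanation works best overall but which works best for whom.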

Williams envisions this approach transforming the interfaces of K-12 and higher education into intelligent, adaptive systems that are perpetually improving, all while lowering the time and cost of testing new ideas; he contrasts this with the expense of printing new textbooks with different explanations.

“Adaptive experimentation ensures every day we’re trying out better and better ideas for what will motivate students or help them learn,” he says.

Beyond education, Williams sees other future directions for adaptive experimentation, such as optimizing text messages for the management of stress and mental health.

Williams adds he hopes readers will see the importance of adaptive experimentation and encourages them to conduct their own experiments.

“I hope people will start to try out A/B experimentation to solve everyday problems. I hope they practice by sharing this project with two people, with two different explanations of why it’s important.” 
