Do students adopt AI-based recommendations at a different rate than those from human counselors in high-stakes contexts like college applications---and if so, why?
with Sofoklis Goulas and Faidra Monachou
As AI enters domains traditionally served by humans, such as career counseling, success depends not only on predictive accuracy but on whether students actually adopt algorithmic guidance. Yet little evidence exists from real-world, high-stakes contexts. I conduct a large-scale field experiment with 2,000 Greek high school students and find that algorithm aversion arises from perceptions of intent rather than ability. To address this, I develop a framework that optimizes adoption subject to limited human counselor capacity. Using causal forests and optimization, I show how policymakers can design hybrid human–AI systems that maximize uptake given heterogeneous adoption behavior.
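As a rough illustration of the second step (not the paper's implementation), the sketch below estimates each student's benefit from a human counselor with a causal forest and then assigns scarce counselor slots to the students with the largest estimated uplift. The data are simulated, the capacity value is an illustrative placeholder, and econml's CausalForestDML is assumed as one concrete causal-forest implementation.

```python
# Hypothetical sketch: allocate scarce human-counselor slots using
# heterogeneous treatment effects from a causal forest. All data and
# the `capacity` constraint are illustrative, not from the study.
import numpy as np
from econml.dml import CausalForestDML
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))          # student covariates
treated = rng.integers(0, 2, n)      # 1 = human counselor, 0 = AI
outcome = X[:, 0] + treated * (0.5 + X[:, 1]) + rng.normal(size=n)

# Step 1: estimate each student's human-vs-AI uplift.
cf = CausalForestDML(
    model_y=RandomForestRegressor(), model_t=RandomForestClassifier(),
    discrete_treatment=True, random_state=0,
)
cf.fit(outcome, treated, X=X)
tau = cf.effect(X)                   # estimated treatment effect per student

# Step 2: with only `capacity` counselor slots, serve the students
# with the largest estimated uplift; everyone else gets AI guidance.
capacity = 200
assignment = np.zeros(n, dtype=int)
assignment[np.argsort(tau)[-capacity:]] = 1
```

A greedy top-k rule is the simplest capacity-constrained policy; richer constraints (e.g., per-school quotas) would call for an explicit optimization step.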
When do teacher evaluation programs generate net benefits in practice, and what is the magnitude of those gains?
with Edieal J. Pinker
High-quality teachers substantially improve test scores, graduation, and lifetime earnings, yet little is known about how evaluation systems affect teacher quality in practice. I build a flow model of the teacher workforce, incorporating proficiency, retention, and differentiated salaries, and calibrate it with data from the District of Columbia Public Schools (DCPS). I find that retaining proficient teachers reliably boosts outcomes, while dismissing non-proficient ones can backfire if replacements are scarce. To quantify gains, I link evaluation-induced quality improvements to student incomes, showing net benefits exceeding $75,000 per classroom at modest labor costs. This framework guides policymakers in designing evaluation systems that balance retention, quality, and cost.
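To give a flavor of the flow logic (a minimal sketch, not the calibrated model), the toy simulation below tracks proficient and non-proficient teacher stocks under attrition, dismissal, and hiring, and compares steady-state workforce composition across replacement-pool qualities. All rates are illustrative placeholders.

```python
# Hypothetical two-stock flow model of a teacher workforce.
# Parameters are illustrative, not the paper's DCPS calibration.
def proficient_share(years=20, workforce=1000.0, share_prof=0.6,
                     retain_prof=0.90, retain_nonprof=0.80,
                     dismiss_nonprof=0.0, hire_prof_share=0.5):
    prof = workforce * share_prof
    nonprof = workforce - prof
    for _ in range(years):
        prof *= retain_prof                                # voluntary attrition
        nonprof *= retain_nonprof * (1 - dismiss_nonprof)  # attrition + dismissal
        vacancies = workforce - prof - nonprof
        prof += vacancies * hire_prof_share                # quality of new hires
        nonprof += vacancies * (1 - hire_prof_share)
    return prof / workforce

# Compare composition with and without dismissal, for a strong and a
# weak replacement pool (hire_prof_share).
for pool in (0.7, 0.3):
    base = proficient_share(hire_prof_share=pool)
    strict = proficient_share(dismiss_nonprof=0.2, hire_prof_share=pool)
    print(f"hire pool {pool:.1f}: share {base:.3f} -> {strict:.3f}")
```

This composition-only sketch omits the hiring costs and unfilled-vacancy frictions that, in the full model, can make dismissal backfire when replacements are scarce.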
How should we sequence tasks to maximize learning given students' limited time and diverse learning speeds?
with Faidra Monachou
Personalized learning platforms must decide whether a student should persist with a skill or move on. I model this pacing problem by combining Bayesian Knowledge Tracing with an optimal stopping rule inspired by Weitzman's search framework. The model weighs the expected benefit of continued practice against that of switching skills, adapting to differences in student time, effort, and learning speed. Calibrated with ASSISTments data, preliminary results suggest that simple stopping heuristics can improve learning efficiency by reducing unnecessary repetition while maintaining engagement. This framework gives EdTech platforms a scalable way to approximate the benefits of one-on-one tutoring.
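A minimal sketch of the mechanics follows: a standard Bayesian Knowledge Tracing update of the mastery probability, paired with a simplified index-style stopping rule that continues practice only while the expected mastery gain from one more attempt exceeds the value of switching skills. The parameter values and the switch_value threshold are illustrative, and the gain formula ignores the information value of the response, a deliberate simplification.

```python
# Hypothetical sketch: BKT mastery updates plus a simplified
# Weitzman-style stopping rule. All parameters are illustrative.

def bkt_update(p_mastery, correct, p_learn=0.15, p_slip=0.1, p_guess=0.2):
    """One BKT step: Bayesian posterior given the response, then learning."""
    if correct:
        num = p_mastery * (1 - p_slip)
        den = num + (1 - p_mastery) * p_guess
    else:
        num = p_mastery * p_slip
        den = num + (1 - p_mastery) * (1 - p_guess)
    posterior = num / den
    return posterior + (1 - posterior) * p_learn   # learning transition

def should_continue(p_mastery, switch_value=0.05, p_learn=0.15):
    """Approximate expected mastery gain from one more attempt,
    ignoring the information value of observing the response."""
    return (1 - p_mastery) * p_learn > switch_value

p = 0.3                                # prior mastery probability
for correct in [True, False, True]:    # simulated responses
    if not should_continue(p):
        break                          # switch to the next skill
    p = bkt_update(p, correct)
print(f"final mastery estimate: {p:.2f}")
```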