Publication Date
2025
Document Type
Article
Abstract
Algorithms often outperform humans in making decisions, in large part because they are more consistent. Despite this, there remains widespread demand to keep a “human in the loop” to address concerns about fairness and transparency. Although evidence suggests that most human overrides are errors, we argue these errors can provide value: they generate new data from which algorithms can learn. To remain accurate, algorithms must be updated over time, but data generated solely from algorithmic decisions is biased: it includes only the cases selected by the algorithm (e.g., individuals released on parole). Training on this algorithmically selected data can significantly reduce predictive accuracy. When a human overrides an algorithmic denial, the override generates valuable training data for updating the algorithm; overriding a grant, by contrast, removes potentially useful data. Fortunately, demand for human oversight is strongest for algorithmic denials of benefits, which is precisely where overrides add the most value. This alignment suggests a politically feasible and accuracy-enhancing reform: limiting human overrides to algorithmic denials. The article illustrates the accuracy-sustaining benefits of strategically keeping “error in the loop” using datasets on parole, credit, and law school admissions. In all three contexts, we demonstrate that simulated human overrides of algorithmic denials significantly improve the predictive value of newly generated data.
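The mechanism the abstract describes can be made concrete with a short simulation. The sketch below is purely illustrative and is not the authors' code: the synthetic data-generating process, the grant threshold (the top half of scores), the 10% override rate, and the choice of logistic regression are all assumptions made for this example. It retrains a model once on outcomes observed only for algorithmic grants, and once with labels added by simulated human overrides of denials, then compares predictive accuracy on the full pool.

# Illustrative sketch (not the authors' method). Outcomes are observed only
# for algorithmic "grants"; simulated human overrides of denials reveal labels
# from the otherwise-censored region, which improves the retrained model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20_000
X = rng.normal(size=(n, 2))
# Assumed data-generating process: the second feature matters only in the
# region the algorithm tends to deny (x0 < 0), so a model retrained on
# grants alone never observes that signal.
logit = 2 * X[:, 0] + 2 * X[:, 1] * (X[:, 0] < 0)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Initial screening model, fit on a small fully labeled pilot sample.
screen = LogisticRegression().fit(X[:2_000], y[:2_000])
X_pool, y_pool = X[2_000:], y[2_000:]
scores = screen.predict_proba(X_pool)[:, 1]
granted = scores >= np.median(scores)                       # algorithm grants the top half
overrides = (~granted) & (rng.random(granted.size) < 0.10)  # humans override 10% of denials

def retrain_auc(observed):
    # Retrain only on cases whose outcomes were observed; evaluate on the
    # full pool, where true outcomes are known because this is a simulation.
    m = LogisticRegression().fit(X_pool[observed], y_pool[observed])
    return roc_auc_score(y_pool, m.predict_proba(X_pool)[:, 1])

print(f"retrained on grants only:    AUC = {retrain_auc(granted):.3f}")
print(f"grants + overridden denials: AUC = {retrain_auc(granted | overrides):.3f}")

In this toy setup, the grants-only retraining typically scores lower because its training sample is censored to the region the algorithm already favors; the handful of overridden denials restores coverage of the excluded region, echoing the abstract's claim that overrides of denials generate the most valuable new data.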
Publication Title
Journal of Law and Empirical Analysis
Recommended Citation
Ryan W. Copus, Cait Spackman & Hannah Laqueur, Error in the Loop: How Human Mistakes Can Improve Algorithmic Learning, Journal of Law and Empirical Analysis 1 (2025). Available at: https://irlaw.umkc.edu/faculty_works/1039
Included in
Artificial Intelligence and Robotics Commons, Criminal Law Commons, Criminal Procedure Commons, Data Science Commons, Legal Education Commons