Abstract
As U.S. immigration courts face insurmountable caseloads, there is a rising temptation to rely on automated decision-making. Where individuals lack typical legal protections, mere predictive accuracy increasingly becomes the standard. In this talk, I argue that the allure of so-called "precarious accurate predictions" — those in which predictive accuracy masks problematic underlying reasoning patterns — poses significant ethical and procedural risks within immigration law. Drawing on insights from philosophy of mind and psychology, I highlight two ways in which these predictions falter under scrutiny: their reliance on stealth proxies and their susceptibility to illusions of depth. These failures in turn establish two currently unsatisfied desiderata for a robust integration of predictive analytics into legal decision-making: that predictive models address proxy discrimination beyond statistical correlations, and that they recognize causal-explanatory connections between features. The talk ends by underscoring the urgent need for a critical reevaluation of the prospective use of automated predictions in legal processes, cautioning against their expansion into domains like criminal justice, where they threaten to erode established rights and safeguards.