Plausible Cause: Explanatory Standards in the Age of Powerful Machines

53 Pages · Posted: 22 Aug 2016 · Last revised: 13 May 2017

Kiel Brennan-Marquez

University of Connecticut - School of Law

Date Written: September 5, 2016

Abstract

The Fourth Amendment’s probable cause requirement is not about numbers or statistics. It is about requiring the police to account for their decisions. For a theory of wrongdoing to satisfy probable cause — and warrant a search or seizure — it must be plausible. The police must be able to explain why the observed facts invite an inference of wrongdoing, and judges must have an opportunity to scrutinize that explanation.

Until recently, the explanatory aspect of Fourth Amendment suspicion — “plausible cause” — has been uncontroversial, and central to the Supreme Court’s jurisprudence, for a simple reason: explanations have served, in practice, as a guarantor of statistical likelihood. In other words, forcing police to articulate theories of wrongdoing is the means by which courts have traditionally ensured that (roughly) the right “persons, houses, papers, and effects” are targeted for intrusion. Going forward, however, technological change promises to disrupt the harmony between explanatory standards and statistical accuracy. Powerful machines enable a previously impossible combination: accurate predictions, unaccompanied by explanations. As that change takes hold, we will need to think carefully about why explanation-giving matters. When judges assess the sufficiency of explanations offered by police (and other officials), what are they doing? If the answer comes back to error-reduction — if the point of judicial oversight is simply to maximize the overall number of accurate decisions — machines could theoretically do the job as well as, if not better than, humans. But if the answer involves normative goals beyond error-reduction, automated tools — no matter their power — will remain, at best, partial substitutes for judicial scrutiny.

This Article defends the latter view. I argue that statistical accuracy, though important, is not the crux of explanation-giving. Rather, explanatory standards — like probable cause — hold officials accountable to a plurality of sometimes-conflicting constitutional and rule-of-law values that, in our legal system, bound the scope of legitimate authority. Error-reduction is one such value. But there are many others, and sometimes the values work at cross purposes. When judges assess explanations, they navigate a space of value-pluralism: they identify which values are at stake in a given decisional environment and ask, where necessary, if those values have been properly balanced. Unexplained decisions render this process impossible and, in so doing, hobble the judicial role. Ultimately, that role has less to do with analytic power than practiced wisdom. A common argument against replacing judges, and other human experts, with intelligent machines is that machines are not (yet) intelligent enough to take up the mantle. In the age of powerful algorithms, however, this turns out to be a weak — and temporally limited — claim. The better argument, I suggest in closing, is that judging is not solely, or even primarily, about intelligence. It is about prudence.

Suggested Citation

Brennan-Marquez, Kiel, Plausible Cause: Explanatory Standards in the Age of Powerful Machines (September 5, 2016). Vanderbilt Law Review, Vol. 70 (2017). Available at SSRN: https://ssrn.com/abstract=2827733 or http://dx.doi.org/10.2139/ssrn.2827733

Kiel Brennan-Marquez (Contact Author)

University of Connecticut - School of Law

65 Elizabeth Street
Hartford, CT 06105
United States
