New York University Law Review

Judicial Process

Evidence-Based Judicial Discretion: Promoting Public Safety Through State Sentencing Reform

The Honorable Michael A. Wolff

Brennan Lecture

In this speech delivered for the annual Justice William J. Brennan, Jr., Lecture on State Courts and Social Justice, the Honorable Michael Wolff offers a new way of thinking about sentencing. Instead of attempting to limit judicial discretion and increase incarceration, states should aim to reduce recidivism in order to make our communities safer. Judge Wolff uses the example of Missouri’s sentencing reforms to argue that states should adopt evidence-based sentencing, in which the effectiveness of different sentences and treatment programs is regularly evaluated. In pre-sentencing investigative reports, probation officers should attempt to quantify—based on historical data—the risk the offender poses to the community and the specific treatment that would be most likely to prevent reoffending. Judges, on their own, lack the resources to implement all of these recommendations; probation officers and others involved in sentencing should receive the same information—risk assessment data—and their recommendations should become more influential as they gain expertise.
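By way of illustration only, the kind of actuarial risk estimate such a report might contain can be sketched in a few lines of Python. The features, historical records, and model below are invented for this example and are not drawn from Missouri's system:

    # Hypothetical actuarial risk sketch: fit a model on historical outcomes,
    # then score a new offender. All features, data, and the model choice are
    # invented for illustration; this is not Missouri's instrument.
    from sklearn.linear_model import LogisticRegression

    # Historical records: [age at sentencing, prior convictions] and whether
    # the offender later reoffended (1 = yes, 0 = no).
    X = [[19, 3], [24, 1], [31, 0], [45, 2], [52, 0], [22, 4], [38, 1], [60, 0]]
    y = [1, 1, 0, 1, 0, 1, 0, 0]

    model = LogisticRegression().fit(X, y)

    # Estimated probability of reoffending for a hypothetical new offender,
    # age 27 with two prior convictions.
    risk = model.predict_proba([[27, 2]])[0][1]
    print(f"Estimated recidivism risk: {risk:.0%}")

A report built on this idea would pair the score with the treatment options that historical data suggest are most likely to reduce that particular offender's risk.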

Qualified Immunity in Limbo: Rights, Procedure, and the Social Costs of Damages Litigation Against Public Officials

David L. Noll

Damages litigation against public officials implicates social costs that ordinary civil litigation between private parties does not. Litigation against public officials costs taxpayers money, may inhibit officials in the performance of their duties, and has the potential to reveal privileged information and decisionmaking processes. The doctrine of qualified immunity—that public officials are generally immune from civil liability for their official actions unless they have unreasonably violated a clearly established federal right—is designed to address these risks. The doctrine, however, demands an application of law to facts that, as a practical matter, requires substantial pretrial discovery. Federal courts have responded with a variety of novel procedural devices. This Note critiques those devices and suggests that courts confronted with a claim of qualified immunity should view their principal task as narrowing the universe of the plaintiff’s claims, thus facilitating a discovery process structured around dispositive legal issues.

Choosing Interpretive Methods: A Positive Theory of Judges and Everyone Else

Alexander Volokh

In this Article, I propose a theory of how rational, ideologically motivated judges might choose interpretive methods, and how rational, ideologically motivated laymen—legislators, litigation organizations, lobbyists, scholars, and citizens—might respond. I assume, first, that judges not only have ideological preferences but also want to write plausible opinions. Second, I assume that every method of statutory or constitutional interpretation has a “most plausible point” along a spectrum of possible decisions in a given case. As a result, if a judge decides to use any particular interpretive method, that method will pull him towards its “most plausible point,” possibly making him deviate from his own ideal point.

When a judge can choose an interpretive method, he selects the one that (taking these deviations into account), among other things, allows him to stay as close as possible to his favored outcome. Thus, any given method is chosen only by judges whose ideal points, roughly speaking, are not too distant from that method’s most plausible point. This behavior creates a selection bias. An interpretive method’s political valence under a regime of free interpretive choice thus differs systematically from what it would look like if that method were mandatory. As a result, one might favor mandating an interpretive method even though one is politically closer to the current practitioners of a different method.

A judge can choose not only which interpretive method to use but also whether to use the same method from case to case. This Article argues that an individual judge’s choice of interpretive method does not usually substantially affect the methods that other judges use. Therefore, even though ideologically motivated judges (or litigation groups) might want to make the method they prefer in most cases mandatory for everyone, it can often be rational for these judges to deviate from that preferred method in instances where a different method would produce a more appealing outcome.
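The selection dynamic described above can be sketched in a short simulation. The one-dimensional spectrum, the two methods, their "most plausible points," and the strength of each method's pull are all assumptions invented for this illustration, not the Article's model in full:

    # Illustrative sketch of interpretive-method choice and selection bias.
    # Assumptions (invented): judges and methods sit on a one-dimensional
    # ideological spectrum, and using a method pulls a judge's decision
    # partway toward that method's "most plausible point."
    import random

    random.seed(0)

    METHODS = {"textualism": -0.5, "purposivism": 0.5}  # hypothetical plausible points
    PULL = 0.4  # fraction of the distance a method drags the decision

    def decision(ideal, method):
        """Outcome a judge reaches when using a given method."""
        return ideal + PULL * (METHODS[method] - ideal)

    def choose_method(ideal):
        """Under free choice, pick the method whose outcome stays closest to the ideal point."""
        return min(METHODS, key=lambda m: abs(decision(ideal, m) - ideal))

    judges = [random.uniform(-1, 1) for _ in range(10_000)]

    for m in METHODS:
        users = [j for j in judges if choose_method(j) == m]
        free = sum(decision(j, m) for j in users) / len(users)
        mandatory = sum(decision(j, m) for j in judges) / len(judges)
        print(f"{m:12s} mean outcome: free choice {free:+.2f} vs. mandatory {mandatory:+.2f}")

Under free choice, each method is used only by judges near its plausible point, so its observed political valence differs systematically from what the same method would produce if mandatory for everyone, which is the selection bias the Article identifies.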

Toward One America: A Vision in Law

The Honorable J. Harvie Wilkinson III

Madison Lecture

In his Madison Lecture, Judge Wilkinson urges a new purpose for American law: the explicit promotion of a stronger sense of national cohesion and unity. He argues that the judicial branch should actively seek to promote this nationalizing purpose and suggests seven different ways for federal courts to do so. He contends further that a nationalizing mission for law is needed at this moment in American history to counteract the demographic divisions and polarizing tendencies of our polity. This purpose need not entail the abdication of traditional values of judicial restraint, should not mean the abandonment of the traditional American credo of unity through pluralism, and must not require the sacrifice of the law’s historic commitment to the preservation of order and the protection of liberty. But the need for a judicial commitment to foster a stronger American identity is clear. The day when courts and judges could be indifferent to the dangers of national fragmentation and disunion is long gone.

Accuracy Counts: Illegal Votes in Contested Elections and the Case for Complete Proportionate Deduction

Kevin J. Hickey

Contested elections in which the number of illegal votes exceeds the purported winner’s margin of victory present courts with difficult choices. Simply certifying the result risks denying the true winner his victory, while ordering a new election leaves the choice to a changed electorate. Adjusting the results is also problematic, as it may create a perception that judges, and not voters, have decided the election. This Note argues that courts should be more willing to use statistical techniques to resolve this type of election dispute. It critiques the various remedial measures that courts have employed, as well as the rejection of statistical methods in existing case law and legal commentary. The author concludes that a statistics-based remedy—termed “complete proportionate deduction”—best balances the values of accuracy, finality, and public faith in the democratic process.
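For readers unfamiliar with the general mechanics of proportionate deduction, a toy calculation with invented numbers follows. It shows only the basic precinct-level apportionment; the Note's "complete" variant is developed in its text:

    # Toy proportionate-deduction arithmetic with invented numbers.
    # Assumption: illegal votes cast in a precinct are subtracted from each
    # candidate in proportion to that candidate's share of the precinct vote.

    precincts = [
        # (votes for A, votes for B, illegal votes cast in the precinct)
        (600, 400, 50),
        (300, 700, 20),
    ]

    total_a = total_b = 0.0
    for a, b, illegal in precincts:
        share_a = a / (a + b)
        total_a += a - illegal * share_a
        total_b += b - illegal * (1 - share_a)

    print(f"Adjusted totals: A = {total_a:.1f}, B = {total_b:.1f}")

When the reported margin is smaller than the number of illegal votes, adjusted totals of this kind, rather than the raw count, indicate the likely true winner.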

The Costs of “Discernible and Manageable Standards” in Vieth and Beyond

Joshua S. Stillman

This Note argues against the use of the prudential political question doctrine (PPQD), as exemplified by the Vieth v. Jubelirer plurality opinion. In Vieth, the Supreme Court avoided formulating a standard for adjudicating the constitutionality of partisan gerrymandering due to a claimed lack of a “discernible and manageable standard.” This meant, according to the plurality, that no proposed doctrinal test was both concrete enough to be workably deployed by lower courts and discernible enough in the constitutional text, history, and structure, inter alia. Although the Vieth plurality opinion presents itself as based on a universally applicable metadoctrine determining what is and is not a discernible and manageable doctrinal test, this Note argues that the Court’s use of the PPQD is ultimately based on a gestalt prudential judgment about the wisdom of intervention in the particular area of partisan gerrymandering.

This Note then argues that the PPQD leads to negative consequences for future litigants and judicial legitimacy. The PPQD sends litigants on a wild goose chase for a perfect doctrinal standard, when it seems clear that no standard will satisfy the Vieth plurality. It also invites litigants to argue about what a discernible and manageable doctrinal test is in the abstract, rather than to address the particular legal issue at hand. These diversions insulate the judiciary from legitimate criticism of the grounds of its decisions. This Note then compares the PPQD to another option for judicial avoidance: a merits standard that is almost impossible for plaintiffs to meet in practice, such as rational basis review. This Note concludes that a stringent merits standard is a superior mechanism for judicial avoidance because it does not carry the same high costs for litigants and judicial legitimacy as the PPQD. Additionally, it allows the Court to exit from active adjudication of an issue while still preserving its ability to intervene in egregious cases.

Securing Fragile Foundations: Affirmative Constitutional Adjudication in Federal Courts

The Honorable Marsha S. Berzon

Madison Lecture

In this speech, delivered as the annual James Madison Lecture, Judge Marsha Berzon discusses the availability of judicial remedies for violations of the Constitution. Judge Berzon reflects on the federal courts’ tradition of allowing litigants to proceed directly under the Constitution—that is, without a statutorily based cause of action. This tradition extends much further back than the mid-twentieth-century cases most commonly associated with affirmative constitutional litigation—Brown, Bolling, and Bivens, for example—and has its roots in cases from the nineteenth and early twentieth centuries. Against this long historical tradition of courts recognizing nonexpress causes of action for violations of the Constitution, Judge Berzon surveys the modern Supreme Court’s jurisprudence, a jurisprudence that sometimes requires constitutional litigants to base their claims on the same sort of clear congressional intent to permit judicial redress now required before courts will recognize so-called “implied” statutory causes of action. Judge Berzon suggests that requiring litigants seeking to enforce constitutional norms to point to evidence of congressional intent regarding the availability of judicial redress misapplies separation-of-powers concerns.

Categoricalism and Balancing in First and Second Amendment Analysis

Joseph Blocher

The least discussed element of District of Columbia v. Heller might ultimately be the most important: the battle between the majority and dissent over the use of categoricalism and balancing in the construction of constitutional doctrine. In Heller, Justice Scalia’s categoricalism essentially prevailed over Justice Breyer’s balancing approach. But as the opinion itself demonstrates, Second Amendment categoricalism raises extremely difficult and still-unanswered questions about how to draw and justify the lines between protected and unprotected “Arms,” people, and arms-bearing purposes. At least until balancing tests appear in Second Amendment doctrine—as they almost inevitably will—the future of the Amendment will depend almost entirely on the placement and clarity of these categories. And unless the Court better identifies the core values of the Second Amendment, it will be difficult to give the categories any principled justification.

Heller is not the first time the Court has debated the merits of categorization and balancing, nor are Justices Scalia and Breyer the tests’ most famous champions. Decades ago, Justices Black and Frankfurter waged a similar battle in the First Amendment context, and the echoes of their struggle continue to reverberate in free speech doctrine. But whereas the categorical view triumphed in Heller, Justice Frankfurter and the First Amendment balancers won most of their battles. As a result, modern First Amendment doctrine is a patchwork of categorical and balancing tests, with a tendency toward the latter. The First and Second Amendments are often presumed to be close cousins, and courts, litigants, and scholars will almost certainly continue to turn to the First Amendment for guidance in developing a Second Amendment standard of review. But while free speech doctrine may be instructive, it also tells a cautionary tale: Above all, it suggests that unless the Court better identifies the core values of the Second Amendment, the Second Amendment’s future will be even murkier than the First Amendment’s past.

This Article draws the Amendments together, using the development of categoricalism and balancing tests in First Amendment doctrine to describe and predict what Heller’s categoricalism means for the present and future of Second Amendment doctrine. It argues that the Court’s categorical line drawing in Heller creates intractable difficulties for Second Amendment doctrine and theory and that the majority’s categoricalism neither reflects nor enables a clear view of the Amendment’s core values, whatever they may be.

In Goodridge’s Wake: Reflections on the Political, Public, and Personal Repercussions of the Massachusetts Same-Sex Marriage Cases

The Honorable Roderick L. Ireland

Brennan Lecture

In the Sixteenth Annual Justice William J. Brennan, Jr. Lecture on State Courts and Social Justice, Roderick L. Ireland, Senior Associate Justice of the Massachusetts Supreme Judicial Court, discusses the seminal case Goodridge v. Department of Public Health and a judge’s role in controversial decisions. Justice Ireland explains the rationale behind his majority vote in Goodridge, as well as his dissent in Cote-Whitacre v. Department of Public Health, and the extreme public backlash that followed the same-sex marriage cases. Through the personal lens of his own experience dealing with that reaction, Justice Ireland addresses how judges should handle such controversial cases while remaining true to the role of the judiciary.

Safety in Numbers? Deciding When DNA Alone Is Enough to Convict

Andrea Roth

Fueled by police reliance on offender databases and advances in crime scene recovery, a new type of prosecution has emerged in which the government’s case turns on a match statistic explaining the significance of a “cold hit” between the defendant’s DNA profile and the crime-scene evidence. Such cases are unique in that the strength of the match depends on evidence that is almost entirely quantifiable. Despite the growing number of these cases, the critical jurisprudential questions they raise about the proper role of probabilistic evidence, and courts’ routine misapprehension of match statistics, no framework—including a workable standard of proof—currently exists for determining sufficiency of the evidence in such a case. This Article is the first to interrogate the relationship between “reasonable doubt” and statistical certainty in the context of cold hit DNA matches. Examining the concepts of “actual belief” and “moral certainty” underlying the “reasonable doubt” test, I argue that astronomically high source probabilities, while fallible, are capable of meeting the standard for conviction. Nevertheless, the starkly numerical nature of “pure cold hit” evidence raises unique issues that require courts to apply a quantified threshold for sufficiency purposes. I suggest as a starting point—citing recent juror studies and the need for uniformity and systemic legitimacy—that the threshold should be no less favorable to the defendant than a 99.9% source probability.
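A minimal Bayesian sketch with hypothetical figures shows how a random match probability can translate into the kind of source probability the Article discusses. The match probability, population size, and flat prior below are illustrative assumptions, not the Article's numbers:

    # Hypothetical Bayesian sketch: from a random match probability (RMP)
    # to a "source probability." All figures are invented for illustration.

    rmp = 1e-9            # assumed chance a random person matches the profile
    population = 1e6      # assumed pool of alternative possible sources
    prior = 1 / (population + 1)  # flat prior: defendant no likelier than anyone else

    # Bayes' rule: the true source matches with certainty; each of the
    # `population` alternatives matches with probability rmp.
    posterior = prior / (prior + (1 - prior) * rmp)

    print(f"Source probability: {posterior:.6f}")        # about 0.999001
    print(f"Meets a 99.9% threshold: {posterior >= 0.999}")

On these assumptions the source probability just clears a 99.9% threshold; a larger pool of alternative sources or a higher match probability would drop it below.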
