New York University Law Review

Topics: Empirical Legal Studies

Math Symbols in the Tax Code

Will Danielson Lanier

Our tax code is stuck in the Middle Ages. The Internal Revenue Code (“the Code”), codified at 26 U.S.C., uses the concepts of addition, subtraction, multiplication, and division, as one might expect of a tax code. But, disdaining the 1500s invention of the elementary math symbols ‘+,’ ‘–,’ ‘×,’ and ‘÷,’ the Code instead uses complicated English constructions such as “any amount of X which bears the same ratio to that amount as Y bears to Z.”
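To make the contrast concrete, here is a sketch of the proposed translation, reading the quoted phrase as defining an amount A (a placeholder name, not the Code's) that bears the same ratio to X as Y bears to Z:

```latex
\frac{A}{X} = \frac{Y}{Z}
\qquad\Longrightarrow\qquad
A = X \times \frac{Y}{Z}
```

One symbolic line replaces a thirty-word statutory clause.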

I propose that we use these elementary math symbols in our tax laws. To see whether this would increase the laws’ legibility, I conducted a preregistered, randomized, controlled trial involving 161 participants. One group received the actual Code; the other, a translation using math symbols. Both groups were asked to solve the same two Code-based tax problems. For the first problem, use of the translation with math symbols increased answer accuracy from 25% to 70%. For the second problem, answer accuracy increased from 11% to 50%.
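For readers who want to gauge whether gains of this size could plausibly be chance, a minimal two-proportion z-test can be sketched. The arm sizes below are assumptions (the abstract reports only 161 total participants, so the roughly even 80/81 split, and the implied success counts, are hypothetical):

```python
from math import sqrt, erfc

def two_proportion_z(successes1, n1, successes2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail of the standard normal
    return z, p_value

# Hypothetical split of the 161 participants into arms of 80 and 81.
# Problem 1: 25% accuracy (20/80) vs. 70% accuracy (57/81, about 70.4%).
z, p = two_proportion_z(20, 80, 57, 81)
```

Under these assumed counts, the first problem's improvement sits several standard errors from zero; a real analysis would of course use the study's actual arm sizes.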

This result, I argue, can be extrapolated to the broader population and to the Code as a whole, confirming the plausible intuition that math symbols would increase the understandability of the Code. I then argue that this would be a good thing, answering various objections along the way, with a particular appeal to the rule of law and the spirit of democracy. People ought to be able to understand the laws that govern them.

Constitutional Consequences

Netta Barak-Corren, Tamir Berkman

For over two hundred years of Supreme Court doctrine, judges and scholars have tried to figure out how the Court’s rulings impact ordinary citizens. Yet the answers often seem to depend on whose opinion or even which press releases you read. How can we actually measure the consequences of constitutional decisions?

This Article provides a new methodological inroad to this thicket—one which triangulates a nationwide field experiment, a longitudinal public opinion survey, and litigation-outcome analysis. We do so while focusing on a recent set of developments at the intersection of religious freedom and anti-discrimination law that transpired in Fulton v. City of Philadelphia (2021).

We find that Supreme Court decisions can have substantial behavioral and legal effects beyond a seemingly narrow holding. In Fulton, the Court avoided deciding the equality-religion conflict at the heart of the case, opting instead for a fact-specific decision that should have been easy to circumvent. Yet our results suggest that the Court’s audience focused on the bottom-line message of the decision rather than the holding. Across the nation, foster care agencies became less responsive to same-sex couples. The public became more supportive of religious service refusals. And courts and litigants resolved all open disputes between equality-seeking governments and refusing religious agencies in favor of the agencies.

Our findings contribute to the development of an empirical approach to constitutional doctrine. Constitutional questions often require determining whether the harm to, or burden on, an individual or group is justified by a compelling state interest—and whether the means are narrowly tailored to that end. These tests often hinge on evidence, yet the Court rarely offers parties guidelines for substantiating their interests at the right level of precision. Our work provides both data and empirical tools that inform the application of this test in the realm of free exercise doctrine, equality law, and beyond.

Legislative Statutory Interpretation

Alexander Zhang

We like to think that courts are, and have always been, the primary and final interpreters of statutes. As the conventional separation-of-powers wisdom goes, legislatures “make” statutes while judges “interpret” them. In fact, however, legislatures across centuries of American history have thought of themselves as the primary interpreters. They blurred the line between “making” and “interpreting” by embracing a type of legislation that remains overlooked and little understood: “expository” legislation—enactments that specifically interpreted or construed previous enactments.

In the most exhaustive historical study of the subject to date, this Article—the first in a series of Articles—unearths and explains that lost tradition of legislative statutory interpretation from an institutional perspective. To do so, it draws on an original dataset of 2,497 pieces of expository legislation passed from 1665 to 2020 at the colonial, territorial, state, and federal levels—the first effort of its kind. It shows how expository legislation originated as a colonial-era British import on which Americans continued to rely even after the creation of new constitutions. Lawmakers used expository statutes to supervise administrative statutory interpretation and to negotiate interpretation in the shadows of courts. Judges accepted and even encouraged legislative statutory interpretation. In the mid-nineteenth century, judges increasingly fought back, emboldened by growing calls for judicial independence. Yet even as the backlash entered into treatises, and even as some lawmakers began to balk, legislatures and judges continued to accept and use legislative interpretations of statutes well into the nineteenth century.

Separation of Powers by Contract: How Collective Bargaining Reshapes Presidential Power

Nicholas Handler

This Article demonstrates for the first time how civil servants check and restrain presidential power through collective bargaining. The executive branch is typically depicted as a top-down hierarchy. The President, as chief executive, issues policy directives, and the tenured bureaucracy of civil servants below him follows them. This presumed top-down structure shapes many influential critiques of the modern administrative state. Proponents of a strong President decry civil servants as an unelected “deep state” usurping popular will. Skeptics of presidential power fear the growth of an imperial presidency, held in check by an impartial bureaucracy.

Federal sector labor rights, which play an increasingly central role in structuring the modern executive branch, complicate each of these critiques. Under federal law, civil servants have the right to enter into binding contracts with administrative agencies governing the conditions of their employment. These agreements restrain and reshape the President’s power to manage the federal bureaucracy and impact nearly every area of executive branch policymaking, from how administrative law judges decide cases to how immigration agents and prison guards enforce federal law. Bureaucratic power arrangements are neither imposed from above by an “imperial” presidency nor subverted from below by an “unaccountable” bureaucracy. Rather, the President and the civil service bargain over the contours of executive authority and litigate their disputes before arbitrators and courts. Bargaining thus encourages a form of government-wide civil servant “resistance” that is legalistic rather than lawless, and highly structured and transparent rather than opaque and inchoate.

Despite the increasingly intense judicial and scholarly battles over the administrative state and its legitimacy, civil servant labor rights have gone largely unnoticed and unstudied. This Article shows for the first time how these labor rights restructure and legitimize the modern executive branch. First, using a novel dataset of almost 1,000 contract disputes spanning forty years, as well as in-depth case studies of multiple agencies, it documents the myriad ways in which collective bargaining reshapes bureaucratic relationships within the executive branch. Second, this Article draws on primary source material and academic literature to illuminate the history and theoretical foundations of bargaining as a basis for bureaucratic government. What emerges from this history is a picture of modern bureaucracy that is more mutualistic, legally ordered, and politically responsive than modern observers appreciate.

The Small Agency Problem in American Policing

Maria Ponomarenko

Although legal scholars have over the years developed an increasingly sophisticated account of policing in the largest cities, they have largely overlooked the thousands of small departments that serve rural areas and small towns. As this Article makes clear, small departments are hardly immune from the various problems that plague modern policing. But their sheer number—and relative obscurity—has made it difficult to get a handle on the magnitude of the difficulties they present, or the ways in which familiar reform proposals might need to look different in America’s small towns.

This Article begins to fill this gap. It does so by blending together empirical analysis of various dimensions of small-agency policing, with in-depth case studies that add much-needed texture to the patterns that the data reveal. It argues that the problems of small-town and rural policing differ in important ways from those that plague big-city police, and that there are predictable patterns that explain when and why small agencies are likely to go astray. In particular, it shows that small agencies are susceptible to two types of systemic failures—those that reflect the inherent limitations of small-town political processes and those that are driven by the capacity constraints that some small governments face. It then draws on the data and case studies to provide a preliminary sense of how prevalent these problems are likely to be.

This Article concludes with the policy implications that follow from this richer and more nuanced account of small-town and rural police. It begins with the oft-made suggestion that small agencies be made to “consolidate” with one another or simply dissolve, and it explains why consolidation is not only highly unlikely, but also potentially counter-productive. It argues that states should instead pursue two parallel sets of reforms, the first aimed at equalizing the dramatic disparities in police funding across municipalities, and the second focused on a set of regulatory measures designed to address specific small agency harms.

Whose Data, Whose Value? Simple Exercises in Data and Modeling Evaluation with Implications for Technology Law and Policy

Aileen Nielsen

Scholarship on the phenomena of big data and algorithmically-driven digital environments has largely studied these technological and economic phenomena as monolithic practices, with little interest in the varied quality of contributions by data subjects and data processors. Taking a pragmatic, industry-inspired approach to measuring the quality of contributions, this work finds evidence for a wide range of relative value contributions by data subjects. In some cases, a very small proportion of data from a few data subjects is sufficient to achieve the same performance on a given task as would be achieved with a much larger data set. Likewise, algorithmic models generated by different data processors for the same task and with the same data resources show a wide range in quality of contribution, even in highly performance-incentivized conditions. In short, contrary to the trope of data as the new oil, data subjects, and indeed individual data points within the same data set, are neither equal nor fungible. Moreover, the role of talent and skill in algorithmic development is significant, as with other forms of innovation. Both of these observations have received little, if any, attention in discussions of data governance. In this essay, I present evidence that both data subjects and data controllers exhibit significant variations in the measured value of their contributions to the standard Big Data pipeline. I then establish that such variations are worth considering in technology policy for privacy, competition, and innovation.

The observation of substantial variation among data subjects and data processors could be important in crafting appropriate law for the Big Data economy. Heterogeneity in value contribution is undertheorized in tech law scholarship, as are its implications for privacy law, competition policy, and innovation. The work concludes by highlighting some of these implications and posing an empirical research agenda to fill in information needed to realize policies sensitive to the wide range of talent and skill exhibited by data subjects and data processors alike.

The Politics of Legislative Drafting: A Congressional Case Study

Victoria F. Nourse, Jane S. Schacter

In judicial opinions construing statutes, it is common for judges to make a set of assumptions about the legislative process that generated the statute under review. For example, judges regularly impute to legislators highly detailed knowledge about both judicial rules of interpretation and the substantive area of law of which the statute is a part. Little empirical research has been done to test this picture of the legislative process. In this Article, Professors Nourse and Schacter take a step toward filling this gap with a case study of legislative drafting in the Senate Judiciary Committee. Their results stand in sharp contrast to the traditional judicial story of the drafting process. The interviews conducted by the authors suggest that the drafting process is highly variable and contextual; that staffers, lobbyists, and professional drafters write laws rather than elected representatives; and that although drafters are generally familiar with judicial rules of construction, these rules are not systematically integrated into the drafting process. The case study suggests not only that the judicial story of the legislative process is inaccurate but also that there might be important differences between what the legislature and judiciary value in the drafting process: While courts tend to prize what the authors call the “interpretive” virtues of textual clarity and interpretive awareness, legislators are oriented more toward “constitutive” virtues of action and agreement. Professors Nourse and Schacter argue that the results they report, if reflective of the drafting process generally, raise important challenges for originalist and textualist theories of statutory interpretation, as well as Justice Scalia’s critique of legislative history. Even if the assumptions about legislative drafting made in the traditional judicial story are merely fictions, they nonetheless play a role in allocating normative responsibility for creating statutory law. The authors conclude that their case study raises the need for future empirical research to develop a better understanding of the legislative process.

Creating Markets for Ecosystem Services: Notes from the Field

James Salzman

Ecosystem services are created by the interactions of living organisms with their environment, and they support our society by providing clean air and water, decomposing waste, pollinating flowers, regulating climate, and supplying a host of other benefits. Yet, with rare exception, ecosystem services are neither prized by markets nor explicitly protected by the law. In recent years, an increasing number of initiatives around the world have sought to create markets for services, some dependent on government intervention and some created by entirely private ventures. These experiences have demonstrated that investing in natural capital rather than built capital can make both economic and policy sense. Informed by the author’s recent experiences establishing a market for water quality in Australia, this Article examines the challenges and opportunities of an ecosystem services approach to environmental protection. This Article reviews the range of current payment schemes and identifies the key requirements for instrument design. Building off these insights, the piece then examines the fundamental policy challenge of payments for environmental improvements. Despite their poor reputation among policy analysts as wasteful or inefficient subsidies, payment schemes are found throughout environmental law and policy, both in the U.S. and abroad. This Article takes such payments seriously, demonstrating that they should be favored over the more traditional regulatory and tax-based approaches in far more settings than commonly assumed.

Unintended Consequences of Medical Malpractice Damages Caps

Catherine M. Sharkey

Previous empirical studies have examined various aspects of medical malpractice damages caps, focusing primarily upon their overall effect in reducing insurance premium rates and plaintiffs’ recoveries, and (to a lesser degree) upon other effects such as physicians’ geographic choice of where to practice and the “anchoring” effect of caps that might inadvertently increase award amounts. This Article is the first to explore an unintended crossover effect that may be dampening the intended effects of caps. It posits that, where noneconomic damages are limited by caps, plaintiffs’ attorneys will more vigorously pursue, and juries will award, larger economic damages, which are often unbounded. Implicit in such a crossover effect is the malleability of various components of medical malpractice damages, which often are considered categorically distinct, particularly in the tort reform context. This Article challenges this conventional wisdom.

My original empirical analysis, using a comprehensive dataset of jury verdicts from 1992, 1996, and 2001, in counties located in twenty-two states, collected by the National Center for State Courts, concludes that the imposition of caps on noneconomic damages has no statistically significant effect on overall compensatory damages in medical malpractice jury verdicts or trial court judgments. This result is consistent with the crossover theory. Given the promulgation of noneconomic damages caps, the crossover effect may also partially explain the recently documented trend of rising economic (as opposed to noneconomic) damages in medical malpractice cases.

The Supreme Court During Crisis: How War Affects Only Non-War Cases

Lee Epstein, Daniel E. Ho, Gary King, Jeffrey A. Segal

Does the U.S. Supreme Court curtail rights and liberties when the nation’s security is under threat? In hundreds of articles and books, and with renewed fervor since September 11, 2001, members of the legal community have warred over this question. Yet, not a single large-scale, quantitative study exists on the subject. Using the best data available on the causes and outcomes of every civil rights and liberties case decided by the Supreme Court over the past six decades and employing methods chosen and tuned especially for this problem, our analyses demonstrate that when crises threaten the nation’s security, the justices are substantially more likely to curtail rights and liberties than when peace prevails. Yet paradoxically, and in contradiction to virtually every theory of crisis jurisprudence, war appears to affect only cases that are unrelated to the war. For these cases, the effect of war and other international crises is so substantial, persistent, and consistent that it may surprise even those commentators who long have argued that the Court rallies around the flag in times of crisis. On the other hand, we find no evidence that cases most directly related to the war are affected.

We attempt to explain this seemingly paradoxical evidence with one unifying conjecture. Instead of balancing rights and security in high stakes cases directly related to the war, the justices retreat to ensuring the institutional checks of the democratic branches. Since rights-oriented and process-oriented dimensions seem to operate in different domains and at different times, and often suggest different outcomes, the predictive factors that work for cases unrelated to the war fail for cases related to the war. If this conjecture is correct, federal judges should consider giving less weight to legal principles established during wartime for ordinary cases, and attorneys should see it as their responsibility to distinguish cases along these lines.
