New York University Law Review

Technology

Big Data and Brady Disclosures

Brian Chen

Data makes the world go round. Now more than ever, routine police work depends on the collection and analysis of digital information. Law enforcement agencies possess vast stores of intelligence about who we are, where we go, and what we do. The proliferation of digital technology has transformed federal criminal procedure—from how police investigate crimes to how prosecutors prove them at trial. Courts and commentators have written much about the first part, but less so about the second. Together, they represent two sides of the same problem: constitutional doctrine lagging behind new technology, leading to suboptimal constraints on law enforcement conduct.

This Note explores the effects of digital technology on the nature and scope of federal prosecutors’ disclosure obligations under Brady v. Maryland. As police pass along more data to prosecutors—usually terabytes at a time—prosecutors face the difficult task of sifting through mountains of evidence to determine what is exculpatory or otherwise favorable to the defense. Often, prosecutors turn over their entire case file, knowing full well that defense counsel will fare no better. This state of affairs puts our adversarial system on shaky ground. This Note urges district courts to exercise greater oversight of the discovery process, requiring prosecutors to take reasonable precautions so exculpatory evidence comes to light.

Antitrust After the Coming Wave

Daniel A. Crane

A coming wave of general-purpose technologies, including artificial intelligence (“AI”), robotics, quantum computing, synthetic biology, energy expansion, and nanotechnology, is likely to fundamentally reshape the economy and erode the assumptions on which the antitrust order is predicated. First, AI-driven systems will vastly improve firms’ ability to detect (and even program) consumer preferences without the benefit of price signals, which will undermine the traditional information-producing benefit of competitive markets. Similarly, these systems will be able to determine comparative producer efficiency without relying on competitive signals. Second, AI systems will invert the salient characteristics of human managers, whose intentions are opaque but actions discernible. An AI’s “intentions”—its programmed objective functions—are easily discernible, but its actions or processing steps are a black box. Third, the near-infinite scalability of the technologies in the coming wave will likely result in extreme market concentration, with a few megafirms dominating. Finally, AI and related productive systems will be able to avoid traditional prohibitions on both collusion and exclusion, with the consequence that antitrust law’s core prohibitions will become ineffective. The cumulative effect of these tendencies of the coming wave likely will be to retire the economic order based on mandated competition. As in past cases of natural monopoly, some form of regulation will probably replace antitrust, but it is likely to take quite a different form. Rather than attempting to set a regulated firm’s prices by determining its costs and revenues, the regulatory future is more likely to involve direct regulation of an AI’s objective functions, for example by directing the AI to maximize social welfare and allocate the surplus created among different stakeholders of the firm.

Private Law in Unregulated Spaces

Elizabeth A. Rowe

This Essay expounds on the outsized role of private law in governing ownership of new technologies and data. As scholars lament gaps between law and technology, and the need for government regulation in these various spaces, private law has quietly intervened to essentially regulate key features related to ownership, control, and access. Whether such intervention is welcome, efficient, or effective probably depends on the context and is subject to debate. Nevertheless, this Essay provides a vivid illustration of the organic development of private ordering to occupy spaces left open by public law, and posits that the significance of this phenomenon, whether for better or worse, must not be lost in the weeds.

More specifically, the way in which contract law and intellectual property law have coalesced to define and control data ownership is striking. As a threshold matter, it is property ownership that allocates control of and access to data resources and ultimately enables monetization and value in the marketplace. This control extends to both the public and private spheres, and the attendant implications are far reaching.

Building on my recent work, this Essay will provide three exemplar contexts in which ‘private law creep’ has occurred, especially with respect to trade secrecy—the area of intellectual property law most likely to govern data transactions. Scrutiny of implantable medical devices, facial recognition technology, and algorithmic models in the criminal justice system yields one salient and pervasive observation: contracts rule. Despite the strong public interests implicated in these domains, none of them is regulated at the federal level. Instead, rights of access and ownership are governed by private law.

The Tort of Moving Fast and Breaking Things: A/B Testing’s Crucial Role in Social Media Litigation

Maya Konstantino

Social media has created an unregulated public health crisis. For a long time, social platforms have remained unchecked, mostly due to Section 230 of the Communications Decency Act, a controversial law which insulates online service providers from actions based on third-party content. The general consensus was that suing these companies would “break the internet.” Recently, however, as empirical evidence piles up showing the negative effects of these platforms, this dogma is coming under fire. Forty-one states and the District of Columbia have come together to sue Meta, and a large-scale MDL has made it past a motion to dismiss in the Northern District of California. This Essay argues that traditional product liability law is the most viable framework for holding social media platforms accountable. Looking at function over form, Meta manufactures a product, which it meticulously designs and markets to consumers. Further, the Essay argues that focusing on a platform’s use of A/B testing to tweak its addictive design will be imperative to the upcoming litigation. A/B tests can be used to demonstrate a platform’s knowledge of the harmful effects of its design choices. Moreover, internal results of A/B tests could provide proof of causation. Building on this knowledge, the Essay provides a roadmap for litigating future claims against social media companies.

Generative Interpretation

Yonathan Arbel, David A. Hoffman

We introduce generative interpretation, a new approach to estimating contractual meaning using large language models. As AI triumphalism is the order of the day, we proceed by way of grounded case studies, each illustrating the capabilities of these novel tools in distinct ways. Taking well-known contracts opinions, and sourcing the actual agreements that they adjudicated, we show that AI models can help factfinders ascertain ordinary meaning in context, quantify ambiguity, and fill gaps in parties’ agreements. We also illustrate how models can calculate the probative value of individual pieces of extrinsic evidence.

After offering best practices for the use of these models given their limitations, we consider their implications for judicial practice and contract theory. Using large language models permits courts to estimate what the parties intended cheaply and accurately, and as such generative interpretation unsettles the current interpretative stalemate. Their use responds to efficiency-minded textualists and justice-oriented contextualists, who argue about whether parties will prefer cost and certainty or accuracy and fairness. Parties—and courts—would prefer a middle path, in which adjudicators strive to predict what the contract really meant, admitting just enough context to approximate reality while avoiding unguided and biased assimilation of evidence. As generative interpretation offers this possibility, we argue it can become the new workhorse of contractual interpretation.

Beyond Social Media Analogues

Gregory M. Dickinson

The steady flow of social media cases to the Supreme Court reveals a nation reworking its fundamental relationship with technology. The cases raise a host of questions ranging from difficult to impossible: how to nurture a vibrant public square when a few tech giants dominate the flow of information, how social media can be at the same time free from conformist groupthink and protected against harmful disinformation campaigns, and how government and industry can cooperate on such problems without devolving toward censorship.

To such profound questions, this Essay offers a comparatively modest contribution—what not to do. Always the lawyer’s instinct is toward analogy, considering what has come before and how it reveals what should come next. Almost invariably, that is the right choice. The law’s cautious evolution protects society from disruptive change. But almost is not always, and with social media, disruptive change is already upon us. Using social media laws from Texas and Florida as a case study, this Essay suggests that social media’s distinct features render it poorly suited to analysis by analogy and argues that courts should instead shift their attention toward crafting legal doctrines targeted to address social media’s unique ills.

Whose Data, Whose Value? Simple Exercises in Data and Modeling Evaluation with Implications for Technology Law and Policy

Aileen Nielsen

Scholarship on the phenomena of big data and algorithmically driven digital environments has largely studied these technological and economic phenomena as monolithic practices, with little interest in the varied quality of contributions by data subjects and data processors. Taking a pragmatic, industry-inspired approach to measuring the quality of contributions, this work finds evidence for a wide range of relative value contributions by data subjects. In some cases, a very small proportion of data from a few data subjects is sufficient to achieve the same performance on a given task as would be achieved with a much larger data set. Likewise, algorithmic models generated by different data processors for the same task and with the same data resources show a wide range in quality of contribution, even in highly performance-incentivized conditions. In short, contrary to the trope of data as the new oil, data subjects, and indeed individual data points within the same data set, are neither equal nor fungible. Moreover, the role of talent and skill in algorithmic development is significant, as with other forms of innovation. Both of these observations have received little, if any, attention in discussions of data governance. In this essay, I present evidence that both data subjects and data processors exhibit significant variations in the measured value of their contributions to the standard Big Data pipeline. I then establish that such variations are worth considering in technology policy for privacy, competition, and innovation.

The observation of substantial variation among data subjects and data processors could be important in crafting appropriate law for the Big Data economy. Heterogeneity in value contribution is undertheorized in tech law scholarship, as are its implications for privacy law, competition policy, and innovation. The work concludes by highlighting some of these implications and posing an empirical research agenda to fill in the information needed to realize policies sensitive to the wide range of talent and skill exhibited by data subjects and data processors alike.

Regulating the Pedestrian Safety Crisis

Gregory H. Shill

In the 2010s, the United States entered a pedestrian safety crisis that is unique among wealthy nations. Deaths of people on foot surged more than 46% that decade, outpacing the increase in all other traffic deaths by nine to one. The early 2020s have seen an intensification of this trend. These fatalities magnify racial disparities, placing Black pedestrians at a two-thirds higher risk of being killed than their white counterparts. While the pedestrian safety crisis has many causes, there is growing evidence that the enlargement of the American vehicle has played a key role. Auto companies earn higher profit margins on large vehicles, and consumers prefer their greater creature comforts. But the size, height, and weight necessary for those comforts have been shown to make these vehicles far deadlier for those who have the misfortune of being struck by them. Carmakers do not disclose these risks to the car-buying public—but even if they did, individual consumers lack appropriate incentives to internalize the social costs of the vehicles they buy. Like pollution, this negative externality presents a classic case for regulation. Yet America’s vehicle safety regulator (the National Highway Traffic Safety Administration, or NHTSA), conceived in the wake of the Ralph Nader consumer revolution of the 1960s, considers the safety of pedestrians—who are third parties rather than consumers—almost completely alien to its mission.

This Essay presents a different model, based on NHTSA’s own statutory mandate to protect “the public” as a whole from risks posed by motor vehicles. It argues that pedestrians are, quintessentially, a group whose well-being vehicle safety regulators should prioritize—even though when acting as pedestrians they are not consumers of the regulated product. Pedestrians are maximally exposed to dangerous vehicles, and by definition they benefit from neither vehicle comforts nor most occupant-focused safety features. They may even be endangered by some of them. NHTSA should expressly incorporate the welfare of pedestrians and other non-occupants into its mission. To that end, this Essay develops four policy actions NHTSA should undertake as part of a policy update it launched in 2022: include pedestrian safety in its marquee safety evaluation program; regulate the design of vehicles to protect people outside of them; use technology to protect pedestrians; and update its safety tests so they are more representative of common fatal pedestrian crash victims and scenarios.

Disability and Design

Christopher Buccafusco

When scholars contemplate the legal tools available to policymakers for encouraging innovation, they primarily think about patents. If they are keeping up with the most recent literature, they may also consider grants, prizes, and taxes as means to increase the supply of innovation. But the innovation policy toolkit is substantially deeper than that. To demonstrate its depth, this Article explores the evolution of designs that help people with disabilities access the world around them. From artificial limbs to the modern wheelchair and the reshaping of the built environment, a variety of legal doctrines have influenced, for better and for worse, the pace and direction of innovation for accessible design.

This Article argues that two of the most important drivers of innovation for accessible design have been social welfare laws and antidiscrimination laws. Both were responsible, in part, for the revolution in accessibility that occurred in the second half of the twentieth century. Unlike standard innovation incentives, however, these laws operate on the demand side of the market. Social welfare laws and antidiscrimination laws increase the ability and willingness of parties to pay for accessible technology, ultimately leading to greater supply. But in doing so, these laws generate a different distribution of the costs and benefits of innovation than supply-side incentives. They also produce their own sets of innovation distortions by allowing third parties to make decisions about the designs that people with disabilities have to use.

The law can promote innovation, and it can hinder it. For example, the law’s relationship to the wheelchair, the most important accessibility innovation of the twentieth century, produced both results. Policymakers have choices about which legal incentive doctrines they can use and how they can use them. This Article evaluates those tools, and it provides guidelines for their use to encourage accessible technology in particular and innovation generally.

Should Law Subsidize Driving?

Gregory H. Shill

A century ago, captains of industry and their allies in government launched a social experiment in urban America: the abandonment of mass transit in favor of a new personal technology, the private automobile. Decades of investment in this shift have created a car-centric landscape with Dickensian consequences. In the United States, motor vehicles are now the leading killer of children and the top producer of greenhouse gases. Each year, they rack up trillions of dollars in direct and indirect costs and claim nearly 100,000 American lives via crashes and pollution, with the most vulnerable paying a disproportionate price. The appeal of the car’s convenience and the failure to effectively manage it have created a public health catastrophe. Many of the automobile’s social costs originate in individual preferences, but an overlooked share is encouraged—indeed enforced—by law. Yes, the United States is car-dependent by choice. But it is also car-dependent by law. This Article conceptualizes this problem and offers a way out. It begins by identifying a submerged, disconnected system of rules that furnish indirect yet extravagant subsidies to driving. These subsidies lower the price of driving by comprehensively reassigning its costs to non-drivers and society at large. They are found in every field of law, from traffic law to land use regulation to tax, tort, and environmental law. Law’s role is not primary, and at times it is even constructive. But where it is destructive, it is uniquely so: Law not only inflames a public health crisis but legitimizes it, ensuring the continuing dominance of the car. The Article urges a reorientation of law away from this system of automobile supremacy in favor of consensus social priorities, such as health, prosperity, and equity.