New York University Law Review

Technology

Software Torts and Software Contracts: Reframing the Developer’s Duty

Micah R. Musser

Flawed software costs businesses and consumers millions of dollars every year, but
existing tort law does not generally require developers to compensate others for
economic injuries caused by bad code. Discontented scholars and policy analysts
have produced an array of proposals that would force developers to pay for harms
flowing from vulnerabilities that hackers exploit to injure software users. This basic
model—which would impose a duty on developers to eliminate security-related
vulnerabilities but not other types of software flaws—dominates legislative and
academic debates about reform. This Note argues that this focus is misconceived. It
is technically ambiguous, doctrinally anomalous, and would throw national security
and consumer welfare goals into conflict. Liability proponents have focused on
it because they recognize that imposing new duties on software developers must
realistically be limited in some way. Although the vulnerability-based limitation is
ultimately misguided, this Note proposes that a party-based limitation restricting
recovery to parties in near-privity is more defensible. Focusing on party-based
limitations on duty instead of a vulnerability-based limitation would require
thinking of software development not as a product, but rather as a professional
practice subject to malpractice-like standards. This reframing, I argue, better aligns
proposals for expanding software developers’ duties with existing tort doctrine
while focusing a liability evaluation on the most important aspects of the software
development process.

Crowdsourced War

Oona A. Hathaway, Inbar Pe’er, Catherine Vera

Today, civilians can participate in war as never before. Through smartphones and
the internet, civilians can now contribute directly to military operations, whether they
are in an active conflict zone or on the other side of the globe. A civilian can, for
example, use an app to help military forces intercept threats, join a virtual network of
volunteers conducting cyber operations against a party to an armed conflict, or use
a crowdfunding site to donate funds to provide weapons to combatants. We call this
revolution in war fighting “Crowdsourced War.” This Article identifies this growing
phenomenon, demonstrates how it creates extraordinary new risks for civilians, and
recommends critical steps that States like the United States must take to address those
risks.

In the wake of the September 11, 2001, attacks on the United States, new interpretations
of the law governing armed conflict took shape. Applying these new interpretations
to Crowdsourced War, this Article shows how civilians today may unknowingly
forfeit their protected status and be regarded as legitimate military objectives under
international law. Civilians participating in Crowdsourced War not only unwittingly
endanger themselves but also endanger civilians living and working alongside them.
The spread of Crowdsourced War can also lead combatants to suspect all civilians of
being participants in war—and thus lawful targets.

To address these problems, we argue it is time to adopt new rules for Crowdsourced
War. States, including the United States, should revisit broad interpretations of the
law first adopted for a different kind of conflict—interpretations that now make vast
numbers of civilians newly vulnerable. States must also take greater responsibility
when they invite civilians to participate in Crowdsourced War, including by
ensuring that they do not put civilians at unnecessary risk and by informing them
of the consequences they may face. Finally, international humanitarian law must be
revised to account for this sea change in the way wars are fought. The International
Committee of the Red Cross, together with States like the United States that are
committed to the rule of law, should renew efforts to tighten standards for targeting
civilians. This is necessary to ensure that the era of Crowdsourced War does not
become the era in which the distinction between civilian and combatant completely
evaporates.

Big Data and Brady Disclosures

Brian Chen

Data makes the world go round. Now more than ever, routine police work depends on the collection and analysis of digital information. Law enforcement agencies possess vast troves of intelligence about who we are, where we go, and what we do. The proliferation of digital technology has transformed federal criminal procedure—from how police investigate crimes to how prosecutors prove them at trial. Courts and commentators have written much about the first part, but far less about the second. Together, they represent two sides of the same problem: constitutional doctrine lagging behind new technology, leading to suboptimal constraints on law enforcement conduct.

This Note explores the effects of digital technology on the nature and scope of federal prosecutors’ disclosure obligations under Brady v. Maryland. As police pass along more data to prosecutors—usually terabytes at a time—prosecutors face the difficult task of sifting through mountains of evidence to determine what is exculpatory or otherwise favorable to the defense. Often, prosecutors turn over their entire case file, knowing full well that defense counsel will fare no better. This state of affairs puts our adversarial system on shaky ground. This Note urges district courts to exercise greater oversight of the discovery process, requiring prosecutors to take reasonable precautions so exculpatory evidence comes to light.

Antitrust After the Coming Wave

Daniel A. Crane

A coming wave of general-purpose technologies, including artificial intelligence (“AI”), robotics, quantum computing, synthetic biology, energy expansion, and nanotechnology, is likely to fundamentally reshape the economy and erode the assumptions on which the antitrust order is predicated. First, AI-driven systems will vastly improve firms’ ability to detect (and even program) consumer preferences without the benefit of price signals, which will undermine the traditional information-producing benefit of competitive markets. Similarly, these systems will be able to determine comparative producer efficiency without relying on competitive signals. Second, AI systems will invert the salient characteristics of human managers, whose intentions are opaque but actions discernible. An AI’s “intentions”—its programmed objective functions—are easily discernible, but its actions or processing steps are a black box. Third, the near-infinite scalability of the technologies in the coming wave will likely result in extreme market concentration, with a few megafirms dominating. Finally, AI and related productive systems will be able to avoid traditional prohibitions on both collusion and exclusion, with the consequence that antitrust law’s core prohibitions will become ineffective. The cumulative effect of these tendencies of the coming wave likely will be to retire the economic order based on mandated competition. As in past cases of natural monopoly, some form of regulation will probably replace antitrust, but the forms of regulation are likely to look quite different. Rather than attempting to set a regulated firm’s prices by determining its costs and revenues, the regulatory future is more likely to involve direct regulation of an AI’s objective functions, for example by directing the AI to maximize social welfare and allocate the surplus created among different stakeholders of the firm.
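The Article's closing suggestion, regulating an AI's objective function directly rather than the firm's prices, can be made concrete with a toy example. The sketch below is purely illustrative and not drawn from the Article: the stakeholder categories, weights, and surplus figures are assumptions standing in for whatever a regulator might actually mandate in place of pure profit maximization.

```python
# Hypothetical sketch of a regulator-mandated objective function.
# Stakeholder categories, weights, and figures are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Outcome:
    """Surplus (in dollars) that a candidate pricing policy would generate."""
    consumer_surplus: float
    producer_surplus: float
    worker_surplus: float


def mandated_objective(outcome: Outcome, weights: dict[str, float]) -> float:
    """Weighted social-welfare objective a regulator might impose in place
    of pure profit maximization."""
    return (weights["consumers"] * outcome.consumer_surplus
            + weights["producers"] * outcome.producer_surplus
            + weights["workers"] * outcome.worker_surplus)


# Regulator-set weights: consumer and worker surplus count fully,
# producer surplus is discounted.
WEIGHTS = {"consumers": 1.0, "producers": 0.5, "workers": 1.0}

# The firm's AI would select the policy scoring highest on the mandated
# objective, not the one maximizing producer surplus alone.
candidates = [
    Outcome(consumer_surplus=800, producer_surplus=900, worker_surplus=100),
    Outcome(consumer_surplus=1200, producer_surplus=600, worker_surplus=300),
]
best = max(candidates, key=lambda o: mandated_objective(o, WEIGHTS))
print(best)
```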

Private Law in Unregulated Spaces

Elizabeth A. Rowe

This Essay expounds on the outsized role of private law in governing ownership of new technologies and data. As scholars lament gaps between law and technology, and the need for government regulation in these various spaces, private law has quietly intervened to essentially regulate key features related to ownership, control, and access. Whether such intervention is welcome, efficient, or effective probably depends on the context and is subject to debate. Nevertheless, this Essay provides an excellent illustration of the organic development of private ordering to occupy spaces left open by public law, and posits that the significance of this phenomenon, whether for better or worse, cannot be lost in the weeds.

More specifically, the way in which contract law and intellectual property law have coalesced to define and control data ownership is striking. As a threshold matter, it is property ownership that allocates control of and access to data resources and ultimately enables monetization and value in the marketplace. This control extends to both the public and private spheres, and the attendant implications are far reaching.

Building on my recent work, this Essay will provide three exemplar contexts in which ‘private law creep’ has occurred, especially with respect to trade secrecy—the area of intellectual property law most likely to govern data transactions. Scrutiny of implantable medical devices, facial recognition technology, and algorithmic models in the criminal justice system yields one salient and pervasive observation: contracts rule. Despite the strong public interests implicated in these domains, none is regulated at the federal level. Instead, rights of access and ownership are governed by private law.

The Tort of Moving Fast and Breaking Things: A/B Testing’s Crucial Role in Social Media Litigation

Maya Konstantino

Social media has created an unregulated public health crisis. For a long time, social platforms have remained unchecked, mostly due to Section 230 of the Communications Decency Act, a controversial law that insulates online service providers from actions based on third-party content. The general consensus was that suing these companies would “break the internet.” Recently, however, as empirical evidence piles up showing the negative effects of these platforms, this dogma is coming under fire. Forty-one states and the District of Columbia have come together to sue Meta, and a large-scale MDL has made it past a motion to dismiss in the Northern District of California. This essay argues that traditional product liability law is the most viable framework for holding social media platforms accountable. Looking at function over form, Meta manufactures a product, which it meticulously designs and markets to consumers. Further, the essay argues that focusing on platforms’ use of A/B testing (controlled experiments that expose different user groups to different design variants) to tweak their addictive designs will be imperative to the upcoming litigation. A/B tests can be used to demonstrate a platform’s knowledge of the harmful effects of its design choices. Moreover, internal results of A/B tests could provide proof of causation. Building on this knowledge, the essay provides a roadmap for litigating future claims against social media companies.
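Because the essay's litigation theory turns on what A/B tests can show, a small numerical sketch may help. The code below is hypothetical: the variant labels, the wellbeing metric, and the counts are invented for illustration, and the analysis is a standard two-proportion z-test rather than anything drawn from an actual platform experiment. It illustrates the kind of internal result that could evidence both knowledge (the platform measured the lift in harm) and causation (the design variant produced it).

```python
# Minimal sketch of an A/B-test readout comparing a harm-related metric
# between two design variants. All names and numbers are hypothetical.

import math


def two_proportion_ztest(harmed_a: int, n_a: int, harmed_b: int, n_b: int):
    """Two-sided z-test for the difference in harm rates between
    variant A (e.g., current feed ranking) and variant B (e.g., a more
    engagement-optimized ranking)."""
    p_a, p_b = harmed_a / n_a, harmed_b / n_b
    pooled = (harmed_a + harmed_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return p_b - p_a, z, p_value


# Hypothetical experiment: share of users reporting a negative-wellbeing signal.
diff, z, p = two_proportion_ztest(harmed_a=480, n_a=50_000,
                                  harmed_b=620, n_b=50_000)
print(f"lift in harm rate = {diff:.4%}, z = {z:.2f}, p = {p:.4f}")
```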

Generative Interpretation

Yonathan Arbel, David A. Hoffman

We introduce generative interpretation, a new approach to estimating contractual
meaning using large language models. As AI triumphalism is the order of the day,
we proceed by way of grounded case studies, each illustrating the capabilities of these
novel tools in distinct ways. Taking well-known contracts opinions, and sourcing the
actual agreements that they adjudicated, we show that AI models can help factfinders
ascertain ordinary meaning in context, quantify ambiguity, and fill gaps in parties’
agreements. We also illustrate how models can calculate the probative value of
individual pieces of extrinsic evidence.
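As a rough illustration of how such tools might be driven in practice, the sketch below samples a language model repeatedly on a disputed term and treats the split across candidate readings as a crude ambiguity estimate. This is not the authors' actual pipeline: the model name, prompt, and clause (echoing the famous "chicken" dispute) are placeholders, and it assumes the OpenAI Python client (openai>=1.0) with an API key in the environment.

```python
# Hypothetical sketch: estimate ambiguity by sampling a model many times
# and counting how often it selects each candidate reading.

from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CLAUSE = "Seller shall deliver 100 tons of chicken to Buyer on or before June 1."
QUESTION = (
    "In the clause above, does 'chicken' mean (A) young broiler chicken only, "
    "or (B) any bird of that genus, including stewing chicken? Answer A or B only."
)


def sample_readings(n: int = 20) -> Counter:
    """Sample the model n times and count which reading it selects."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": f"{CLAUSE}\n\n{QUESTION}"}],
        temperature=1.0,
        n=n,
        max_tokens=1,
    )
    return Counter(choice.message.content.strip().upper()[:1] for choice in resp.choices)


votes = sample_readings()
total = sum(votes.values())
print({reading: count / total for reading, count in votes.items()})
```

A near 50/50 split across samples would suggest genuine ambiguity; a lopsided split would suggest an ordinary meaning in context.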

After offering best practices for the use of these models given their limitations, we
consider their implications for judicial practice and contract theory. Using large
language models permits courts to estimate what the parties intended cheaply and
accurately, and as such generative interpretation unsettles the current interpretative
stalemate. Their use responds to efficiency-minded textualists and justice-oriented
contextualists, who argue about whether parties will prefer cost and certainty or
accuracy and fairness. Parties—and courts—would prefer a middle path, in which
adjudicators strive to predict what the contract really meant, admitting just enough
context to approximate reality while avoiding unguided and biased assimilation of
evidence. As generative interpretation offers this possibility, we argue it can become
the new workhorse of contractual interpretation.

Beyond Social Media Analogues

Gregory M. Dickinson

The steady flow of social media cases to the Supreme Court reveals a nation reworking its fundamental relationship with technology. The cases raise a host of questions ranging from difficult to impossible: how to nurture a vibrant public square when a few tech giants dominate the flow of information, how social media can be at the same time free from conformist groupthink and protected against harmful disinformation campaigns, and how government and industry can cooperate on such problems without devolving toward censorship.

To such profound questions, this Essay offers a comparatively modest contribution—what not to do. Always the lawyer’s instinct is toward analogy, considering what has come before and how it reveals what should come next. Almost invariably, that is the right choice. The law’s cautious evolution protects society from disruptive change. But almost is not always, and with social media, disruptive change is already upon us. Using social media laws from Texas and Florida as a case study, this Essay suggests that social media’s distinct features render it poorly suited to analysis by analogy and argues that courts should instead shift their attention toward crafting legal doctrines targeted to address social media’s unique ills.

Whose Data, Whose Value? Simple Exercises in Data and Modeling Evaluation with Implications for Technology Law and Policy

Aileen Nielsen

Scholarship on the phenomena of big data and algorithmically-driven digital environments has largely studied these technological and economic phenomena as monolithic practices, with little interest in the varied quality of contributions by data subjects and data processors. Taking a pragmatic, industry-inspired approach to measuring the quality of contributions, this work finds evidence for a wide range of relative value contributions by data subjects. In some cases, a very small proportion of data from a few data subjects is sufficient to achieve the same performance on a given task as would be achieved with a much larger data set. Likewise, algorithmic models generated by different data processors for the same task and with the same data resources show a wide range in quality of contribution, even in highly performance-incentivized conditions. In short, contrary to the trope of data as the new oil, data subjects, and indeed individual data points within the same data set, are neither equal nor fungible. Moreover, the role of talent and skill in algorithmic development is significant, as with other forms of innovation. Both of these observations have received little, if any, attention in discussions of data governance. In this essay, I present evidence that both data subjects and data controllers exhibit significant variations in the measured value of their contributions to the standard Big Data pipeline. I then establish that such variations are worth considering in technology policy for privacy, competition, and innovation.
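The data-subset observation described above can be reproduced in miniature with off-the-shelf tools. The sketch below is a stand-in, not the essay's actual experiments: it trains the same scikit-learn classifier on growing fractions of a public dataset and shows the kind of saturation pattern the essay describes, where a modest slice of the data already approaches full-data performance.

```python
# Stand-in sketch: how quickly does test accuracy saturate as more
# training data is added? Dataset and model are illustrative choices.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for fraction in (0.05, 0.1, 0.25, 0.5, 1.0):
    n = max(1, int(fraction * len(X_train)))
    model = LogisticRegression(max_iter=5000).fit(X_train[:n], y_train[:n])
    print(f"{fraction:>4.0%} of training data -> "
          f"test accuracy {model.score(X_test, y_test):.3f}")
```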

The observation of substantial variation among data subjects and data processors could be important in crafting appropriate law for the Big Data economy. Heterogeneity in value contribution is undertheorized in tech law scholarship, and it carries implications for privacy law, competition policy, and innovation. The work concludes by highlighting some of these implications and posing an empirical research agenda to fill in the information needed to realize policies sensitive to the wide range of talent and skill exhibited by data subjects and data processors alike.

Regulating the Pedestrian Safety Crisis

Gregory H. Shill

In the 2010s, the United States entered a pedestrian safety crisis that is unique among wealthy nations. Deaths of people on foot surged more than 46% that decade, outpacing the increase in all other traffic deaths by nine to one. The early 2020s have seen an intensification of this trend. These fatalities magnify racial disparities, placing Black pedestrians at a two-thirds higher risk of being killed than their white counterparts. While the pedestrian safety crisis has many causes, there is growing evidence that the enlargement of the American vehicle has played a key role. Auto companies earn higher profit margins on large vehicles, and consumers prefer their greater creature comforts. But the size, height, and weight necessary for those comforts have been shown to make these vehicles far deadlier for those who have the misfortune of being struck by them. Carmakers do not disclose these risks to the car-buying public—but even if they did, individual consumers lack appropriate incentives to internalize the social costs of the vehicles they buy. Like pollution, this negative externality presents a classic case for regulation. Yet America’s vehicle safety regulator (the National Highway Traffic Safety Administration, or NHTSA), conceived in the wake of the Ralph Nader consumer revolution of the 1960s, considers the safety of pedestrians—who are third parties rather than consumers—almost completely alien to its mission.

This Essay presents a different model, based on NHTSA’s own statutory mandate to protect “the public” as a whole from risks posed by motor vehicles. It argues that pedestrians are, quintessentially, a group whose well-being vehicle safety regulators should prioritize—even though when acting as pedestrians they are not consumers of the regulated product. Pedestrians are maximally exposed to dangerous vehicles, and by definition they benefit from neither vehicle comforts nor most occupant-focused safety features. They may even be endangered by some of them. NHTSA should expressly incorporate the welfare of pedestrians and other non-occupants into its mission. To that end, this Essay develops four policy actions NHTSA should undertake as part of a policy update it launched in 2022: include pedestrian safety in its marquee safety evaluation program; regulate the design of vehicles to protect people outside of them; use technology to protect pedestrians; and update its safety tests so they are more representative of common fatal pedestrian crash victims and scenarios.