New York University Law Review

Artificial Intelligence (AI)

Antitrust After the Coming Wave

Daniel A. Crane

A coming wave of general-purpose technologies, including artificial intelligence (“AI”), robotics, quantum computing, synthetic biology, energy expansion, and nanotechnology, is likely to fundamentally reshape the economy and erode the assumptions on which the antitrust order is predicated. First, AI-driven systems will vastly improve firms’ ability to detect (and even program) consumer preferences without the benefit of price signals, which will undermine the traditional information-producing benefit of competitive markets. Similarly, these systems will be able to determine comparative producer efficiency without relying on competitive signals. Second, AI systems will invert the salient characteristics of human managers, whose intentions are opaque but actions discernible. An AI’s “intentions”—its programmed objective functions—are easily discernible, but its actions or processing steps are a black box. Third, the near-infinite scalability of the technologies in the coming wave will likely result in extreme market concentration, with a few megafirms dominating. Finally, AI and related productive systems will be able to avoid traditional prohibitions on both collusion and exclusion, with the consequence that antitrust law’s core prohibitions will become ineffective. The cumulative effect of these tendencies of the coming wave likely will be to retire the economic order based on mandated competition. As in past cases of natural monopoly, some form of regulation will probably replace antitrust, but the forms of regulation are likely to look quite different. Rather than attempting to set a regulated firm’s prices by determining its costs and revenues, the regulatory future is more likely to involve direct regulation of an AI’s objective functions, for example by directing the AI to maximize social welfare and allocate the surplus created among different stakeholders of the firm.

Generative Interpretation

Yonathan Arbel, David A. Hoffman

We introduce generative interpretation, a new approach to estimating contractual
meaning using large language models. As AI triumphalism is the order of the day,
we proceed by way of grounded case studies, each illustrating the capabilities of these
novel tools in distinct ways. Taking well-known contracts opinions, and sourcing the
actual agreements that they adjudicated, we show that AI models can help factfinders
ascertain ordinary meaning in context, quantify ambiguity, and fill gaps in parties’
agreements. We also illustrate how models can calculate the probative value of
individual pieces of extrinsic evidence.

After offering best practices for the use of these models given their limitations, we
consider their implications for judicial practice and contract theory. Using large
language models permits courts to estimate what the parties intended cheaply and
accurately, and as such generative interpretation unsettles the current interpretative
stalemate. Their use responds to efficiency-minded textualists and justice-oriented
contextualists, who argue about whether parties will prefer cost and certainty or
accuracy and fairness. Parties—and courts—would prefer a middle path, in which
adjudicators strive to predict what the contract really meant, admitting just enough
context to approximate reality while avoiding unguided and biased assimilation of
evidence. As generative interpretation offers this possibility, we argue it can become
the new workhorse of contractual interpretation.