New York University
Law Review


Automating Contract Law

George S. Geis

The study of contract law is undergoing a difficult transition as it moves from the theoretical to the empirical. Over the past few decades scholars have focused largely on developing economic theories that offer a normative approach to setting the legal rules governing voluntary exchange. The time has now come to test whether these theories provide a meaningful basis for choosing our laws—in other words, to ask whether empirical data supports the theoretical models that contracts scholars have posited. Unfortunately, this type of empirical analysis has proven exceptionally difficult to conduct, and some commentators are beginning to question whether it will ever be possible to test and revise our economic theories of contract in a meaningful manner. Yet the problem of harnessing information to support complex decisions is not unique to contract law. This Essay explores the possibility that recent technological developments from the field of organizational knowledge management—including advances in meaning-based computing algorithms—will soon make it easier to conduct empirical work in contract law on a much larger scale.

Intellectual Property for Market Experimentation

Michael Abramowicz, John F. Duffy

Intellectual property protects investments in the production of information, but the relevant literature has largely neglected one type of information that intellectual property might protect: information about the market success of goods and services. A first entrant into a market often cannot prevent other firms from free riding on the information its entry reveals about consumer demand and market feasibility. Despite the existence of some first-mover advantages, the incentives to be the first entrant into a market may sometimes be inefficiently low, thereby giving rise to a net first-mover disadvantage that discourages innovation. Intellectual property may counteract this inefficiency by providing market exclusivity, thus promoting earlier market entry and increasing the level of entrepreneurial activity in the economy. The goal of encouraging market experimentation helps to explain certain puzzling aspects of current intellectual property doctrine and provides a coherent basis for appreciating some of the current criticisms of intellectual property rights.

Toward One America: A Vision in Law

The Honorable J. Harvie Wilkinson III

Madison Lecture

In his Madison Lecture, Judge Wilkinson urges a new purpose for American law: the explicit promotion of a stronger sense of national cohesion and unity. He argues that the judicial branch should actively seek to promote this nationalizing purpose and suggests seven different ways for federal courts to do so. He contends further that a nationalizing mission for law is needed at this moment in American history to counteract the demographic divisions and polarizing tendencies of our polity. This purpose need not entail the abdication of traditional values of judicial restraint, should not mean the abandonment of the traditional American credo of unity through pluralism, and must not require the sacrifice of the law’s historic commitment to the preservation of order and the protection of liberty. But the need for a judicial commitment to foster a stronger American identity is clear. The day when courts and judges could be indifferent to the dangers of national fragmentation and disunion is long gone.

Trademark Litigation as Consumer Conflict

Michael Grynberg

Trademark litigation typically unfolds as a battle between competing sellers who argue over whether the defendant’s conduct is likely to confuse consumers. This is an unfair fight. In the traditional narrative, the plaintiff defends her trademark while simultaneously protecting consumers at risk for confusion. The defendant, relatively speaking, stands alone. The resulting “two-against-one” storyline gives short shrift to the interests of nonconfused consumers who may have a stake in the defendant’s conduct. As a result, courts are too receptive to nontraditional trademark claims where the case for consumer harm is questionable. Better outcomes are available by appreciating trademark litigation’s parallel status as a conflict between consumers. This view treats junior and senior trademark users as proxies for different consumer classes and recognizes that remedying likely confusion among one group of consumers may cause harm to others. Focusing on the interests of benefited and harmed consumers also minimizes the excessive weight given to moral rhetoric in adjudicating trademark cases. Consideration of trademark’s consumer-conflict dimension is therefore a useful device for critiquing trademark’s expansion and assessing future doctrinal developments.

Two and Twenty: Taxing Partnership Profits in Private Equity Funds

Victor Fleischer

Private equity fund managers take a share of the profits of the partnership as the equity portion of their compensation. The tax rules for compensating general partners create a planning opportunity for managers who receive the industry standard “two and twenty” (a two percent management fee and twenty percent profits interest). By taking a portion of their pay in the form of partnership profits, fund managers defer income derived from their labor efforts and convert it from ordinary income into long-term capital gain. This quirk in the tax law allows some of the richest workers in the country to pay tax on their labor income at a low rate. Changes in the investment world—the growth of private equity funds, the adoption of portable alpha strategies by institutional investors, and aggressive tax planning—suggest that reconsideration of the partnership profits puzzle is overdue.

While there is ample room for disagreement about the scope and mechanics of the reform alternatives, this Article establishes that the status quo is an untenable position as a matter of tax policy. Among the various alternatives, perhaps the best starting point is a baseline rule that would treat carried interest distributions as ordinary income. Alternatively, Congress could adopt a more complex “Cost-of-Capital Method” that would convert a portion of carried interest into ordinary income on an annual basis, or Congress could allow fund managers to elect into either the ordinary income or “Cost-of-Capital Method.” While this Article suggests that treating distributions as ordinary income may be the best, most flexible approach, any of these alternatives would be superior to the status quo. These alternatives would tax carried interest distributions to fund managers in a manner that more closely matches how our tax system treats other forms of compensation, thereby improving economic efficiency and discouraging wasteful regulatory gamesmanship. These changes would also reconcile private equity compensation with our progressive tax system and widely held principles of distributive justice.

Are All Legal Probabilities Created Equal?

Yuval Feldman, Doron Teichman

At the core of the economic analysis of law lies the concept of expected sanctions, which are calculated by multiplying the severity of the sanction that is applied to wrongdoers by the probability that it will be applied. This probability is the product of several sequential probabilities involving the different actors responsible for sanctioning wrongdoers (e.g., police, prosecutors, judges, jurors, etc.). Generally, legal economists treat different legal probabilities as fungible, simply multiplying them much like any other sequential probabilistic situation. This Article challenges this assumption, demonstrating that people perceive and are affected by different types of legal probabilities in distinct ways. More specifically, it shows that uncertainty associated with the substance of the law and uncertainty associated with imperfect enforcement should not be treated equivalently.

To demonstrate this point, this Article presents a series of between-subjects experimental surveys that measure and compare participants’ attitudes toward compliance in conditions of uncertainty. Study participants—several hundred students from Israel and the United States—answered questions in the context of one of several variations on the same hypothetical scenario. While the expected sanction was the same in each variation, the source of uncertainty differed. These studies confirmed that people are less likely to comply when uncertainty stems from the imprecision of law’s substance than when uncertainty stems from the imperfect enforcement of clear law.

Originalism Is Bunk

Mitchell N. Berman

Critical analysis of originalism should start by confronting a modest puzzle: Most commentators suppose that originalism is deeply controversial, while others complain that it means too many things to mean anything at all. Is one of these views false? If not, how can we square the term’s ambiguity with the sense that it captures a subject of genuine debate? Perhaps self-professed originalists champion a version of originalism that their critics don’t reject, while the critics challenge a version that proponents don’t maintain.

Contemporary originalists disagree about many things: which feature of the Constitution’s original character demands fidelity (framers’ intent, ratifiers’ understanding, or public meaning); why such fidelity is required; and whether this interpretive obligation binds judges alone or citizens, legislators, and executive officials too. But on one dimension of potential variability—the dimension of strength—originalists are mostly united: They believe that those who follow some aspect of a provision’s original character must give that original aspect priority over all other considerations (with a possible exception for continued adherence to non-originalist judicial precedents). That is, when the original meaning (or intent, etc.) is adequately discernible, the interpreter must follow it. This is the thesis that self-professed originalists maintain and that their critics (the non-originalists) deny.

Non-originalists have challenged this thesis on varied wholesale grounds, which include: that the target of the originalist search is undiscoverable or nonexistent; that originalism is self-refuting because the framers intended that the Constitution not be interpreted in an originalist vein; and that originalism yields bad outcomes. This Article proceeds differently. Instead of mounting a global objection—one purporting to hold true regardless of the particular arguments on which proponents of originalism rely—I endeavor to catalogue and critically assess the varied arguments proffered in originalism’s defense.

Those arguments are of two broad types—hard and soft. Originalism is “hard” when grounded on reasons that purport to render it (in some sense) inescapably true; it is “soft” when predicated on contingent and contestable weighings of its costs and benefits relative to other interpretive approaches. That is, hard arguments seek to show that originalism reflects some sort of conceptual truth or follows logically from premises the interlocutor already can be expected to accept; soft arguments aim to persuade others to revise their judgments of value or their empirical or predictive assessments. The most common hard arguments contend that originalism is entailed either by intentionalism or by binding constitutionalism. Soft arguments claim that originalist interpretation best serves diverse values like democracy and the rule of law. I seek to show that the hard arguments for originalism are false and that the soft arguments are implausible.

The upshot is not that constitutional interpretation should disregard framers’ intentions, ratifiers’ understandings, or original public meanings. Of course we should care about these things. But originalism is a demanding thesis. We can take the original character of the Constitution seriously without treating it as dispositive. That original intents and meanings matter is not enough to render originalism true.

The Disutility of Injustice

Paul H. Robinson, Geoffrey P. Goodwin, Michael D. Reisig

For more than half a century, the retributivists and the crime-control instrumentalists have seen themselves as being in an irresolvable conflict. Social science increasingly suggests, however, that this need not be so. Doing justice may be the most effective means of controlling crime. Perhaps partially in recognition of these developments, the American Law Institute’s recent amendment to the Model Penal Code’s “purposes” provision—the only amendment to the Model Code in the forty-eight years since its promulgation—adopts desert as the primary distributive principle for criminal liability and punishment.

That shift to desert has prompted concerns by two groups that, ironically, have been traditionally opposed to each other. The first group—those concerned with what they see as the over-punitiveness of current criminal law—worries that setting desert as the dominant distributive principle means continuing the punitive doctrines they find so objectionable, and perhaps making things worse. The second group—those concerned with ensuring effective crime control—worries that a shift to desert will create many missed crime-control opportunities and will increase avoidable crime.

The first group’s concern about over-punitiveness rests upon an assumption that the current punitive crime-control doctrines of which it disapproves are a reflection of the community’s naturally punitive intuitions of justice. However, as Study 1 makes clear, today’s popular crime-control doctrines in fact seriously conflict with people’s intuitions of justice by exaggerating the punishment deserved.

The second group’s concern that a desert principle will increase avoidable crime exemplifies the common wisdom of the past half-century that ignoring justice in pursuit of crime control through deterrence, incapacitation of the dangerous, and other such coercive crime-control programs is cost-free. However, Studies 2 and 3 suggest that doing injustice has real crime-control costs. Deviating from the community’s shared principles of justice undermines the system’s moral credibility and thereby undermines its ability to gain cooperation and compliance and to harness the powerful forces of social influence and internalized norms.

The studies reported here provide assurance to both groups. A shift to desert is not likely to undermine the criminal justice system’s crime-control effectiveness, and indeed may enhance it; nor is it likely to increase the system’s punitiveness, and indeed may reduce it.

Debunking the Purchaser Welfare Account of Section 2 of the Sherman Act: How Harvard Brought Us a Total Welfare Standard and Why We Should Keep It

Alan J. Meese

The last several years have seen a vigorous debate among antitrust scholars and practitioners about the appropriate standard for evaluating the conduct of monopolists under section 2 of the Sherman Act. While most of the debate over possible standards has focused on the empirical question of each standard’s economic utility, this Article undertakes a somewhat different task: It examines the normative benchmark that courts have actually chosen when adjudicating section 2 cases. This Article explores three possible benchmarks—producer welfare, purchaser welfare, and total welfare—and concludes that courts have opted for a total welfare normative approach to section 2 since the formative era of antitrust law. Moreover, this Article will show that the commitment to maximizing total social wealth is not a recent phenomenon associated with Robert Bork and the Chicago School of antitrust analysis. Instead, it was the Harvard School that led the charge for a total welfare approach to antitrust generally and under section 2 in particular. The normative consensus between Chicago and Harvard and parallel case law is by no means an accident; rather, it reflects a deeply rooted desire to protect practices—particularly “competition on the merits”—that produce significant benefits in the form of enhanced resource allocation, without regard to the ultimate impact on purchasers in the monopolized market. Those who advocate repudiation of the longstanding scholarly and judicial consensus reflected in the total welfare approach to section 2 analysis bear the heavy burden of explaining why courts should, despite considerations of stare decisis, suddenly reverse themselves and adopt such a different approach for the very first time, over a century after passage of the Act.

Secondary Considerations in Nonobviousness Analysis: The Use of Objective Indicia Following KSR v. Teleflex

Natalie A. Thomas

One of the basic requirements for patenting an invention is that the invention be nonobvious. Following the Supreme Court’s decision in Graham v. John Deere, secondary considerations—also known as objective indicia of nonobviousness—have been considered when determining whether an invention is nonobvious. Secondary considerations provide tangible evidence of the economic and motivational issues relevant to the nonobviousness of an invention. Types of secondary-considerations evidence include commercial success, long-felt but unmet need, and copying by competitors. For many years, the Federal Circuit’s teaching, suggestion, or motivation test often eliminated the need for the court to rely on secondary considerations in the obviousness inquiry. Due to the Federal Circuit’s stringent application of this test, the obviousness inquiry was generally resolved by examining the prior art.

In 2007, the Supreme Court decided KSR v. Teleflex, which endorsed a flexible obviousness analysis and rejected the Federal Circuit’s strict application of the teaching, suggestion, or motivation test. Following KSR, scholars predicted that secondary-considerations evidence would provide a critical tool for patentees seeking to demonstrate the nonobviousness of an invention. Inspired by that prediction, this Note evaluates how secondary-considerations evidence has been utilized in the first few years post-KSR. It finds that the Federal Circuit has continued to impose stringent relevancy requirements on the use of secondary-considerations evidence, and that it remains difficult for patentees to employ secondary considerations in favor of a nonobviousness conclusion. Specifically, secondary-considerations evidence has not been used with much success outside of pharmaceutical patent cases. More often than not, the Federal Circuit has summarily dismissed secondary-considerations evidence as insufficient in cases involving mechanical arts patents. This Note concludes by suggesting that the Federal Circuit’s current practice for using secondary considerations should inform proposals by scholars for industry-specific tailoring of the patent system and patent law’s use of secondary considerations, and that the Federal Circuit should continue to engage with secondary-considerations evidence in order to provide more guidance to lower courts during the post-KSR transition period.