New York University
Law Review

Evidence

What Remains of the “Forfeited” Right to Confrontation? Restoring Sixth Amendment Values to the Forfeiture-by-Wrongdoing Rule in Light of Crawford v. Washington and Giles v. California

Rebecca Sims Talbott

Under the forfeiture-by-wrongdoing rule, a criminal defendant loses his Sixth Amendment right to confront a government witness when he intentionally prevents that witness from testifying at trial. As the rule currently operates, any and all prior statements by the missing witness can be admitted as substantive evidence against the defendant, regardless of whether they have been subjected to any of the procedural elements of confrontation. In this Note, I argue against such a “complete forfeiture” rule and propose a more “limited” rule in its stead. I argue, contrary to most courts and scholars, that forfeiture-by-wrongdoing cannot be justified by its punitive rhetoric, rendering its sweeping “complete forfeiture” result vulnerable to criticisms based on the primary lessons of Crawford v. Washington.

Safety in Numbers? Deciding When DNA Alone Is Enough to Convict

Andrea Roth

Fueled by police reliance on offender databases and advances in crime scene recovery, a new type of prosecution has emerged in which the government’s case turns on a match statistic explaining the significance of a “cold hit” between the defendant’s DNA profile and the crime-scene evidence. Such cases are unique in that the strength of the match depends on evidence that is almost entirely quantifiable. Despite the growing number of these cases, the critical jurisprudential questions they raise about the proper role of probabilistic evidence, and courts’ routine misapprehension of match statistics, no framework—including a workable standard of proof—currently exists for determining sufficiency of the evidence in such a case. This Article is the first to interrogate the relationship between “reasonable doubt” and statistical certainty in the context of cold hit DNA matches. Examining the concepts of “actual belief” and “moral certainty” underlying the “reasonable doubt” test, I argue that astronomically high source probabilities, while fallible, are capable of meeting the standard for conviction. Nevertheless, the starkly numerical nature of “pure cold hit” evidence raises unique issues that require courts to apply a quantified threshold for sufficiency purposes. I suggest as a starting point—citing recent juror studies and the need for uniformity and systemic legitimacy—that the threshold should be no less favorable to the defendant than a 99.9% source probability.
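The arithmetic behind a “source probability” can be made concrete. The sketch below is an illustration of my own, not the Article’s model: it assumes a uniform prior over a hypothetical pool of alternative sources and ignores laboratory and handling error, and it shows how an astronomically small random-match probability translates into a posterior source probability that clears a 99.9% threshold.

```python
# Minimal sketch, not the Article's model: a posterior "source probability"
# derived from a random-match probability (RMP), assuming a uniform prior
# over a hypothetical pool of possible sources and no laboratory error.

def source_probability(rmp: float, pool_size: int) -> float:
    """P(defendant is the source | DNA match) under a uniform prior."""
    prior_odds = 1 / (pool_size - 1)   # defendant vs. everyone else in the pool
    posterior_odds = prior_odds / rmp  # a match multiplies the odds by 1/RMP
    return posterior_odds / (1 + posterior_odds)

# Illustrative numbers only: an RMP of 1 in a quadrillion, against a
# hypothetical prior pool of 300 million people.
p = source_probability(rmp=1e-15, pool_size=300_000_000)
print(f"source probability ≈ {p:.7f}")  # ≈ 0.9999997
print(p >= 0.999)                       # True: clears a 99.9% floor
```

Under these assumptions the conviction threshold question reduces to comparing the printed posterior against the suggested 99.9% floor; changing the prior pool or admitting a nonzero error rate would lower the figure, which is why the inputs matter as much as the match statistic itself.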

Evaluating Eyewitness Identification in the 21st Century

The Honorable Stuart Rabner

In the Eighteenth Annual Justice William J. Brennan, Jr. Lecture on State Courts and Social Justice, Stuart Rabner, Chief Justice of the New Jersey Supreme Court, discusses the court’s recent decision in State v. Henderson. In Henderson, the court revised the longstanding legal framework for testing the reliability of eyewitness identifications. Justice Rabner discusses the case law underlying the traditional framework, the social science that prompted the court’s decision, and the revised framework now in place. He concludes by emphasizing the importance of eyewitness identification in our criminal justice system and calling for continued judicial attention to accepted scientific evidence on eyewitness reliability.

Toward a Bayesian Analysis of Recanted Eyewitness Identification Testimony

Kristy L. Fields

The reliability of eyewitness identification has been increasingly questioned in recent years. Although it is now acknowledged that such evidence is not only unreliable but also often overemphasized by judicial decisionmakers, antiquated procedural rules and a lack of guidance on how to weigh identification evidence still produce unsettling results in some cases. Troy Anthony Davis was executed in 2011 amid public controversy over the eyewitness evidence against him. At trial, nine witnesses identified Davis as the perpetrator. After his conviction, however, seven of those witnesses recanted. Bogged down by procedural restrictions and long-held judicial mistrust of recantation evidence, Davis never received a new trial, and his execution drew worldwide criticism.

On the 250th anniversary of Bayes’ Theorem, this Note applies Bayesian analysis to Davis’s case to demonstrate a potential solution to this uncertainty. Using probability theory and scientific evidence on eyewitness accuracy rates, it shows how a judge might have incorporated the weight of seven recanted identifications to determine the likelihood that the initial conviction was made in error. Under this analysis, two identifications and seven nonidentifications result in only a 31.5% likelihood of guilt, versus the 99% likelihood suggested by nine identifications. This Note argues that Bayesian analysis can, and should, be used to evaluate such evidence. An objective method of analysis can ameliorate cognitive biases and implicit mistrust of recantation evidence. Furthermore, most arguments against the use of Bayesian analysis in legal settings do not apply to post-conviction hearings evaluating recantation evidence. Therefore, habeas corpus judges faced with recanted eyewitness identifications ought to consider implementing this method.
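The updating the Note describes can be sketched in a few lines. The accuracy rates below are hypothetical placeholders of my own choosing, not figures drawn from the Note or the studies it cites; they simply show how two identifications plus seven non-identifications can drive the posterior far below the figure suggested by nine identifications alone.

```python
# Minimal sketch of the Bayesian updating described above. The accuracy
# rates are hypothetical placeholders, not the Note's published figures,
# and witnesses are (unrealistically) treated as independent.

def posterior_guilt(prior, n_id, n_nonid, p_id_guilty, p_id_innocent):
    """P(guilt) after n_id identifications and n_nonid non-identifications."""
    odds = prior / (1 - prior)
    odds *= (p_id_guilty / p_id_innocent) ** n_id                  # each ID
    odds *= ((1 - p_id_guilty) / (1 - p_id_innocent)) ** n_nonid   # each non-ID
    return odds / (1 + odds)

# Hypothetical rates: witnesses identify a guilty suspect 42% of the time
# and an innocent one 20% of the time; neutral 50% prior.
print(posterior_guilt(0.5, 9, 0, 0.42, 0.20))  # nine IDs        -> ~0.999
print(posterior_guilt(0.5, 2, 7, 0.42, 0.20))  # 2 IDs, 7 non-IDs -> ~0.32
```

The point of the exercise is qualitative, not the particular outputs: each non-identification multiplies the odds of guilt by a factor less than one, so a wave of recantations compounds rather than merely offsetting the original identifications.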

Convenient Facts: Nken v. Holder, the Solicitor General, and the Presentation of Internal Government Facts

Nancy Morawetz

In April 2012, facing a court order to disclose internal Justice Department e-mails, the Office of the Solicitor General (OSG) wrote to the United States Supreme Court to admit that it had made a factual statement to the Court three years earlier in Nken v. Holder about agency policy and practice that was not accurate. The statement had been based on e-mail communications between Justice Department and agency lawyers. In fact, the statement reflected neither the content of the e-mails nor the actual policy and practice of the relevant government agencies. The letter promised remedial measures and concluded by assuring the Court that the OSG took its responsibility of candor seriously. The underlying factual representation by the OSG in the Nken case was unusual because it attracted attention and lengthy Freedom of Information Act (FOIA) litigation that led to the disclosure of the communications that served as the basis of the statement. But it is not at all unusual as an example of unsupported factual statements by government lawyers that are used to support legal arguments. Indeed, unsupported statements appear in OSG briefs on a wide range of issues. These statements benefit from the unusual position of the government: It has access to information not available to other litigants, and it benefits from a presumption of candor that endows its statements with a claim of self-evident authority that no private litigant could match.

The Nken case provides a unique opportunity to explore the consequences of judicial acceptance of fact statements provided by the OSG. Because of the FOIA litigation, we can examine how the OSG gathered information, as well as the role played by government counsel at the Justice Department and the interested agencies. This examination reveals multiple dangers in unsupported statements about internal government facts. It also demonstrates the difficulty of relying on lawyers representing the government to seek out and offer information that will undermine the government’s litigation position. Finally, it shows the danger of relying on the party that has misled the Court to develop an appropriate remedy.

Prevention of misleading statements could be pursued through greater self-regulation, through a prohibition on extra-record factual statements, or through a model of disclosure and rebuttal. This Article argues that the experience in Nken reflects the grave danger of presuming that self-regulation is an adequate safeguard against erroneous statements. It further argues that, despite the appeal of a rigid rule prohibiting such statements, that approach ignores the Court’s interest in information about real-world facts relevant to its decisions. The Article concludes by arguing that the best proactive approach is a formal system of advance notice combined with access to the basis of government representations of fact. It further argues that courts should refuse to honor statements in court decisions that rest on untested and erroneous statements of fact by the government.

The Evidentiary Rules of Engagement in the War Against Domestic Violence

Erin R. Collins

Our criminal justice system promises defendants a fair and just adjudication of guilt, regardless of the character of the alleged offense. Yet, from mandatory arrest to “no-drop” prosecution policies, the system’s front-end response to domestic violence reflects the belief that it differs from other crimes in ways that permit or require the adaptation of criminal justice response mechanisms. Although scholars debate whether these differential responses are effective or normatively sound, the scholarship leaves untouched the presumption that, once the adjudicatory phase is underway, the system treats domestic violence offenses like any other crime. This Article reveals that this presumption is false. It demonstrates that many jurisdictions have adopted specialized evidence rules that authorize admission of highly persuasive evidence of guilt in domestic violence prosecutions that would be inadmissible in other criminal cases. These jurisdictions unmoor evidence rules from their justificatory principles to accommodate the same iteration of domestic violence exceptionalism that underlies specialized front-end criminal justice policies. The Article argues that even though such evidentiary manipulation may be effective in securing convictions, enlisting different evidence rules in our war on domestic violence is unfair to defendants charged with such offenses and undermines the integrity of the criminal justice system. It also harms some of the people the system seeks to protect by both reducing the efficacy of the criminal justice intervention and discrediting those complainants who do not support prosecution.

Trial Judges and the Forensic Science Problem

Stephanie L. Damon-Moore

In the last decade, many fields within forensic science have been discredited by scientists, judges, legal commentators, and even the FBI. Many different factors have been cited as the cause of forensic science’s unreliability. Commentators have gestured toward forensic science’s unique development as an investigative tool, cited the structural incentives created when laboratories are either literally or functionally an arm of the district attorney’s office, accused prosecutors of being overzealous, and attributed the problem to criminal defense attorneys’ lack of funding, organization, or access to forensic experts.

But none of these arguments explains why trial judges, who have an independent obligation to screen expert testimony presented in their courts, would routinely admit evidence devoid of scientific integrity. The project of this Note is to understand why judges, who effectively screen evidence proffered by criminal defendants and civil parties, fail to uphold their gatekeeping obligation when it comes to prosecutors’ forensic evidence, and how judges can overcome the obstacles to keeping bad forensic evidence out of court.