Author: Rebecca Sandefur

What was the question? Or, scholarly conventions and how they matter.

Different fields of scholarship have different conventions. Those of us who participate in multiple scholarly worlds have likely had experiences leading us to believe that some conventions are useful and worthwhile, while others are pointless or actively harmful. Whether we like specific conventions or not, though, we have to play along with them if we want to contribute to the scholarly conversations where these conventions rule.

Professor Greiner and Ms. Pattanayak (hereinafter G&P) elected to publish their empirical research in a top traditional law review. Law reviews have their own peculiar conventions, which differ sharply from the peculiar conventions of peer-reviewed journals in fields like statistics, sociology, law and society, or political science. Because G&P made this choice, their article is different from what it would have been had they been writing for a different kind of publication venue. I would like to focus on one convention of writing for peer-reviewed social science journals that law reviews typically disregard, and to draw out one consequence of this disregard.

By convention, a social scientific article starts with a literature review covering prior work on the topic of study. The point of this exercise is to explain to the reader the significance to the field of the new empirical research that is about to be presented. A good literature review acts as a wind-up for the paper’s own research: it gets the reader interested and motivates the paper by showing that the study she is about to read fills a big intellectual gap, resolves an important puzzle, or is incredibly innovative and cool. Thus primed, the reader eagerly consumes the study’s findings with a contextualized understanding of their significance.

G&P’s paper inverts this usual ordering, presenting their study first and following it with a literature review that motivates their call for more studies like their own. Does this reversal of order matter? I think so: it results in an important confusion between G&P’s empirical question and the empirical question at the center of much of the extant research literature and of the policy debates about the impact of counsel.

G&P’s study investigates the impact of offers of representation by law students. The research literature has been trying to answer a slightly but importantly different question: what is the impact of representation by advocates?

As I show in an article creeping slowly through peer review, 40 years of empirical studies have tried to uncover evidence of whether and how different kinds of representatives affect the conduct and outcomes of trials and hearings. Some of the studies in this literature are able to compare the outcomes received by people represented by fully qualified attorneys to those received by lay people appearing unrepresented, while other studies compare the work of lawyers to that of advocates who are not legally qualified (including law students). Another group of studies lumps all sorts of advocates together, comparing groups of unrepresented lay people to groups of people represented by lawyers, social workers, union representatives, and other kinds of advocates permitted to appear in particular fora.

G&P rightly criticize these older studies for what we would today call methodological flaws, and I heartily endorse their call for better empirical research into the impact of counsel. But not only are they and the older participants in this scholarly conversation using different methods; they are also asking different questions. As G&P tell us themselves, they cannot answer the question that motivated 40 years of research, as they can come to “no firm conclusion on the actual use of representation on win/loss” (2). Had their article reviewed the literature before presenting their findings, they would likely have had a harder time asserting to the reader that “the effect of the actual use of representation is the less interesting question” (39-40).

G&P’s empirical question is also slightly to the side of the empirical question arguably at the center of contemporary policy discussions. These often turn on when lawyers specifically are necessary, and when people can receive similar outcomes with non-lawyer advocates or with different forms of “self-help” (information and assistance short of representation, sometimes including and sometimes excluding legal advice). The comparative effectiveness of alternative services is a central question in evidence-based policy, and the access to justice discussion as it is conducted today places the question of when attorneys are necessary advocates at its center.

G&P are absolutely right that, if we wish to fully understand any program’s impact on the public, we need information about uptake by that public. Randomizing offers of law students’ services tells us something useful and important, but something different from randomizing the actual use of lawyer representation. As a matter of research design, randomizing use is a far more challenging task; identifying the impact of use turns out to be quite hard to do, but it remains interesting and important. We learn a lot from this article, and we stand to learn more, as the present piece is the first in a series of randomized trials.