Archive for the ‘Symposium (What Difference Representation)’ Category
posted by Jeffrey Selbin
In a draft essay, Service Delivery, Resource Allocation and Access to Justice: Greiner and Pattanayak and the Research Imperative, Tony Alfieri, Jeanne Charn, Steve Wizner, and I reflect on Jim Greiner and Cassandra Pattanayak’s provocative article reporting the results of a randomized controlled trial evaluating legal assistance to low-income clients at the Harvard Legal Aid Bureau. (The Greiner and Pattanayak article was the subject of a Concurring Opinions symposium last March.) Studying the outcomes of appeals from initial denials of unemployment insurance benefit claims, Greiner and Pattanayak asked: what difference does legal representation make? Their answer is that “an offer of HLAB representation had no statistically significant effect on the probability that a claimant would prevail, but that the offer did delay the adjudicatory process.” That is, not only was an offer of legal assistance immaterial to the case outcome, it may have harmed clients’ interests.
The Greiner and Pattanayak findings challenge our intuition, experience and deeply held professional belief that lawyer representation of indigent clients in civil matters is fundamental to the pursuit of justice. Our first reaction is that the study must have fatal conceptual or methodological flaws – the researchers studied the wrong thing in the wrong way. Even when we learn that the study is credible and well designed, we doubt that this kind of research is a worthwhile use of our time or money relative to serving needy clients. Finally, and perhaps most importantly, we worry that the published results will only serve as fodder for the decades-long political assault on legal services for the poor.
If replicated across venues, however, studies like Greiner and Pattanayak’s can tell us a great deal about individual representation, program design and systemic access to justice questions. In fact, we cannot make genuine progress in any of these areas – much less marshal the case for more robust legal aid investments and the right to counsel in some civil cases – without better evidence of when, where and for whom representation makes a difference. Fortunately, developments in law schools, the professions and a growing demand for evidence-driven policymaking provide support, infrastructure and incentive for such research. For these reasons, we urge legal services lawyers and clinical law professors to collaborate in an expansive, empirical research agenda.
posted by Dave Hoffman
Thanks to all our participants for an amazing symposium. You can see all twenty-two (!) posts here. Thanks especially to Jim and Cassandra for being fantastically good sports in subjecting their work to public scrutiny and for replying so conscientiously to everyone’s comments. Richard Zorza put it best: “[T]his study, regardless of, or perhaps because of, its controversial nature, will be looked back at as a critical event in the history of access to justice.” If he’s right, I’m really proud that CoOp could serve as a platform to debate the article, and the shape of things to come.
posted by Jeanne Charn
I write as a legal services lawyer and clinician who practiced in a neighborhood general law practice for low- and moderate-income people for over three decades and for twenty-seven years directed a large clinical practice site at Harvard Law School. The issue of case selection/triage, to which this paper is relevant, was always a challenge. We had inklings that we needed more rigorous approaches and even had a social scientist on staff for a few years, but we were never able to conceive and carry out a serious study, let alone one as sophisticated as the G&P randomized trial. I wholeheartedly welcome this effort and the authors’ challenge to engage in rigorous scrutiny of the actual workings of legal services delivery in the U.S. Serious empirical work goes on as a matter of course in peer nations, all of which have been successful in obtaining and holding substantially greater resources than in the U.S. I don’t assume a connection between the research programs and better funding (it hasn’t been studied), but these programs know a lot more about what they are producing and have refined delivery approaches and policies based on what they have learned.
Regarding the posts expressing concern that data and studies may be used improperly: I agree with Richard Zorza that we can’t be sure this won’t occur, but I believe the risks are much greater if we continue with virtually no serious effort to collect good outcome data, comparatively study different approaches to service delivery, and develop productivity and efficiency standards as well as good measures of quality. Because we don’t have even a decent data system, we cannot assure that we are making the best use of the resources available. What if thousands more people could be effectively helped if programs were more efficient, targeted resources more effectively, and leveraged expertise to maximize both cost and outcome effectiveness? The result would be the same as if we had substantially more resources.
Having located myself firmly in the “we need more of just this sort of high quality research” camp, here are some thoughts, in no particular order, about the value – I would say the necessity – of a bold, broad empirical scrutiny of “our fondest pet notions” about our work. As I edit, I see the post is getting long, but I teach until 9 pm tonight, so may just have time to get this in before the symposium closes!
- I am encouraged by evidence that claimants succeed via self-representation or with information or limited advice and assistance. Advocacy resources can be directed to matters where advice/self-help is sub-optimal. I remain attached to the early goals of client activation and empowerment, and self-help may play a role – a possibility for further study!
- I understand the offer/representation distinction in the study and recognize that win/lose at hearing is not the only measure of success, but a study should be assessed on what it purports to measure, not on everything it could have measured. I support a broad and long-term research agenda, recognizing that good studies help us frame issues for further study.
- The issue of the quality of work by HLAB students comes up in several posts. My experience is that well-supervised students can produce high quality work, and that we certainly should not treat supervised students as a proxy for less-than-high-quality work. We often juxtapose a “lawyer” against a student or a lay advocate, but bar admission by itself does not assure quality. A lawyer just admitted to the bar with little or no experience, practicing without supervision (entirely feasible in the U.S.), might be much less effective than an HLAB student or an experienced lay advocate. In fact, a substantial UK study produced evidence that lay advocates did higher quality work than solicitors.
- My experience suggests that high advocate expertise/experience is decisive in the challenging or close-call cases – the ones “on the bubble” – that could go either way. If we could reliably identify these matters (almost always a subset of those that go to hearing), we could allocate expert resources accordingly. We don’t want high-expertise/high-cost resources on less challenging cases – these are good for the rookies – and we don’t want rookies on the really hard cases unless teamed with an expert. In other words, we need to leverage experts and maximize use of students, less experienced advocates, and pro bono volunteers who need training to achieve service that is both outcome- and cost-effective.
- On random case taking – G&P make clear that r.c.t.s can incorporate screens for merit. I find their response entirely persuasive on this issue. However, aside from no-merit (frivolous) claims, which must always be declined, I believe that at some point we should test the assumptions underlying screening criteria which, I assume, are based on some conception of relative merit. Can advocates accurately predict relative merit? Is the goal to screen out the strongest (don’t want to risk losing) or the weakest (too improbable to be worth the resources), or to take the middling cases? Do screens draw lines this finely? Is merit entirely a function of the legal strength of the claim, or does it include some notion of the relative neediness of the claimant? I recognize the sincere convictions underlying screening criteria, but are these “pet notions” or can they be backed up by credible evidence – which, I think, brings us back to r.c.t.s.
- Further, on the randomness of offers of representation in an r.c.t. as compared to the case screening and offer criteria in use in various programs – my experience suggests that we avoid confronting significant randomness in the existing system. Intake hours, days of service, and periodic closing of “intake” (and thus of direct contact with those seeking service) shut people out of the intake stream regardless of merit, however measured. Because those shut out are anonymous to the providers, the arbitrary denial of any opportunity for assistance goes unremarked.
posted by Steve Eppler-Epstein
A few suggested additions to Richard Zorza’s proposed “best practices” for randomized study of legal services:
(1) Remember the distinction between what is measurable and what is important. Broadly speaking, legal services programs are trying to increase access to justice (make voices heard), solve individual or group problems (win cases), and change the legal environment so that poor people’s lives will be better (change laws or systems). Our work on these efforts is interwoven; for example, we take individual cases that will both provide access to the system and fix a person’s problem, and in that process we gain important information about what may be broken in the larger system and where solutions may lie.
So: a large-volume practice in an area of law that has a stream of comparable cases can be studied through randomization. On the other hand, efforts to change laws or systems, and innovative start-up projects, must be evaluated through other means.
The corollary here is: Be willing to publicly state, “This is a type of work that is susceptible to this research tool; there are other valuable types of work that must be studied with other tools.”
(2) Be clear about what is (and isn’t) being studied. This may be a warning primarily aimed at the legal aid providers. Over time, we will want to learn how much impact our scarce resources can have
- in various areas of legal work
- in different jurisdictions
- for clients with different fact patterns, personal skills, age, linguistic abilities, mental health or physical characteristics
- using a variety of different intervention levels and strategies (i.e. advice vs. limited representation vs. long-term representation)
- and employing a variety of different personal advocacy skills (i.e. confrontational vs. compromising, high-level listening skills vs. high-level speaking skills).
We will need patience and persistence. Over time our services will be enhanced by exploring all of these questions (and more!). But we will get garbage results if we try to do everything at once.
The Greiner and Pattanayak HLAB study, and all the commentary in this symposium, illuminate how much work we have to do. Did the Harvard students have no impact? (One commentator disagrees based on the data.) Could a change in client selection enhance the impact for the clients served? A change in case strategy? A change in law student advocacy style or skills?
We are so early in this learning process that for now, each study will primarily highlight the next set of questions to be asked.
(3) Be aware of the costs of measurement.
Measurement takes time. When we say “legal aid to the poor is a scarce resource,” we mean that there are nowhere near enough people-hours to do all that we know justice requires. Planning and carrying out a useful measurement (a “next step” in the learning process described above) takes time away from other activities. We will have to think through, design, and set up the study. We will have to explain to staff, to the communities we serve, and to funders what we are doing and why. We will be spending that much less time serving clients or raising money to serve clients.
At certain points, measurement may arm opponents of legal services. Others have remarked on this; as someone who has done a lot of work to present the case for legal services, I’ll just say that the danger is real but that it should not be over-emphasized. People who don’t like legal services to the poor will use data against us when they can. But our genuine effort to maximize the impact of scarce resources will encourage our supporters. And we need to remember that data is only one of the types of description we should be providing about legal services. The individual stories of our clients and the testimonials of the bar, the bench, and community supporters are all part of the larger message. Data is an important part – but only a part – of that broader message.
Similarly, measurement may over-emphasize aspects of the work that can be measured. This is a cost to measurement, but one that can also be countered. As discussed above, it is quite important for everyone involved in this endeavor to keep in mind that while randomized study may teach us important things about how best to serve clients, that does not mean that the only things important to clients are those which can be (or have been) measured.
(4) Be clear that even findings of “no distinction between groups” are not necessarily findings of “no effect.” Two examples to illustrate this point:
First, imagine a hypothetical study of a legal aid program – half the eligible clients are randomly turned away. Now assume that all of the clients “turned away” have on their own applied for and gotten assistance from a second legal aid program. While designed as a study of the first legal aid program, in all practical terms this has now become a comparison study of two legal aid programs. If the two programs provide identical assistance, clients in the study program would see no benefit compared to the clients turned away. But if in fact people outside the study, unrepresented by either program, do much worse, there remains a real effect of the study program’s services that is not measured by the study. (To be clear – this example is brought to mind by aspects of the Greiner and Pattanayak study, in which some clients turned away received other assistance, but it is not an accurate description of that study’s participants – it is just a hypothetical to illustrate that a control group is not necessarily representative of the broadest class.)
Second, take the very real world of housing courts in Connecticut. I am told by colleagues that 35 or 40 years ago, before there was a broad legal aid presence in housing courts, landlords routinely ejected poor people without following the laws. When legal aid started a high-volume housing practice, legal aid lawyers stopped landlords from locking people out without process, stopped courts from evicting poor people who had a right to stay, and in some cases got money from landlords for violations of the law. Landlords are now much less likely to illegally eject tenants; a study conducted now might find little difference in “ability to stay” between tenants who have a lawyer and tenants who don’t, because the landlord doesn’t know who has (or will have) a lawyer. But this lack of randomized difference would not necessarily mean that the continued housing practice is not having an impact. If legal aid completely stopped representing tenants, it’s likely that illegal practices by landlords would re-emerge.
(5) Be willing to publicly and forcefully debunk misleading uses of your data. This is a plea from those “in the trenches” to those in academia: when your data is misused in a manner that could harm support for legal aid to the poor, the protestations of legal aid providers may not be believed by those hearing the debate. After all, we are not economists or statisticians, and we have a vested interest in the outcome. The academics will be the credible voice to publicly tell funders and government decision-makers, “Those opponents of legal services are misrepresenting truth when they say that this study suggests that poor people don’t need, or shouldn’t get a lawyer. Indeed, we engage in this research because we believe that by studying legal services to the poor we can help this small and dedicated group be as effective as it can be, for people who are desperately in need of that help.”
posted by Dave Hoffman
Jim and Cassandra write:
“To Dave, we say that our enthusiasm for randomized studies is high, but perhaps not high enough to consider a duty to randomize among law school clinics or among legal services providers. We provided an example in the paper of practice in which randomization was inappropriate because collecting outcomes might have exposed study subjects to deportation proceedings. We also highlighted in the paper that in the case of a practice (including possibly a law school clinic) that focuses principally on systemic change, randomization of that practice is not constructive. Instead, what should be done is a series of randomized studies of an alternative service provider’s practice in that same adjudicatory system; these alternative provider studies can help to assess whether the first provider’s efforts at systemic change have been successful.”
I meant to cabin my argument to law school clinics. And I do understand that there may be very rare cases where collecting outcomes will hurt clients (such as deportation). But what about a clinic that focuses on “systemic change”? Let’s assume that subsidizing such a clinic would be a good thing for a law school to do (or, to put it another way, that we think it is a good idea for current law students to incur more debt so that society gets the benefit of the clinic’s social agitation). Obviously, randomization of client outcomes would be a terrible fit for measuring the success of such a clinic. It would be precisely the kind of lamppost/data problem that Brian Leiter thinks characterizes much empirical work.
But that doesn’t mean that randomization couldn’t be useful in measuring other kinds of clinic outcomes. What about randomization in the allocation of law student “employees” to the clinic as a way to measure student satisfaction with the “learning outcomes”? Or randomization of intake, utilizing different client contact techniques, as a way of measuring client satisfaction with their representation (or feelings about the legitimacy of the system)? One thing that the commentators in this symposium have tried to emphasize is that winning and losing aren’t the only outputs of the market for indigent legal services. Controlled study of the actors in the system needn’t be constrained in the way that Jim and Cassandra’s reply to my modest proposal to mandate randomization suggests.
posted by Jaya Ramji-Nogales
Thanks to Jim and Cassandra for their carefully constructed study of the impact of an offer from the Harvard Legal Aid Bureau for representation before the Massachusetts Division of Unemployment Assistance, and to all of the participants in the symposium for their thoughtful contributions. What Difference Representation? continues to provoke much thought, and as others have noted, will have a great impact on the access to justice debate. I’d like to focus on the last question posed in the paper — where do we go from here? — and tie this in with questions about triage raised by Richard Zorza and questions about intake processes raised by Margaret Monsell. The discussion below is informed by my experience as a legal service provider in the asylum system, a legal arena that the authors note is strikingly different from the unemployment benefits appeals process described in the article.
My first point is that intake processes vary significantly between different service providers offering representation in similar and different areas of the law. In my experience selecting cases for the asylum clinics at Georgetown and Yale, for example, we declined only cases that were frivolous, and at least some intake folks (yours truly included) preferred to select the more difficult cases, believing that high-quality student representation could make the most difference in these cases. Surely other legal services providers select for the cases that are most likely to win, under different theories about the most effective use of resources. WDR does not discuss which approach HLAB takes in normal practice (that is, outside the randomization study). On page twenty, the study states that information on financial eligibility and “certain additional facts regarding the caller and the case” are put to the vote of HLAB’s intake committee. On what grounds does this committee vote to accept or reject a case? In other words, does HLAB normally seek the hard cases, the more straightforward cases, some combination, or does it not take the merits into account at all?
posted by Jim Greiner
We thank Kevin Quinn and David Hoffman for taking the time to comment on our paper. Again, these are two authors whose work we have read and admired in the past.
Both Dave and Kevin offer thoughts about the level of enthusiasm legal empiricists, legal services providers, and clinicians should have for randomized studies. We find ourselves in much but not total agreement with both. To Kevin, we suggest that there is more at stake than just finding out whether legal assistance helps potential clients. In an era of scarce legal resources, providers and funders have to make allocation decisions across legal practice areas (e.g., should we fund representation for SSI/SSDI appeals, or for unemployment appeals, or for summary eviction defense?). That requires more precise knowledge about how large representation (offer or actual use) effects are – how much bang for the buck. Perhaps even more importantly, scarcity requires that we learn how to triage well; see Richard Zorza’s posts here and the numerous entries in his own blog on this subject. That means studying the effects of limited interventions. Randomized trials provide critical information on these questions, even if one agrees (as we do) that in some settings, asking whether representation (offer or actual use) helps clients is like asking whether parachutes are useful.
Thus, perhaps the parachute analogy is inapt, or better, it requires clarification: we are in a world in which not all who could benefit from full-service parachutes can receive them. Some will have to be provided with rickety parachutes, and some with little more than large blankets. We all should try to change this situation as much as possible (thus the fervent hope we expressed in the paper that funding for legal services be increased). But the oversubscription problem is simply enormous. When there isn’t enough to go around, we need to know what we need to know to allocate well. Meanwhile, randomized studies can also provide critical information on the pro se accessibility of an adjudicatory system, which can lay the groundwork for reform.
To Dave, we say that our enthusiasm for randomized studies is high, but perhaps not high enough to consider a duty to randomize among law school clinics or among legal services providers. We provided an example in the paper of practice in which randomization was inappropriate because collecting outcomes might have exposed study subjects to deportation proceedings. We also highlighted in the paper that in the case of a practice (including possibly a law school clinic) that focuses principally on systemic change, randomization of that practice is not constructive. Instead, what should be done is a series of randomized studies of an alternative service provider’s practice in that same adjudicatory system; these alternative provider studies can help to assess whether the first provider’s efforts at systemic change have been successful.
Our great thanks to both Kevin and Dave for writing, and (obviously) to Dave (and Jaya) for organizing this symposium.
posted by Kevin Quinn
In What Difference Representation? Offers, Actual Use, and the Need for Randomization, Jim Greiner and Cassandra Wolos Pattanayak present the results from a randomized controlled study that was designed to assess the efficacy of an offer of representation from the Harvard Legal Aid Bureau (HLAB) – a student-run provider of legal services that is part of the clinical education program at Harvard. There is a great deal to like about this article: it is methodologically rigorous, the data analysis is careful and transparent, its main points are clearly argued and explained, and both the specific results dealing with the efficacy of an offer of HLAB representation in unemployment benefits cases and the broader argument about the need for randomized controlled trials to better understand the effects of representation in civil proceedings are provocative and relevant. The authors are clear about what they are doing and what they are not doing, and they offer good advice about how one might design additional studies to assess interesting questions that are outside the scope of their own study.
It would be easy to spend more time and space than this blog post permits to discuss everything that I like about this article. I am not going to do that.
Instead, I’d like to use this post to briefly mention some issues that arise when one attempts to seriously evaluate the efficacy of legal representation – particularly free legal assistance such as that offered by law school clinical education programs.
posted by Richard Zorza
Lurking behind much of the debate in this symposium is anxiety that negative findings about access to justice services will strengthen and facilitate attempts to reduce resources for access to justice services.
While it would be impossible to rebut the claim that this might happen, that cannot be an argument against conducting or reporting research. On the contrary, it has to be an argument for more and better research. Having been involved in access to justice for decades, I am all too aware that it never seems to be the right time to make ourselves vulnerable – so the answer has to be always, because that is the only way to gain credibility.
But this does all raise the question of how research should be structured, analyzed, and, particularly, reported in order to minimize the risk that results are presented or used in ways inconsistent with the ultimate findings of the research, in all their complexity and subtlety.
- Generality of Reporting. The headings, abstract, etc., must be structured to accurately reflect the generality of the research.
- The Reporting of Context. Studies should be very careful about describing accurately the context in which treatment is provided.
- Randomization/Observation. Where studies are not randomized, that must be very clearly reported, and selection bias must be loudly proclaimed, not the subject of a footnote.
- Explanation of Statistical Significance. The issues of statistical significance must be explained as clearly as possible, in lay terms. The failure to do so, when it occurs, makes both overstatement and unfair critique easier.
- Lay Version. Research should be made available in a lay summary version, without complexities but with the key details and cautions. This will reduce the risk of the results being oversimplified by the media and/or others.
- Vigilance as to Over/Under Generalization. The text should not only be accurate as to the level of generality of the research, but should be explicit as to the kinds of generalizations that might erroneously be drawn from the research. (This makes it easier to rebut overdrawn conclusions made by legal or political opponents.)
I would very much appreciate additions to such best practices. There are surely many more in the social sciences. Those suggested here, however, are less about avoiding error in the research itself, and more about avoiding error in its reporting or use by others.
posted by Margaret Monsell
Thanks for the invitation to participate in this interesting and provocative symposium.
I’m a legal services attorney in Boston. My employer, the Massachusetts Law Reform Institute (MLRI), has as one of its primary tasks to connect the state’s field programs, where individual client representation occurs, with larger political bodies, including legislatures and administrative agencies, where the systemic changes affecting our clients most often take place. (The legal services programs in many states include organizations comparable to MLRI; we are sometimes known by the somewhat infelicitous name “backup centers.”) Among the programs with which MLRI is in communication is the Harvard Legal Aid Bureau, and I would like to take this moment to acknowledge the high regard in which my colleagues and I hold their work.
The substantive area of my work is employment law. It is no surprise that during the past three years of our country’s Great Recession, the importance of the unemployment insurance system for our clients has increased enormously and, consequently, it has occupied a greater portion of my time than might otherwise have been the case.
I’m not a statistician, nor do I work in a field program representing individual clients, so my comments will not address in any detail the validity of the HLAB study or the conclusions that may properly be drawn from it. As one member of the community of Massachusetts legal services attorneys, however, I have an obvious interest in the way the study portrays us: we are variously described as self-protective, emotional, distrustful of being evaluated, and reluctant to the point of perverseness to participate in randomized studies of the kind the authors wish to conduct. Our resistance in this regard has itself already been the subject of comment here. Happily, it is not often that one looks into what seems to be a mirror and sees the personage looking back wearing a black hat and a snarl. But when it does happen, it’s hard to look away without some effort at clarification. So I will devote my contribution to the symposium to the topic of the perceived reluctance of the legal services community to cooperate in randomized trials. It goes without saying, but the following thoughts are those of only one member of a larger community.
My understanding is that in the HLAB study, no significant case evaluation occurred prior to randomization. Many of us in legal services view with trepidation the idea of ceding control over case selection to the randomization process. Others have more sanguine views, either because they assume that randomization is already taking place or that it ought to be. For example, in his comments from a few months ago, Dave Hoffman was working under the assumption that to randomize client selection would not change an agency’s representation practices at all, and on that basis, he criticized resistance to randomized control trials as “trying to prevent research from happening.”
The authors of the study are enthusiastic about randomization not only because of its scientific value in statistical research but also because it can help to solve one of the thorniest problems facing legal services programs – the scarcity of resources as compared to the demand. As long as the demand for legal assistance outstrips the supply, Professor Greiner has said, randomization – a roll of the dice or the flip of a coin — is an easy and appropriate way to decide who gets representation and who does not.
I believe it’s erroneous to assume that randomization would not change representation practices, at least in the area of legal services in which I work. I also acknowledge that it is possible, at least theoretically, for all the cases in a randomized control trial to have met the provider’s standards for representation. This would provide some measure of reassurance. However, in one area of law, immigration asylum cases, the authors have concluded that time constraints make such an effort unworkable.
posted by Richard Zorza
Let me suggest that this study, regardless of, or perhaps because of, its controversial nature, will be looked back at as a critical event in the history of access to justice. Context is, of course, all, not only in understanding the data reported in the study, but also in assessing its overall meaning and impact, and in discussing the future directions it should lead to.
For me, the key context is that this is a time in which there is a broad national consensus, at least among national constituency organizations, about what is needed to achieve access (court simplification and services, bar flexibility, and legal aid efficiency and resources), but also a lack of political will to move that consensus forward in the broader political arena.
Part of the lack of political will comes from a deep fear of financial consequences. Whatever the intellectual achievements of the Civil Gideon movement, the fact remains that litigative efforts have largely failed. Indeed, the oral argument in the Supreme Court last week on the civil contempt / child support counsel issue again illustrated the inevitable impact of financial concerns. (Transcript here.) E.g., Transcript at 38 (“massive change”).
I believe that the only way we are going to make truly significant progress on access to justice in these tough times (which are likely to go on for a long time, for state courts at least, given the changes in the structures of state budgets) is to convince decision makers that we can provide access to justice while controlling costs. This is very hard to do, given the entitlement model that grounds so much of the advocacy.
However, I see in this paper – as well as in the discussion about it, and in others that are in the pipeline – the beginning of the analysis that can give us the cost estimates and the cost controls that will make access to justice politically unassailable.
To be concrete, my own view – having actually been an unemployment advocate in Massachusetts before law school in the 1970s, and being familiar with the advocacy structures that have grown up since – is that these results are best understood as the product of the (relatively) accessible nature of the agency, the high benefits win rate, the lack of experience with the system from which second-year law students suffer, and the fact that the non-treatment group so often got representation – which should surely have been better than that provided by students. I should add that my own experience with the agency was that winning was not a matter of legal or forensic skill, but rather a matter of internalizing, and communicating to clients, one simple cultural message: “I want to work so hard it hurts.” But most of all, I think that a large portion of these cases were destined to win or lose regardless of what kind of assistance they got. In other words, representation of any quality might make less difference than it should – with one important caveat. Much of the impact of advocacy in this area depends on working with the ultimate UI claimant long before the claim is filed, and ideally long before the employment is terminated.
Why does this matter? It matters because in a cost-effective access to justice system, we need to find a way to provide resources only in cases in which they have a significant chance of making a significant difference, and even in those cases to provide only the cheapest help that will achieve that goal.
I think that this study highlights the ultimate possibility of making these determinations. This is because we here see one form of treatment, delivered in one context, and we all agree, I think, that we need to understand both the treatment and the context better to understand the meaning of the study. This is the first piece of a randomized mosaic that will ultimately produce a multi-dimensional picture of what makes a difference and when. When we know that, we will be able to figure out what systems will allow us to decide who gets what in terms of help, and how such systems can be grounded on broadly legitimate factors. In other words, we need a triage system that has wide intellectual and political legitimacy, and that considers how to leverage recent innovations in court and bar services to minimize the number of situations that need the most expensive forms of access services. The most interesting randomized studies of all will be those that compare different systems of triage, including both different criteria and different decision-makers. I would very much appreciate thoughts on how this work might be advanced.
Those interested in the possible scope of the access to justice consensus, including its relationship to triage, can read my recent Judicature article here. Those interested in parsing the recent Supreme Court argument can look at my recent blog post here.
posted by Jim Greiner
We very much appreciate the time Rebecca Sandefur, Andrew Martin, Michael Heise, and Ted Eisenberg have taken to comment on our paper. We are particularly excited by comments from these authors because we have read and admired the work of each in the past.
We believe that much of the criticism expressed in these comments is well-taken, and we will react accordingly.
posted by Steve Eppler-Epstein
When legal aid providers read “What Difference Representation? Offers, Actual Use, and the Need for Randomization,” we immediately start to raise questions. Appropriately, we note that there’s a vast difference between a busy law student handling what may be their first case and an experienced professional legal aid lawyer. We note that, apparently, some significant number of the people randomly turned away by the Harvard law school clinic were then advised or represented by Greater Boston Legal Services.
There is also a broader question, which I will explore in a subsequent post: What is the broader context for randomized study of the impact of legal aid — what kinds of things can we learn from randomized study, and what impact questions can’t be answered through randomization?
As others have written, Greiner and Pattanayak may not be right, or their conclusions may be overstated or unfounded. But legal aid providers can have important conversations that start here: “What if Greiner and Pattanayak are right?” What would it mean if Harvard law students offering representation to random low-income applicants for unemployment compensation are not increasing the number of people getting benefits, and may even be slowing down receipt of benefits for those who win?
Another way to ask this question is this: What does it mean that under some sets of circumstances, offers of legal aid don’t help people?
Here are my answers:
(1) Outreach, client-friendly intake, and supportive client services are crucial to maximizing impact of legal aid to the poor.
Of the low-income people who might seek help from the Harvard Legal Aid Bureau (which is a student clinic), or any of the professional legal-aid agencies, it is very likely that some people could handle their legal problem adequately or even well, without a law student (or lawyer).
On the other hand, there certainly is a large set of people who cannot possibly handle their cases adequately on their own. There are many, many low-income people who cannot read or write or speak coherently, who live with severe mental health problems, whose only language is not supported in the relevant adjudicative setting, whose mental or physical health or destitution prevents them from being able even to appear at the adjudicative setting, or who face other barriers to successful litigation without representation.
Right or wrong, the Greiner and Pattanayak article reminds me that it is crucial for legal aid agencies to:
- Identify which, of the millions of low-income people in crisis, are least able to resolve their legal issues on their own (and yes, this is a question ripe for further study);
- Ensure that these “most-in-need” people know how to access our services (or that social service agency staff or others in contact with them know how to reach us);
- Ensure that our intake systems (intended to be “triage” systems) effectively identify the “most-in-need” clients;
- Ensure that our services include, or are integrated with, support systems for clients who without support cannot take advantage of the legal help we are offering (people who, alone, cannot take advantage of our offers of help because they are afraid, confused, overwhelmed, or otherwise hard to serve).
(2) We need continued research, training and supervision to maximize use of best (most effective) practices.
The fact that Greiner and Pattanayak studied offers of services by law students provides a sharp reminder that there can be a wide range of effectiveness among different providers of legal help. Anyone who has watched a series of cases in court has seen that some lawyers have more impact on the judge than others. Similarly, there is variance in how well lawyers organize their work, gather facts, and research and present their cases.
In the world of elementary school teaching, the documenting and debating of best practices is well underway. Teach Like A Champion, by Doug Lemov, is an attempt to turn research into a set of best practices for teachers. The criticisms of the research will be familiar, including questions about whether the research asked the right questions or included the right samples. But the fundamental effort is right – in any area of legal work, our effectiveness will be driven in part by whether we use the right strategies and techniques. The legal aid community works hard to deploy experience-based training towards best practices. But there has been only limited formal study comparing available techniques and strategies for serving clients. Perhaps further randomized or other outcome research can help us better identify the strategies and techniques that will maximize impact for our clients.
(3) Improving an adjudicative system can increase the number of people for whom we have little impact — and that’s a good outcome!
I have heard from colleagues in Massachusetts that some years back, the unemployment compensation system was complicated and near-impossible for non-lawyers to navigate. Reform efforts by lawyers at Greater Boston Legal Services, Massachusetts Law Reform and others took lessons learned from individual representation in the unemployment system and turned that into systems reform advocacy. Over the years, the system has become more and more accessible to people representing themselves, without a lawyer.
Efforts like this, in various areas of client legal need, have been repeated by legal aid programs across the country. We fervently hope that some people can achieve justice without a lawyer, because we know that the very limited number of legal aid lawyers in the country is inadequate to serve more than a fraction of those in need. Systems advocacy is an essential task, because its success will expand the number of people who truly can achieve equal justice without the offer of a lawyer.
The Centrality of Abstracts? A Response to Bob Sable’s and David Udell’s Comments on “What Difference Representation? Offers, Actual Use, and the Need for Randomization”
posted by Jim Greiner
Our great thanks to David Udell and to Bob Sable for taking the time to comment (separately) on our paper, “What Difference Representation? Offers, Actual Use, and the Need for Randomization.” We very much appreciate the comments they have made, and as we hope is clear from the introduction and elsewhere in our paper, we have the greatest respect for the work that they do, the work that their organizations do, and the work that the legal services community to which they belong does.
Some uncomfortable aspects of writing this paper are that we find ourselves sometimes disagreeing with persons and organizations we greatly admire, being held responsible for what an advocate did not include in a legal brief, and having our study implicitly compared to the Gingrich Congress’ efforts to limit legal services funding and to a false exposé seeking to tar legal services programs. Had David’s and Bob’s criticisms concerned what we said in our paper, we might have considerable cause for regret. As we understand them, however, the lion’s share of David’s and Bob’s comments go not to the content of the paper but to the title and abstract. There are some substantive points, to which we respond below, but the primary thrust of both comments is that we have been reckless in the title and the abstract by not including and highlighting caveats that David and Bob for the most part agree that we discuss in the paper’s text.
We wonder about the apparent centrality of titles and abstracts. But conceding that point for the moment, we also wonder whether the sins of omission and selective emphasis David and Bob accuse us of committing apply to their own blog posts. By way of example, none of the following appears in either of their posts: (i) that the full title of the paper (which appears above) references the distinction between offers and actual use; (ii) that the first sentence of the abstract says that our research program is “designed to measure the effect of an offer of, and the actual use of, legal representation”; (iii) that the last sentence of the first paragraph of the abstract, after again referencing “the actual use of (as opposed to an offer of) representation,” reports that “we could come to no firm conclusion on the effect of actual use of representation on win/loss”; (iv) that the third sentence of the paper again references “both an offer of, and actual use of, representation”; and (v) that Part B of the introduction dedicates several pages to discussing the distinction. We have expanded the abstract several times already in response to concerns from legal services providers (including HLAB itself), and we will consider doing so again, but perhaps the best thing to do at this point would be simply to omit the abstract entirely. We will consider that as well.
posted by Rebecca Sandefur
Different fields of scholarship have different conventions. Those of us who participate in multiple scholarly worlds have likely had experiences leading us to believe that some conventions are useful and worthwhile, while others are pointless or actively harmful. Whether we like specific conventions or not, though, we have to play along with them if we want to contribute to the scholarly conversations where these conventions rule.
Professor Greiner and Ms. Pattanayak (hereinafter G&P) elected to publish their empirical research in a top traditional law review. Law reviews have their own peculiar conventions that differ sharply from the peculiar conventions of peer-reviewed journals in fields like statistics, sociology, law and society, or political science. Because G&P made this choice, their article is different than it would have been had they been writing for a different kind of publication venue. I would like to focus on one convention of writing for peer-reviewed social science journals that law reviews typically disregard, and draw out one consequence of this disregard.
By convention, a social scientific article starts with a literature review covering the prior work on the topic of study. The point of this exercise is to explain to the reader the significance to the field of the new empirical research that is about to be presented. Good literature reviews act as a wind-up for the paper’s own research. A good literature review gets the reader interested and motivates the paper by showing the reader that the study she is about to read fills a big intellectual gap, or resolves an important puzzle, or is incredibly innovative and cool. Thus primed, the reader then eagerly consumes the study’s findings with a contextualized understanding of their significance.
G&P’s paper inverts this usual ordering, presenting their study first, and then following with a literature review that motivates their call for more studies like their own. Does this reversal of order matter? I think so: it results in an important confusion about the differences between G&P’s empirical question and the empirical question at the center of much of the extant research literature and the policy debates about the impact of counsel.
G&P’s study investigates the impact of offers of representation by law students. The research literature has been trying to answer a slightly but importantly different question: What is the impact of representation by advocates?
As I show in an article creeping slowly through peer review, 40 years of empirical studies try to uncover evidence of whether and how different kinds of representatives affect the conduct and outcomes of trials and hearings. Some of the studies in this literature are able to compare the outcomes received by people represented by fully qualified attorneys to those received by lay people appearing unrepresented, while other studies compare the work of lawyers to other kinds of advocates who are not legally qualified (including law students). Another group of these studies lumps all sorts of advocates together, comparing groups of unrepresented lay people to groups of people represented by lawyers, social workers, union representatives, and other kinds of advocates permitted to appear in particular fora.
G&P rightly criticize these older studies for what we would today call methodological flaws, and I heartily endorse their call for better empirical research into the impact of counsel. But, not only are they and the older participants in the scholarly conversation using different methods, they are asking different questions. As G&P tell us themselves, they can’t answer the question that motivated 40 years of research, as they can come to “no firm conclusion on the actual use of representation on win/loss” (2). If their article had reviewed the literature before it presented their findings, they likely would have had a harder time asserting to the reader that “the effect of the actual use of representation is the less interesting question” (39-40).
G&P’s empirical question is also slightly to the side of the empirical question arguably at the center of contemporary policy discussions. These often turn on when lawyers specifically are necessary, and when people can receive similar outcomes with non-lawyer advocates or with different forms of “self-help” (information and assistance short of representation, sometimes including and sometimes excluding legal advice). The comparative effectiveness of alternative potential services is a central question in evidence-based policy, and the way the access to justice discussion is conducted today places at the center the question of when attorneys are necessary advocates.
G&P are absolutely right that, if we wish to fully understand any program’s impact on the public, we need information about uptake by that public. Randomizing offers of law students’ services tells us something useful and important, but something different from randomizing the actual use of lawyer representation. As a matter of research design, randomizing use is a more challenging task. Identifying the impact of use turns out to be quite hard to do, but it is still interesting and important. We learn a lot from this article, and we stand to learn more, as the present piece is the first in a series of randomized trials.
posted by Andrew Martin
posted by Ted Eisenberg
Congratulations to the authors on an excellent study that promotes and explores the importance of random assignment.
My comment supports the article’s emphasis on caution and not overgeneralizing. My focus is on the article’s Question 2: Did an offer of HLAB representation increase the probability that the claimant would prevail? My analysis of the simple frequencies (I have not delved into the regressions and ignore weights) suggests that HLAB attorneys should view the results as modest, but inconclusive, evidence that an offer of representation improves outcomes.
Based on Table 1, page 24, there are 129 No offer observations and 78 Offer observations. Ignoring weights, which I think are said not to make a huge difference, page 26 reports that .76 of claimants who received an offer prevailed in their first-level appeals, and that .72 of claimants who did not receive an offer prevailed in their first-level appeal.
So, those who were offered representation fared better; one measure of this is that they did .04/.72 x 100, or 5.6%, better. Given the high background (no-offer condition) rate of prevailing, the maximum improvement (to a 1.00 success rate) is .28/.72 x 100, or 38.9%. Another measure could be the proportionate reduction in defeat. The no-offer group was “defeated” 28% of the time. The offer group was defeated 24% of the time. The reduction in defeat is .04/.28 x 100, or 14.3%. This measure has the sometimes attractive feature that it can range from 0% to 100%. So by this measure the offeree group did 14% better than the non-offeree group, a modest improvement for the offer condition.
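For those who prefer code to hand arithmetic, here is a minimal Python sketch of the same three measures (my own back-of-the-envelope check, using only the win rates above):

    # Win rates reported on page 26 of the study (weights ignored)
    p_offer, p_no_offer = 0.76, 0.72

    relative_gain = (p_offer - p_no_offer) / p_no_offer           # improvement over the no-offer base rate
    ceiling_gain = (1.00 - p_no_offer) / p_no_offer               # maximum possible improvement
    defeat_reduction = (p_offer - p_no_offer) / (1 - p_no_offer)  # proportionate reduction in defeat

    print(f"{relative_gain:.1%}, {ceiling_gain:.1%}, {defeat_reduction:.1%}")
    # prints: 5.6%, 38.9%, 14.3%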
A concern expressed in the paper is that the result is not statistically significant. This raises the question: given the sample size, how likely was it that a statistically significant effect would be detected? Assessing this requires hypothesizing what size effect of an offer would be of societal interest. Suppose we say that lawyers should do about 10% better and move the win rate from .72 for non-offerees to .80 for offerees. This is an 11.1% improvement by the first measure and a 28.6% improvement by the second measure. Both strike me as socially meaningful, but others might specify different numbers.
We can now pose the question: given the sample size and an effect of the specified size, what is the probability of observing a statistically significant effect if one exists? I use the following Stata command to explore the statistical power of the study:
sampsi .72 .80, n1(129) n2(78), which yields the following output:
Estimated power for two-sample comparison of proportions

Test Ho: p1 = p2, where p1 is the proportion in population 1 and p2 is the proportion in population 2

Assumptions:
             alpha = 0.0500  (two-sided)
                p1 = 0.7200
                p2 = 0.8000
    sample size n1 = 129
                n2 = 78
             n2/n1 = 0.60

Estimated power:
             power = 0.1936
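For readers without Stata, the figure can be reproduced with a short Python sketch using only the standard library. This is my own illustration of the calculation, not output from the study; it implements the usual normal-approximation power formula for a two-sample comparison of proportions, with the continuity correction that sampsi applies by default, so other software or other approximations will give slightly different numbers:

    from math import sqrt
    from statistics import NormalDist

    def power_two_proportions(p1, p2, n1, n2, alpha=0.05):
        # Power of a two-sided test of Ho: p1 = p2, normal approximation
        # with continuity correction (the sampsi default).
        z = NormalDist().inv_cdf(1 - alpha / 2)
        p_pool = (n1 * p1 + n2 * p2) / (n1 + n2)  # pooled proportion under Ho
        se_null = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
        se_alt = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        correction = (1 / n1 + 1 / n2) / 2
        return NormalDist().cdf((abs(p1 - p2) - correction - z * se_null) / se_alt)

    print(power_two_proportions(.72, .80, 129, 78))          # ~0.19, matching sampsi
    print(power_two_proportions(.72, .80, 4 * 129, 4 * 78))  # ~0.71 with a sample four times as large

The second call previews the point below: quadrupling the sample brings power to roughly 0.70.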
A power of 0.19 is too low to conclude that the study was large enough to detect an effect of the specified size at a statistically significant level. If one concluded from this study that an offer of representation did not make a significant difference, there is a good chance the conclusion would be incorrect. To achieve power of about 0.70, one would need a sample four times as large as that in the study. If one thought that smaller effects were meaningful, the sample would be even more undersized.
I think my analysis so far underestimates the benefit of an offer by HLAB attorneys. Perhaps we can take .72 as a reasonable lower bound on success; even folks without an offer succeeded at that rate. But the realistic upper bound on success is likely not 1.00. Some cases simply cannot be won, even by the best lawyer in the world. Perhaps not more than 90% of cases are ever winnable, with the real winnable rate likely somewhere between .8 and .9. If the winnable rate were .8, then the offer got clients halfway there, from .72 to .76. If the real rate was higher, the offer was less effective, but not trivially so. At .9, the offer got the clients 22% closer to the ideal. The study just was not large enough to detect much of an effect at a statistically significant level.
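The same quick check works here; in this sketch (mine, with the .8 and .9 ceilings as assumptions rather than estimates), the question is how much of the assumed winnable gap the offer closed:

    p_no_offer, p_offer = 0.72, 0.76
    for ceiling in (0.80, 0.90):
        share_closed = (p_offer - p_no_offer) / (ceiling - p_no_offer)
        print(f"winnable rate {ceiling:.2f}: offer closed {share_closed:.0%} of the gap")
    # winnable rate 0.80: offer closed 50% of the gap
    # winnable rate 0.90: offer closed 22% of the gap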
So while I agree that the study provides no significant evidence that an offer increases success, my analysis (obviously incomplete) suggests that the study provides no persuasive evidence that an offer does not increase success. The study is inconclusive on this issue because of sample size.
HLAB lawyers should not feel that they have to explain away these results; the results modestly, but inconclusively, support the positive effect of an offer because they are in the right direction in a small study.
posted by Michael Heise
I assigned the Greiner & Pattanayak paper (or, more accurately, an earlier iteration of the paper) in my Empirical Legal Studies Colloquium this semester at Cornell. Among the many issues that animated my students was the paper’s title, particularly its focal point: “What Difference Representation?”
My students noted the obvious: notwithstanding the title's tilt, the authors make clear (indeed, painfully clear) their wish to dwell on the effects of an offer of representation rather than on the efficacy of actually using legal representation. Moreover, the authors assert that "the effect of actual use of representation is the less interesting question" (emphasis added) (pp. 39-40) while investing considerable energy in explaining to readers "why offers are relevant" (e.g., pp. 10-12).
To be sure, the authors are correctly mindful of and sensitive to important data and research design limitations. As they note repeatedly, “the offer, not actual use of, representation was randomized” (e.g., p. 41). Although the ‘effect of actual use of representation’ question is, as the paper makes clear, “challenging to answer” (p. 41), it does not follow that it is also, therefore, a “less interesting question” (pp. 39-40).
Simply put, the paper does not persuade on this point. If anything, the degree to which the authors felt it necessary to explain why “offers are relevant” (and, by implication, interesting) erodes their argument. Moreover, if, as the authors claim, the use of representation is the less interesting question, then why make it the clear focal point of the paper’s title? While I am not insensitive to the need to “market” one’s scholarship and understand that titles can be pressed into such service (especially if one immediate target audience includes student law review editors), my sense is that this title for this paper contributes unnecessary drag.
posted by David Udell
David Udell is the Executive Director of the National Center for Access to Justice and a Visiting Professor from Practice at Cardozo Law School.
In my line of work, I have seen many efforts in the political realm to shut down civil legal services for the poor, and have continually worked to combat such efforts. In 1996, when the Gingrich Congress barred federally funded legal services lawyers from bringing class actions on behalf of the poor, I left Legal Services for the Elderly in order to finish a lawsuit on behalf of widows and widowers who were suing to compel the United States Treasury to fix its practices for replacing stolen Social Security payments. When I later moved to the Brennan Center for Justice, I helped bring a lawsuit against the rules that barred legal services lawyers from participating in such class actions, I filed another lawsuit against similar rules that barred law school clinic students from bringing environmental justice cases in Louisiana, and I built a Justice Program at the Brennan Center dedicated to countering such attacks on the poor and on their lawyers.
In their March 3, 2011 draft report, What Difference Representation? Offers, Actual Use, and the Need for Randomization (“the Study”), authors D. James Greiner & Cassandra Wolos Pattanayak are right about the importance of developing a solid evidence base – one founded on methodologies that include randomization – to establish what works in ensuring access to justice for people with civil legal cases. They are right again that in the absence of such evidence, both the legal aid community and its critics are accustomed to relying on less solid data. And they are smart to “caution against both over- and under-generalization of these study results.” But, unfortunately, the bare exhortation to avoid over- and under-generalization is not sufficient in the highly politicized context of legal services.
While the authors obviously have no obligation to arrive at a particular result, they can be expected to avoid statements with a high probability of misleading, especially given that much of the Study’s audience will be unable to evaluate the authors’ methodology and findings. Because of the Study’s novelty and its appearance in a non-scientific journal, it will be relied on to analyze situations where it does not apply, and by people with no background in social science research; it will also be given disproportionate weight because so few comparable studies exist to judge it against. These factors, combined with the politicization of legal services, make it crucial that the authors’ assertions, particularly in the sections most likely to be seen by lay readers (the title and the abstract), not extend beyond what the findings justify.
March 28, 2011 at 8:04 am Posted in: Civil Rights, Empirical Analysis of Law, Law Practice, Law Rev (Yale), Law School, Law School (Law Reviews), Symposium (What Difference Representation), Uncategorized
posted by Bob Sable
I am the Executive Director of Greater Boston Legal Services, the primary provider of civil legal services to poor people in the greater Boston area. My program and I have a great stake in assuring that our limited resources are used where they can be most effective. Indeed we are participating with Professor Greiner in a study of the impact of our staff attorneys’ representation in defense of eviction cases. My comments refer to the draft dated February 12, 2011.
It is important with any study, however, to know what it concludes and what it does not. For instance, and most importantly, the study concedes on page 43 that it could draw no conclusions about the effect on outcomes for claimants actually receiving representation, as opposed to merely being offered it. Thus, this study should be recognized for what it is: a limited analysis of the somewhat abstract concept of “offering” assistance. Indeed, the study wisely cautions against drawing any conclusions from it about the usefulness of free legal assistance, or even about the usefulness of offers of representation in unemployment cases in general (page 47).
I feel some changes are necessary to avoid confusion about (and misuse of) this study’s conclusions, or lack thereof, as to the effect of representation itself, as opposed to the mere offer of it. Given that the study’s principal conclusions concern an offer of representation and not actual representation, a more accurate title would be “What Difference an Offer of Representation?” The very first sentence of the Introduction on page 5 currently reads, “Particularly with respect to low-income clients in civil cases, how much of a difference does legal representation make?” Only a footnote explains that the study looks at offers as well as effects, and it is much later in the study (page 32) that we learn no conclusions were reached at all as to the effect of representation. Similarly, the conclusion (“Where Do We Go From Here?”) states that “the present study primarily concerned representation effects on legal outcomes affecting the potential client’s pecuniary interests.”
I am concerned also that the results reported in the study with respect to offers of representation by HLAB are of little utility at best and misleading at worst. This is because nearly half of the control group were represented by counsel and, more significantly, probably that many or more in the control group received an offer of free representation from my program or from another provider of free legal services in unemployment cases. To make an analogy to the medical world: suppose there were a Pfizer drug trial in which 50% of Pfizer’s control group were offered the exact same medication from Merck. Wouldn’t that cast serious doubt on the outcome of the study? There is no mention of this 49% in either the abstract or the introduction, which, unfortunately, are all that many readers will read.