
Category: Symposium (What Difference Representation)


Greiner and Pattanayak: The Sequel

In a draft essay, Service Delivery, Resource Allocation and Access to Justice: Greiner and Pattanayak and the Research Imperative, Tony Alfieri, Jeanne Charn, Steve Wizner, and I reflect on Jim Greiner and Cassandra Pattanayak’s provocative article reporting the results of a randomized controlled trial evaluating legal assistance to low-income clients at the Harvard Legal Aid Bureau. (The Greiner and Pattanayak article was the subject of a Concurring Opinions symposium last March.) Studying the outcomes of appeals from initial denials of unemployment insurance benefit claims, Greiner and Pattanayak asked: what difference does legal representation make? Their answer is that “an offer of HLAB representation had no statistically significant effect on the probability that a claimant would prevail, but that the offer did delay the adjudicatory process.” That is, not only was an offer of legal assistance immaterial to the case outcome, but it may also have harmed clients’ interests.

The Greiner and Pattanayak findings challenge our intuition, experience and deeply-held professional belief that lawyer representation of indigent clients in civil matters is fundamental to the pursuit of justice. Our first reaction is that the study must have fatal conceptual or methodological flaws – the researchers studied the wrong thing in the wrong way. Even when we learn that the study is credible and well designed, we doubt that this kind of research is a worthwhile use of our time or money relative to serving needy clients. Finally, and perhaps most importantly, we worry that the published results will only serve as fodder for the decades-long political assault on legal services for the poor.

If replicated across venues, however, studies like Greiner and Pattanayak’s can tell us a great deal about individual representation, program design, and systemic access to justice questions. In fact, we cannot make genuine progress in any of these areas – much less marshal the case for more robust legal aid investments and the right to counsel in some civil cases – without better evidence of when, where, and for whom representation makes a difference. Fortunately, developments in law schools and the professions, together with a growing demand for evidence-driven policymaking, provide support, infrastructure, and incentive for such research. For these reasons, we urge legal services lawyers and clinical law professors to collaborate in an expansive, empirical research agenda.

 


What Difference Representation: Coda

Thanks to all our participants for an amazing symposium. You can see all twenty-two (!) posts here.  Thanks especially to Jim and Cassandra for being fantastically good sports in subjecting their work to public scrutiny and for replying so conscientiously to everyone’s comments. Richard Zorza put it best: “[T]his study, regardless of, or perhaps because of, its controversial nature, will be looked back at as a critical event in the history of access to justice.”  If he’s right, I’m really proud that CoOp could serve as a platform to debate the article, and the shape of things to come.


More on Pet Notions

I write as a legal services lawyer and clinician who practiced in a neighborhood general law practice for low- and moderate-income people for over three decades and for twenty-seven years directed a large clinical practice site at Harvard Law School.  The issue of case selection/triage, to which this paper is relevant, was always a challenge.  We had inklings that we needed more rigorous approaches and even had a social scientist on staff for a few years, but we were never able to conceive and carry out a serious study, let alone one as sophisticated as the G&P randomized trial.  I wholeheartedly welcome this effort and the authors’ challenge to engage in rigorous scrutiny of the actual workings of legal services delivery in the U.S.  Serious empirical work goes on as a matter of course in peer nations, all of which have been successful in obtaining and holding substantially greater resources than in the U.S.  I don’t assume a connection between the research programs and better funding (that hasn’t been studied), but these programs know a lot more about what they are producing and have refined delivery approaches and policies based on what they have learned.

Regarding the posts expressing concern that data and studies may be used improperly: I agree with Richard Zorza that we can’t be sure this won’t occur, but I believe the risks are much greater if we continue with virtually no serious effort to collect good outcome data, comparatively study different approaches to service delivery, and develop productivity and efficiency standards as well as good measures of quality.  Because we don’t have even a decent data system, we cannot assure that we are making the best use of the resources available.  What if thousands more people could be effectively helped if programs were more efficient, targeted resources more precisely, and leveraged expertise to maximize both cost and outcome effectiveness?  The result would be the same as if we had substantially more resources.

Having located myself firmly in the “we need more of just this sort of high-quality research” camp, here are some thoughts, in no particular order, about the value – I would say the necessity – of a bold, broad empirical scrutiny of “our fondest pet notions” about our work.  As I edit, I see the post is getting long, but I teach until 9 pm tonight, so I may just have time to get this in before the symposium closes!

  • I am encouraged by evidence that claimants succeed via self-representation or with information or limited advice and assistance.  Advocacy resources can then be directed to matters where advice or self-help is sub-optimal.  I remain attached to the early goals of client activation and empowerment, and self-help may play a role here – a possibility for further study!
  • I understand the offer/representation distinction in the study and recognize that win/lose at hearing is not the only measure of success, but a study should be assessed on what it purports to measure, not on everything it could have measured.  I support a broad and long-term research agenda, recognizing that good studies help us frame issues for further study.
  • The issue of the quality of work by HLAB students comes up in several posts.  My experience is that well-supervised students can produce high-quality work, and we certainly should not treat supervised students as a proxy for lower-quality work.  We often juxtapose a “lawyer” against a student or a lay advocate, but bar admission by itself does not assure quality.  A lawyer just admitted to the bar with little or no experience, practicing without supervision (entirely feasible in the U.S.), might be much less effective than an HLAB student or an experienced lay advocate.  In fact, a substantial UK study produced evidence that lay advocates did higher-quality work than solicitors.
  • My experience suggests that a high level of advocate expertise and experience is decisive in the challenging or close-call cases – the ones “on the bubble” that could go either way.  If we could reliably identify these matters (almost always a subset of those that go to hearing), we could allocate expert resources accordingly.  We don’t want high-expertise/high-cost resources on less challenging cases – these are good for the rookies – and we don’t want rookies on the really hard cases unless they are teamed with an expert.  In other words, we need to leverage experts and maximize the use of students, less experienced advocates, and pro bono volunteers who need training, to achieve service that is both outcome- and cost-effective.
  • On random case taking – G&P make clear that r.c.t.s can incorporate screens for merit, and I find their response entirely persuasive on this issue.  However, aside from cases with no merit (frivolous claims), which must always be declined, I believe that at some point we should test the assumptions underlying screening criteria, which, I assume, are based on some conception of relative merit.  Can advocates accurately predict relative merit?  Is the goal to screen out the strongest cases (we don’t want to risk losing), or the weakest (too improbable to be worth the resources), or to take the middling cases?  Do screens draw lines this finely?  Is merit entirely a function of the legal strength of the claim, or does it include some notion of the relative neediness of the claimant?  I recognize the sincere convictions underlying screening criteria, but are these “pet notions,” or can they be backed up by credible evidence – which, I think, brings us back to r.c.t.s.
  • Further on the randomness of offers of representation in an r.c.t., as compared to the case screening and offer criteria in use in various programs – my experience suggests that we avoid confronting significant randomness in the existing system.  Intake hours, days of service, and periodic closing of “intake” (and thus of direct contact with those seeking service) shut people out of the intake stream regardless of merit, however measured.  Because those shut out are anonymous to the providers, this arbitrary denial of any opportunity for assistance goes unremarked.

“Best Practices” for studies of legal aid – more thoughts

A few suggested additions to Richard Zorza’s proposed “best practices” for randomized study of legal services:

(1)    Remember the distinction between what is measurable and what is important. Broadly speaking, legal services programs are trying to increase access to justice (make voices heard), solve individual or group problems (win cases), and change the legal environment so that poor people’s lives will be better (change laws or systems).  Our work on these efforts is interwoven; for example, we take individual cases that will both provide access to the system and fix a person’s problem, and in that process we gain important information about what may be broken in the larger system and where solutions may lie.

So: a large volume practice in an area of law that has a stream of comparable cases can be studied through randomization.  On the other hand, efforts to change laws or systems, and innovative start-up projects, must be evaluated through other means.

The corollary here is:  Be willing to publicly state, “This is a type of work that is susceptible to this research tool; there are other valuable types of work that must be studied with other tools.”

(2)    Be clear about what is (and isn’t) being studied. This may be a warning primarily aimed at the legal aid providers.  Over time, we will want to learn how much impact our scarce resources can have

  • in various areas of legal work
  • in different jurisdictions
  • for clients with different fact patterns, personal skills, age, linguistic abilities, mental health or physical characteristics
  • using a variety of different intervention levels and strategies (e.g., advice vs. limited representation vs. long-term representation)
  • and employing a variety of different personal advocacy skills (e.g., confrontational vs. compromising, high-level listening skills vs. high-level speaking skills).

We will need patience and persistence.  Over time our services will be enhanced by exploring all of these questions (and more!).  But we will get garbage results if we try to do everything at once.

The Greiner and Pattanayak HLAB study, together with all the commentary in this symposium, illuminates how much work we have to do.  Did the Harvard students have no impact?  (One commentator disagrees, based on the data.)  Could a change in client selection enhance the impact for the clients served?  A change in case strategy?  A change in law student advocacy style or skills?

We are so early in this learning process that for now, each study will primarily highlight the next set of questions to be asked.

(3)    Be aware of the costs of measurement.

Measurement takes time. When we say “legal aid to the poor is a scarce resource,” we mean that there are nowhere near enough people-hours to do all that we know justice requires.  Planning and carrying out a useful measurement (a “next step” in the learning process described above) takes time away from other activities.  We will have to think through, design, and set up the study.  We will have to explain to staff, to the communities we serve, and to funders what we are doing and why.  We will be spending that much less time serving clients or raising money to serve clients.

At certain points, measurement may arm opponents of legal services. Others have remarked on this; as someone who has done a lot of work to present the case for legal services, I’ll just say both that the danger is real and that it should not be over-emphasized.  People who don’t like legal services to the poor will use data against us when they can.  But our genuine effort to maximize the impact of scarce resources will encourage our supporters.  And we need to remember that data is only one of the types of description we should be providing about legal services.  The individual stories of our clients and the testimonials of the bar, the bench, and community supporters are all part of the larger message.  Data is an important part – but only a part – of that broader message.

Similarly, measurement may over-emphasize aspects of the work that can be measured.  This is a cost of measurement, but one that can also be countered.  As discussed above, it is quite important for everyone involved in this endeavor to keep in mind that while randomized study may teach us important things about how best to serve clients, that does not mean that the only things important to clients are those which can be (or have been) measured.

(4)    Be clear that even findings of “no distinction between groups” are not necessarily findings of “no effect.” Two examples to illustrate this point:

First, imagine a hypothetical study of a legal aid program – half the eligible clients are randomly turned away.  Now assume that all of the clients “turned away” have on their own applied for and gotten assistance from a second legal aid program.  While designed as a study of the first legal aid program, in all practical terms this has now become a comparison study of two legal aid programs.  If the two programs provide identical assistance, clients in the study program would see no benefit compared to the clients turned away.  But if in fact people outside the study, unrepresented by either program, do much worse, there remains a real effect of the study program’s services that is not measured by the study.  (To be clear – this example is brought to mind by aspects of the Greiner and Pattanayak study, in which some clients turned away received other assistance, but it is not an accurate description of that study’s participants – it is just a hypothetical to illustrate that a control group is not necessarily representative of the broadest class.)
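For readers who want to see the arithmetic behind this hypothetical, here is a minimal simulation sketch in Python. The win rates and group sizes are invented purely for illustration – none of these numbers come from the Greiner and Pattanayak data.

```python
# Hypothetical illustration only: the win rates below are assumptions, not study data.
import random

random.seed(0)

WIN_RATE_WITH_AID = 0.70   # assumed win rate when represented by either legal aid program
WIN_RATE_NO_AID = 0.40     # assumed win rate with no assistance at all
N = 10_000                 # simulated clients per group

def win_fraction(win_rate, n):
    """Simulate n cases and return the fraction won at the given win rate."""
    return sum(random.random() < win_rate for _ in range(n)) / n

# Treatment group: served by the study program.
treated = win_fraction(WIN_RATE_WITH_AID, N)

# Control group: "turned away," but every client finds an identical second program.
control = win_fraction(WIN_RATE_WITH_AID, N)

# What the study measures: essentially zero difference between the two groups.
print(f"measured difference (treated vs. control): {treated - control:+.3f}")

# What the study cannot see: the gap relative to having no assistance at all.
no_aid = win_fraction(WIN_RATE_NO_AID, N)
print(f"unmeasured effect vs. no assistance:       {treated - no_aid:+.3f}")
```

Run as written, the measured difference hovers near zero while the unmeasured gap is roughly thirty percentage points – the “no effect” finding reflects the contaminated comparison, not the absence of value.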

Second, take the very real world of housing courts in Connecticut.  I am told by colleagues that 35 or 40 years ago, before there was a broad legal aid presence in housing courts, landlords routinely ejected poor people without following the laws.  When legal aid started a high-volume housing practice, legal aid lawyers stopped landlords from locking people out without process, stopped courts from evicting poor people who had a right to stay, and in some cases got money from landlords for violations of the law.  Landlords are now much less likely to illegally eject tenants; a study conducted now might find little difference in “ability to stay” between tenants who have a lawyer and tenants who don’t, because the landlord doesn’t know who has (or will have) a lawyer.  But this lack of a randomized difference would not necessarily mean that the continued housing practice is not having an impact.  If legal aid completely stopped representing tenants, it’s likely that illegal practices by landlords would re-emerge.

(5)    Be willing to publicly and forcefully debunk misleading uses of your data. This is a plea from those “in the trenches” to those in academia:  when your data is misused in a manner that could harm support for legal aid to the poor, the protestations of legal aid providers may not be believed by those hearing the debate.  After all, we are not economists or statisticians, and we have a vested interest in the outcome.  The academics will be the credible voice to publicly tell funders and government decision-makers, “Those opponents of legal services are misrepresenting the truth when they say that this study suggests that poor people don’t need, or shouldn’t get, a lawyer.  Indeed, we engage in this research because we believe that by studying legal services to the poor we can help this small and dedicated group be as effective as it can be, for people who are desperately in need of that help.”


Randomization Uber Alles?

Jim and Cassandra write:

“To Dave, we say that our enthusiasm for randomized studies is high, but perhaps not high enough to consider a duty to randomize among law school clinics or among legal services providers.  We provided an example in the paper of practice in which randomization was inappropriate because collecting outcomes might have exposed study subjects to deportation proceedings.  We also highlighted in the paper that in the case of a practice (including possibly a law school clinic) that focuses principally on systemic change, randomization of that practice is not constructive.  Instead, what should be done is a series of randomized studies of an alternative service provider’s practice in that same adjudicatory system; these alternative provider studies can help to assess whether the first provider’s efforts at systemic change have been successful.”

I meant to cabin my argument to law school clinics.  And I do understand that there may be very rare cases where collecting outcomes will hurt clients (such as deportation).  But what about a clinic that focuses on “systemic change”?  Let’s assume that subsidizing such a clinic would be a good thing for a law school to do (or, to put it another way, that we think it is a good idea for current law students to incur more debt so that society gets the benefit of the clinic’s social agitation).  Obviously, randomization of client outcomes would be a terrible fit for measuring the success of such a clinic.  It would be precisely the kind of lamppost/data problem that Brian Leiter thinks characterizes much empirical work.

But that doesn’t mean that randomization couldn’t be useful in measuring other kinds of clinic outcomes.  What about randomization in the allocation of law student “employees” to the clinic as a way to measure student satisfaction with the “learning outcomes”?  Or randomization of intake, utilizing different client contact techniques, as a way of measuring client satisfaction with their representation (or their feelings about the legitimacy of the system)?  One thing that the commentators in this symposium have tried to emphasize is that winning and losing aren’t the only outputs of the market for indigent legal services.  Controlled study of the actors in the system needn’t be constrained in the way that Jim and Cassandra’s reply to my modest proposal to mandate randomization suggests.


Randomization, Intake Systems, and Triage

Thanks to Jim and Cassandra for their carefully constructed study of the impact of an offer from the Harvard Legal Aid Bureau for representation before the Massachusetts Division of Unemployment Assistance, and to all of the participants in the symposium for their thoughtful contributions.  What Difference Representation? continues to provoke much thought, and as others have noted, will have a great impact on the access to justice debate.  I’d like to focus on the last question posed in the paper — where do we go from here? — and tie this in with questions about triage raised by Richard Zorza and questions about intake processes raised by Margaret Monsell.  The discussion below is informed by my experience as a legal service provider in the asylum system, a legal arena that the authors note is strikingly different from the unemployment benefits appeals process described in the article.

My first point is that intake processes vary significantly between different service providers offering representation in similar and different areas of the law.  In my experience selecting cases for the asylum clinics at Georgetown and Yale, for example, we declined only cases that were frivolous, and at least some intake folks (yours truly included) preferred to select the more difficult cases, believing that high-quality student representation could make the most difference in these cases.  Surely other legal services providers select for the cases that are most likely to win, under different theories about the most effective use of resources.  WDR does not discuss which approach HLAB takes in normal practice (that is, outside the randomization study).  On page twenty, the study states that information on financial eligibility and “certain additional facts regarding the caller and the case” are put to the vote of HLAB’s intake committee.  On what grounds does this committee vote to accept or reject a case?  In other words, does HLAB normally seek the hard cases, the more straightforward cases, some combination, or does it not take the merits into account at all?



How Much Enthusiasm for Randomized Trials? A Response to Kevin Quinn and David Hoffman

We thank Kevin Quinn and David Hoffman for taking the time to comment on our paper.  Again, these are two authors whose work we have read and admired in the past.

Both Dave and Kevin offer thoughts about the level of enthusiasm legal empiricists, legal services providers, and clinicians should have for randomized studies.  We find ourselves in much but not total agreement with both.  To Kevin, we suggest that there is more at stake than just finding out whether legal assistance helps potential clients.  In an era of scarce legal resources, providers and funders have to make allocation decisions across legal practice areas (e.g., should we fund representation for SSI/SSDI appeals, for unemployment appeals, or for summary eviction defense?).  That requires more precise knowledge about how large representation (offer or actual use) effects are – how much bang for the buck.  Perhaps even more importantly, scarcity requires that we learn how to triage well; see Richard Zorza’s posts here and the numerous entries in his own blog on this subject.  That means studying the effects of limited interventions.  Randomized trials provide critical information on these questions, even if one agrees (as we do) that in some settings, asking whether representation (offer or actual use) helps clients is like asking whether parachutes are useful.

Thus, perhaps the parachute analogy is inapt, or better, it requires clarification:  we are in a world in which not all who could benefit from full-service parachutes can receive them.  Some will have to be provided with rickety parachutes, and some with little more than large blankets.  We all should try to change this situation as much as possible (thus the fervent hope we expressed in the paper that funding for legal services be increased).  But the oversubscription problem is simply enormous.  When there isn’t enough to go around, we need to know what we need to know to allocate well.  Meanwhile, randomized studies can also provide critical information on the pro se accessibility of an adjudicatory system, which can lay the groundwork for reform.

To Dave, we say that our enthusiasm for randomized studies is high, but perhaps not high enough to consider a duty to randomize among law school clinics or among legal services providers.  We provided an example in the paper of practice in which randomization was inappropriate because collecting outcomes might have exposed study subjects to deportation proceedings.  We also highlighted in the paper that in the case of a practice (including possibly a law school clinic) that focuses principally on systemic change, randomization of that practice is not constructive.  Instead, what should be done is a series of randomized studies of an alternative service provider’s practice in that same adjudicatory system; these alternative provider studies can help to assess whether the first provider’s efforts at systemic change have been successful.

Our great thanks to both Kevin and Dave for writing, and (obviously) to Dave (and Jaya) for organizing this symposium.


What Difference Representation: Clinical Trials

In What Difference Representation: Offers, Actual Use, and the Need for Randomization, Jim Greiner and Cassandra Wolos Pattanayak present the results from a randomized controlled study that was designed to assess the efficacy of an offer of representation from the Harvard Legal Aid Bureau (HLAB) – a student-run provider of legal services that is part of the clinical education program at Harvard. There is a great deal to like about this article: it is methodologically rigorous, the data analysis is careful and transparent, its main points are clearly argued and explained, and both the specific results dealing with the efficacy of an offer of HLAB representation in unemployment benefits cases and the broader argument about the need for randomized controlled trials to better understand the effects of representation in civil proceedings are provocative and relevant.  The authors are clear about what they are doing and what they are not doing, and they offer good advice about how one might design additional studies to assess interesting questions that are outside the scope of their own study.

It would be easy to spend more time and space than this blog post permits to discuss everything that I like about this article. I am not going to do that.

Instead, I’d like to use this post to briefly mention some issues that arise when one attempts to seriously evaluate the efficacy of legal representation – particularly free legal assistance such as that offered by law school clinical education programs.


Avoiding the “Shut Down Effect” from Uncertain Research Results

Lurking behind much of the debate in this symposium is anxiety that negative findings about access to justice services will strengthen and facilitate attempts to reduce the resources devoted to those services.

While it would be impossible to rebut the claim that this might happen, that cannot be an argument against conducting or reporting research.  On the contrary, it has to be an argument for more and better research.  Having been involved in access to justice for decades, I am all too aware that it never seems to be the right time to make ourselves vulnerable — so the answer has to be always, because that is the only way to gain credibility.

But this does all raise the question of how research should be structured, analyzed, and, particularly, reported in order to minimize the risk that results are presented in ways inconsistent with the ultimate findings of the research, in all their complexity and subtlety.

Some thoughts:

  • Generality of Reporting.  The headings, abstract, etc., must be structured to accurately reflect the generality of the research.
  • The Reporting of Context. Studies should be very careful to describe accurately the context in which treatment is provided.
  • Randomization/Observation.  Where studies are not randomized, that must be very clearly reported, and the risk of selection bias must be loudly proclaimed, not relegated to a footnote.
  • Explanation of Statistical Significance. The issues of statistical significance must, to the extent possible, be explained as clearly as possible, in lay terms.  The failure to do so, when it occurs, makes both overstatement and unfair critique easier.
  • Lay Version. Research should be made available in a lay summary version, without the complexities but with the details and cautions.  This will reduce the risk of the results being oversimplified by the media and/or others.
  • Vigilance as to Over/Under Generalization. The text should not only be accurate as to the level of generality of the research, but should also be explicit as to the kinds of generalizations that might erroneously be drawn from it. (This makes it easier to rebut overdrawn conclusions made by legal or political opponents.)

I would very much appreciate additions to such best practices.  There are surely many more in the social sciences.  Those suggested here, however, are less about avoiding error in the research itself, and more about avoiding error in its reporting or its use by others.


What Difference Representation: Case Selection and Professional Responsibility

Thanks for the invitation to participate in this interesting and provocative symposium.

I’m a legal services attorney in Boston. My employer, the Massachusetts Law Reform Institute (MLRI), has as one of its primary tasks connecting the state’s field programs, where individual client representation occurs, with larger political bodies, including legislatures and administrative agencies, where the systemic changes affecting our clients most often take place. (The legal services programs in many states include organizations comparable to MLRI; we are sometimes known by the somewhat infelicitous name “backup centers.”) Among the programs with which MLRI is in communication is the Harvard Legal Aid Bureau, and I would like to take this moment to acknowledge the high regard in which I and my colleagues hold their work.

The substantive area of my work is employment law. It is no surprise that during the past three years of our country’s Great Recession, the importance of the unemployment insurance system for our clients has increased enormously and, consequently, it has occupied a greater portion of my time than might otherwise have been the case.

I’m not a statistician nor do I work in a field program representing individual clients, so my comments will not address in any detail the validity of the HLAB study or the conclusions that may properly be drawn from it. As one member of the community of Massachusetts legal services attorneys, however, I have an obvious interest in the way the study portrays us: we are variously described as self-protective, emotional, distrustful of being evaluated, and reluctant to the point of perverseness in participating in randomized studies of the kind the authors wish to conduct. Our resistance in this regard has itself already been the subject of comment here. Happily, it is not often that one looks into what seems to be a mirror and sees the personage looking back wearing a black hat and a snarl. But when it does happen, it’s hard to look away without some effort at clarification. So I will devote my contribution to the symposium to the topic of the perceived reluctance of the legal services community to cooperate in randomized trials. It goes without saying, but the following thoughts are those of only one member of a larger community.

My understanding is that in the HLAB study, no significant case evaluation occurred prior to randomization. Many of us in legal services view with trepidation the idea of ceding control over case selection to the randomization process. Others have more sanguine views, either because they assume that randomization is already taking place or because they believe that it ought to be. For example, in his comments from a few months ago, Dave Hoffman was working under the assumption that randomizing client selection would not change an agency’s representation practices at all, and on that basis, he criticized resistance to randomized control trials as “trying to prevent research from happening.”

The authors of the study are enthusiastic about randomization not only because of its scientific value in statistical research but also because it can help to solve one of the thorniest problems facing legal services programs – the scarcity of resources as compared to the demand.  As long as the demand for legal assistance outstrips the supply, Professor Greiner has said, randomization – a roll of the dice or the flip of a coin – is an easy and appropriate way to decide who gets representation and who does not.

I believe it’s erroneous to assume that randomization would not change representation practices, at least in the area of legal services in which I work. I also acknowledge that it is possible, at least theoretically, for all the cases in a randomized control trial to have met the provider’s standards for representation. This would provide some measure of reassurance. However, in one area of law, immigration asylum cases, the authors have concluded that time constraints make such an effort unworkable.
