Author: Steve Eppler-Epstein


“Best Practices” for studies of legal aid – more thoughts

A few suggested additions to Richard Zorza’s proposed “best practices” for randomized study of legal services:

(1)    Remember the distinction between what is measurable and what is important. Broadly speaking, legal services programs are trying to increase access to justice (make voices heard), solve individual or group problems (win cases), and change the legal environment so that poor people’s lives will be better (change laws or systems).  Our work on these efforts is interwoven; for example, we take individual cases that will both provide access to the system and fix a person’s problem, and in that process we gain important information about what may be broken in the larger system and where solutions may lie.

So: a large volume practice in an area of law that has a stream of comparable cases can be studied through randomization.  On the other hand, efforts to change laws or systems, and innovative start-up projects, must be evaluated through other means.

The corollary here is:  Be willing to publicly state, “This is a type of work that is susceptible to this research tool; there are other valuable types of work that must be studied with other tools.”

(2)    Be clear about what is (and isn’t) being studied. This may be a warning primarily aimed at the legal aid providers.  Over time, we will want to learn how much impact our scarce resources can have

  • in various areas of legal work
  • in different jurisdictions
  • for clients with different fact patterns, personal skills, age, linguistic abilities, mental health or physical characteristics
  • using a variety of different intervention levels and strategies (e.g., advice vs. limited representation vs. long-term representation)
  • and employing a variety of different personal advocacy skills (e.g., confrontational vs. compromising, high-level listening skills vs. high-level speaking skills).

We will need patience and persistence.  Over time our services will be enhanced by exploring all of these questions (and more!).  But we will get garbage results if we try to do everything at once.

The Greiner and Pattanayak HLAB study, and all the commentary in this symposium, illuminate how much work we have to do.  Did the Harvard students have no impact?  (One commentator disagrees based on the data.)  Could a change in client selection enhance the impact for the clients served?  A change in case strategy?  A change in law student advocacy style or skills?

We are so early in this learning process that for now, each study will primarily highlight the next set of questions to be asked.

(3)    Be aware of the costs of measurement.

Measurement takes time. When we say “legal aid to the poor is a scarce resource,” we mean that there are nowhere near enough people-hours to do all that we know justice requires.  Planning and carrying out a useful measurement (a “next step” in the learning process described above) takes time away from other activities.  We will have to think through, design, and set up the study.  We will have to explain to staff, to the communities we serve, and to funders what we are doing and why.  We will be spending that much less time serving clients or raising money to serve clients.

At certain points, measurement may arm opponents of legal services. Others have remarked on this; as someone who has done a lot of work to present the case for legal services, I’ll say that the danger is real but should not be over-emphasized.  People who don’t like legal services to the poor will use data against us when they can.  But our genuine effort to maximize the impact of scarce resources will encourage our supporters.  And we need to remember that data is only one of the types of description we should be providing about legal services.  The individual stories of our clients and the testimonials of the bar, the bench, and community supporters are all part of the larger message.  Data is an important part – but only a part – of that broader message.

Similarly, measurement may over-emphasize aspects of the work that can be measured.  This is a cost to measurement, but one that can also be countered.  As discussed above, it is quite important for everyone involved in this endeavor to keep in mind that while randomized study may teach us important things about how best to serve clients, that does not mean that the only things important to clients are those which can be (or have been) measured.

(4)    Be clear that even findings of “no distinction between groups” are not necessarily findings of “no effect.” Two examples to illustrate this point:

First, imagine a hypothetical study of a legal aid program – half the eligible clients are randomly turned away.  Now assume that all of the clients “turned away” have on their own applied for and gotten assistance from a second legal aid program.  While designed as a study of the first legal aid program, in all practical terms this has now become a comparison study of two legal aid programs.  If the two programs provide identical assistance, clients in the study program would see no benefit compared to the clients turned away.  But if in fact people outside the study unrepresented by either program do much worse, there remains a real effect of the study program’s services that is not measured by the study.  (To be clear – this example is brought to mind by aspects of the Greiner and Pattanayak study, in which some clients turned away received other assistance, but it is not an accurate description of that study’s participants – it is just a hypothetical to illustrate that a control group is not necessarily representative of the broadest class.)

Second, take the very real world of housing courts in Connecticut.  I am told by colleagues that 35 or 40 years ago, before there was a broad legal aid presence in housing courts, landlords routinely ejected poor people without following the laws.  When legal aid started a high volume housing practice, legal aid lawyers stopped landlords from locking people out without process, stopped courts from evicting poor people who had a right to stay, and in some cases got money from landlords for violations of the law.  Landlords are now much less likely to illegally eject tenants; a study conducted now might find little difference in “ability to stay” between tenants who have a lawyer and tenants who don’t, because the landlord doesn’t know who has (or will have) a lawyer.  But this lack of randomized difference would not necessarily mean that the continued housing practice is not having an impact.  If legal aid completely stopped representing tenants, it’s likely that illegal practices by landlords would re-emerge.

(5)    Be willing to publicly and forcefully debunk misleading uses of your data. This is a plea from those “in the trenches” to those in academia:  when your data is misused in a manner that could harm support for legal aid to the poor, the protestations of legal aid providers may not be believed by those hearing the debate.  After all, we are not economists or statisticians, and we have a vested interest in the outcome.  The academics will be the credible voice to publicly tell funders and government decision-makers, “Those opponents of legal services are misrepresenting the truth when they say that this study suggests that poor people don’t need, or shouldn’t get, a lawyer.  Indeed, we engage in this research because we believe that by studying legal services to the poor we can help this small and dedicated group be as effective as it can be, for people who are desperately in need of that help.”


What can we learn if we assume Greiner and Pattanayak are right?

When legal aid providers read “What Difference Representation? Offers, Actual Use, and the Need for Randomization,” we immediately start to raise questions.  Appropriately, we note that there’s a vast difference between a busy law student handling what may be their first case and an experienced professional legal aid lawyer.  We note that, apparently, some significant number of the people randomly turned away by the Harvard law school clinic were then advised or represented by Greater Boston Legal Services.

There is also a broader question, which I will explore in a subsequent post:  What is the broader context for randomized study of the impact of legal aid — what kinds of things can we learn from randomized study, and what impact questions can’t be answered through randomization?

As others have written, Greiner and Pattanayak may not be right, or their conclusions may be overstated or unfounded.  But legal aid providers can have important conversations that start here:  “What if Greiner and Pattanayak are right?” What would it mean if Harvard law students offering representation to random low-income applicants for unemployment compensation are not increasing the number of people getting benefits, and may even be slowing down receipt of benefits for those who win?

Another way to ask this question is this:  What does it mean that under some sets of circumstances, offers of legal aid don’t help people?

Here are my answers:

(1) Outreach, client-friendly intake, and supportive client services are crucial to maximizing impact of legal aid to the poor.

Of the low-income people who might seek help from the Harvard Legal Aid Bureau (which is a student clinic), or from any of the professional legal aid agencies, it is very likely that some could handle their legal problem adequately, or even well, without a law student (or lawyer).

On the other hand, there certainly is a large set of people who cannot possibly handle their cases adequately on their own.  There are many, many low-income people who cannot read or write or speak coherently, who live with severe mental health problems, whose only language is not supported in the relevant adjudicative setting, whose mental or physical health or destitution prevents them from being able even to appear at the adjudicative setting, or who face other barriers to successful litigation without representation.

Right or wrong, the Greiner and Pattanayak article reminds me that it is crucial for legal aid agencies to:

  • Identify which, of the millions of low-income people in crisis, are least able to resolve their legal issues on their own (and yes, this is a question ripe for further study);
  • Ensure that these “most-in-need” people know how to access our services (or that social service agency staff or others in contact with them know how to reach us);
  • Ensure that our intake systems (intended to be “triage” systems) effectively identify the “most-in-need” clients; and
  • Ensure that our services include, or are integrated with, support systems for clients who, without support, cannot take advantage of the legal help we are offering because they are afraid, confused, overwhelmed, or otherwise hard to serve.


(2) We need continued research, training and supervision to maximize use of best (most effective) practices.

The fact that Greiner and Pattanayak studied offers of services by law students provides a sharp reminder that there can be a wide range of effectiveness among different providers of legal help.  Anyone who has watched a series of cases in court has seen that some lawyers have more impact on the judge than others.  Similarly, there is variance in how well lawyers organize their work, gather facts, and research and present their cases.

In the world of elementary school teaching, the documenting and debating of best practices is well underway.  Teach Like A Champion, by Doug Lemov, is an attempt to turn research into a set of best practices for teachers.  The criticisms of the research will be familiar, including questions about whether the research asked the right questions or included the right samples.  But the fundamental effort is right — in any area of legal work, our effectiveness will be driven in part by whether we use the right strategies and techniques.  The legal aid community works hard to deploy experience-based training toward best practices.  But there has been only limited formal study comparing available techniques and strategies for serving clients.  Perhaps further randomized or other outcome research can help us better identify the strategies and techniques that will maximize impact for our clients.

(3) Improving an adjudicative system can increase the number of people for whom we have little impact — and that’s a good outcome!

I have heard from colleagues in Massachusetts that some years back, the unemployment compensation system was complicated and near-impossible for non-lawyers to navigate.  Reform efforts by lawyers at Greater Boston Legal Services, Massachusetts Law Reform and others took lessons learned from individual representation in the unemployment system and turned that into systems reform advocacy.  Over the years, the system has become more and more accessible to people representing themselves, without a lawyer.

Efforts like this, in various areas of client legal need, have been repeated by legal aid programs across the country.  We fervently hope that some people can achieve justice without a lawyer, because we know that the very limited number of legal aid lawyers in the country is inadequate to serve more than a fraction of those in need.  Systems advocacy is an essential task, because its success will expand the number of people who truly can achieve equal justice without the offer of a lawyer.