Stanford Law Review Online: Regulating Through Habeas

Stanford Law Review

The Stanford Law Review Online has just published a Note by Doug Lieb entitled Regulating Through Habeas: A Bad Incentive for Bad Lawyers? The author discusses the potential pitfalls of a pending DOJ rule that provides for fast-track review of a state’s death row prisoners’ federal habeas petitions if the state implements certain sanctions for lawyers found to be legally ineffective:

The most important—and most heavily criticized—provisions of the Antiterrorism and Effective Death Penalty Act restricted federal courts’ ability to hear habeas petitions and grant relief to prisoners. But the 1996 law also included another procedural reform, now tucked away in a less-traveled corner of the federal habeas statute. It enables a state to receive fast-track review of its death row prisoners’ federal habeas petitions if the U.S. Attorney General certifies that the state provides capital prisoners with competent counsel in state postconviction proceedings.

Now, a pending Department of Justice (DOJ) rule sets forth extensive criteria for states’ certification for fast-track review. Piggybacking on a federal statute that does the same, the proposed DOJ rule encourages states to adopt a seemingly commonsense measure to weed out bad lawyers: if an attorney has been found legally ineffective, remove him or her from the list of qualified counsel eligible for appointment. Unfortunately, such removal provisions may do more harm than good by jeopardizing the interests of ineffective lawyers’ former clients. This Note explains why removal provisions can be counterproductive, argues that rewarding the implementation of these provisions with fast-track habeas review is especially unwise, and offers a few recommendations.

He concludes:

The lesson, at a minimum, is that policymakers should be wary of one-off regulatory interventions into indigent defense, considering the hydraulic pressure that a new requirement might exert elsewhere in the system. Leaders within the public defense bar might also wish to think carefully about their expressions of support for ineffective-attorney-removal provisions. And, while some scholars have considered the ethical obligations of predecessor counsel when faced with an ineffectiveness claim, rigorous empirical study of lawyers’ actual responses to allegations of ineffectiveness may be needed to develop sound policy. Do most attorneys actually understand themselves to owe continuing duties to former clients, or do most do what they can to protect their professional reputations against charges of deficient performance? (And are those with the latter attitude more likely to be ineffective in the first place?) The practical effect of regulatory interventions, including removal provisions, turns on the answer to these questions.

None of this is to suggest that it’s in any way acceptable for an ineffective lawyer, let alone an incorrigibly awful one, to represent a capital—or non-capital—defendant or prisoner. The point is the opposite. Even a well-intentioned patchwork of regulation through habeas is no substitute for an adequately funded system that trains, compensates, and screens counsel appropriately. If kicking ineffective lawyers off the list may do more harm than good, the goal should be to keep them off the list to begin with.

Read the full article, Regulating Through Habeas: A Bad Incentive for Bad Lawyers? by Doug Lieb, at the Stanford Law Review Online.

What Difference Representation: Randomization, Power, and Replication

I’d like to thank Dave and Jaya for inviting me to participate in this symposium, and I’d also like to thank Jim and Cassandra (hereafter “the authors”) for their terrific paper.

This paper exhibits all the features of good empirical work. It’s motivated by an important substantive question that has policy implications. The authors use a precise research design to answer the question: to what extent does an offer of representation affect outcomes? The statistical analysis is careful and concise, and the conclusions drawn from the study are appropriately caveated. Indeed, this law review article might just be the one with the most caveats ever published! I’m interested to hear from the critics, and to join the dialogue about the explanation of the findings and the implications for legal services work. In these initial comments, I’ll make three observations about the study.

First, randomization is key to successful program evaluation. Randomization guards against all sorts of confounders, including those that are impossible to anticipate ex ante or to control for ex post. This is the real strength of this study. A corollary is that observational program evaluation studies can rarely be trusted; even with very fancy statistics, estimating causal effects from observational data is really difficult. It’s also important to note that different research questions will require different randomizations.

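To illustrate that last point, here is a minimal sketch of what randomizing an offer of representation might look like. The case count and the 50/50 split are my own assumptions for the example, not the authors’ actual procedure:

# A generic illustration of randomized assignment, not the authors' procedure.
# The pool of 200 eligible cases and the 50/50 split are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=42)
case_ids = np.arange(200)                 # hypothetical pool of eligible cases
shuffled = rng.permutation(case_ids)
offered_representation = shuffled[:100]   # randomly receive an offer of representation
no_offer = shuffled[100:]                 # control group: no offer, but free to seek counsel elsewhere

# Note that this randomizes the *offer* of representation. Studying the effect
# of actual representation (use, rather than offer) would require a different design.
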
Second, the core empirical result with regard to litigation success is that there is not a statistically significant difference between those offered representation by HLAB and those who were not. The authors write: “[a]t a minimum, any effect due to the HLAB offer is likely to be small” (p. 29). I’d like to know how small. Here’s why. It’s always hard to know what to make of null findings. Anytime an effect is “statistically insignificant,” one of two things is true: either there really isn’t a difference between the treatment and control groups, or the difference is so small that it cannot be detected with the statistical model employed. Given the sample size and win rates around 70%, how small a difference would the Fisher test be able to detect? We might not all agree on what makes a difference “big” or “small,” but some additional power analysis would tell us a lot about what these tools could possibly detect.

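To make this concrete, here is a rough simulation-based power calculation of the sort I have in mind. The group sizes (100 per arm) and the 70% baseline win rate are illustrative assumptions on my part, not figures taken from the paper:

# A minimal power-analysis sketch by simulation; not the authors' analysis.
# Group sizes and the 70% baseline win rate are assumptions for illustration.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(seed=0)

def simulated_power(n_treat, n_control, p_control, effect, sims=2000, alpha=0.05):
    """Estimate how often Fisher's exact test detects a given true difference in win rates."""
    p_treat = p_control + effect
    rejections = 0
    for _ in range(sims):
        wins_treat = rng.binomial(n_treat, p_treat)
        wins_control = rng.binomial(n_control, p_control)
        table = [[wins_treat, n_treat - wins_treat],
                 [wins_control, n_control - wins_control]]
        _, p_value = fisher_exact(table)
        if p_value < alpha:
            rejections += 1
    return rejections / sims

# How large would the true difference in win rates need to be before the
# test detects it most of the time (say, with 80% power)?
for effect in (0.05, 0.10, 0.15, 0.20):
    print(f"true difference {effect:.2f}: power {simulated_power(100, 100, 0.70, effect):.2f}")

Output along these lines would tell us the smallest offer effect the Fisher test could realistically have detected given the study’s actual sample.
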
Finally, if we truly care about legal services and the efficacy of legal representation, this study needs to be replicated in other courts, in other areas of law, and with different legal aid organizations. Only rigorous program evaluation of this type can allow us to answer the core research question. Of course, the core research question isn’t the only thing of interest. The authors spend a lot of time discussing different explanations for the findings. Determining which of these explanations is correct will go a long way toward guiding the practical takeaway from the study. Sorting out those explanations will require additional, and different, studies. I spend a lot of time writing on judicial decisionmaking. My money is on the idea that ALJs behave differently with pro se parties in front of them. But this study doesn’t allow us to determine which explanation for the core findings is correct. That doesn’t detract from the importance or quality of the work; it’s a known (and disclosed) limitation that leads us to the next set of studies to undertake.

The Merits of Merit-Based Pay

Yesterday, Bingham McCutchen announced its move to a merit-lockstep compensation scheme. Under the scheme, associates’ base salaries will be determined on a lockstep basis that considers years of experience and hours billed. So if you are a second-year associate who bills 1,900 hours or more, you make $170,000. If you are a second-year associate who bills less than 1,900 hours but 1,500 hours or more, you make $165,000. If you bill less than 1,500 hours, your salary is frozen (see here). Bingham McCutchen’s bonuses, however, will be based on a more individualized merit evaluation. In contrast, firms like Drinker Biddle, Howrey, and Orrick are moving to completely merit-based compensation structures that generally place associates in different tiers tied to individual evaluations. (For a discussion of the difficulties of transitioning to these new schemes, see here.)

In theory, merit-based compensation structures sound great. Consider Howrey’s description of its new procedures: “‘We will expect certain levels of performance and certain levels of experience, and it will be the responsibility of the law firm and the partners that oversee them to make those experiences available to them.’ . . . Associates will be assigned to partners who will be responsible for their development and their individual evaluations.” More mentoring and individualized supervision of associates would enhance not only law firm productivity but also client service, the quality of law firms’ products and the profession generally.

But will merit-based compensation really encourage more meaningful partner/associate dialogue and professional development efforts, or will it just re-emphasize the importance of billable hours? Most firms using merit-based compensation structures treat an associate’s billable-hour total as a significant factor in her evaluation. More billable hours do not, however, translate into quality work or meritorious performance (see here and here). In fact, efficiency itself may be among the best indicia of a truly talented associate. Will merit-based compensation structures account for and reward efficiency, or will they encourage greater inefficiency? The answer, I think, depends largely on firm culture and on the individual partners performing the evaluations, but I have to say I have my doubts.