Category: Empirical Analysis of Law


Can NVivo Qualitative Empirical Software Help Manage Oceans of Research?

One of the real challenges for a legal scholar (and probably for researchers in many other social science disciplines as well) is figuring out what to do with all those interesting articles you read. Do you make notebooks organized by topic? If so, what happens when a piece has something important to say on multiple topics? Do you create index cards, or their digital equivalents, with relevant quotes? Or, like me, do you find yourself rediscovering the wheel several times – putting an article aside in a stack on day one, and rediscovering it on Lexis or Westlaw four months later when you’re searching for a different issue?

Keeping control of the existing literature – a critical task for anyone who publishes in law reviews (which demand a footnote to support even the most mundane statements) – turns out to be a burdensome and sometimes unsuccessful pursuit. As a result, I’m very intrigued by the idea of using NVivo to help.

What is NVivo? It’s a leading piece of qualitative empirical research software. Yes, Virginia, I did say qualitative. As many folks know, one of my biggest beefs with Empirical Legal Studies is that some of its followers have marginalized qualitative research – so much so that many people with only a passing awareness of ELS believe that all empirical work is quantitative. That discussion is for another day, however. The point is that qual researchers use software to help them keep track of their data…which is to say, their texts. My understanding of NVivo – formerly known as NUD*IST – is that you can take texts (like law review articles) and drop them into the software. You can then create coding fields and mark selected text as belonging to those fields. (A discussion of the capacities of qual software is here.) For example, if one were studying the way that courts discuss victims in rape cases, and had created a sample for investigation, one might load the selected cases into NVivo. As the researcher creates particular fields – for example, “victim dressed provocatively”, “victim drinking”, “victim previously worked as prostitute” (as well as “circuit court”, “appellate court”, “female judge”) – she can mark the text in each case that fits into each field. This allows her, at a later point, to run targeted searches for particular marked themes – and also to subdivide by the traits of the cases. Thus, she can pull up all the decisions by female judges that invoke a particular victim theme, and break them out by year.
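NVivo itself is point-and-click, but the kind of query described above – filter cases by code and by case attribute, then break the matches out by year – is easy to illustrate in a few lines of Python. This is purely a conceptual sketch with made-up case data and field names, not NVivo’s actual data model:

```python
# Hypothetical coded sample: each case carries attributes plus a set of codes.
cases = [
    {"name": "State v. A", "year": 2004, "judge_gender": "F",
     "codes": {"victim drinking", "circuit court"}},
    {"name": "State v. B", "year": 2004, "judge_gender": "M",
     "codes": {"victim drinking"}},
    {"name": "State v. C", "year": 2005, "judge_gender": "F",
     "codes": {"victim dressed provocatively"}},
]

def by_year(cases, code, judge_gender):
    """Count cases carrying a given code, decided by judges of a given gender,
    broken out by year."""
    counts = {}
    for c in cases:
        if code in c["codes"] and c["judge_gender"] == judge_gender:
            counts[c["year"]] = counts.get(c["year"], 0) + 1
    return counts

print(by_year(cases, "victim drinking", "F"))  # {2004: 1}
```

The payoff is exactly the one described in the post: once the tedious coding is done, cross-cutting questions (“female judges, victim-drinking theme, by year”) become one-line queries.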

I wonder whether many legal scholars who don’t do qualitative work could benefit from this software simply by using it as a way of containing, coding and organizing all the articles they read in the course of their literature review. I haven’t heard of anyone doing this, but it seems like it might make a lot of sense – particularly for somewhat disorganized researchers. It might not take advantage of all the power of NVivo, but it could be the equivalent of the smartest filing system ever created.

Does anyone have experience with NVivo, or other similar software (like Atlas), that might shed light on this? By the way, many schools have site licenses for this software, so many of those interested in trying this out can do so without spending a dime.


A minimum wage field experiment

Thanks to Dan for inviting me to guest blog!

Tyler Cowen suggests that the expected increase in the minimum wage may serve as a useful “controlled experiment,” at least if the increase applies to Northern Mariana but not to American Samoa. A commenter points out that it’s not a well controlled experiment, because the two territories are not identical. This point rehearses a familiar challenge for empirical legal analysis: Legal scholars don’t have the luxury of randomized studies. Even natural experiments rarely provide conclusive evidence of policy effects.

But we could have randomized studies (or so I will argue in a paper that I am working on). John List is a leader among those who do “field tests” rather than using laboratory experiments or relying on other econometric techniques. (I refer of course to John List the economist, not John List the family murderer, although the boundary between these occupations has allegedly blurred recently.) We could do field tests in law, if only legislatures would cooperate.

An explanation, after the jump.

Read More

Song of Jersey City


Rick Garnett recently wrote on “cities’ hipness competition.” According to a recent article in New York Magazine, my urban home (Jersey City) has recently won some prize:

To live [in New York now] is to endure a gnawing suspicion that somebody, somewhere, is marveling and reveling a little more successfully than you are. That they’re paying less money for a bigger apartment with more-authentic details on a nicer block closer to cuter restaurants and still-uncrowded bars and hipper galleries that host better parties with cooler bands than yours does, in an area that’s simultaneously a portal to the future (tomorrow’s hot neighborhood today!) and a throwback to an untainted past (today’s hot neighborhood yesterday!). And you know what? Someone is. And you know what else? Right now, that person just might be living in Jersey City.

It’s not just Tyler Cowen who’s rescuing New Jersey from punchline status–even the uberhip NYM is recognizing us (even if we’re shunned by NYC Bloggers). Our hospitals may be closing, but at least we’ve got a hot arts scene.

Of course, the NYM piece focuses not on all of the JC, but only on the “downtown” close to the Hudson waterfront. I live a bit further down the PATH line, in Journal Square. I think a comparison between the two areas may help us answer Rick’s question: “what law can do — e.g., zoning laws, liquor licensing, etc. — to make cities / metro areas more (or less) attractive to the young (or the old, for that matter)”? Can big urbanism work?

Read More

From the New Property to the New Responsibility

Just as Charles Reich was a premier theorist of rights to government largesse, Peter Schuck and Richard Zeckhauser are leading exponents of the responsibilities it entails. In Targeting Social Programs, S&Z focus on the denial of benefits to “bad bets” and “bad apples:”

Bad bets are individuals who are likely to benefit little from social resources relative to other [beneficiaries]. . . . Bad apples are individuals whose irresponsible, immoral, or illegal behavior in the past—and predictably, in the future as well—marks them as unsuitable to receive the benefits of social programs.

This may sound a bit cold-hearted at first, but S&Z make a good case that, behind a veil of ignorance, we’d quite sensibly allocate resources to, say, the transplant recipient who is most likely to benefit, rather than the one who has been on the wait list the longest. They also show how often “bad apples’” worst effects are on the disadvantaged citizens near them. (For an example, see Kahan and Meares on anti-loitering ordinances.)

The West Virginia Medicaid program provides an interesting case study of “bad apple screening.” Consider the fate of one beneficiary who refuses to sign a “health responsibility contract:”

Mr. Johnson. . . goes to a clinic once a month for diabetes checkups. Taxpayers foot the bill through Medicaid . . . [b]ut when doctors urged him to mind his diet, “I told them I eat what I want to eat and the hell with them. . . . I’ve been smoking for 50 years — why should I stop now? . . . This is supposed to be a free world.”

Traditionally, there was little Medicaid could do to encourage compliance. But now, “[u]nder a reorganized schedule of aid, the state, hoping for savings over time, plans to reward ‘responsible’ patients with significant extra benefits or — as critics describe it — punish those who do not join weight-loss or antismoking programs, or who miss too many appointments, by denying important services.” But as the article notes, “Somewhat incongruously, [Johnson] appears to be off the hook: as a disabled person he will be exempt under the rules.”

Critics claim the program is unduly intrusive: “What if everyone at a major corporation were told they would lose benefits if they didn’t lose weight or drink less?” asked one doctor. Certainly in some manifestations it could be; consider this 1997 proposal by Judge John Marshall Meisburg:

Congress should . . . consider legislation stipulating that no one can be granted disability by SSA if s/he continues to smoke against the advice of his physician, and smoking is a factor material to the disability, because such claimants are bringing illness and disability upon themselves. Such a law would reduce the burden of proof now needed to deny benefits to persons who fail to heed their doctors’ advice, and would dovetail with legislation just passed by Congress to abolish disability benefits for persons addicted to drug and alcohol. In many cases, smoking is akin to “contributory negligence” and the SSA law should recognize it as such. [From Federal Lawyer, 44-APR FEDRLAW 56 on Westlaw.]

I think S&Z frame the debate in a nuanced enough way to avoid this kind of draconian proposal. But I do have a few quibbles with the framing of their work, if not its substance.

Read More


Educated Yet Broke

Can you be too poor to file for bankruptcy, yet have the ability to repay your student loans?

When Congress amended the Bankruptcy Code in 2005, it also amended the Judicial Code to provide for waiver of the mandatory bankruptcy filing fee. That’s right. Prior to this statutory amendment, if you were so financially strapped that you couldn’t pay the filing fee (then, $150 for Chapters 7 and 13; now, $220 for Chapter 7 and $150 for Chapter 13), you were out of luck: per the Supreme Court’s 1973 decision in United States v. Kras, 409 U.S. 434, in forma pauperis relief was unavailable in bankruptcy. Lest we prematurely praise Congress for changing this state of affairs, debtors today will get a waiver of the filing fee only under very narrow circumstances. A debtor must have (1) household income less than 150% of the poverty line and (2) an inability to pay the filing fee in installments (see 28 U.S.C. § 1930(f)(1)).

Now that we have a sense of what Congress deems to be a financially dire situation, at least for purposes of filing for bankruptcy, it strikes me that we might use this measure to gauge a debtor’s inability to repay other types of debts—say, for example, student loans. In an empirical study of the discharge of student loans in bankruptcy, Michelle Lacey (mathematics, Tulane) and I documented that the financial characteristics of the great majority of debtors in our sample evidenced an inability to repay their student loans. One measure we used was the amount of the debtor’s household income in relation to the poverty line established by the U.S. Department of Health and Human Services. We had sufficient information to calculate this figure for 262 discharge determinations. For this group of debtors, half of them had household income less than 200% of the poverty line. It didn’t occur to us to run the numbers using the 150% figure applicable to the fee waiver. In light of the new statutory provision, I’ve set out to look at our data from this perspective. The numbers are sobering, to say the least.
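For readers who want to replicate the arithmetic, the screen itself is simple: express household income as a percentage of the applicable poverty line and compare it to the statutory threshold. A minimal sketch, using a placeholder poverty-line figure rather than an actual HHS guideline:

```python
def pct_of_poverty_line(household_income, poverty_line):
    """Household income expressed as a percentage of the applicable poverty line."""
    return 100.0 * household_income / poverty_line

def passes_income_prong(household_income, poverty_line, threshold=150.0):
    """First prong of the fee-waiver test under 28 U.S.C. § 1930(f)(1):
    household income below threshold% of the poverty line."""
    return pct_of_poverty_line(household_income, poverty_line) < threshold

# Hypothetical debtor: $14,000 household income against a $10,000 poverty line.
print(pct_of_poverty_line(14000, 10000))   # 140.0
print(passes_income_prong(14000, 10000))   # True  (140% < 150%)
print(passes_income_prong(21000, 10000))   # False (210% of the line)
```

The same one-liner – income over poverty line – is what generates the 200% and 150% cutoffs discussed in the study; only the threshold changes.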

Read More


Lempert on ELS

Richard Lempert, guest-blogging at the ELS Blog, has a great series of posts on empirical scholarship in law. In the first, he observed that:

Too often researchers encourage misuses of their results in conclusions that push the practical implications of their research, even when the more detailed analysis emphasizes proper cautions. While this occurs with empirical students of the law in liberal arts schools by political scientists, sociologists, economists and psychologists among others, the problem tends to be more severe in the empirical work of law professors, perhaps because most see their business not as building social or behavioral theory but as criticizing laws and legal institutions and recommending reform.

In the second, he said:

There is also the question of qualitative data. I am distant enough from the ELS movement that I do not know how its core advocates regard qualitative research, but taking down 5 volumes of the Journal of Empirical Legal Studies that happen to be close at hand I could not help but note that every article in every volume had a quantitative dimension. Each had at least a graph, table, equation or regression and most analyzed and presented results using more than one of these analytic modalities. Yet qualitative research is as empirically-based as quantitative research and it can be as unbiased and as rigorous. Moreover, it is often more revealing of relationships legal scholars seek to understand, not to mention more accessible and interesting. Lawyers have done many quantitative studies I find useful and admire, but I would not elevate any of them above, for example, Bob Ellickson’s study of Shasta county when it comes to developing and sharing an understanding of the real world or, in this case, illuminating the limitations of the Coase Theorem.

And most recently, he argued for a deeper appreciation of the role of ground-tested theory:

What is plausible depends, of course, on what we know about the matter we are studying. More than occasionally empirical scholars seem to have little appreciation of context beyond the general knowledge everyone has and the specific data they have collected. Without a deep appreciation of context, even the best scholars may be misled. For example, some years ago Al Blumstein and Daniel Nagin, who were and are among the very best of our nation’s quantitative criminologists, did a study of the deterrent effects of likely sentences for draft evasion on draft evasion rates. For its time the study was in many ways exemplary – variables were carefully measured and analyzed, and it was refreshing to see an investigation into deterrence outside the street crimes and capital punishment contexts. The results of the Blumstein-Nagin research strongly confirmed deterrence theory. Resisting the draft by refusing induction was substantially higher in those jurisdictions that sentenced resisters most leniently. Yet I regarded the study as worthless.

To find out why, and to read more of this powerful (but friendly) critique of the newly dominant methodology in legal scholarship, check out the ELS blog!


Does familiarity breed contempt?

I have been reading some interesting articles on the factors that contribute to a court’s or judge’s reversal rate. Because I live in, and litigate cases in, Washington, D.C., where the federal district and circuit court judges occupy the same building, I began to wonder whether there is any correlation between sharing a courthouse and the frequency with which the appellate court reverses the district court. Similarly, I would be interested to know whether workplace proximity affects the frequency with which the appellate court orders a district court judge to recuse him or herself from sitting on a case. The articles I have found do not address this question.

The federal courthouse in D.C. provides district and circuit court judges with lots of opportunity to interact in the elevators, cafeteria, parking lot, gym, and at various courthouse functions (for example, at the annual chili cook-off organized by Judge Sentelle, or at the holiday caroling hosted by Judge Henderson). Would these sorts of frequent, casual social interactions change the way the appellate judges review their district court colleagues? I could see it cutting either way. On the one hand, the appellate judges might give a little more deference to that district court judge who seems friendly, sensible, smart, and always remembers to ask after the kids when they run into each other in the hallways. On the other hand, the water-cooler familiarity might lead appellate judges to view some of their lower court counterparts as less reliable and trustworthy than others. Although I doubt workplace proximity is a major factor in reversal rates, I would guess that it plays in a little at the margins.

Read More


Solum on the Need for Opinions

Larry Solum recently posted a kind response to my post on the need for judicial reasoning. Here is a taste of his analysis:

An obligation to offer justification has obvious accuracy-enhancing effects: it forces the decision maker to engage in an internal process of deliberation about explicit reasons for an action and to consider whether the reasons to be offered are “reasonable” and whether they are likely to be sustained in the event of appeal. Balancing approaches, which consider the costs of procedural rules as well as their accuracy benefits, point us in the direction of the costs associated with requiring justifications on too many occasions and of the costs of requiring justificatory effort that is disproportionate to the benefits to be obtained. Requiring reasons facilitates a right of meaningful participation as well: when a judge gives reasons, then the parties affected by the action can respond–offering counter reasons, objecting to their legal basis, and so forth. Moreover, the offering of reasons provides “legitimacy” for the decision.

Very helpful. Clearly, the procedural justice literature has much to say on whether it is illegitimate for judges to rule without explanation. It seems to me that much of Larry’s discussion forecloses the legitimacy of what our commentators have suggested as the backstop for unexplained rulings: back-pocket explanations, i.e., reasons produced only when litigants demand them.

But I still think that much of our thinking on the problem of “why and when reasons” is driven by biases built into our legal DNA by the law school experience. I’ll ramble a bit more on this problem below the jump.

Read More


Must District Judges Give Reasons?

Jonathan Adler highlights this astonishing Ninth Circuit opinion on the alleged misconduct of now-embattled District Judge Manuel Real. The case has some interesting facets (previously blogged about here, here, and elsewhere). First, dissents matter. It is more than tempting to attribute the current push to impeach Judge Real to Judge Kozinski’s harsh dissent from the panel’s order exonerating him on the misconduct charge. Second, the case raises a neat issue that relates to what I’ve been writing about this summer. While the overall facts of the case are well worth reading in the original, if you’ve ten or twenty minutes, I want to focus briefly on part of Judge Kozinski’s charge against Real: that he failed to explain the reasoning behind a controversial order.

The basic story is that Judge Real withdrew the petition in a pending bankruptcy case and stayed a state-court judgment evicting a woman who was appearing before his court in a criminal matter. Both orders were entered apparently sua sponte, or at least without hearing the evicting party’s arguments. According to Kozinski, Judge Real “gave no reasons, cited no authority, made no reference to a motion or other petition, imposed no bond, balanced no equities. The two orders [the withdrawal and stay] were a raw exercise of judicial power…” In a subsequent hearing, Kozinski continued, “we find the following unilluminating exchange”:

The Court: Defendants’ motion to dismiss is denied, and the motion for lifting of the stay is denied . . .

Attorney for Evicting Party: May I ask the reasons, your Honor?

The Court: Just because I said it, Counsel.

Kozinski wrote:

I could stop right here and have no trouble concluding that the judge committed misconduct. [Not only was there a failure of the adversary process . . . but also] a statement of reasons for the decision, reliance on legal authority. These niceties of orderly procedure are not designed merely to ensure fairness to the litigants and a correct application of the law . . . they lend legitimacy to the judicial process by ensuring that judicial action is – and is seen to be – based on law, not the judge’s caprice . . . [And later, Kozinski exclaims] Throughout these lengthy proceedings, the judge has offered nothing at all to justify his actions – not a case, not a statute, not a bankruptcy treatise, not a law review article, not a student note, not even a blawg. [DH: Check out the order of authority!]

So here’s the issue: in the ordinary case, to what extent are judges required to explain themselves?

Read More


Perhaps this empirical dog does not hunt.

I have hit a . . . data analysis sticking point with some empirical work that I am doing, and I thought I’d toss the problem out there to see if any of you see something that I do not see. I am a bit embarrassed, however, to admit that I am having a problem analyzing my data, so please refrain from starting any of your comments with “Did you skip 12th grade calc., Nowicki?” or “when, if ever, have you taken a stats class?”

I have calculated the annual percentage change in pay for the CEOs of ten large, publicly traded corporations. I am then comparing those annual percentage changes to the annual percentage changes in profits for those ten corporations, to see if there is a relationship between percentage changes in pay and percentage changes in corporate profits (such as a 10% increase in annual profit being accompanied with a 10% increase in CEO pay).

My ratios of percentage change in pay as compared to percentage change in profit are not producing what I expected to get, however. I have taken my annual percentage changes in pay and divided them by my annual percentage change in profit (for each CEO, for each year).

I expected to be able to then say “A result of 1 or a number greater than 1 is a bad thing” (because it means that the percentage change in pay is GREATER than any percentage change in profit). But things get confusing when I have percentage decreases – I frequently end up with negative numbers that are sometimes indicative of a “good” relationship (a negative percentage change in CEO pay accompanied by a percentage increase in profit, for example) and sometimes indicative of a BAD relationship (a positive percentage pay change accompanied by a NEGATIVE percentage profit change).

Given that I have negative numbers that are sometimes indicating a “good” pay/profit relationship and sometimes indicating a “bad” pay/profit relationship, I am stymied. What am I not seeing? Why am I not able to say “a number greater than 1 is a BAD thing for shareholders in terms of the CEO pay/profit relationship and a number less than one is a good thing”?
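For what it’s worth, the trouble can be reproduced with toy numbers: dividing the two percentage changes erases which side carried the minus sign, so an unambiguously “good” year and an unambiguously “bad” year collapse onto the identical ratio. One possible workaround – my suggestion, not anything from the post – is to classify the sign pattern first, and compare magnitudes only within same-direction years:

```python
# Two opposite situations produce the same ratio of -1.0:
#   pay down 10% while profit is up 10%  -> good for shareholders
#   pay up 10% while profit is down 10%  -> bad for shareholders
print(-0.10 / 0.10)   # -1.0
print(0.10 / -0.10)   # -1.0  -- the division can't tell them apart

def classify(pay_change, profit_change):
    """Label a (pay change, profit change) pair from the shareholders' view.
    Sign patterns are handled first; the >1-is-bad ratio logic only makes
    sense once both changes point the same direction."""
    if pay_change <= 0 and profit_change >= 0:
        return "good"   # pay flat/down while profit flat/up
    if pay_change > 0 and profit_change < 0:
        return "bad"    # pay up while profit down
    # Same direction: pay should move no faster than profit.
    return "good" if abs(pay_change) <= abs(profit_change) else "bad"

print(classify(-0.10, 0.10))  # good
print(classify(0.10, -0.10))  # bad
print(classify(0.10, 0.05))   # bad  (pay grew twice as fast as profit)
```

In other words, the “greater than 1 is bad” rule is only well defined on the same-sign quadrants; the mixed-sign quadrants have to be labeled directly from the signs before any ratio is taken.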