
Category: Bioethics


Bring on Jurassic Park!: Resurrection of Extinct Animals

Scientists have come up against a “technical, not biological” problem in trying to resurrect an extinct frog. Popular Science explains that the:

gastric-brooding frog, native to tiny portions of Queensland, Australia, gave birth through its mouth, the only frog to do so (in fact, very few other animals in the entire animal kingdom do this–it’s mostly this frog and a few fish). It succumbed to extinction due to mostly non-human-related causes–parasites, loss of habitat, invasive weeds, a particular kind of fungus.

Specimens had been kept frozen in simple deep freezers; researchers inserted genetic material from them into the eggs of another frog, and the embryos grew. The next step is to get them to full adulthood so they can pop out like before. Yes, these folks are talking to those interested in bringing back other species.

As for this particular animal, the process reminds me a bit too much of Alien, which still scares the heck out of me.

the gastric-brooding frog lays eggs, which are coated in a substance called prostaglandin. This substance causes the frog to stop producing gastric acid in its stomach, thus making the frog’s stomach a very nice place for eggs to be. So the frog swallows the eggs, incubates them in her gut, and when they hatch, the baby frogs crawl out her mouth.

Science. Yummy. Oh here is your law fodder. What are the ethical implications? Send in the clones! (A better title for Attack of the Clones, perhaps).


Jonathan Simon on Leslie Henry’s The Jurisprudence of Dignity

Over at Jotwell, Jonathan Simon has a spot-on review of my colleague Leslie Meltzer Henry’s brilliant article, The Jurisprudence of Dignity, 160 U. Pa. L. Rev. 169 (2011).  Henry’s work on dignity is as illuminating as it is ambitious.  I urge you to read the piece.  Here is Simon’s review:

Today American law, especially Eighth Amendment law, seems to be in the middle of a dignity tsunami. The United States is not alone in this regard, or even in the lead.  Indeed, dignity has been an increasingly prominent value in modern legal systems internationally since the middle of the 20th century, marked in the prominence given that term in such foundational documents of the contemporary age as the Universal Declaration of Human Rights, in the reconstructed legal systems of post-war Europe (particularly Germany), and in regional human rights treaties like the European Convention on Human Rights and the more recent European Union Charter of Rights.  A stronger version of dignity seems increasingly central to reforming America’s distended and degrading penal state.  Legal historians have suggested that American history — particularly, the absence of a prolonged political struggle with the aristocracy and the extended experience with slavery — rendered dignity a less powerful norm, which may explain the relatively weak influence of dignity before now. Yet its increasing salience in the Roberts Court suggests that American dignity jurisprudence may be about to spring forward.

Professor Leslie Henry’s 2011 article, The Jurisprudence of Dignity, is a must-read for anyone interested in taming our penal state.  Henry provides a comprehensive analysis of the US Supreme Court’s treatment of the term from the founding to the present.  Henry borrows from the language philosopher Ludwig Wittgenstein the concept of a “family resemblance” and suggests that dignity as a legal term is anchored in five core meanings that continue to have relevance in contemporary law and which share overlapping features (but not a single set of factors describing all of them). The five clusters are: “institutional status as dignity,” “equality as dignity,” “liberty as dignity,” “personal integrity as dignity,” and “collective virtue as dignity.” These clusters suggest there can be considerable reach, but also precision and limits, to using dignity to shape constitutional doctrine.

For much of the period between the Revolution and the middle of the 20th century, the meaning of dignity was confined largely to the first category, “institutional status as dignity.”  Dignity by status dates from the earliest Greek and Roman conceptions, when dignity was associated with those of high status and conceptualized as anchored in that status.  The United States by the time of the Constitution renounced the power to ennoble an aristocracy but shifted that hierarchical sense of dignity to the state itself and its officials. For much of the next century and a half, dignity was discussed mostly as a property of government, especially states and courts.  This began to change in the 20th century, and the change accelerated significantly after World War II. Read More


Racey, Racey Neuro-Hype! Can a Pill Make You Less Racist?

Media outlets around the world reported yesterday that a pill can make people less racist.

“Heart disease drug ‘combats racism’” heralds the UK’s Telegraph.  “A Pill that Could Prevent Racism?” asks The Daily News.

Is this for real?

The answer is less racy – and less raced – but actually more interesting than the headlines suggest.

Researchers at the Oxford University Centre for Practical Ethics, led by Sylvia Terbeck, administered a common blood-pressure-lowering drug, called propranolol, to half of a group of white subjects and a placebo to the other half.  (Read the study’s press release here and the research paper here.)  The subjects then took a test that measures “implicit associations” – the rapid, automatic good/bad, scary/safe judgments we all make in a fraction of a second when we look at words and pictures.  The subjects who took the drug showed less of an automatic fear response to images of black people’s faces and were less likely to associate pictures of black people with negative words than the subjects who did not take the drug.  Based on the study’s design, it is likely that results would be the same in trials involving racism by and against other racial and ethnic groups.
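For readers unfamiliar with how an implicit-association measure turns split-second reactions into a number, here is a minimal, purely illustrative sketch in Python. It is not the scoring procedure or the data from the Terbeck study; the reaction times and the simplified “D-like” score below are invented assumptions. The basic idea is that slower responses when black faces are paired with “good” words, relative to when they are paired with “bad” words, register as an automatic negative association.

    # Simplified sketch of an IAT-style bias score (illustrative only; not the
    # study's actual scoring algorithm, and the reaction times are invented).
    from statistics import mean, stdev

    # Reaction times in milliseconds for two pairing blocks
    black_good_rt = [720, 815, 760, 790, 705, 830, 745, 780]  # black faces share a key with "good" words
    black_bad_rt = [612, 655, 701, 580, 640, 690, 625, 660]   # black faces share a key with "bad" words

    # Slower responses in the "black + good" block suggest an automatic negative
    # association; a score near zero suggests no measurable automatic bias.
    pooled_sd = stdev(black_good_rt + black_bad_rt)
    d_like_score = (mean(black_good_rt) - mean(black_bad_rt)) / pooled_sd

    print(f"D-like bias score: {d_like_score:.2f}")

On a measure of this kind, lower scores are what the drug group showed relative to the placebo group, and that is what the headlines translated into a pill that “combats racism.”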

This looks like the pill treated racism in the research subjects.  But this isn’t so.

Researchers have long known that propranolol has a range of effects that include lethargy, sedation, and reductions in several kinds of brain activity.  In high-flown medical parlance, this drug makes people really chilled out.  I know: I’ve been on propranolol myself (unsuccessfully) for migraine prevention.  When I was on the drug, my biggest fear was falling asleep at work – and even that didn’t stress me as much as it should have.

Because propranolol muffles fear generally, it reduces automatic negative responses to just about anything.  Propranolol has been used to treat everything from “uncontrolled rage” to performance anxiety and is being explored for treating PTSD.  Very recent research shows that it generally reduces activity in the brain region called the amygdala (more on that, below).

But the study remains interesting and important for a few reasons.  This is the first study to show that inhibiting activity in the amygdala, which is crucially involved in fear learning, directly reduces one measure of race bias.  This validates extensive research that has correlated race bias with heightened activity in that brain region.  (Although some contrary research also challenges the association.)  So this study helps support the idea of a causal relationship between automatic or pre-conscious race bias and conditioned fear learning.

The cure for racism born of conditioned fear learning is not to chemically dampen the brain’s response to fear generally – because fear is often useful – but to attack the causes of the conditioned associations that lead to bias in the first place.

The rest of this post will show how the fear response, claims about race, and the way the drug works all come together to point to the social nature of even “neurological” race bias – and to its economic and legal repercussions.

The fear response

When we see something that frightens or startles us, several regions of the brain become active – particularly the amygdala.  The amygdala has many functions, so a neuroimage showing activity in the amygdala does not necessarily mean that a person is experiencing fear.  But if a person has a frightening experience (loud noise!) or sees something she’s afraid of (snakes!), activity in the amygdala spikes.  This activity is pre-conscious and totally outside our control:  We startle first and then maybe stop to think about it.

The automaticity of fear serves us well in the face of real threats – but poorly in much of daily life.  Fear learning is overly easy: A single negative experience can create a lasting, automatic fear association.  Repeated, weak negative experiences can also form a strong fear association.  And, we can “catch” fear socially: If my friend tells me that she had a negative experience, I may form an automatic fear association as if I had been frightened or harmed myself.  Finally, fear lasts.  I can consciously tell myself not to be afraid of a particular thing but my automatic fear response is likely to persist.

Race bias and the fear response

In neuroimaging studies using functional magnetic resonance imaging (fMRI) on white and black Americans, research subjects on average have a greater amygdalar response to images of black faces than to images of white faces.  Researchers have interpreted this as a pre-conscious fear response.  Indeed, the more that activity in a person’s amygdala increases in response to the images of black faces, the more strongly he or she makes negative associations with images of black faces and with typically African-American names (see paper here).

These automatic fear responses matter because they literally shape our perceptions of reality.  For example, a subject might be asked to rate the facial expressions on a set of white and black faces.  The facial expressions range from happy to neutral to angry.  A subject who has a strong amygdalar response to images of black faces is much more likely to misinterpret neutral or even moderately happy expressions on a black facial image as being hostile or angry.  This shows how fear changes our perceptions, which in turn changes how we react to and treat other people.  It also shows how fear alters perception to create a self-reinforcing loop.

This kind of pre-conscious or automatic racism matters economically and legally:  A majority of white people who have taken these implicit association tests demonstrate some automatic bias against black faces both associationally and neurologically.  White people numerically and proportionally hold more positions as decision-makers about employment – like hiring and promotion – and about legal process and consequences – like whether to charge a suspect with a crime, the severity of the crime with which to charge him or her, and whether to offer a generous or harsh plea bargain.  A study of two hundred judges serving in jurisdictions across the United States has shown that judges, too, more readily make these automatic, negative associations about black people than they do about white people.  The implication is that automatic racial bias could play a role in pervasively tilting the scales against black people in every phase of economic life and in every phase of the legal process.

Yet current anti-discrimination law prohibits only explicit racial bias.  An employer may not advertise a position as “whites only,” nor may it fire or refuse to promote a worker because the employer does not want to retain or advance a black person.  Systematic racial bias that creates unlawful “disparate impact” also rests on explicit racism: plaintiffs who claim that they are proportionally under-represented in, say, hiring and promotion by a particular employer must show that the disparate impact results from an intentional discriminatory purpose.

Automatic race bias, by contrast, takes a different form – a form not barred by law.  Automatic discrimination expresses itself when the white supervisor (or police officer, or prosecutor, or judge, or parole board member) just somehow feels that his or her black counterpart has the proverbial “bad attitude,” or doesn’t “fit” with the culture of the organization, or poses a greater risk to the public than an equivalent white offender and so should not be offered bail or a plea deal or be paroled after serving some part of his sentence.

Tying it all together

If current anti-discrimination law does not touch automatic bias, and automatic bias is pervasive, then does this point to a role for drugs?

On propranolol, an implicitly biased interviewer or boss might perceive a black candidate more fairly, unfiltered by automatic negative responses.  (She might, of course, still harbor conscious but unstated forms of bias; propranolol certainly would not touch race-biased beliefs about professionalism, competence, and the like.)  But it also would generally dampen the decision-maker’s automatic fear responses.  An overall reduction in automatic negative responses would not necessarily be a good thing:  while it might free decision-makers from some false negative judgments based on race, it also would likely keep them from picking up on real negative signals from other sources.

And the take-away …

That a fear-dampening drug reduces racial bias in subjects helps confirm that much racial bias is based in automatic negative responses, which result from conditioned fear learning.  Although this finding is hardly surprising, it is interesting and important.  Any person reading this study should ask him- or herself: How does automatic fear affect my decisions about other people?  How does it affect the judgments of important economic and legal decision-makers?  How can we make it less likely that the average white person sees the average black person through distorting fear goggles in the first place?

The problem with this study and the headlines hyping it is that they perpetuate the idea that racism is the individual racist’s problem (It’s in his brain! And we can fix it!).  A close reading of the study points to the importance of socially conditioned fear-learning about race – which then becomes neurologically represented in each of us.  Despite the headlines, racism is not a neurological problem but a cultural one, which means that the solutions are a lot more complex than popping a pill.


Personhood to artificial agents: Some ramifications

Thank you, Samir Chopra and Lawrence White, for writing this extremely thought-provoking book! Like Sonia Katyal, I too am particularly fascinated by the last chapter – personhood for artificial agents. The authors have done a wonderful job of explaining the legal constructs that have defined, and continue to define, the notion of according legal personality to artificial agents.

The authors argue that “dependent” legal personality, which has already been accorded to entities such as corporations, temples, and ships in some cases, could be easily extended to cover artificial agents. On the other hand, the argument for according “independent” legal personality to artificial agents is much more tenuous. Many legal arguments and theories stand as strong impediments to according such status. The authors categorize these impediments as competencies (being sui juris, having a sensitivity to legal obligations, susceptibility to punishment, capability for contract formation, and property ownership and economic capacity) and philosophical objections (e.g., that artificial agents do not possess free will, autonomy, or a moral sense, and do not have clearly defined identities), and then argue how these might be overcome legally.

Notwithstanding their conclusion that the courts may be unable or unwilling to take more than a piecemeal approach to extending constitutional protections to artificial agents, it seems clear to me that the accordance of legal personality – both dependent and, to a lesser extent, independent – is not too far into the future. In fact, the aftermath of Gillick v West Norfolk and Wisbech Area Health Authority has shown that various courts have gradually come to accept that dependent minors “gradually develop their mental faculties,” and thus can be entitled to make certain “decisions in the medical sphere.”

We can extend this argument to artificial agents, which are no longer just programmed expert systems but have gradually evolved into self-correcting, learning, and reasoning systems, much like children and some animals. We already know that even small children exhibit these capacities, as do chimpanzees and other primates. Stephen Wise has argued that some animals meet the “legal personhood” criteria and should therefore be accorded rights and protections. The Nonhuman Rights Project founded by Wise is actively fighting for legal rights for non-human species. As these legal moves evolve and shape the common law, the question arises as to when (not if) artificial agents will develop notions of “self,” “morals,” and “fairness,” and on that basis be accorded legal personhood.

And when that situation arrives, what are the ramifications that we should further consider? I believe the three main “rights” that would have to be considered are: Reproduction, Representation, and Termination. We already know that artificial agents (and Artificial Life) can replicate themselves and “teach” the newly created agents. Self-perpetuation can also be considered a form of representation. We also know that, under certain well-defined conditions, these entities can self-destruct or cease to operate. But will these aspects gain the status of rights accorded to artificial agents?

These questions lead me to the issues which I personally find fascinating: end-of-life decisions extended to artificial agents. For instance, what would be the role of aging agents of inferior capabilities that nevertheless exist in a vast global network?  What about malevolent agents? When, for instance, would it be appropriate to terminate an artificial agent?  What would be the laws that would handle situations like this, and how would such laws be framed? While these questions seem far-fetched, we are already at a point where numerous viruses and “bots” pervade the global information networks, learn, perpetuate, “reason,” make decisions, and continue to extend their lives and their capacity to affect our existence as we know it. So who would be the final arbiter of end-of-life decisions in such cases? In fact, once artificial agents evolve and gain personhood rights, would it not be conceivable that we would have non-human judges in the courts?

Are these scenarios too far away for us to worry about, or close enough? I wonder…

-Ramesh Subramanian


Neuroscience at Trial: Society for Neuroethics Convenes Panel of Front-Line Practitioners

Is psychopathy a birth defect that should exclude a convicted serial killer and rapist from the death penalty?  Are the results of fMRI lie-detection tests reliable enough to be admitted in court? And if a giant brain tumor suddenly turns a law-abiding professional into a hypersexual who indiscriminately solicits females from ages 8 to 80, is he criminally responsible for his conduct?  These were the questions on the table when the International Neuroethics Society convened a fascinating panel last week at the Carnegie Institution for Science on the uses of neuroscience evidence in criminal and civil trials.

Moderated and organized by Hank Greely of Stanford Law School, the panel brought together:

  • Steven Greenberg, whose efforts to introduce neuroscience on psychopathic disorder (psychopathy) in the Illinois capital sentencing of Brian Dugan have garnered attention from Nature to The Chicago Tribune;
  • Houston Gordon (an old-school trial attorney successful enough not to need his own website, hence no hyperlink), who has made the most assertive arguments so far to admit fMRI lie-detection evidence in a criminal case, United States v. Semrau; and
  • Russell Swerdlow, a research and clinical professor of neurology (and three other sciences!).  Swerdlow’s brilliant diagnostic work detected the tumor in the newly hypersexual patient, whom others had dismissed as a creep and a criminal.


In three upcoming short posts, I will feature the comments of each of these panelists and present for you, dear reader, some of the thornier issues raised by their talks.  These cases have been reported on in publications ranging from the Archives of Neurology to USA Today, but Concurring Opinions brings to you, direct and uncensored, the statements of the lawyers and scientists who made these cases happen … Can I say “stay tuned” on a blog?

Auditing Studies of Anti-Depressants

Marcia Angell has kicked off another set of controversies for the pharmaceutical sector in two recent review essays in the New York Review of Books. She favorably reviews meta-research that calls into question the effectiveness of many antidepressant drugs:

Kirsch and his colleagues used the Freedom of Information Act to obtain FDA reviews of all placebo-controlled clinical trials, whether positive or negative, submitted for the initial approval of the six most widely used antidepressant drugs approved between 1987 and 1999—Prozac, Paxil, Zoloft, Celexa, Serzone, and Effexor. . . .Altogether, there were forty-two trials of the six drugs. Most of them were negative. Overall, placebos were 82 percent as effective as the drugs, as measured by the Hamilton Depression Scale (HAM-D), a widely used score of symptoms of depression. The average difference between drug and placebo was only 1.8 points on the HAM-D, a difference that, while statistically significant, was clinically meaningless. The results were much the same for all six drugs: they were all equally unimpressive. Yet because the positive studies were extensively publicized, while the negative ones were hidden, the public and the medical profession came to believe that these drugs were highly effective antidepressants.
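It is worth pausing on the phrase “statistically significant” but “clinically meaningless.” A back-of-the-envelope sketch shows how both can be true at once; the sample size and standard deviation below are assumptions chosen for illustration, not figures from the FDA trial data:

    # With large pooled samples, even a 1.8-point HAM-D difference yields a tiny
    # p-value, despite falling short of the roughly 3-point change often treated
    # as clinically meaningful. Sample size and SD are assumed, not from the trials.
    from math import sqrt
    from statistics import NormalDist

    diff = 1.8        # average drug-minus-placebo difference on the HAM-D (from the quote)
    sd = 8.0          # assumed standard deviation of HAM-D scores
    n_per_arm = 500   # assumed number of patients per arm (pooled across trials)

    se = sd * sqrt(2 / n_per_arm)        # standard error of the difference in means
    z = diff / se                        # test statistic
    p = 2 * (1 - NormalDist().cdf(z))    # two-sided p-value

    print(f"z = {z:.2f}, two-sided p = {p:.4f}")  # comfortably below 0.05

With enough patients, almost any nonzero difference clears the statistical bar; the clinical question is whether a patient or clinician could tell a 1.8-point change apart from no change at all.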

Angell discusses other research that indicates that placebos can often be nearly as effective as drugs for conditions like depression. Psychiatrist Peter Kramer, a long-time advocate of anti-depressant therapy, responded to her last Sunday. He admits that “placebo responses . . . have been steadily on the rise” in FDA data; “in some studies, 40 percent of subjects not receiving medication get better.” But he believes that is only because the studies focus on the mildly depressed:

The problem is so big that entrepreneurs have founded businesses promising to identify genuinely ill research subjects. The companies use video links to screen patients at central locations where (contrary to the practice at centers where trials are run) reviewers have no incentives for enrolling subjects. In early comparisons, off-site raters rejected about 40 percent of subjects who had been accepted locally — on the ground that those subjects did not have severe enough symptoms to qualify for treatment. If this result is typical, many subjects labeled mildly depressed in the F.D.A. data don’t have depression and might well respond to placebos as readily as to antidepressants.

Yves Smith finds Kramer’s response unconvincing:

The research is clear: the efficacy of antidepressants is (contrary to what [Kramer's] article suggests) lower than most drugs (70% is a typical efficacy rate; for antidepressants, it’s about 50%. The placebo rate is 20% to 30% for antidepressants). And since most antidepressants produce side effects, patients in trials can often guess successfully as to whether they are getting real drugs. If a placebo is chosen that produces a symptom, say dry mouth, the efficacy of antidepressants v. placebos is almost indistinguishable. The argument made in [Kramer's] article to try to deal with this inconvenient fact, that many of the people chosen for clinical trials really weren’t depressed (thus contending that the placebo effect was simply bad sampling) is utter[ly wrong]. You’d see the mildly/short-term depressed people getting both placebos and real drugs. You would therefore expect to see the efficacy rate of both the placebo and the real drug boosted by the inclusion of people who just happened to get better anyhow.

Felix Salmon also challenges Kramer’s logic:

[Kramer's view is that] lots of people were diagnosed with depression and put onto a trial of antidepressant drugs, even when they were perfectly healthy. Which sounds very much like the kind of thing that Angell is complaining about: the way in which, for instance, the number of children so disabled by mental disorders that they qualify for Supplemental Security Income (SSI) or Social Security Disability Insurance (SSDI) was 35 times higher in 2007 than it was in 1987. And it’s getting worse: the editors of DSM-V, to be published in 2013, have written that “in primary care settings, approximately 30 percent to 50 percent of patients have prominent mental health symptoms or identifiable mental disorders, which have significant adverse consequences if left untreated.”

Those who would defend psychopharmacology, then, seem to want to have their cake and eat it: on the one hand it seems that serious mental health disorders have reached pandemic proportions, but on the other hand we’re told that a lot of people diagnosed with those disorders never really had them in the first place.

That is a very challenging point for the industry to consider as it responds to concerns like Angell’s. The diagnosis of mental illness will always have ineradicably economic dimensions and politically contestable aims. But doctors and researchers should insulate professional expertise and the interpretation of maladies as much as possible from inappropriate pressures.

How can they maintain that kind of independent clinical judgment? I think one key is to ensure that data from all trials is open to all researchers. Consider, for instance, these findings from a NEJM study on “selective publication”:

We obtained reviews from the Food and Drug Administration (FDA) for studies of 12 antidepressant agents involving 12,564 patients. . . . Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published were associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 study viewed as positive was not published. Studies viewed by the FDA as having negative or questionable results were, with 3 exceptions, either not published (22 studies) or published in a way that, in our opinion, conveyed a positive outcome (11 studies). According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive. Separate meta-analyses of the FDA and journal data sets showed that the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall. (emphasis added).
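The mechanism behind those numbers is easy to reproduce. Below is a small, purely illustrative simulation (the true effect size, trial size, and significance rule are all invented, not taken from the FDA data) showing how a literature that publishes mainly the trials that happen to reach statistical significance will overstate a drug’s true effect:

    # Illustrative simulation of selective publication: if mainly the trials that
    # reach p < 0.05 get published, the published literature overstates the true
    # effect. All parameters are invented; this is not the FDA/NEJM data set.
    import random
    import statistics

    random.seed(0)

    TRUE_EFFECT = 0.25   # true standardized drug-vs-placebo effect (modest)
    N_PER_ARM = 80       # patients per arm in each simulated trial
    N_TRIALS = 74        # same count as the NEJM review, purely for flavor

    def run_trial():
        drug = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
        placebo = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
        diff = statistics.mean(drug) - statistics.mean(placebo)
        se = (statistics.variance(drug) / N_PER_ARM +
              statistics.variance(placebo) / N_PER_ARM) ** 0.5
        return diff, abs(diff / se) > 1.96   # observed effect, nominally significant?

    results = [run_trial() for _ in range(N_TRIALS)]
    all_effects = [d for d, _ in results]
    published = [d for d, significant in results if significant]

    print(f"true effect:              {TRUE_EFFECT:.2f}")
    print(f"mean effect, all trials:  {statistics.mean(all_effects):.2f}")
    print(f"mean effect, 'published': {statistics.mean(published):.2f} "
          f"({len(published)} of {N_TRIALS} trials significant)")

The “published” subset systematically overstates the effect, which is the same pattern the NEJM authors found when they compared the journals (94 percent of trials apparently positive) with the FDA’s files (51 percent positive).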

Melander et al. also worried (in 2003) that, since “The degree of multiple publication, selective publication, and selective reporting differed between products,” “any attempt to recommend a specific selective serotonin reuptake inhibitor from the publicly available data only is likely to be based on biased evidence.” Without clearer “best practices” for data publication, clinical judgment may be impaired.

Full disclosure of study funding should also be mandatory and conspicuous, wherever results are published. Ernest R. House has reported that, “In a study of 370 ‘randomized’ drug trials, studies recommended the experimental drug as the ‘treatment of choice’ in 51% of trials sponsored by for-profit organizations compared to 16% sponsored by nonprofits.” The commodification of research has made it too easy to manipulate results, as Bartlett & Steele have argued:

One big factor in the shift of clinical trials to foreign countries is a loophole in F.D.A. regulations: if studies in the United States suggest that a drug has no benefit, trials from abroad can often be used in their stead to secure F.D.A. approval. There’s even a term for countries that have shown themselves to be especially amenable when drug companies need positive data fast: they’re called “rescue countries.” Rescue countries came to the aid of Ketek, the first of a new generation of widely heralded antibiotics to treat respiratory-tract infections. Ketek was developed in the 1990s by Aventis Pharmaceuticals, now Sanofi-Aventis. In 2004 . . . the F.D.A. certified Ketek as safe and effective. The F.D.A.’s decision was based heavily on the results of studies in Hungary, Morocco, Tunisia, and Turkey.

The approval came less than one month after a researcher in the United States was sentenced to 57 months in prison for falsifying her own Ketek data. . . . As the months ticked by, and the number of people taking the drug climbed steadily, the F.D.A. began to get reports of adverse reactions, including serious liver damage that sometimes led to death. . . . [C]ritics were especially concerned about an ongoing trial in which 4,000 infants and children, some as young as six months, were recruited in more than a dozen countries for an experiment to assess Ketek’s effectiveness in treating ear infections and tonsillitis. The trial had been sanctioned over the objections of the F.D.A.’s own reviewers. . . . In 2006, after inquiries from Congress, the F.D.A. asked Sanofi-Aventis to halt the trial. Less than a year later, one day before the start of a congressional hearing on the F.D.A.’s approval of the drug, the agency suddenly slapped a so-called black-box warning on the label of Ketek, restricting its use. (A black-box warning is the most serious step the F.D.A. can take short of removing a drug from the market.) By then the F.D.A. had received 93 reports of severe adverse reactions to Ketek, resulting in 12 deaths.

The great anti-depressant debate is part of a much larger “re-think” of the validity of data. Medical claims can spread virally without much evidence. According to a notable meta-researcher, “much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong.” The “decline effect” dogs science generally. Statisticians are also debunking ballyhooed efforts to target cancer treatments.

Max Weber once said that “radical doubt is the father of knowledge.” Perhaps DSM-VI will include a diagnosis for such debilitating skepticism. But I think there’s much to be learned from an insistence that true science is open, inspectable, and replicable. Harvard’s program on “Digital Scholarship” and the Yale Roundtable on Data and Code Sharing* have taken up this cause, as has the work of Victoria Stodden.

We often hear that the academic sector has to become more “corporate” if it is to survive and thrive. At least when it comes to health data, the reverse is true: corporations must become much more open about the sources and limits of the studies they conduct. We can’t resolve the “great anti-depressant debate,” or prevent future questioning of pharma’s bona fides, without such commitments.

*In the spirit of full disclosure: I did participate in this roundtable.

X-Posted: Health Law Profs Blog.

Rethinking Sorrell v. IMS Health: Privacy as a First Amendment Value

The Supreme Court will soon hear oral arguments in Sorrell v. IMS Health. The case pits medical data giant IMS Health (and some other plaintiffs) against the state of Vermont, which restricted the distribution of certain “physician-identified” medical data if the doctors who generated the data failed to affirmatively permit its distribution.* I have contributed to an amicus brief submitted on behalf of the New England Journal of Medicine regarding the case, and I agree with the views expressed by brief co-author David Orentlicher in his excellent article Prescription Data Mining and the Protection of Patients’ Interests. I think he, Sean Flynn, and Kevin Outterson have, in various venues, made a compelling case for Vermont’s restrictions. But I think it is easy to “miss the forest for the trees” in this complex case, and want to make some points below about its stakes.**

Privacy Promotes Freedom of Expression

Privacy has repeatedly been subordinated to other, competing values. Priscilla Regan chronicles how efficiency has trumped privacy in U.S. legislative contexts. In campaign finance and citizen petition cases, democracy has trumped the right of donors and signers to keep their identities secret. Numerous tech law commentators chronicle a tension between privacy and innovation. And now Sorrell is billed as a case pitting privacy against the First Amendment.
Read More

After Makena: Could a Risk Corridors Approach Balance Incentives and Access?

The past few weeks have been worrying ones for expectant mothers who wanted a hormonal treatment designed to stop preterm births. As Rob Stein of the WaPo explains,

A form of progesterone known as 17P was used for years to reduce the risk of preterm birth. . . Because no companies marketed the drug, women obtained it cheaply from “compounding” pharmacies, which produced individual batches for them [at about $20 each]. Doctors and regulators had long worried about the purity and consistency of the drug and were pleased when KV won FDA’s imprimatur for a well-studied version, which the company is selling as Makena.

The list price for the drug, Makena, turned out to be a stunning $1,500 per dose. That’s for a drug that must be injected every week for about 20 weeks, meaning it will cost about $30,000 per at-risk pregnancy. . . . The approval of Makena gave the company seven years of exclusive rights, and KV immediately fired off letters to compounding pharmacies, warning that they could no longer sell their versions of the drug.

A day after Stein’s article appeared, the FDA made it clear that it “does not intend to take enforcement action against pharmacies that compound” 17P, “in order to support access to this important drug, at this time and under this unique situation.”

This is a fascinating, and in some ways troubling, response to the accusations of price-gouging by KV. Nonenforcement here has some eerie parallels to the epidemic of waivers now undermining the implementation of the ACA.

Compounding pharmacists had already averred that “many of [KV's] assertions that the compounding of an FDA approved product is prohibited are not supported by the legal citations it references.” Though the FDA’s letter preserves access to 17P for now, that access could be revoked at any time. As the FDA states on its website:
Read More

Indian Supreme Court on Withdrawal of Life Support

There is a fascinating recent decision from the Indian Supreme Court on the Shanbaug case, regarding a woman who has been in a persistent vegetative state (PVS) for over 37 years. A petitioner who had written a book on Shanbaug argued for a withdrawal of life support. Shanbaug had no family to intervene, but hospital staff resisted, and the Court ultimately sided with them. While unflinchingly examining the dehumanizing aspects of PVS, the Court offers a remarkable affirmation of the good will of the staff who have taken care of Shanbaug:

[I]t is evident that the KEM Hospital staff right from the Dean, including the present Dean Dr. Sanjay Oak and down to the staff nurses and para-medical staff have been looking after Aruna for 38 years day and night. What they have done is simply marvelous. They feed Aruna, wash her, bathe her, cut her nails, and generally take care of her, and they have been doing this not on a few occasions but day and night, year after year. The whole country must learn the meaning of dedication and sacrifice from the KEM hospital staff. In 38 years Aruna has not developed one bed sore. It is thus obvious that the KEM hospital staff has developed an emotional bonding and attachment to Aruna Shanbaug, and in a sense they are her real family today.

After a scholarly survey of many countries and U.S. states’ laws on withdrawal of life support, the Court concludes:

A decision has to be taken to discontinue life support either by the parents or the spouse or other close relatives, or in the absence of any of them, such a decision can be taken even by a person or a body of persons acting as a next friend. It can also be taken by the doctors attending the patient. However, the decision should be taken bona fide in the best interest of the patient. . . .

Read More


On the Colloquy: The Credit Crisis, Refusal-to-Deal, Procreation & the Constitution, and Open Records vs. Death-Related Privacy Rights


This summer started off with a three-part series from Professor Olufunmilayo B. Arewa looking at the credit crisis and possible changes that would focus on averting future market failures, rather than continuing to create regulations that only address past ones.  Part I of Prof. Arewa’s series looks at the failure of risk management within the financial industry.  Part II analyzes the regulatory failures that contributed to the credit crisis as well as potential reforms.  Part III concludes by addressing recent legislation and whether it will actually help solve these very real problems.

Next, Professors Alan Devlin and Michael Jacobs take on an issue at the “heart of a highly divisive, international debate over the proper application of antitrust laws” – what should be done when a dominant firm refuses to share its intellectual property, even at monopoly prices.

Professor Carter Dillard then discussed the circumstances in which it may be morally permissible, and possibly even legally permissible, for a state to intervene and prohibit procreation.

Rounding out the summer was Professor Clay Calvert’s article looking at journalists’ use of open records laws and death-related privacy rights.  Calvert questions whether journalists have a responsibility beyond simply reporting dying words and graphic images.  He concludes that, at the very least, journalists should consider the impact their reporting has on surviving family members.