Archive for the ‘Bioethics’ Category
posted by Ryan Calo
As if we don’t have enough to worry about, now there’s spyware for your brain. Or, there could be. Researchers at Oxford, Geneva, and Berkeley have created a proof of concept for using commercially available brain-computer interfaces to discover private facts about today’s gamers.
April 14, 2013 at 12:57 am Posted in: Bioethics, Civil Rights, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical), Technology
posted by Taunya Banks
As a follow-up to my post last week asking about human dignity, unburied bones, and ownership of human cells, here are two related issues that appeared in the Sunday news.
The first item, from Sunday’s Baltimore Sun, is a belated report of a Reuters story about the controversy over the disposition of King Richard III’s remains, newly discovered in a municipal parking lot by the University of Leicester. The long-lost remains of the King, who died in 1485, were exhumed, and the University was given permission to re-inter them in Leicester. But the King’s descendants objected, claiming that they were not “consulted … over the exhumation and the license allowing the university to re-bury the King, and [that] this failure breached the European Convention on Human Rights.” They want the body buried in York.
The second item is an op-ed by two medical school academics, Jeffrey Rosenfeld and Christopher E. Mason, that appeared in Sunday’s Washington Post about Association for Molecular Pathology et al. v. Myriad Genetics, et al., a case that will be argued in the Supreme Court on April 15th. This is an important case that has been mentioned on this blog as recently as last February, and SCOTUSblog even featured a symposium spurred by the controversy. At issue is whether, on some level, human genes are patentable. Rosenfeld and Mason oppose patenting DNA. On the other hand, much like the researchers discussing the HeLa cells, the respondents, Myriad Genetics, et al., argue that the issue is much narrower: whether the “human” aspect of the specific sequence of isolated human DNA is the result of the respondents’ efforts, and thus patentable.
posted by Taunya Banks
In 1995 Gunther von Hagens presented his Body Worlds exhibit, described as a collection of real human bodies that have been “plastinated” to prevent their decay and make them more malleable. Some of these plastinated bodies were cut open to reveal their inner organs and then positioned in lifelike poses. The exhibit toured the world and was wildly popular.
Body Worlds also generated some criticism. Canadian social scientist Lawrence Burns argued that “some aspects of the exhibit violated human dignity.” (Am. J. Bioethics 7(4): 12–23 (2007)) Although the exhibit was touted as an educational experience, Burns and others worried that the bodies were being used as “resources to make money from the voyeurism of the general public.” A key concern was that the bodies were denied burial, and that this denial was a dignitary affront. Burns conceded, however, that the concept of human dignity as applied to deceased individuals is unclear.
I started to think about whether there is dignity after death, and if so what its parameters are, when I read a news article from the New Haven Register about the skeleton of an enslaved man being studied by the anthropology faculty and students at Quinnipiac University prior to burial.
The enslaved man, who died in 1798 (slavery was not abolished in Connecticut until 1848), was named Fortune. At the time of his death Fortune was the human chattel of a Waterbury, Connecticut, physician, who upon Fortune’s death boiled his body to remove the flesh, keeping the skeleton to study human anatomy. Fortune’s body remained unburied and was on display as late as 1970 at the Mattatuck Museum, where until recently it was still housed.
posted by Deven Desai
Scientists have run into a “technical, not biological” problem in trying to resurrect an extinct frog. Popular Science explains that the:
gastric-brooding frog, native to tiny portions of Queensland, Australia, gave birth through its mouth, the only frog to do so (in fact, very few other animals in the entire animal kingdom do this–it’s mostly this frog and a few fish). It succumbed to extinction due to mostly non-human-related causes–parasites, loss of habitat, invasive weeds, a particular kind of fungus.
Specimens were frozen in simple deep freezers, and genetic material from them was inserted into the eggs of another frog. The embryos grew. The next step is to get them to full adulthood so they can pop out as before. And yes, these folks are talking to those interested in bringing back other species.
As for this particular animal, the process reminds me a bit too much of Alien, which still scares the heck out of me.
the gastric-brooding frog lays eggs, which are coated in a substance called prostaglandin. This substance causes the frog to stop producing gastric acid in its stomach, thus making the frog’s stomach a very nice place for eggs to be. So the frog swallows the eggs, incubates them in her gut, and when they hatch, the baby frogs crawl out her mouth.
Science. Yummy. Oh, and here is your law fodder: what are the ethical implications? Send in the clones! (A better title for Attack of the Clones, perhaps.)
posted by Danielle Citron
Over at Jotwell, Jonathan Simon has a spot-on review of my colleague Leslie Meltzer Henry’s brilliant article, The Jurisprudence of Dignity, 160 U. Pa. L. Rev. 169 (2011). Henry’s work on dignity is as illuminating as it is ambitious. I urge you to read the piece. Here is Simon’s review:
Today American law, especially Eighth Amendment law, seems to be in the middle of a dignity tsunami. The United States is not alone in this regard, or even in the lead. Indeed, dignity has been an increasingly prominent value in modern legal systems internationally since the middle of the 20th century, marked in the prominence given that term in such foundational documents of the contemporary age as the Universal Declaration of Human Rights, in the reconstructed legal systems of post-war Europe (particularly Germany), and in regional human rights treaties like the European Convention on Human Rights and the more recent European Union Charter of Rights. A stronger version of dignity seems increasingly central to reforming America’s distended and degrading penal state. Legal historians have suggested that American history — particularly, the absence of a prolonged political struggle with the aristocracy and the extended experience with slavery — rendered dignity a less powerful norm, which may explain the relatively weak influence of dignity before now. Yet its increasing salience in the Roberts Court suggests that American dignity jurisprudence may be about to spring forward.
Professor Leslie Henry’s 2011 article, The Jurisprudence of Dignity, is a must-read for anyone interested in taming our penal state. Henry provides a comprehensive analysis of the US Supreme Court’s treatment of the term from the founding to the present. Henry borrows from the language philosopher Ludwig Wittgenstein the concept of a “family resemblance” and suggests that dignity as a legal term is anchored in five core meanings that continue to have relevance in contemporary law and which share overlapping features (but not a single set of factors describing all of them). The five clusters are: “institutional status as dignity,” “equality as dignity,” “liberty as dignity,” “personal integrity as dignity,” and “collective virtue as dignity.” These clusters suggest there can be both considerable reach but also precision and limits to using dignity to shape constitutional doctrine.
For much of the period between the Revolution and the middle of the 20th century, the meaning of dignity was confined largely to the first category, “institutional status as dignity.” Dignity by status dates from the earliest Greek and Roman conceptions, when dignity was associated with those of high status and conceptualized as anchored in that status. The United States by the time of the Constitution renounced the power to ennoble an aristocracy but shifted that hierarchical sense of dignity to the state itself and its officials. For much of the next century and a half, dignity was discussed mostly as a property of government, especially states and courts. This began to change in the 20th century, and the change accelerated significantly after World War II.
posted by Amanda Pustilnik
Media outlets around the world reported yesterday that a pill can make people less racist.
Is this for real?
The answer is less racy – and less raced – but actually more interesting than the headlines suggest.
Researchers at the Oxford University Centre for Practical Ethics, led by Sylvia Terbeck, administered a common blood-pressure lowering drug, called propranolol, to half of a group of white subjects and a placebo to the other half. (Read the study’s press release here and the research paper here) The subjects then took a test that measures “implicit associations” – the rapid, automatic good/bad, scary/safe judgments we all make in a fraction of a second when we look at words and pictures. The subjects who took the drug showed less of an automatic fear response to images of black people’s faces and were less likely to associate pictures of black people with negative words than the subjects who did not take the drug. Based on the study’s design, it is likely that results would be the same in trials involving racism by and against other racial and ethnic groups.
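The “implicit association” measure described above rests on a simple quantitative idea: people respond a fraction of a second more slowly when a pairing of images and words conflicts with an automatic association. A minimal sketch of that kind of scoring, loosely in the spirit of the IAT’s D score, is below; the reaction-time data and function name are hypothetical illustrations, not figures from the study:

```python
import statistics

def iat_d_score(congruent_rts, incongruent_rts):
    """Simplified IAT-style effect size: the difference in mean reaction
    times (in ms) between 'incongruent' and 'congruent' pairing blocks,
    scaled by the pooled standard deviation of all trials. Positive
    scores indicate slower responses on incongruent pairings."""
    pooled_sd = statistics.stdev(congruent_rts + incongruent_rts)
    return (statistics.mean(incongruent_rts)
            - statistics.mean(congruent_rts)) / pooled_sd

# Hypothetical reaction times: responses are slower when the required
# pairing conflicts with an automatic association.
congruent = [650, 700, 680, 720, 690]
incongruent = [820, 860, 845, 880, 830]
print(round(iat_d_score(congruent, incongruent), 2))
```

A drug that dampens automatic fear responses would be expected to shrink exactly this kind of reaction-time gap, which is what the propranolol group showed.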
This looks like the pill treated racism in the research subjects. But this isn’t so.
Researchers have long known that propranolol has a range of effects that include lethargy, sedation, and reductions in several kinds of brain activity. In high-flown medical parlance, this drug makes people really chilled out. I know: I’ve been on propranolol myself (unsuccessfully) for migraine prevention. When I was on the drug, my biggest fear was falling asleep at work – and even that didn’t stress me as much as it should have.
Because propranolol muffles fear generally, it reduces automatic negative responses to just about anything. Propranolol has been used to treat everything from “uncontrolled rage” to performance anxiety and is being explored for treating PTSD. Very recent research shows that it generally reduces activity in the brain region called the amygdala (more on that, below).
But the study remains interesting and important for a few reasons. This is the first study to show that inhibiting activity in the amygdala, which is crucially involved in fear learning, directly reduces one measure of race bias. This validates extensive research that has correlated race bias with heightened activity in that brain region. (Although some contrary research also challenges the association.) So this study helps support the idea of a causal relationship between automatic or pre-conscious race bias and conditioned fear learning.
The cure for racism born of conditioned fear learning is not to chemically dampen the brain’s response to fear generally – because fear is often useful – but to attack the causes of the conditioned associations that lead to bias in the first place.
The rest of this post will show how the fear response, claims about race, and the way the drug works all come together to point to the social nature of even “neurological” race bias – and to its economic and legal repercussions.
The fear response
When we see something that frightens or startles us, several regions of the brain become active – particularly the amygdala. The amygdala has many functions, so a neuroimage showing activity in the amygdala does not necessarily mean that a person is experiencing fear. But if a person has a frightening experience (loud noise!) or sees something she’s afraid of (snakes!), activity in the amygdala spikes. This activity is pre-conscious and totally outside our control: We startle first and then maybe stop to think about it.
The automaticity of fear serves us well in the face of real threats – but poorly in much of daily life. Fear learning is overly easy: A single negative experience can create a lasting, automatic fear association. Repeated, weak negative experiences can also form a strong fear association. And, we can “catch” fear socially: If my friend tells me that she had a negative experience, I may form an automatic fear association as if I had been frightened or harmed myself. Finally, fear lasts. I can consciously tell myself not to be afraid of a particular thing but my automatic fear response is likely to persist.
Race bias and the fear response
In neuroimaging studies using functional magnetic resonance imaging (fMRI) on white and black Americans, research subjects on average have a greater amygdalar response to images of black faces than to images of white faces. Researchers have interpreted this as a pre-conscious fear response. Indeed, the more that activity in a person’s amygdala increases in response to the images of black faces, the more strongly he or she makes negative associations with images of black faces and with typically African-American names (see paper here).
These automatic fear responses matter because they literally shape our perceptions of reality. For example, a subject might be asked to rate the facial expressions on a set of white and black faces. The facial expressions range from happy to neutral to angry. A subject who has a strong amygdalar response to images of black faces is much more likely to misinterpret neutral or even moderately happy expressions on a black facial image as being hostile or angry. This shows how fear changes our perceptions, which in turn changes how we react to and treat other people. It also shows how fear alters perception to create a self-reinforcing loop.
This kind of pre-conscious or automatic racism matters economically and legally: A majority of white people who have taken these implicit association tests demonstrate some automatic bias against black faces both associationally and neurologically. White people numerically and proportionally hold more positions as decision-makers about employment – like hiring and promotion – and about legal process and consequences – like whether to charge a suspect with a crime, the severity of the crime with which to charge him or her, and whether to offer a generous or harsh plea bargain. A study of two hundred judges serving in jurisdictions across the United States has shown that judges, too, more readily make these automatic, negative associations about black people than they do about white people. The implication is that automatic racial bias could play a role in pervasively tilting the scales against black people in every phase of economic life and in every phase of the legal process.
Yet, current anti-discrimination law only prohibits explicit racial bias. An employer may not advertise a position as “whites only” nor fire nor refuse to promote a worker because the employer does not want to retain or advance a black person. Systematic racial bias that creates unlawful “disparate impact” also rests on explicit racism: plaintiffs who claim that they are proportionally under-represented in, say, hiring and promotion by a particular employer must show that the disparate impact results from an intentional discriminatory purpose.
Automatic race bias, by contrast, takes a different form – a form not barred by law. Automatic discrimination expresses itself when the white supervisor (or police officer, or prosecutor, or judge, or parole board member) just somehow feels that his or her black counterpart has the proverbial “bad attitude,” or doesn’t “fit” with the culture of the organization, or poses a greater risk to the public than an equivalent white offender and so should not be offered bail or a plea deal or be paroled after serving some part of his sentence.
Tying it all together
If current anti-discrimination law does not touch automatic bias, and automatic bias is pervasive, then does this point to a role for drugs?
On propranolol, an implicitly biased interviewer or boss might perceive a black candidate more fairly, unfiltered by automatic negative responses. (She might, of course, still harbor conscious but unstated forms of bias; propranolol certainly would not touch race-biased beliefs about professionalism, competence, and the like.) But it also would generally dampen the decision-maker’s automatic fear responses. An overall reduction in automatic negative responses would not necessarily be a good thing: while it might free decision-makers from some false negative judgments based on race, it also would likely impair them from picking up on real negative signals from other sources.
And the take-away …
That a fear-dampening drug reduces racial bias in subjects helps confirm that much racial bias is based in automatic negative responses, which result from conditioned fear learning. Although this finding is hardly surprising, it is interesting and important. Any person reading this study should ask him- or herself: How does automatic fear affect my decisions about other people? How does it affect the judgments of important economic and legal decision-makers? How can we make it less likely that the average white person sees the average black person through distorting fear goggles in the first place?
The problem with this study and the headlines hyping it is that they perpetuate the idea that racism is the individual racist’s problem (It’s in his brain! And we can fix it!). A close reading of the study points to the importance of socially conditioned fear-learning about race – which then becomes neurologically represented in each of us. Despite the headlines, racism is not a neurological problem but a cultural one, which means that the solutions are a lot more complex than popping a pill.
posted by Ramesh Subramanian
Thank you, Samir Chopra and Laurence White, for writing this extremely thought-provoking book! Like Sonia Katyal, I too am particularly fascinated by the last chapter – personhood for artificial agents. The authors have done a wonderful job of explaining the legal constructs that have defined, and continue to define, the notion of according legal personality to artificial agents.
The authors argue that “dependent” legal personality, which has already been accorded to entities such as corporations, temples, and, in some cases, ships, could be easily extended to cover artificial agents. The argument for according “independent” legal personality to artificial agents, on the other hand, is much more tenuous. Many legal arguments and theories stand as strong impediments to according such status. The authors categorize these impediments as competencies (being sui juris, having a sensitivity to legal obligations, susceptibility to punishment, capability for contract formation, and property ownership and economic capacity) and philosophical objections (i.e., that artificial agents do not possess free will, do not enjoy autonomy or possess a moral sense, and do not have clearly defined identities), and then argue how each might be overcome legally.
Notwithstanding their conclusion that the courts may be unable or unwilling to take more than a piecemeal approach to extending constitutional protections to artificial agents, it seems clear to me that the accordance of legal personality – both dependent and, to a lesser extent, independent – is not too far in the future. In fact, the aftermath of Gillick v West Norfolk and Wisbech Area Health Authority has shown that various courts have gradually come to accept that dependent minors “gradually develop their mental faculties,” and thus can be entitled to make certain “decisions in the medical sphere.”
We can extend this argument to artificial agents, which are no longer just programmed expert systems but have gradually evolved into self-correcting, learning, and reasoning systems, much like children and some animals. We already know that even small children exhibit these capacities. So do chimpanzees and other primates. Stephen Wise has argued that some animals meet the “legal personhood” criteria and should therefore be accorded rights and protections. The Nonhuman Rights Project founded by Wise is actively fighting for legal rights for non-human species. As these legal moves evolve and shape common law, the question arises as to when (not if) artificial agents will develop notions of “self,” “morals,” and “fairness,” and on that basis be accorded legal personhood.
And when that situation arrives, what are the ramifications that we should further consider? I believe that three main “rights” that would have to be considered are: Reproduction, Representation, and Termination. We already know that artificial agents (and Artificial Life) can replicate themselves and “teach” the newly created agents. Self-perpetuation can also be considered to be a form of representation. We also know that under certain well defined conditions, these entities can self-destruct or cease to operate. But will these aspects gain the status of rights accorded to artificial agents?
These questions lead me to the issues which I personally find fascinating: end-of-life decisions extended to artificial agents. For instance, what would be the role of aging agents of inferior capabilities that nevertheless exist in a vast global network? What about malevolent agents? When, for instance, would it be appropriate to terminate an artificial agent? What would be the laws that would handle situations like this, and how would such laws be framed? While these questions seem far-fetched, we are already at a point where numerous viruses and “bots” pervade the global information networks, learn, perpetuate, “reason,” make decisions, and continue to extend their lives and their capacity to affect our existence as we know it. So who would be the final arbiter of end-of-life decisions in such cases? In fact, once artificial agents evolve and gain personhood rights, would it not be conceivable that we would have non-human judges in the courts?
Are these scenarios too far away for us to worry about, or close enough? I wonder…
February 14, 2012 at 6:00 pm Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Bioethics, Civil Rights, Courts, Sociology of Law, Symposium (Autonomous Artificial Agents), Technology
posted by Amanda Pustilnik
Is psychopathy a birth defect that should exclude a convicted serial killer and rapist from the death penalty? Are the results of fMRI lie-detection tests reliable enough to be admitted in court? And if a giant brain tumor suddenly turns a law-abiding professional into a hypersexual who indiscriminately solicits females from ages 8 to 80, is he criminally responsible for his conduct? These were the questions on the table when the International Neuroethics Society convened a fascinating panel last week at the Carnegie Institution for Science on the uses of neuroscience evidence in criminal and civil trials.
Moderated and organized by Hank Greely of Stanford Law School, the panel brought together:
- Steven Greenberg, whose efforts to introduce neuroscience on psychopathic disorder (psychopathy) in the Illinois capital sentencing of Brian Dugan have garnered attention from Nature to The Chicago Tribune;
- Houston Gordon (an old-school trial attorney successful enough not to need his own website, hence no hyperlink), who has made the most assertive arguments so far to admit fMRI lie-detection evidence, in United States v. Semrau, and
- Russell Swerdlow, a research and clinical professor of neurology (and three other sciences!). Swerdlow’s brilliant diagnostic work detected the tumor in the newly-hypersexual patient, whom others had dismissed as a creep and a criminal.
In three upcoming short posts, I will feature the comments of each of these panelists and present for you, dear reader, some of the thornier issues raised by their talks. These cases have been reported in publications ranging from the Archives of Neurology to USA Today, but Concurring Opinions brings you, direct and uncensored, the statements of the lawyers and scientists who made these cases happen … Can I say “stay tuned” on a blog?
November 20, 2011 at 12:39 pm Tags: law & neuroscience, neuroethics Posted in: Bioethics, Capital Punishment, Criminal Law, Evidence Law, Health Law, Psychology and Behavior
posted by Frank Pasquale
Marcia Angell has kicked off another set of controversies for the pharmaceutical sector in two recent review essays in the New York Review of Books. She favorably reviews meta-research that calls into question the effectiveness of many antidepressant drugs:
Kirsch and his colleagues used the Freedom of Information Act to obtain FDA reviews of all placebo-controlled clinical trials, whether positive or negative, submitted for the initial approval of the six most widely used antidepressant drugs approved between 1987 and 1999—Prozac, Paxil, Zoloft, Celexa, Serzone, and Effexor. . . .Altogether, there were forty-two trials of the six drugs. Most of them were negative. Overall, placebos were 82 percent as effective as the drugs, as measured by the Hamilton Depression Scale (HAM-D), a widely used score of symptoms of depression. The average difference between drug and placebo was only 1.8 points on the HAM-D, a difference that, while statistically significant, was clinically meaningless. The results were much the same for all six drugs: they were all equally unimpressive. Yet because the positive studies were extensively publicized, while the negative ones were hidden, the public and the medical profession came to believe that these drugs were highly effective antidepressants.
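The quote’s distinction between “statistically significant” and “clinically meaningless” is worth making concrete: with enough patients, even a tiny mean difference clears the significance threshold. A back-of-the-envelope sketch (the 1.8-point difference is from the quote; the ~8-point standard deviation and sample sizes are hypothetical, chosen only to illustrate the arithmetic):

```python
import math

# A fixed drug-placebo difference of 1.8 HAM-D points becomes
# "statistically significant" purely as a function of sample size,
# because the standard error of a difference in means shrinks with n.
diff, sd = 1.8, 8.0
for n_per_arm in (50, 500, 2000):
    se = sd * math.sqrt(2 / n_per_arm)   # SE of a two-arm mean difference
    z = diff / se
    print(n_per_arm, round(z, 2), "significant" if z > 1.96 else "not significant")
```

The clinical question, whether 1.8 points on a multi-item depression scale matters to a patient, is untouched by how small the p-value gets.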
Angell discusses other research that indicates that placebos can often be nearly as effective as drugs for conditions like depression. Psychiatrist Peter Kramer, a long-time advocate of anti-depressant therapy, responded to her last Sunday. He admits that “placebo responses . . . have been steadily on the rise” in FDA data; “in some studies, 40 percent of subjects not receiving medication get better.” But he believes that is only because the studies focus on the mildly depressed:
The problem is so big that entrepreneurs have founded businesses promising to identify genuinely ill research subjects. The companies use video links to screen patients at central locations where (contrary to the practice at centers where trials are run) reviewers have no incentives for enrolling subjects. In early comparisons, off-site raters rejected about 40 percent of subjects who had been accepted locally — on the ground that those subjects did not have severe enough symptoms to qualify for treatment. If this result is typical, many subjects labeled mildly depressed in the F.D.A. data don’t have depression and might well respond to placebos as readily as to antidepressants.
Yves Smith finds Kramer’s response unconvincing:
The research is clear: the efficacy of antidepressants is (contrary to what [Kramer's] article suggests) lower than most drugs (70% is a typical efficacy rate; for antidepressants, it’s about 50%. The placebo rate is 20% to 30% for antidepressants). And since most antidepressants produce side effects, patients in trials can often guess successfully as to whether they are getting real drugs. If a placebo is chosen that produces a symptom, say dry mouth, the efficacy of antidepressants v. placebos is almost indistinguishable. The argument made in [Kramer's] article to try to deal with this inconvenient fact, that many of the people chosen for clinical trials really weren’t depressed (thus contending that the placebo effect was simply bad sampling) is utter[ly wrong]. You’d see the mildly/short-term depressed people getting both placebos and real drugs. You would therefore expect to see the efficacy rate of both the placebo and the real drug boosted by the inclusion of people who just happened to get better anyhow.
Felix Salmon also challenges Kramer’s logic:
[Kramer's view is that] lots of people were diagnosed with depression and put onto a trial of antidepressant drugs, even when they were perfectly healthy. Which sounds very much like the kind of thing that Angell is complaining about: the way in which, for instance, the number of children so disabled by mental disorders that they qualify for Supplemental Security Income (SSI) or Social Security Disability Insurance (SSDI) was 35 times higher in 2007 than it was in 1987. And it’s getting worse: the editors of DSM-V, to be published in 2013, have written that “in primary care settings, approximately 30 percent to 50 percent of patients have prominent mental health symptoms or identifiable mental disorders, which have significant adverse consequences if left untreated.”
Those who would defend psychopharmacology, then, seem to want to have their cake and eat it: on the one hand it seems that serious mental health disorders have reached pandemic proportions, but on the other hand we’re told that a lot of people diagnosed with those disorders never really had them in the first place.
That is a very challenging point for the industry to consider as it responds to concerns like Angell’s. The diagnosis of mental illness will always have ineradicably economic dimensions and politically contestable aims. But doctors and researchers should insulate professional expertise and the interpretation of maladies as much as possible from inappropriate pressures.
How can they maintain that kind of independent clinical judgment? I think one key is to assure that data from all trials is open to all researchers. Consider, for instance, these findings from a NEJM study on “selective publication:”
We obtained reviews from the Food and Drug Administration (FDA) for studies of 12 antidepressant agents involving 12,564 patients. . . . Among 74 FDA-registered studies, 31%, accounting for 3449 study participants, were not published. Whether and how the studies were published were associated with the study outcome. A total of 37 studies viewed by the FDA as having positive results were published; 1 study viewed as positive was not published. Studies viewed by the FDA as having negative or questionable results were, with 3 exceptions, either not published (22 studies) or published in a way that, in our opinion, conveyed a positive outcome (11 studies). According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive. Separate meta-analyses of the FDA and journal data sets showed that the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall. (emphasis added).
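The quoted 94% versus 51% figures can be reproduced directly from the study counts in the excerpt, which makes the mechanics of selective publication concrete (a minimal sketch; the grouping of the 74 studies follows the figures given above):

```python
# Study counts as reported in the quoted NEJM excerpt.
fda_positive = 37 + 1          # 37 published positive + 1 unpublished
fda_total = 74                 # all FDA-registered studies
published_total = 74 - 22 - 1  # minus the unpublished studies
# What the literature shows as "positive": the truly positive published
# studies plus the 11 negative studies published so as to convey a
# positive outcome.
published_as_positive = 37 + 11

apparent_rate = published_as_positive / published_total
actual_rate = fda_positive / fda_total
print(f"literature: {apparent_rate:.0%}, FDA: {actual_rate:.0%}")
# → literature: 94%, FDA: 51%
```

Nothing about any individual trial changed; the gap is produced entirely by which studies reached print and how they were framed.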
Melander et al. also worried (in 2003) that, since “[t]he degree of multiple publication, selective publication, and selective reporting differed between products,” “any attempt to recommend a specific selective serotonin reuptake inhibitor from the publicly available data only is likely to be based on biased evidence.” Without clearer “best practices” for data publication, clinical judgment may be impaired.
Full disclosure of study funding should also be mandatory and conspicuous, wherever results are published. Ernest R. House has reported that, “In a study of 370 ‘randomized’ drug trials, studies recommended the experimental drug as the ‘treatment of choice’ in 51% of trials sponsored by for-profit organizations compared to 16% sponsored by nonprofits.” The commodification of research has made it too easy to manipulate results, as Bartlett & Steele have argued:
One big factor in the shift of clinical trials to foreign countries is a loophole in F.D.A. regulations: if studies in the United States suggest that a drug has no benefit, trials from abroad can often be used in their stead to secure F.D.A. approval. There’s even a term for countries that have shown themselves to be especially amenable when drug companies need positive data fast: they’re called “rescue countries.” Rescue countries came to the aid of Ketek, the first of a new generation of widely heralded antibiotics to treat respiratory-tract infections. Ketek was developed in the 1990s by Aventis Pharmaceuticals, now Sanofi-Aventis. In 2004 . . . the F.D.A. certified Ketek as safe and effective. The F.D.A.’s decision was based heavily on the results of studies in Hungary, Morocco, Tunisia, and Turkey.
The approval came less than one month after a researcher in the United States was sentenced to 57 months in prison for falsifying her own Ketek data. . . . As the months ticked by, and the number of people taking the drug climbed steadily, the F.D.A. began to get reports of adverse reactions, including serious liver damage that sometimes led to death. . . . [C]ritics were especially concerned about an ongoing trial in which 4,000 infants and children, some as young as six months, were recruited in more than a dozen countries for an experiment to assess Ketek’s effectiveness in treating ear infections and tonsillitis. The trial had been sanctioned over the objections of the F.D.A.’s own reviewers. . . . In 2006, after inquiries from Congress, the F.D.A. asked Sanofi-Aventis to halt the trial. Less than a year later, one day before the start of a congressional hearing on the F.D.A.’s approval of the drug, the agency suddenly slapped a so-called black-box warning on the label of Ketek, restricting its use. (A black-box warning is the most serious step the F.D.A. can take short of removing a drug from the market.) By then the F.D.A. had received 93 reports of severe adverse reactions to Ketek, resulting in 12 deaths.
The great anti-depressant debate is part of a much larger “re-think” of the validity of data. Medical claims can spread virally without much evidence. According to a notable meta-researcher, “much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong.” The “decline effect” dogs science generally. Statisticians are also debunking ballyhooed efforts to target cancer treatments.
Max Weber once said that “radical doubt is the father of knowledge.” Perhaps DSM-VI will include a diagnosis for such debilitating skepticism. But I think there’s much to be learned from an insistence that true science is open, inspectable, and replicable. Harvard’s program on “Digital Scholarship” and the Yale Roundtable on Data and Code Sharing* have taken up this cause, as has the work of Victoria Stodden.
We often hear that the academic sector has to become more “corporate” if it is to survive and thrive. At least when it comes to health data, the reverse is true: corporations must become much more open about the sources and limits of the studies they conduct. We can’t resolve the “great anti-depressant debate,” or prevent future questioning of pharma’s bona fides, without such commitments.
*In the spirit of full disclosure: I did participate in this roundtable.
X-Posted: Health Law Profs Blog.
posted by Frank Pasquale
The Supreme Court will soon hear oral arguments in Sorrell v. IMS Health. The case pits medical data giant IMS Health (and some other plaintiffs) against the state of Vermont, which restricted the distribution of certain “physician-identified” medical data if the doctors who generated the data failed to affirmatively permit its distribution.* I have contributed to an amicus brief submitted on behalf of the New England Journal of Medicine regarding the case, and I agree with the views expressed by brief co-author David Orentlicher in his excellent article Prescription Data Mining and the Protection of Patients’ Interests. I think he, Sean Flynn, and Kevin Outterson have, in various venues, made a compelling case for Vermont’s restrictions. But I think it is easy to “miss the forest for the trees” in this complex case, and want to make some points below about its stakes.**
Privacy Promotes Freedom of Expression
Privacy has repeatedly been subordinated to other, competing values. Priscilla Regan chronicles how efficiency has trumped privacy in U.S. legislative contexts. In campaign finance and citizen petition cases, democracy has trumped the right of donors and signers to keep their identities secret. Numerous tech law commentators chronicle a tension between privacy and innovation. And now Sorrell is billed as a case pitting privacy against the First Amendment.
Read the rest of this post »
posted by Frank Pasquale
A form of progesterone known as 17P was used for years to reduce the risk of preterm birth. . . Because no companies marketed the drug, women obtained it cheaply from “compounding” pharmacies, which produced individual batches for them [at about $20 each]. Doctors and regulators had long worried about the purity and consistency of the drug and were pleased when KV won FDA’s imprimatur for a well-studied version, which the company is selling as Makena.
The list price for the drug, Makena, turned out to be a stunning $1,500 per dose. That’s for a drug that must be injected every week for about 20 weeks, meaning it will cost about $30,000 per at-risk pregnancy. . . . The approval of Makena gave the company seven years of exclusive rights, and KV immediately fired off letters to compounding pharmacies, warning that they could no longer sell their versions of drug.
A day after Stein’s article appeared, the FDA made it clear that it “does not intend to take enforcement action against pharmacies that compound” 17P, “in order to support access to this important drug, at this time and under this unique situation.”
This is a fascinating, and in some ways troubling, response to the accusations of price-gouging by KV. Nonenforcement here has some eerie parallels to the epidemic of waivers now undermining the implementation of the ACA.
Compounding pharmacists had already averred that “many of [KV's] assertions that the compounding of an FDA approved product is prohibited are not supported by the legal citations it references.” Though the FDA’s letter preserves access to 17P for now, that access could be revoked at any time. As the FDA states on its website:
Read the rest of this post »
posted by Frank Pasquale
There is a fascinating recent decision from the Indian Supreme Court on the Shanbaug case, regarding a woman who has been in a persistent vegetative state (PVS) for over 37 years. A petitioner who had written a book on Shanbaug argued for a withdrawal of life support. Shanbaug had no family to intervene, but hospital staff resisted, and the Court ultimately sided with them. While unflinchingly examining the dehumanizing aspects of PVS, the Court offers a remarkable affirmation of the good will of the staff who have taken care of Shanbaug:
[I]t is evident that the KEM Hospital staff right from the Dean, including the present Dean Dr. Sanjay Oak and down to the staff nurses and para-medical staff have been looking after Aruna for 38 years day and night. What they have done is simply marvelous. They feed Aruna, wash her, bathe her, cut her nails, and generally take care of her, and they have been doing this not on a few occasions but day and night, year after year. The whole country must learn the meaning of dedication and sacrifice from the KEM hospital staff. In 38 years Aruna has not developed one bed sore. It is thus obvious that the KEM hospital staff has developed an emotional bonding and attachment to Aruna Shanbaug, and in a sense they are her real family today.
After a scholarly survey of many countries and U.S. states’ laws on withdrawal of life support, the Court concludes:
A decision has to be taken to discontinue life support either by the parents or the spouse or other close relatives, or in the absence of any of them, such a decision can be taken even by a person or a body of persons acting as a next friend. It can also be taken by the doctors attending the patient. However, the decision should be taken bona fide in the best interest of the patient. . . .
On the Colloquy: The Credit Crisis, Refusal-to-Deal, Procreation & the Constitution, and Open Records vs. Death-Related Privacy Rights
posted by Northwestern University Law Review
This summer started off with a three-part series from Professor Olufunmilayo B. Arewa looking at the credit crisis and possible changes that would focus on averting future market failures, rather than continuing to create regulations that only address past ones. Part I of Prof. Arewa’s series looks at the failure of risk management within the financial industry. Part II analyzes the regulatory failures that contributed to the credit crisis as well as potential reforms. Part III concludes by addressing recent legislation and whether it will actually help solve these very real problems.
Next, Professors Alan Devlin and Michael Jacobs take on an issue at the “heart of a highly divisive, international debate over the proper application of antitrust laws” – what should be done when a dominant firm refuses to share its intellectual property, even at monopoly prices.
Professor Carter Dillard then discussed the circumstances in which it may be morally permissible, and possibly even legally permissible, for a state to intervene and prohibit procreation.
Rounding out the summer was Professor Clay Calvert’s article looking at journalists’ use of open records laws and death-related privacy rights. Calvert questions whether journalists have a responsibility beyond simply reporting dying words and graphic images. He concludes that, at the very least, journalists should consider the impact their reporting has on surviving family members.
September 5, 2010 at 1:15 pm Tags: Antitrust, Constitutional Law, copyright, discrimination, financial crisis, free speech, Intellectual Property, Privacy, trademark Posted in: Antitrust, Bioethics, Civil Rights, Constitutional Law, Corporate Finance, First Amendment, Intellectual Property, Privacy, Securities, Securities Regulation Print This Post No Comments
posted by Dave Hoffman
Legal archaeology is a term sometimes used to refer to scholarship that brings a rich context to famous cases. If you were a legal researcher seeking to enrich a modern classic – e.g., Pepsico [contracts], Lawrence [con law], Liebeck [torts], Twombly [civ pro] – you might proceed by interviewing the parties and their attorneys, examining prior and related cases, and boning up on the briefs and exhibits. It seems pretty clear to me that before undertaking such research, a prudent professor would check in with their IRB. The interviewing of the parties and their attorneys in particular doesn’t appear to be clearly covered by any exemption, and I imagine that at least expedited review would be indicated.
But how about simply writing about living parties – or judges – in modern cases? It would seem inconceivable to go to the IRB before writing about, say, Yaser Hamdi. Well, you never know how your local IRB will deal with novelty. So let’s go back to the basics. Is this research under Section 46.102? Arguably: it is a “systematic investigation . . . designed to contribute to generalizable knowledge.” Is it research regarding human subjects? Well, under 46.102(f), human subjects are people from whom you collect data through intervention or interaction, or about whom you collect identifiable private information. Private information “includes information about behavior that occurs in a context in which an individual can reasonably expect that no observation or recording is taking place, and information which has been provided for specific purposes by an individual and which the individual can reasonably expect will not be made public (for example, a medical record).” Are there facts about behavior disclosed in judicial opinions which fit this definition? I can think of many: disclosure of facts from police reports, medical records, taxes, etc. Indeed, most opinions disclose facts about individuals that they’d never, ever, want told to the public, and were forced to disclose only through contentious discovery. Quite often, the discovery contained stipulations of confidentiality that bind the parties, but not the court.
Nevertheless, it’s clear that writing about such personal facts in released opinions is in fact exempt from IRB review, since a judicial opinion is, under 46.101(b)(4), a public record. So you might think that this entire exercise is academic. And for some IRBs, it would be. But most IRBs would take the position – if asked – that researchers must submit an application to them, so that the board can evaluate the claim for exemption. This is a slam dunk case for exemption, but that doesn’t mean that the professor gets to decide for herself that no application is necessary. Of course, I’ve never heard of a law professor submitting to an IRB before writing an article about a recent case of interest, even when discussing the most personal facts relating to the parties or the judge. In fact, some articles about particular judges have created political scandals of some note. Unless I’m mistaken about any of the previous analysis, I think that means that most law professors, some of the time, are not in technical compliance with a set of (very silly and possibly unconstitutional as applied) regulations. Ironically, it is probably constitutional law professors, who write about recent cases involving individual parties most often, who are the prime violators. If your law school has not reached a general understanding with your local IRB about how to proceed, it should.
posted by Glenn Cohen
A few months back Jessie Hill had a blog post entitled “My so-called right to procreate” asking about the scope of procreative liberty protected by the Constitution. I wrote about this issue in passing in a paper devoted to the opposite question: whether the Constitution protects a right NOT to procreate (or what I prefer to think of as rights not to procreate, separable sticks in a bundle encompassing the right not to be a legal, gestational, or genetic parent – indeed, as I pointed out there, I think the right to procreate should be similarly unbundled). In a new paper entitled Well, What About the Children?: Best Interests Reasoning, the New Eugenics, and the Regulation of Reproduction, part of a larger project on the justifications for the regulation of reproduction, I briefly address a slightly narrower issue than the one in Jessie’s post: whether there is a negative liberty fundamental right to non-interference with reproductive technology use. I thought I would set out and expand on that discussion here and see what other readers thought.
My own view is that the constitutional status of state interventions preventing access to reproductive technologies (whether directly, e.g., prohibitions on access to reproductive technology for women over age 50, or indirectly through regulation, e.g., parental fitness screening for surrogacy users) is deeply under-determined by the existing doctrine. The only U.S. Supreme Court decision to consider whether there is a fundamental right to become a genetic parent, Skinner v. Oklahoma, 316 U.S. 535, 536-39 (1942) (finding a fundamental right that was violated by physical sterilization of individuals convicted three or more times of crimes of moral turpitude, but not embezzlement), is subject to a myriad of possible interpretations, especially as applied to reproductive technologies.
Here are a few:
Skinner protects as a fundamental right any use of reproductive technologies that simulates that which would be achievable by coital reproduction in the fertile individual (not, therefore, something like genetic engineering). John Robertson is the person I most closely associate with this view (although his view has considerably more nuance than I can get across here).
On the other extreme, one might argue that because Skinner itself was premised on an Equal Protection claim, not a substantive Due Process one, there is no substantive Due Process right to procreate at all. Cf. VICTORIA F. NOURSE, IN RECKLESS HANDS: SKINNER V. OKLAHOMA AND THE NEAR-TRIUMPH OF AMERICAN EUGENICS 165 (2008) (concluding that “both liberals and conservatives have made a mistake” in their reading of Skinner because the case was “neither argued nor decided as a case about rights in the sense that we use the term ‘fundamental right’ today.”). That said, over the years the Court has lumped Skinner in with its substantive Due Process jurisprudence so often that the time may have passed for hewing to this distinction.
In between there are several other positions:
posted by Glenn Cohen
I’ve found both in published work and in classroom and workshop discourse that people often mean different things when they talk about commodification concerns as an argument for blocked exchanges – e.g., forbidding the sale of kidneys from live donors, prostitution, the sale of surrogacy services, etc.
I thought it might be useful to try to sort out some of these different meanings (for those looking for a more formal discussion with citations, this old paper of mine may be useful). This is my own classification (though it builds off work by my colleague Michael Sandel, among others). I will be interested to see if others think one should add to or reformulate the taxonomy. It is also worth emphasizing at the threshold that while money is the focus of most anti-commodificationist arguments, for each version barter can also give rise to the same objections.
At the top-level we can divide commodification into three large categories (the 3 C’s if you will): Coercion, Corruption, and Crowding-Out. For the purposes of this post my goal is not to evaluate these arguments, just to parse them better.
(1) Coercion. (a) Voluntariness. This concern, also known as exploitation, is framed as a concern about the voluntariness of the transaction in a way that demands more than minimal notions of consent. It is the fear that only the poor will sell organs or that only destitute women will consent to act as commercial surrogates, and it argues for blocking the exchange to protect those populations. It thus depends on some empirical facts about the population the argument seeks to protect; one occasionally sees proposals to limit sales of organs or surrogacy services to people above a certain income bracket to blunt the concern. It also depends on views about the validity of blocking an exchange due to these somewhat paternalistic concerns. Thus, it is sometimes argued that it is hypocritical to block an exchange preventing a badly-off person from improving their station in life unless we are also committed to a redistributive plan that makes them as well-off as they would be if the exchange were permitted. It is important to understand that this objection is not focused on a claim that the buyer and seller are giving up unequally valued things (unequal in amount; see below regarding mismatches of type), the “raw deal” problem that parallels one strand of substantive unconscionability doctrine in contracts; instead, it is about the seller’s poverty and their susceptibility to “an offer you can’t refuse,” even if the good is valued fairly. While one solution to some forms of unconscionability may be to rewrite the terms to be more favorable to the seller, adding extra compensation here would worsen, not improve, the exchange from the point of view of this objection.
(b) Access: Somewhat less frequently, the objection is made almost in reverse. While the voluntariness version treats the exchange as representing a “bad” that the poorer party suffers in one respect involuntarily, the access variant instead views the exchange as representing a “good” that only the better-off party has access to because of the existence of the market. For example, the sale of “premium” eggs is something only the wealthy will have access to; similarly, during the Civil War the practice of commutation, by which one could pay three hundred dollars to avoid serving in the draft, was available only to the wealthier strata of society. This objection also depends on notions of background unjust inequalities in resource distribution to get going.
Price caps may be a partial solution to either form of the coercion objection, because they lower the price enough that it is not so attractive as to make us question voluntariness (the “offer you can’t refuse”) and also move the purchase of the good into the range of access for more of the population. It is only a partial solution because it usually results in shortages. One could also imagine “mixed” systems that do better at addressing one concern than the other — so the state could be the only permitted buyer of organs and then distribute them through the current transplant system rather than by willingness to pay — this would go a long way toward blunting the access concern, but not necessarily the voluntariness one (and indeed might make the corruption objection below even worse).
(2) Corruption: A second version of the objection is that a market exchange “corrupts,” “taints,” or “denigrates” the things being exchanged — for instance, the argument that prostitution devalues women’s bodies by attaching a price tag to their sexuality. Cass Sunstein offers a good starting formulation of the corruption argument: an exchange is corrupting when “the relevant goods cannot be aligned along a single metric without doing violence to our considered judgments about how these goods are best characterized.” Incommensurability and Kinds of Valuation: Some Applications in Law, in INCOMMENSURABILITY, INCOMPARABILITY, AND PRACTICAL REASON 234, 238 (Ruth Chang ed., 1997). More specifically, one might suggest that there are various “spheres” (sometimes called “modes”) of valuation, and an exchange is corrupting when it ignores the differences between these spheres of valuation and forces us to value all goods in the same way. For example, exchanging children for money corrupts the value of children because money and children belong in different spheres of valuation.
As I have described in depth, that requires both a theory of sphere differentiation and a theory of what it is about exchanges that “does violence,” neither of which is easy to articulate. For present purposes, though, I want merely to distinguish versions of the argument along two dimensions.
August 17, 2010 at 8:53 am Posted in: Bioethics, Culture, Family Law, Feminism and Gender, Health Law, Jurisprudence, Law and Humanities, Law and Inequality, Legal Theory, Uncategorized Print This Post 6 Comments
posted by Glenn Cohen
Over the summer at the annual health law professors’ conference organized by ASLME, I saw a wonderful presentation on Flynn v. Holder from John Robertson, which I think John will be publishing soon. The case is a challenge to the National Organ Transplant Act (NOTA) of 1984’s ban on selling bone marrow filed in the U.S. District Court, Central District of California, and you can view the complaint here.
My main interest in the case is how it will compare to Abigail Alliance v. Eschenbach, a case I helped litigate at the D.C. Circuit en banc stage when I was at the DOJ. Abigail Alliance involved a challenge by terminally ill patients seeking access to drugs that had cleared Phase 1 clinical testing but had not gone further in the testing process. There, the plaintiffs succeeded in getting a panel of the D.C. Circuit to hold that a fundamental right of theirs was being violated by the FDA policy, with a remand for consideration of whether the government could make its showing on strict scrutiny. On rehearing en banc, however, the full D.C. Circuit reversed course, finding no fundamental right (there was no serious argument in the case that the government would not prevail on rational basis review).
In many ways, Flynn is a beautifully set up test case. The primary plaintiff is very sympathetic — a “single mother of five with three daughters who suffer from a deadly bone marrow disease.” Because bone marrow is renewable, and many other renewable “organs” (think sperm and eggs) explicitly fall outside of NOTA’s prohibition, there is an air of arbitrariness here. The plaintiffs do not want to buy bone marrow in crass commercial terms, but instead to “create a pilot program that would encourage more bone marrow donations by offering nominal compensation—such as a scholarship or housing allowance.” While I do not think this fact actually allows us to avoid the corruption form of the anti-commodificationist argument (I may blog more on that topic soon), on a superficial level it does seem to reduce the strength of at least one talking point. The fact that we already tolerate altruistic bone marrow donation suggests that the risk-prevention rationale that was central in Abigail Alliance faces some problems here. Indeed, as I, Lori Andrews, and others have argued in the context of reproductive services, in some ways the “coercion” or “exploitation” concerns that are sometimes raised in anti-commodificationist arguments may be more worrisome in the altruistic and familial setting than in arm’s-length market arrangements. The case also seems to compare favorably on crowding-out concerns. Although the Abigail Alliance court did not reach the issue (because whether a fundamental right was present dominated the analysis), the government offered a somewhat attenuated crowding-out argument: that the availability of experimental drugs outside of clinical trials would reduce enrollment in clinical trials, and therefore slow either approval of these drugs (and widespread availability) or a demonstration that they were unsafe or ineffective. Though attenuated, this was a concern that many took quite seriously in the run-up to and aftermath of the case.
Here, by contrast, I think the crowding-out argument is more straightforward and is similar to one that people associate with Richard Titmuss’s work on blood sales: that adding commercial elements will drive altruistic donation out of the market. To be sure, that is an empirical claim, but one that seems less plausible to me than the parallel claim in Abigail Alliance, and I think here again the charitable/foundation approach may blunt some concerns about the transformation of the social meaning of bone marrow donation.
posted by Glenn Cohen
A recent faculty workshop by my witty and brilliant colleague Jonathan Zittrain on “ubiquitous human computing” (this YouTube video captures, in a different form, what he was talking about) prompted me to think about some ways in which platforms like Amazon’s Mechanical Turk interface with university research and research ethics in interesting ways.
For those unfamiliar, Mechanical Turk allows you to farm out a variety of small tasks (label this image, enter the date of this .pdf into a spreadsheet, take a photo of yourself with the sign “will turk for food,” etc.) at a price per unit you set. Millions of anonymous users can then do the task for you and collect the bounty, a form of microwork.
As Jonathan detailed, this raises a host of fascinating issues, but I want to focus on two that are closer to bioethics.
First, I have begun to see some legal academics recruiting populations for experimental work using Mechanical Turk, and there is an emerging literature on the pros and cons of subject recruitment from these populations. Are Mechanical Turkers “research subjects” within the legal (primarily the Common Rule, if one receives federal funding) or broader ethical sense of the term? Should they be? Take as a tangible example the implicit bias research of the kind Mahzarin R. Banaji has made famous, and imagine it was done over something like Mechanical Turk. How (if at all) should the anonymity of the subject, the lack of any subject-experimenter relationship, the piecemeal nature of the task, etc., change the way an institutional review board reviews the research? It is a mantra in the research ethics community that informed consent is supposed to be a “process,” not a document, but how can that process take place in this anonymous, static cyberspace environment?
Second, consider research assistance.
August 3, 2010 at 9:49 am Posted in: Amazon, Anonymity, Bioethics, Bright Ideas, Google & Search Engines, Law and Psychology, Law School, Law School (Scholarship), Technology, Web 2.0 Print This Post 5 Comments
posted by Frank Pasquale
“I don’t want to achieve immortality through my work,” Woody Allen said, “I want to achieve it through not dying.” The “Singularity University” is attracting Silicon Valley glitterati who think along the same lines:
[T]he Singularity — a time, possibly just a couple decades from now, when a superior intelligence will dominate and life will take on an altered form that we can’t predict or comprehend in our current, limited state . . . [will lead to a world where] human beings and machines . . . so effortlessly and elegantly merge that poor health, the ravages of old age and even death itself will all be things of the past.
Some of Silicon Valley’s smartest and wealthiest people have embraced the Singularity. They believe that technology may be the only way to solve the world’s ills, while also allowing people to seize control of the evolutionary process. For those who haven’t noticed, the Valley’s most-celebrated company — Google — works daily on building a giant brain that harnesses the thinking power of humans in order to surpass the thinking power of humans.
Ezra Klein skewers the techno-utopianism, toying with the idea that we may well be robotized before we get electronic medical records:
Right now, one of the top stories on the New York Times site is about how human beings are going to become people-computer hybrids and live forever and that vision actually seems semi-plausible until you realize that all the information about the operation to download your memories into a Macintosh will probably be kept in a manila folder in a large filing cabinet, and then it doesn’t seem so likely.
But Klein neglects the trends toward tiering in the medical system, which may well continue forking into “upper decks” where anything is possible and nether realms of penury. As Andrew Orlowski comments, “The Singularity is . . . . rich people building a lifeboat and getting off the ship.” I think that progress in bioethics depends on a rejection of that kind of thinking in favor of a more solidaristic orientation toward the needs of the worst off. As I stated in 2002,
We are all disturbed by hypothetical dystopias like Huxley’s Brave New World. But their most important flaws – the inequality, degradation, and moral irresponsibility of their inhabitants – are already apparent in [some aspects of life in the] world’s wealthiest nations[, which] spend hundreds of millions of dollars on elaborate technologies of life-extension, while contributing much less to efforts to assure basic medical care to the poorest. Public debate on regenerative medicine must acknowledge this inequality. Societies and individuals can invest in it in good conscience only if they are seriously committed to extending extant medicine to all.
If “Singularity University” turns out to be a prime philanthropic initiative of the Google guys, while the Bill and Melinda Gates Foundation sticks to “progress in fighting hunger and poverty,” I know which tech company I’ll be rooting for.
posted by Dave Hoffman
From the hyper-civilized French comes a new game show:
Game show contestants turn torturers in a new psychological experiment for French television, zapping a man with electricity until he cries for mercy — then zapping him again until he seems to drop dead.
“The Game of Death” has all the trappings of a traditional television quiz show, with a roaring crowd and a glamorous and well-known hostess urging the players on under gaudy studio lights.
But the contestants did not know they were taking part in an experiment to find out whether television could push them to outrageous lengths, and which has prompted comparisons with the atrocities of Nazi Germany.
The better analogy is Stanley Milgram’s Yale obedience experiments, which were the direct inspiration for this show. Though the article blames television’s “absolutely terrifying power” to compel obedience here, I think the result can be explained much more simply by the power of authority itself.
Maybe we need an IRB for reality show producers.