

Cognitive Biases, the Legal Academy, and the Judiciary

It’s a pleasure to be here at Concurring Opinions.  I would like to thank Dan, Sarah, and Ron for inviting me.  During my visit, I hope to talk a bit about my core research areas of land use and local government law (including why you, who are statistically unlikely to be interested in either land use or local government law, should be interested), but also about other issues such as the current state of the legal academy and the legal profession, often using land use or local government law to examine these broader issues.

On Cognitive Biases

On that last note, Slate.com recently ran a great piece by Katy Waldman on how the human brain processes information, observing that people have a predilection to believe factual claims they find easy to process.  Waldman synthesizes the results of several interesting studies, including one eye-opening study that identifies three persistent cognitive biases that humans possess.  As Waldman summarizes these biases: “First, we reflexively attribute people’s behavior to their character rather than their circumstances.” Second, “we learn more easily when knowledge is arranged hierarchically, so in a pinch we may be inclined to accept fixed status and gender roles.” And third, “we tend to assume that persisting and long-standing states are good and desirable, which stirs our faith in the status quo absent any kind of deep reflection.” The study attributes these biases to the basic human need, rooted in the primitive recesses of our lizard brain, to manage uncertainty and risk.

While Waldman argues that there is some relationship between these biases and conservative political beliefs, what struck me about these findings is how well the biases describe judicial behavior.


Social Science in an Era of Corporate Big Data

In my last post, I explored the characteristics of Facebook’s model (i.e., exemplary) users. Today, I want to discuss the model users inside the company: the data scientists who try to build stylized versions of reality (models) based on certain data points and theories. The Facebook emotion experiment is part of a much larger reshaping of social science. To what extent will academics study data-driven firms like Facebook, and to what extent will they try to join forces with those firms’ in-house researchers to study others?

Present incentives are clear: collaborate with (rather than develop a critical theory of) big data firms.  As Zeynep Tufekci puts it, “the most valuable datasets have become corporate and proprietary [and] top journals love publishing from them.”  “Big data” has an aura of scientific validity simply because of the velocity, volume, and variety of the phenomena it encompasses. Psychologists certainly must have learned *something* from looking at over 600,000 accounts’ activity, right?

The problem, though, is that the corporate “science” of manipulation is a far cry from academic science’s ethics of openness and reproducibility.* That’s already led to some embarrassments in the crossover from corporate to academic modeling (such as Google’s flu trends failures). Researchers within Facebook worried about multiple experiments being performed at once on individual users, which might compromise the results of any one study. Standardized review could have prevented that. But, true to the Silicon Valley ethic of “move fast and break things,” speed was paramount: “There’s no review process. Anyone…could run a test…trying to alter peoples’ behavior,” said one former Facebook data scientist.

Grant Getters and Committee Men

Why are journals so interested in this form of research? Why are academics jumping on board? Fortunately, social science has matured to the point that we now have a robust, insightful literature about the nature of social science itself. I know, this probably sounds awfully meta–exactly the type of navel-gazing Senator Coburn would excommunicate from the church of science. But it actually provides a much-needed historical perspective on how power and money shape knowledge. Consider, for instance, the opening of Joel Isaac’s article Tangled Loops, on Cold War social science:

During the first two decades of the Cold War, a new kind of academic figure became prominent in American public life: the credentialed social scientist or expert in the sciences of administration who was also, to use the parlance of the time, a “man of affairs.” Some were academic high-fliers conscripted into government roles in which their intellectual and organizational talents could be exploited. McGeorge Bundy, Walt Rostow, and Robert McNamara are the archetypes of such persons. An overlapping group of scholars became policymakers and political advisers on issues ranging from social welfare provision to nation-building in emerging postcolonial states.

Postwar leaders of the social and administrative sciences such as Talcott Parsons and Herbert Simon were skilled scientific brokers of just this sort: good “committee men,” grant-getters, proponents of interdisciplinary inquiry, and institution-builders. This hard-nosed, suit-wearing, business-like persona was connected to new, technologically refined forms of social science. . . . Antediluvian “social science” was eschewed in favour of mathematical, behavioural, and systems-based approaches to “human relations” such as operations research, behavioral science, game theory, systems theory, and cognitive science.

One of Isaac’s major contributions in that piece is to interpret the social science coming out of the academy (and entities like RAND) as a cultural practice: “Insofar as theories involve certain forms of practice, they are caught up in worldly, quotidian matters: performances, comportments, training regimes, and so on.” Government leveraged funding to mobilize research to specific ends. To maintain university patronage systems and research centers, leaders had to be on good terms with the grantors. The common goal of strengthening the US economy (and defeating the communist threat) cemented an ideological alliance.

Government still exerts influence in American social and behavioral sciences. But private industry controls critical data sets for the most glamorous, data-driven research. In the Cold War era, “grant getting” may have been the key to economic security, and to securing one’s voice in the university. Today, “exit” options are more important than voice, and what better place to exit to than an internet platform? Thus academic/corporate “flexians” shuttle between the two worlds. Their research cannot be too venal, lest the academy disdain it. But neither can it indulge in, say, critical theory (what would nonprofit social networks look like?), just as Cold War social scientists were ill-advised to, say, develop Myrdal’s or Leontief’s theories. There was a lot more money available for the Friedmanite direction economics would, eventually, take.

Intensifying academic precarity also makes the blandishments of corporate data science an “offer one can’t refuse.” Tenured jobs are growing scarcer. As MOOC-mongers aspire to deskill and commoditize the academy, industry’s benefits and flexibility grow ever more alluring. Academic IRBs can impose a heavy bureaucratic burden; the corporate world is far more flexible. (Consider all the defenses of the Facebook experiment authored last week, which emphasized how little review corporate research has to go through: satisfy the boss, and you’re basically done, no matter how troubling your aims or methods may be in a purely academic context.)

Creating Kinds

So why does all this matter, other than to the quantitatively gifted individuals at the cutting edge of data science? It matters because, in Isaac’s words:

Theories and classifications in the human sciences do not “discover” an independently existing reality; they help, in part, to create it. Much of this comes down to the publicity of knowledge. Insofar as scientific descriptions of people are made available to the public, they may “change how we can think of ourselves, [and] change our sense of self-worth, even how we remember our own past.”

It is very hard to develop categories and kinds for internet firms, because they are so secretive about most of their operations. (And make no mistake about the current PR kerfuffle for Facebook: it will lead the company to become ever more secretive about its data science, just as Target started camouflaging its pregnancy-related ads and not talking to reporters after people appeared creeped out by the uncanny accuracy of its natal predictions.) But the data collection of the firms is creating whole new kinds of people—for marketers, for the NSA, and for anyone with the money or connections to access the information.

More likely than not, encoded in Facebook’s database is some new, milder DSM, with categories like the slightly stingy (who need to be induced to buy more); the profligate, who need frugality prompts; the creepy, who need to be hidden in newsfeeds lest they bum out the cool. Our new “Science Mart” creates these new human kinds, but also alters them, as “new sorting and theorizing induces changes in self-conception and in behavior of the people classified.” Perhaps in the future, upon being classified as “slightly depressed” by Facebook, users will see more happy posts. Perhaps the hypomanic will be brought down a bit. Or, perhaps if their state is better for business, it will be cultivated and promoted.

You may think that last possibility unfair, or a mischaracterization of the power of Facebook. But shouldn’t children have been excluded from its emotion experiment? Shouldn’t those who it suspects may be clinically depressed? Shouldn’t some independent reviewer have asked about those possibilities? Journalists try to reassure us that Facebook is better now than it was two years ago. But the power imbalances in social science remain as funding cuts threaten researchers’ autonomy. Until research in general is properly valued, we can expect more psychologists, anthropologists, and data scientists to attune themselves to corporate research agendas, rather than questioning why data about users is so much more available than data about company practices.

Image Note: I’ve inserted a picture of Isaac’s book, which I highly recommend to readers interested in the history of social science.

*I suggested this was a problem in 2010.

Facebook’s Model Users

Facebook’s recent psychology experiment has raised difficult questions about the ethical standards of data-driven companies, and the universities that collaborate with them. We are still learning exactly who did what before publication. Some are wisely calling for a “People’s Terms of Service” agreement to curb further abuses. Others are more focused on the responsibility to protect research subjects. As Jack Balkin has suggested, we need these massive internet platforms to act as fiduciaries.

The experiment fiasco is just the latest in a long history of ethically troubling decisions at that firm, and several others like it. And the time is long past for serious, international action to impose some basic ethical limits on the business practices these behemoths pursue.

Unfortunately, many in Silicon Valley still barely get what the fuss is about. For them, A/B testing is simply a way of life. Using it to make people feel better or worse is a far cry from, say, manipulating video poker machines to squeeze a few extra dollars out of desperate consumers. “Casino owners do that all the time!”, one can almost hear them rejoin.

Yet there are some revealing similarities between casinos and major internet platforms. Consider this analogy from Rob Horning:

Social media platforms are engineered to be sticky — that is, addictive, as Alexis Madrigal details in [a] post about the “machine zone.” . . . Like video slots, which incite extended periods of “time-on-machine” to assure “continuous gaming productivity” (i.e. money extraction from players), social-media sites are designed to maximize time-on-site, to make their users more valuable to advertisers (Instagram, incidentally, is adding advertising) and to ratchet up user productivity in the form of data sharing and processing that social-media sites reserve the rights to.
 

That’s one reason we get headlines like “Teens Can’t Stop Using Facebook Even Though They Hate It.” There are sociobiological routes to conditioning action. The platforms are constantly shaping us, based on sophisticated psychological profiles.

For Facebook to continue to meet Wall Street’s demands for growth, its user base must grow and/or individual users must become more “productive.” Predictive analytics demands standardization: forecastable estimates of revenue-per-user. The more a person clicks on ads and buys products, the better. Secondarily, the more a person draws other potential ad-clickers in–via clicked-on content, catalyzing discussions, crying for help, whatever–the more valuable they become to the platform. The “model users” gain visibility, subtly instructing by example how to act on the network. They’ll probably never attain the notoriety of a Lei Feng, but the Republic of Facebookistan gladly pays them the currency of attention, as long as the investment pays off for top managers and shareholders.

As more people understand the implications of enjoying Facebook “for free”–i.e., that they are the product of the service–they also see that its real paying customers are advertisers. As Katherine Hayles has stated, the critical question here is: “will ubiquitous computing be coopted as a stalking horse for predatory capitalism, or can we seize the opportunity” to deploy more emancipatory uses of it?  I have expressed faith in the latter possibility, but Facebook continually validates Julie Cohen’s critique of a surveillance-innovation complex.


Methods of Execution and the Search for Perfection

The recent botched execution by lethal injection in Oklahoma raises a point that I often discuss with my Torts students.  The evolution of capital punishment is largely a futile search for a humane way of killing people.  I say futile because every execution method can go wrong or become stigmatized in a serious way.

Back in the day, executions were supposed to be horrible.  (Consider the Cross, burning at the stake, boiling in oil, drawing and quartering, etc.)  Once people decided that this was torture, then society moved through different options, each of which was considered as a progressive or liberal improvement at the time.

1.  Beheading:  The condemned does not see the ax falling on his head, and it is all over after one blow.  Except when it takes several blows because the executioner is a klutz; then it is a really painful death.

2.  Hanging:  No need to cut anything or shed blood.  Except if the rope was too long (then the head got ripped off).  Or if the rope was too short, people took a long time to die in agony.

3.  Firing Squad:  The condemned can wear a blindfold and it should be over quickly.  Unless the firing squad does a poor job.

4.  The Guillotine:  This was a big improvement over an ax.  It makes far fewer mistakes and is relatively painless.  Once it got associated with the Terror of the French Revolution, though, that was off the table.

5.  The Electric Chair:  When it was introduced, “Old Sparky” was supposed to be a great improvement.  After all, it was a machine and did not involve cutting.  Except when the voltage was too high and burned people, or too low and didn’t kill.

6.  The Gas Chamber:  Hitler’s Germany made this technique impossible to use again.

7.  Lethal Injection:  That was supposed to be painless and foolproof.  Except when the IV is not done correctly or the chemicals are administered in the wrong proportions.

Industrial Policy for Big Data

If you are childless, shop for clothing online, spend a lot on cable TV, and drive a minivan, data brokers are probably going to assume you’re heavier than average. We know that drug companies may use that data to recruit research subjects.  Marketers could utilize the data to target ads for diet aids, or for types of food that research reveals to be particularly favored by people who are childless, shop for clothing online, spend a lot on cable TV, and drive a minivan.
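
Purely to illustrate the mechanics of that kind of inference, here is a toy sketch of how a broker might turn such signals into a segment score. Every attribute name, weight, and base rate below is invented for the example; none of it is drawn from any actual broker's model.

```python
# Illustrative only: a toy version of the kind of segment scoring a data
# broker might run. The signals, weights, and base rate are invented for
# this example, not taken from any real broker's model.

SEGMENT_WEIGHTS = {
    "childless": 0.10,
    "shops_clothing_online": 0.15,
    "high_cable_spend": 0.20,
    "drives_minivan": 0.05,
}
BASE_RATE = 0.35  # assumed population base rate, purely illustrative


def segment_score(profile: dict) -> float:
    """Return a crude 0-1 score for the 'heavier than average' segment."""
    score = BASE_RATE
    for signal, weight in SEGMENT_WEIGHTS.items():
        if profile.get(signal):
            score += weight
    return min(score, 1.0)


if __name__ == "__main__":
    household = {
        "childless": True,
        "shops_clothing_online": True,
        "high_cable_spend": True,
        "drives_minivan": True,
    }
    print(f"Inferred segment score: {segment_score(household):.2f}")
```

The crudeness is the point: a handful of behavioral proxies, none of them about weight, is enough to sort a household into a marketable category.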

We may also reasonably assume that the data can be put to darker purposes: for example, to offer credit on worse terms to the obese (stereotype-driven assessment of looks and abilities reigns from Silicon Valley to experimental labs).  And perhaps some day it will be put to higher purposes: for example, identifying “obesity clusters” that might be linked to overexposure to some contaminant.

To summarize, let’s roughly rank these biosurveillance goals:

1) Curing illness or precursors to illness (identifying the obesity cluster; clinical trial recruitment)

2) Helping match those offering products to those wanting them (food marketing)

3) Promoting the classification and de facto punishment of certain groups (identifying a certain class as worse credit risks)



Trust is What Makes an Expectation of Privacy Reasonable

A few weeks ago, I defined trust as a favorable expectation as to the behavior of others. It refers to behavior that reduces uncertainty about others to levels that allow us to function alongside them. This is a sociological definition; it refers directly to interpersonal interaction. But how does trust develop between persons? And is that trust sufficiently reasonable to merit society’s and the state’s protection? What follows is part of an ongoing process of developing the theory of privacy-as-trust. It is by no means a final project just yet. I look forward to your comments.

Among intimates, trust may emerge over time as the product of an iterative exchange; this type of trust is relatively simple to understand and generally considered reasonable. Therefore, I will spend little time proving the reasonableness of trust among intimates.

But social scientists have found that trust among strangers can be just as strong and lasting as trust among intimates, even without the option of a repeated game. Trust among strangers emerges from two social bases—sharing a stigmatizing identity and sharing trustworthy friends. When these social elements are part of the context of a sharing incident among relative strangers, that context should be considered trustworthy and, thus, a reasonable place for sharing.

Traditionally, social scientists argued that trust develops rationally over time as part of an ongoing process of engagement with another: if a interacts with b over t=0 to t=99 and b acts in a trustworthy manner during those interactions, a is in a better position to predict that b will act in a trustworthy manner at t=100 than if a were basing its prediction for t=10 on interactions between t=0 and t=9. This prediction process is based on past behavior and assumes the trustor’s rationality as a predictor. Given those assumptions, it seems relatively easy to trust people with whom we interact often.
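
To make that prediction logic concrete, here is a minimal sketch of my own (an illustration, not a formalism drawn from the literature) that treats each interaction as a simple trustworthy-or-not trial and tracks a's belief about b as a beta distribution. The only point is that a hundred consistent interactions yield a much tighter prediction than ten.

```python
# A minimal sketch of "trust as prediction from repeated interaction."
# Assumptions are mine: each interaction is an independent trustworthy-or-not
# trial, and a's belief about b is a Beta distribution updated after each one.
from math import sqrt


def trust_estimate(trustworthy: int, untrustworthy: int,
                   prior_a: float = 1.0, prior_b: float = 1.0):
    """Return (mean, standard deviation) of the updated Beta belief."""
    a = prior_a + trustworthy
    b = prior_b + untrustworthy
    mean = a / (a + b)
    variance = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, sqrt(variance)


if __name__ == "__main__":
    print(trust_estimate(10, 0))    # after t=0..t=9: high mean, wide uncertainty
    print(trust_estimate(100, 0))   # after t=0..t=99: similar mean, much tighter
```

Nothing in the sketch is specific to intimates; it simply shows why the traditional account ties confidence to the sheer number of observed interactions.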

But trust also develops among strangers, none of whom have the benefit of repeated interaction to make fully informed and completely rational decisions about others. In fact, a decision to trust is never wholly rational; it is a probability determination. “Trust begins where knowledge ends,” as Niklas Luhmann said. What’s more, trust not only develops earlier than the probability model would suggest; in certain circumstances, trust is also strong early on, something that would seem impossible under a probability approach to trust. Sometimes, that early trust among strangers is the result of a cue of expertise, a medical or law degree, for example. But trust among lay strangers cannot be based on expertise or repeated interaction, and yet sociologists have observed that such trust is quite common.

I argue that reasonable trust among strangers emerges when one of two things happen: when (1) strangers share a stigmatizing social identity or (2) share a strong interpersonal network. In a sense, we transfer the trust we have in others that are very similar to a stranger to the stranger himself or use the stranger’s friends as a cue to his trustworthiness. Sociologists call this a transference process whereby we take information about a known entity and extend it to an unknown entity. That is why trust via accreditation works: we transfer the trust we have in a degree from Harvard Law School, which we know, to one of its graduates, whom we do not. But transference can also work among persons. The sociologist Mark Granovetter has shown that economic actors transfer trust to an unknown party based on how embedded the new person is in a familiar and trusted social network. That is why networking is so important to getting ahead in any industry and why recommendation letters from senior, well-regarded, or renowned colleagues are often most effective. This is the theory of social embeddedness: someone will do business with you, hire you as an employee, trade with you, or enter into a contract with you not only if you know a lot of the same people, but if you know a lot of the right people, the trustworthy people, the parties with whom others have a long, positive history. So it’s not just how many people you know, it’s who you know.
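
A toy illustration may help here as well, assuming we can represent each person's trusted contacts as a simple set. The names and the two-contact threshold are invented for the example and carry no empirical weight; the sketch only shows the shape of the embeddedness cue.

```python
# Toy illustration of trust transference via social embeddedness: trust in a
# stranger is cued by how many of your own trusted contacts already trust them.
# The network, names, and threshold are invented for this example.

def shared_trusted_contacts(my_contacts: set, strangers_contacts: set) -> set:
    """People both parties already count among their trusted contacts."""
    return my_contacts & strangers_contacts


def embeddedness_cue(my_contacts: set, strangers_contacts: set,
                     threshold: int = 2) -> bool:
    """Crude cue: treat the stranger as embedded if enough ties overlap."""
    return len(shared_trusted_contacts(my_contacts, strangers_contacts)) >= threshold


if __name__ == "__main__":
    mine = {"alice", "bruno", "carmen", "deepa"}
    theirs = {"bruno", "carmen", "edgar"}
    print(shared_trusted_contacts(mine, theirs))  # {'bruno', 'carmen'}
    print(embeddedness_cue(mine, theirs))         # True
```

Of course, Granovetter's point is richer than a count of overlapping ties: what matters is that the shared contacts are the right, trustworthy people, not merely that there are many of them.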

The same is true outside the economic context. The Pew Internet and American Life Project found that, of those teenagers who use online social networks and have online “friends” they have never met offline, about 70% of those “friends” had more than one friend in common with the teen. Although Pew did not distinguish between types of mutual friends, the survey found that this was among the strongest factors associated with “friending” strangers online. More research is needed.

The other social factor that creates trust among strangers is sharing a salient in-group identity. But such trust transference is not simply a case of privileging familiarity, at best, or discrimination, at worst. Rather, sharing an identity with a group that may face discrimination or has a long history of fighting for equal rights is a proxy for one of the greatest sources of trust among persons: sharing values. At the outset, sharing an in-group identity is an easy shorthand for common values and, therefore, is a reasonable basis for trust among strangers.

Social scientists call transferring known in-group trust to an unknown member of that group category-driven processing or category-based trust. But I argue that it cannot just be any group and any identity; trust is transferred when a stranger is a member of an in-group whose identity is defining or important for the trustor. For example, we do not see greater trust between men and other men, perhaps because manhood is not a salient in-group identity. More likely, the status of being a man is not an adequate cue that a male stranger shares your values. Trust forms and is maintained with persons who have similar goals and values and a perceived interest in maintaining the trusting relationship. But it is sharing the values you find most important that breeds trust. For example, members of the LGBT community are, naturally, more likely to support the freedom to marry for gays and lesbians than any other group. Therefore, sharing an in-group identity that constitutes an important part of a trustor’s persona operates as a cue that the trustee shares values important to that group.

What makes these factors—salient in-group identity and social embeddedness—the right bases for establishing when trust among strangers is reasonable and, therefore, when it should be protected by society, is that the presence of these factors is what justifies our interpersonal actions. We look for these factors, we decide to share on these bases, and our expectations of privacy are based on them.


What Makes a Stranger Not So Strange

Most of the literature on trust among strangers comes from game theorists. Scholars perform simulations of so-called “trust games” to suggest that “impersonal trust” can develop under this or that circumstance. This literature is voluminous (the previous link is just one of many hits from a JSTOR search). The mere fact that trust can be seen developing between otherwise unacquainted actors in repeated evolutionary games should, at the very least, complicate a legal doctrine that necessarily extinguishes privacy upon disclosure. But you don’t have to understand (or agree with) game theorists to see the problem with such a bright-line rule.

Over the last year, I observed different types of support group meetings, including Alcoholics Anonymous, Narcotics Anonymous, and an HIV-positive support group. I interviewed several members, though many members declined to be interviewed, as I expected. These support groups thrive on privacy and anonymity. The very characteristic that made me want to study them was the very thing that would make it hard: members of such groups tend to know everything about a specific area of each other’s lives (their addiction), but often know precious little about a participant’s life and identity outside of what brought him to the group in the first place. In many cases, outside of the sponsor-recovering relationship, even last names remain unknown. And yet they share a secret that, unfortunately, retains a significant stigma in greater society.

This knowledge asymmetry is not always the case, I must admit. But for now, let’s accept the scenario: Participants are veritable strangers, except they know this one big secret about each other. This was in fact the story for most of the people I interviewed. And although this type of ethnography must always be a dubious source for grand conclusions about wide populations, we can still ask: Why do recovering addicts share their stigmatizing secret with strangers?

My research suggests it is because they all share the same stigmatizing secret. It is not simply that everyone shares the same secret or the same identity. People who are all Libras, or all white males, or who all like Maroon5 do not necessarily feel a comfort level with those who were born at the same time, look the way they do, or listen to the same music, respectively. Rather, what opens strangers up to one another in this context is that everyone shares a stigmatizing identity. They trust each other not because they know one another well, but because they know what the others have been through in the greater world. And this is entirely reasonable.

I think this trust exists in other areas of life and not just in the unique support group environment. If it does, if trust develops among individuals who share a stigmatizing identity, then trust among so-called strangers can exist such that individuals would not be assuming the risk of further disclosure of a secret revealed to such a stranger.

I have designed a study to test this, using the acceptance or declination of “friend” requests from strangers as a proxy. It is an imperfect proxy, but trust is hard to measure. If we can control for other factors and see that friend requests from strangers are accepted more frequently by individuals who share a defining, stigmatizing characteristic (sexual minority status is just one example), then we may have found a social determinant of trust among strangers.
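
Purely as a sketch of how the resulting data might be structured and summarized, the core comparison would look something like the snippet below. The records are fabricated placeholders, and a real analysis would add the controls mentioned above (for example, via logistic regression), which is omitted here.

```python
# Sketch of the proposed friend-request study's core comparison. The records
# below are fabricated placeholders; a real analysis would control for
# confounders (e.g., with a logistic regression), which is omitted here.
from collections import defaultdict

# Each record: does the requester share the defining, stigmatizing
# characteristic with the recipient, and was the request accepted?
requests = [
    {"shares_identity": True, "accepted": True},
    {"shares_identity": True, "accepted": False},
    {"shares_identity": False, "accepted": False},
    {"shares_identity": False, "accepted": True},
]


def acceptance_rates(records):
    """Acceptance rate of stranger friend requests, split by shared identity."""
    totals, accepts = defaultdict(int), defaultdict(int)
    for record in records:
        key = record["shares_identity"]
        totals[key] += 1
        accepts[key] += record["accepted"]
    return {key: accepts[key] / totals[key] for key in totals}


if __name__ == "__main__":
    print(acceptance_rates(requests))
```

If, after controls, the acceptance rate for the shared-identity group is reliably higher, that difference is the social determinant of trust the study is looking for.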


Why Some Risk Sending Intimate Pictures to “Strangers” and What It Says About Privacy

It is, as always, an honor and a pleasure to speak with the Co-Op community. Thank you to Danielle for inviting me back and thank yous all around for inviting me onto your desks, into your laps, or into your hands.

My name is Ari and I teach at New York Law School. In fact, I am honored to have been appointed Associate Professor of Law and Director of the Institute for Information Law and Policy this year at NYLS, an appointment about which I am super excited and will begin this summer. I am also finishing my doctoral dissertation in sociology at Columbia University. My scholarship focuses on the law and policy of Internet social life, and I am particularly focused on online privacy, the injustices and inequalities in unregulated online social spaces, and the digital implications for our cultural creations.

Today, and for most of this month, I want to talk a little bit about the relationship between strangers, intimacy, and privacy.

Over the last two years, I have conducted quantitative surveys and qualitative interviews with almost 1,000 users of any of the several gay-oriented geolocation platforms, the most famous of which is “Grindr.” These apps are described (or derided, if you prefer) as “hook up apps,” or tools that allow gay men to meet each other for sex. That does happen. But the apps also allow members of a tightly identified and discriminated-against group to meet each other when they move to a new town and don’t know anyone, to make friends, and to fall in love. Grindr, my survey respondents report, has created more than its fair share of long-term relationships and, in equality states, marriages.

But Grindr and its cousins are, at least in part, about sex, which is why the app is one good place to study the prevalence of sharing intimate photographs and the sharers’ rationales. My sample is a random sample of a single population: gay men. Ages range from 18 to 59 (I declined to include anyone who self-reported as underage); locations span the globe. My online survey was limited to gay men who had used the app for more than one week at any time in the previous two years, which allowed me to focus on actual users rather than the merely curious. Approximately 68% of active users reported having sent an intimate picture of themselves to someone they were chatting with. I believe the real number is much higher. Although some of those users anonymized their initial photo, i.e., cropped out their head or something similar, nearly 89% of users who admitted sending intimate photos to a “stranger” they met online also admitted to ultimately sending an identifiable photo as well. And yet, not one respondent reported being victimized, to their knowledge, by a recipient’s misuse of an intimate photograph. Indeed, only a small percentage (1.9%) reported being concerned about it or letting it enter into their decision about whether to send the photo in the first place.

I put the word “stranger” in quotes because I contend that the recipients are not really strangers as we traditionally understand the term. And this matters: you can’t share something with a stranger and expect it to remain private. Some people argue you can’t even do that with a close friend: you assume the risk of dissemination when you tell anyone anything, some say. But the risk is so much higher with strangers that it is difficult for some to imagine a viable expectation-of-privacy argument when you choose to share intimate information with a stranger. I disagree. Sharing something with a “stranger” need not always extinguish your expectation of privacy and your right to sue under an applicable privacy tort if the intimate information is shared further.

A sociologist would say that a “stranger” is a person who is unknown or with whom you are not acquainted. The law accepts this definition in at least some respects: sometimes we say that individuals are “strangers in the eyes of the law,” like a legally married same-sex couple when they travel from New Jersey to Mississippi. I argue that the person on the other end of a Grindr chat is not necessarily a stranger because nonverbal social cues of trustworthiness, which can be seen anywhere, are heightened by the social group affinity of an all-gay male environment.

Over the next few weeks, I will tease out the rest of this argument: that trust, and, therefore, expectations of privacy, can exist among strangers. Admittedly, I’m still working it out and I would be grateful for any and all comments in future posts.

I’ve heard people say books are getting more ‘gritty’, meaning more violent and less stylised in general. The realism there might be in terms of warriors not shrugging off their wounds and being fine the next day, etc. Researched realism and detailed city/country mechanics are not something I was aware of a movement toward. To me nothing is added by, for example, the author working out a grain distribution network. I’m interested in story and character, not mechanics.

— Mark L.


Law and Hard Fantasy Interview Series: Mark Lawrence

I’ve sporadically run an interview series with fantasy authors who generally write in the burgeoning genre of gritty / hard / dark epic fantasy.  (I’m, obviously, a fan.)  The series began with this book review post, and continued with interviews of George R. R. Martin and Patrick Rothfuss.  The series continues today as I interview Mark Lawrence.  Mark is the author of the Broken Empire trilogy, and the forthcoming Red Queen’s War.  His work has been lauded on both sides of the Atlantic (Mark was raised in the U.K., where he works as a research scientist).  He was gracious enough to respond to my email queries, which follow after the jump.


Some Brilliant Thoughts on Social Media

The LSE has a consistently illuminating podcast series, but Nick Couldry’s recent lecture really raised the bar. He seamlessly integrates cutting edge media theory into a comprehensive critique of social media’s role in shaping events for us. I was also happy to hear him praise the work of two American scholars I particularly admire: former Co-Op guest blogger Joseph Turow (whose Daily You was described as one of the most influential books of the past decade in media studies), and Julie Cohen (whose Configuring the Networked Self was featured in a symposium here).

I plan on posting some excerpts if I can find a transcript, or a published version of the talk. In the meantime, some more brilliant thoughts on social media, this time from Ian Bogost:

For those of us lucky enough to be employed, we’re really hyperemployed—committed to our usual jobs and many other jobs as well. . . . Hyperemployment offers a subtly different way to characterize all the tiny effort we contribute to Facebook and Instagram and the like. It’s not just that we’ve been duped into contributing free value to technology companies (although that’s also true), but that we’ve tacitly agreed to work unpaid jobs for all these companies. . . . We do tiny bits of work for Google, for Tumblr, for Twitter, all day and every day.

Today, everyone’s a hustler. But now we’re not even just hustling for ourselves or our bosses, but for so many other, unseen bosses. For accounts payable and for marketing; for the Girl Scouts and the Youth Choir; for Facebook and for Google; for our friends via their Kickstarters and their Etsy shops; for Twitter, which just converted years of tiny, aggregated work acts into $78 of fungible value per user.

And perhaps also for the NSA. As participants in 2011’s Digital Labor conference gear up for a reprise, I’m sure we’ll be discussing these ideas.