

Social Science in an Era of Corporate Big Data

In my last post, I explored the characteristics of Facebook’s model (i.e., exemplary) users. Today, I want to discuss the model users in the company–i.e., the data scientists who try to build stylized versions of reality (models) based on certain data points and theories. The Facebook emotion experiment is part of a much larger reshaping of social science. To what extent will academics study data-driven firms like Facebook, and to what extent will they try to join forces with those firms’ in-house researchers to study others?

Present incentives are clear: collaborate with (rather than develop a critical theory of) big data firms.  As Zeynep Tufekci puts it, “the most valuable datasets have become corporate and proprietary [and] top journals love publishing from them.”  “Big data” has an aura of scientific validity simply because of the velocity, volume, and variety of the phenomena it encompasses. Psychologists certainly must have learned *something* from looking at over 600,000 accounts’ activity, right?

The problem, though, is that the corporate “science” of manipulation is a far cry from academic science’s ethics of openness and reproducibility.* That’s already led to some embarrassments in the crossover from corporate to academic modeling (such as the failures of Google Flu Trends). Researchers within Facebook worried that multiple experiments being performed at once on individual users might compromise the results of any one study. Standardized review could have prevented that. But, true to the Silicon Valley ethic of “move fast and break things,” speed was paramount: “There’s no review process. Anyone…could run a test…trying to alter peoples’ behavior,” said one former Facebook data scientist.

Grant Getters and Committee Men

Why are journals so interested in this form of research? Why are academics jumping on board? Fortunately, social science has matured to the point that we now have a robust, insightful literature about the nature of social science itself. I know, this probably sounds awfully meta–exactly the type of navel-gazing Senator Coburn would excommunicate from the church of science. But it actually provides a much-needed historical perspective on how power and money shape knowledge. Consider, for instance, the opening of Joel Isaac’s article “Tangled Loops,” on Cold War social science:

During the first two decades of the Cold War, a new kind of academic figure became prominent in American public life: the credentialed social scientist or expert in the sciences of administration who was also, to use the parlance of the time, a “man of affairs.” Some were academic high-fliers conscripted into government roles in which their intellectual and organizational talents could be exploited. McGeorge Bundy, Walt Rostow, and Robert McNamara are the archetypes of such persons. An overlapping group of scholars became policymakers and political advisers on issues ranging from social welfare provision to nation-building in emerging postcolonial states.

Postwar leaders of the social and administrative sciences such as Talcott Parsons and Herbert Simon were skilled scientific brokers of just this sort: good “committee men,” grant-getters, proponents of interdisciplinary inquiry, and institution-builders. This hard-nosed, suit-wearing, business-like persona was connected to new, technologically refined forms of social science. . . . Antediluvian “social science” was eschewed in favour of mathematical, behavioural, and systems-based approaches to “human relations” such as operations research, behavioral science, game theory, systems theory, and cognitive science.

One of Isaac’s major contributions in that piece is to interpret the social science coming out of the academy (and entities like RAND) as a cultural practice: “Insofar as theories involve certain forms of practice, they are caught up in worldly, quotidian matters: performances, comportments, training regimes, and so on.” Government leveraged funding to mobilize research to specific ends. To maintain university patronage systems and research centers, leaders had to be on good terms with the grantors. The common goal of strengthening the US economy (and defeating the communist threat) cemented an ideological alliance.

Government still exerts influence in American social and behavioral sciences. But private industry controls critical data sets for the most glamorous, data-driven research. In the Cold War era, “grant getting” may have been the key to economic security, and to securing one’s voice in the university. Today, “exit” options are more important than voice, and what better place to exit to than an internet platform? Thus academic/corporate “flexians” shuttle between the two worlds. Their research cannot be too venal, lest the academy disdain it. But neither can it indulge in, say, critical theory (what would nonprofit social networks look like?), just as Cold War social scientists were ill-advised to, say, develop Myrdal’s or Leontief’s theories. There was a lot more money available for the Friedmanite direction economics would, eventually, take.

Intensifying academic precarity also makes the blandishments of corporate data science an “offer one can’t refuse.” Tenured jobs are growing scarcer. As MOOCmongers aspire to deskill and commoditize the academy, industry’s benefits and flexibility grow ever more alluring. Academic IRBs can impose a heavy bureaucratic burden; the corporate world is far more flexible. (Consider all the defenses of the Facebook experiment authored last week which emphasized how little review corporate research has to go through: satisfy the boss, and you’re basically done, no matter how troubling your aims or methods may be in a purely academic context.)

Creating Kinds

So why does all this matter, other than to the quantitatively gifted individuals at the cutting edge of data science? It matters because, in Isaac’s words:

Theories and classifications in the human sciences do not “discover” an independently existing reality; they help, in part, to create it. Much of this comes down to the publicity of knowledge. Insofar as scientific descriptions of people are made available to the public, they may “change how we can think of ourselves, [and] change our sense of self-worth, even how we remember our own past.”

It is very hard to develop categories and kinds for internet firms, because they are so secretive about most of their operations. (And make no mistake about the current PR kerfuffle for Facebook: it will lead the company to become ever more secretive about its data science, just as Target started camouflaging its pregnancy-related ads and not talking to reporters after people appeared creeped out by the uncanny accuracy of its natal predictions.) But the data collection of the firms is creating whole new kinds of people—for marketers, for the NSA, and for anyone with the money or connections to access the information.

More likely than not, encoded in Facebook’s database is some new, milder DSM, with categories like the slightly stingy (who need to be induced to buy more); the profligate, who need frugality prompts; the creepy, who need to be hidden in newsfeeds lest they bum out the cool. Our new “Science Mart” creates these new human kinds, but also alters them, as “new sorting and theorizing induces changes in self-conception and in behavior of the people classified.” Perhaps in the future, upon being classified as “slightly depressed” by Facebook, users will see more happy posts. Perhaps the hypomanic will be brought down a bit. Or, perhaps if their state is better for business, it will be cultivated and promoted.

You may think that last possibility unfair, or a mischaracterization of the power of Facebook. But shouldn’t children have been excluded from its emotion experiment? Shouldn’t those whom it suspects may be clinically depressed? Shouldn’t some independent reviewer have asked about those possibilities? Journalists try to reassure us that Facebook is better now than it was 2 years ago. But the power imbalances in social science remain as funding cuts threaten researchers’ autonomy. Until research in general is properly valued, we can expect more psychologists, anthropologists, and data scientists to attune themselves to corporate research agendas, rather than questioning why data about users is so much more available than data about company practices.

Image Note: I’ve inserted a picture of Isaac’s book, which I highly recommend to readers interested in the history of social science.

*I suggested this was a problem in 2010.

Facebook’s Model Users

Facebook’s recent psychology experiment has raised difficult questions about the ethical standards of data-driven companies, and the universities that collaborate with them. We are still learning exactly who did what before publication. Some are wisely calling for a “People’s Terms of Service” agreement to curb further abuses. Others are more focused on the responsibility to protect research subjects. As Jack Balkin has suggested, we need these massive internet platforms to act as fiduciaries.

The experiment fiasco is just the latest in a long history of ethically troubling decisions at that firm, and several others like it. And the time is long past for serious, international action to impose some basic ethical limits on the business practices these behemoths pursue.

Unfortunately, many in Silicon Valley still barely get what the fuss is about. For them, A/B testing is simply a way of life. Using it to make people feel better or worse is a far cry from, say, manipulating video poker machines to squeeze a few extra dollars out of desperate consumers. “Casino owners do that all the time!”, one can almost hear them rejoin.

Yet there are some revealing similarities between casinos and major internet platforms. Consider this analogy from Rob Horning:

Social media platforms are engineered to be sticky — that is, addictive, as Alexis Madrigal details in [a] post about the “machine zone.” . . . Like video slots, which incite extended periods of “time-on-machine” to assure “continuous gaming productivity” (i.e. money extraction from players), social-media sites are designed to maximize time-on-site, to make their users more valuable to advertisers (Instagram, incidentally, is adding advertising) and to ratchet up user productivity in the form of data sharing and processing that social-media sites reserve the rights to.

That’s one reason we get headlines like “Teens Can’t Stop Using Facebook Even Though They Hate It.” There are sociobiological routes to conditioning action. The platforms are constantly shaping us, based on sophisticated psychological profiles.

For Facebook to continue to meet Wall Street’s demands for growth, its user base must grow and/or individual users must become more “productive.” Predictive analytics demands standardization: forecastable estimates of revenue-per-user. The more a person clicks on ads and buys products, the better. Secondarily, the more a person draws other potential ad-clickers in–via clicked-on content, catalyzing discussions, crying for help, whatever–the more valuable they become to the platform. The “model users” gain visibility, subtly instructing by example how to act on the network. They’ll probably never attain the notoriety of a Lei Feng, but the Republic of Facebookistan gladly pays them the currency of attention, as long as the investment pays off for top managers and shareholders.

As more people understand the implications of enjoying Facebook “for free”–i.e., that they are the product of the service–they also see that its real paying customers are advertisers. As Katherine Hayles has stated, the critical question here is: “will ubiquitous computing be coopted as a stalking horse for predatory capitalism, or can we seize the opportunity” to deploy more emancipatory uses of it? I have expressed faith in the latter possibility, but Facebook continually validates Julie Cohen’s critique of a surveillance-innovation complex.

Some Brilliant Thoughts on Social Media

The LSE has a consistently illuminating podcast series, but Nick Couldry’s recent lecture really raised the bar. He seamlessly integrates cutting edge media theory into a comprehensive critique of social media’s role in shaping events for us. I was also happy to hear him praise the work of two American scholars I particularly admire: former Co-Op guest blogger Joseph Turow (whose Daily You was described as one of the most influential books of the past decade in media studies), and Julie Cohen (whose Configuring the Networked Self was featured in a symposium here).

I plan on posting some excerpts if I can find a transcript, or a published version of the talk. In the meantime, some more brilliant thoughts on social media, this time from Ian Bogost:

For those of us lucky enough to be employed, we’re really hyperemployed—committed to our usual jobs and many other jobs as well. . . . Hyperemployment offers a subtly different way to characterize all the tiny effort we contribute to Facebook and Instagram and the like. It’s not just that we’ve been duped into contributing free value to technology companies (although that’s also true), but that we’ve tacitly agreed to work unpaid jobs for all these companies. . . . We do tiny bits of work for Google, for Tumblr, for Twitter, all day and every day.

Today, everyone’s a hustler. But now we’re not even just hustling for ourselves or our bosses, but for so many other, unseen bosses. For accounts payable and for marketing; for the Girl Scouts and the Youth Choir; for Facebook and for Google; for our friends via their Kickstarters and their Etsy shops; for Twitter, which just converted years of tiny, aggregated work acts into $78 of fungible value per user.

And perhaps also for the NSA. As participants in 2011’s Digital Labor conference gear up for a reprise, I’m sure we’ll be discussing these ideas.


Employers and Schools that Demand Account Passwords and the Future of Cloud Privacy

In 2012, the media erupted with news about employers demanding that employees provide them with their social media passwords so the employers could access their accounts. This news took many people by surprise, and it set off a firestorm of public outrage. It even sparked a significant legislative response in the states.

I thought that the practice of demanding passwords was so outrageous that it couldn’t be very common. What kind of company or organization would actually do this? I thought it was a fringe practice done by a few small companies without much awareness of privacy law.

But Bradley Shear, an attorney who has focused extensively on the issue, opened my eyes to the fact that the practice is much more prevalent than I had imagined, and it is an issue that has very important implications as we move more of our personal data to the Cloud.

The Widespread Hunger for Access

Employers are not the only ones demanding social media passwords – schools are doing so too, especially athletic departments in higher education, many of which engage in extensive monitoring of the online activities of student athletes. Some require students to turn over passwords, install special software and apps, or friend coaches on Facebook and other sites. According to an article in USA Today: “As a condition of participating in sports, the schools require athletes to agree to monitoring software being placed on their social media accounts. This software emails alerts to coaches whenever athletes use a word that could embarrass the student, the university or tarnish their images on services such as Twitter, Facebook, YouTube and MySpace.”

Not only are colleges and universities engaging in the practice, but K-12 schools are doing so as well. An MSNBC article discusses the case of a parent’s outrage over school officials demanding access to a 13-year-old girl’s Facebook account. According to the mother, “The whole family is exposed in this. . . . Some families communicate through Facebook. What if her aunt was going through a divorce or had an illness? And now there’s these anonymous people reading through this information.”

In addition to private sector employers and schools, public sector employers such as state government agencies are demanding access to online accounts. According to another MSNBC article: “In Maryland, job seekers applying to the state’s Department of Corrections have been asked during interviews to log into their accounts and let an interviewer watch while the potential employee clicks through posts, friends, photos and anything else that might be found behind the privacy wall.”


Privacy & Information Monopolies

First Monday recently published an issue on social media monopolies. These lines from the introduction by Korinna Patelis and Pavlos Hatzopoulos are particularly provocative:

A large part of existing critical thinking on social media has been obsessed with the concept of privacy. . . . Reading through a number of volumes and texts dedicated to the problematic of privacy in social networking one gets the feeling that if the so called “privacy issues” were resolved social media would be radically democratized. Instead of adopting a static view of the concept . . . of “privacy”, critical thinking needs to investigate how the private/public dichotomy is potentially reconfigured in social media networking, and [the] new forms of collectivity that can emerge . . . .

I can even see a way in which privacy rights do not merely displace, but actively work against, egalitarian objectives. Stipulate a population with Group A, which is relatively prosperous and has the time and money to hire agents to use notice-and-consent privacy provisions to its advantage (i.e., figuring out exactly how to disclose information to put its members in the best light possible). Meanwhile, most of Group B is too busy working several jobs to use contracts, law, or agents to its advantage in that way. We should not be surprised if Group A leverages its mastery of privacy law to enhance its position relative to Group B.

Better regulation would restrict use of data, rather than “empower” users (with vastly different levels of power) to restrict collection of data, a point data scientist Cathy O’Neil has also made.


Gamification – Kevin Werbach and Dan Hunter’s new book

Gamification? Is that a word? Why yes it is, and Kevin Werbach and Dan Hunter want to tell us what it means. Better yet, they want to tell us how it works in their new book For the Win: How Game Thinking Can Revolutionize Your Business (Wharton Digital Press). The authors get into many issues, starting with a refreshing admission that the term is clunky but nonetheless captures a simple, powerful idea: one can use game concepts in non-game contexts and achieve results that might otherwise be missed. As they are careful to point out, this is not game theory. This is using insights from games, yes video games and the like, to structure how we interact with a problem or goal. I have questions about how well the approach will work and about its potential downsides (I am, after all, a law professor). Yet the authors explore cases where the idea has worked, and they address concerns about where the approach can fail. I must admit I have read only an excerpt so far. But it sets out the project well, while acknowledging the possible objections that popped to mind. In short, I want to read the rest. Luckily, both the Wharton edition linked above and the Amazon Kindle version are quite reasonably priced (the Kindle edition is less expensive).

If you wonder about games, play games, or have found yourself asking what all this badging, point-accumulation, and leaderboard stuff at work is about (as I did while I was at Google), this book looks to be a must-read. And if you have not encountered these changes, I think you will. So reading the book may put you ahead of the group in understanding what management or companies are doing to you. The book also sets out cases and how the process works, so it may give you ideas about how to use games to help your endeavor and impress your manager. For the law folks out there, I think this area raises questions about behavioral economics and organizations that lie ahead. In short, the authors have a tight, clear book that captures the essence of a movement. That alone merits a hearty well done.


Banning Forced Disclosure of Social Network Passwords and the Polygraph Precedent

The Maryland General Assembly has just become the first state legislature to vote to ban employers from requiring employees to reveal their Facebook or other social network passwords. Other states are considering similar bills, and Senators Schumer and Blumenthal are pushing the idea in Congress.

As often happens in privacy debates, there are concerns from industry that well-intentioned laws will have dire consequences — Really Dangerous People might get into positions of trust, so we need to permit employers to force their employees to open up their Facebook accounts to their bosses.

Also, as often happens in privacy debates, people breathlessly debate the issue as though it is completely new and unprecedented.

We do have a precedent, however.  In 1988, Congress enacted the Employee Polygraph Protection Act  (EPPA).  The EPPA says that employers don’t get to know everything an employee is thinking.  Polygraphs are flat-out banned in almost all employment settings.  The law was signed by President Reagan, after Secretary of State George Shultz threatened to resign rather than take one.

The ideas behind the EPPA and the new Maryland bill are similar — employees have a private realm where they can think and be a person, outside the surveillance of the employer. Imagine taking a polygraph while your boss asks what you really think of him or her. Imagine your social networking activities if your boss got to read your private messages and impromptu thoughts.

For private sector employers, the EPPA has quite narrow exceptions, such as for counter-intelligence, armored car personnel, and employees who are suspected of causing economic loss.  That list of exceptions can be a useful baseline to consider for social network passwords.

In summary — there is longstanding, bipartisan support for blocking this sort of intrusion into employees’ private lives. The social networks themselves support this ban on employers requiring passwords. I think we should, too.


Pakistan Scrubs the Net

Pakistan, which has long censored the Internet, has decided to upgrade its cybersieves. And, like all good bureaucracies, the government has put the initiative out for bid. According to the New York Times, Pakistan wants to spend $10 million on a system that can block up to 50 million URLs concurrently, with minimal effect on network speed. (That’s a lot of Web pages.) Internet censorship is on the march worldwide (and the U.S. is no exception). There are at least three interesting things about Pakistan’s move:

First, the country’s openness about its censorial goals is admirable. Pakistan is informing its citizens, along with the rest of us, that it wants to bowdlerize the Net. And, it is attempting to do so in a way that is more uniform than under its current system, where filtering varies by ISP. I don’t necessarily agree with Pakistan’s choice, but I do like that the country is straightforward with its citizens, who have begun to respond.

Second, the California-based filtering company Websense announced that it will not bid on the contract. That’s fascinating – a tech firm has decided that the public relations damage from helping Pakistan censor the Net is greater than the $10M in revenue it could gain. (Websense argues, of course, that its decision is a principled one. If you believe that, you are probably a member of the Ryan Braun Clean Competition fan club.)

Finally, the state is somewhat vague about what it will censor: it points to pornography, blasphemy, and material that affects national security. The last part is particularly worrisome: the national security trump card is a potent force after 9/11 and its concomitant fallout in Pakistan’s neighborhood, and censorship based on it tends to be secret. There is also a real risk that “national security interests” will simply mean the interests of the current government. America has an unpleasant history of censoring political dissent based on security worries, and Pakistan is no different.

I’ll be fascinated to see which companies take up Pakistan’s offer to propose…

Cross-posted at Info/Law.


Ubiquitous Infringement

Lifehacker‘s Adam Dachis has a great article on how users can deal with a world in which they infringe copyright constantly, both deliberately and inadvertently. (Disclaimer alert: I talked with Adam about the piece.) It’s a practical guide to a strict liability regime – no intent / knowledge requirement for direct infringement – that operates not as a coherent body of law, but as a series of reified bargains among stakeholders. And props to Adam for the Downfall reference! I couldn’t get by without the mockery of the iPhone or SOPA that it makes possible…

Cross-posted to Info/Law.


Cyberbullying and the Cheese-Eating Surrender Monkeys

(This post is based on a talk I gave at the Seton Hall Legislative Journal’s symposium on Bullying and the Social Media Generation. Many thanks to Frank Pasquale, Marisa Hourdajian, and Michelle Newton for the invitation, and to Jane Yakowitz and Will Creeley for a great discussion!)

Introduction

New Jersey enacted the Anti-Bullying Bill of Rights (ABBR) in 2011, in part as a response to the tragic suicide of Tyler Clementi at Rutgers University. It is routinely lauded as the country’s broadest, most inclusive, and strongest anti-bullying law. That is not entirely a compliment. In this post, I make two core claims. First, the Anti-Bullying Bill of Rights has several aspects that are problematic from a First Amendment perspective – in particular, the overbreadth of its definition of prohibited conduct, the enforcement discretion afforded school personnel, and the risk of impingement upon religious and political freedoms. I argue that the legislation departs from established precedent on disruptions of the educational environment by regulating horizontal relations between students rather than vertical relations between students and the school as an institution / environment. Second, I believe we should be cautious about statutory regimes that enable government actors to sanction speech based on content. I suggest that it is difficult to distinguish, on a principled basis, between bullying (which is bad) and social sanctions that enforce norms (which are good). Moreover, anti-bullying laws risk displacing effective informal measures that emerge from peer production.