Author: Scott Peppet

Opting Out Isn’t Socially Neutral Anymore

Various news outlets are reporting that Google “fans” in Germany have been egging the roughly 3% of houses whose inhabitants have chosen to opt out of Google’s Street View mapping feature.

Why?

Apparently the vandals left notes saying “Google is cool.”  So maybe this is just a “how dare you question anything Google?” protest.

But more likely it is something more. Jeff Jarvis on Buzz Machine recently labeled opting out of Street View the equivalent of “digital desecration,” saying that such “embarrassing” “assaults” on the public “saddened and angered” him. It seems plausible that these vandals agree; “their” information has been taken from them by others asserting privacy interests, and they’re penalizing those who opt out for denying them what they view as theirs.

As Kashmir Hill put it on Forbes, “[i]t’s ironic that those who wanted more privacy through blurring their homes wound up getting less of it.” Ironic, maybe. Surprising, not really. I’ve been making the argument here all month that opting out is not a privacy solution in many circumstances, because the act of opting out is itself visible to (and itself conveys information to) others. This is based on my forthcoming paper Unraveling Privacy; I’ve given examples from Mexico’s experiment with biometric iris recognition and from the quantified self / sensor movement. Egging is just a cruder version; here, it’s not that those who opted out are the “worst” members of a given pool (as in true unraveling scenarios) and are therefore discriminated against, but simply that others are pushing back against the right to opt out itself because it impedes unfettered access to all the information they want.
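
For readers unfamiliar with the underlying economics, the “true unraveling” dynamic is easy to simulate. Here is a minimal sketch, assuming numeric “quality” types, free and verifiable disclosure, and observers who rate any non-discloser at the average of the remaining private pool; these are my illustrative assumptions, not details from the paper:

```python
# A minimal sketch of the classic disclosure-unraveling model.
# Assumptions (mine, for illustration): numeric "quality" types,
# free and verifiable disclosure, and observers who rate any
# non-discloser at the average of the remaining private pool.

def unravel(types):
    """Return the indices of individuals who disclose in equilibrium."""
    disclosed = set()
    while True:
        private = [t for i, t in enumerate(types) if i not in disclosed]
        if not private:
            break
        pool_avg = sum(private) / len(private)
        # Anyone above the pool average gains by proving it ...
        movers = {i for i, t in enumerate(types)
                  if i not in disclosed and t > pool_avg}
        if not movers:
            break
        # ... which shrinks the pool, lowers its average, and pulls
        # in the next tier on the following pass.
        disclosed |= movers
    return disclosed

types = [1, 2, 3, 4, 5]                 # five individuals' hidden scores
who = unravel(types)
print(sorted(types[i] for i in who))    # [2, 3, 4, 5]: all but the worst
```

In equilibrium everyone but the worst type discloses, so staying private is itself informative; the eggings invert this logic, punishing the opt-out rather than merely drawing inferences from it.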

What’s next? If you won’t stream real-time data about your health (do you have the flu? other communicable diseases?) into your vicinity to warn others to walk on the other side of the street, will people heckle you? If you won’t display your criminal record prominently in digital form so that others can “see” (using their digital devices) whether you’re a sex offender or felon of some sort, will they assume you’re a criminal (unraveling) or harass you for your “privacy” (like the German eggers)? As I argue in Unraveling Privacy, the politics of privacy are getting more complicated; as some people increasingly share information about themselves, they will make attributions about those who do not, and potentially retaliate against them as well.

The Quantified Self: Personal Choice and Privacy Problem?

“The trouble with measurement is its seeming simplicity.” — Author Unknown

“Only the shallow know themselves.” — Oscar Wilde

Human instrumentation is booming. FitBit can track the number of steps you take a day, how many miles you’ve walked, calories burned, your minutes asleep, and the number of times you woke up during the night. BodyMedia’s armbands are similar, as is the Philips DirectLife device. You can track your running habits with RunKeeper, your weight with a WiFi Withings scale that will Tweet to your friends, your moods on MoodJam or what makes you happy on TrackYourHappiness. Get even more obsessive about your sleep with Zeo, or about your baby’s sleep (or other biological) habits with TrixieTracker. Track your web browsing, your electric use, your spending, your driving, how much you discard or recycle, your movements and location, your pulse, your illness symptoms, what music you listen to, your meditations, your Tweeting patterns. And, of course, publish it all — plus anything else you care to track manually (or on your smartphone) — on Daytum or mycrocosm or me-trics or elsewhere.

There are names for this craze or movement. Gary Wolf & Kevin Kelly call this the “quantified self” (see Wolf’s must-watch recent TED talk and Wired articles on the subject) and have begun an international organization to connect self-quantifiers. The trend is related to physiological computing, personal informatics, and life logging.

There are all sorts of legal implications to these developments. We have already incorporated sensors into the penal system (e.g., ankle bracelets & alcohol monitors in cars). How will sensors and self-tracking integrate into other legal domains and doctrines? Proving an alibi becomes easier if you’re real-time streaming your GPS-tracked location to your friends. Will we someday subpoena emotion or mood data, pulse, or other sensor-provided information to challenge claims and defenses about emotional state, intentions, mens rea? Will we evolve contexts in which there is an obligation to track personal information — to prove one’s parenting abilities, for example?

And what of privacy? It may not seem that an individual’s choice to use these technologies has privacy implications — so what if you decide to use FitBit to track your health and exercise? In a forthcoming piece titled “Unraveling Privacy: The Personal Prospectus and the Threat of a Full Disclosure Future,” however, I argue that self-tracking — particularly through electronic sensors — poses a threat to privacy for a somewhat unintuitive reason.

The Promise of Even Stronger Internet Intermediaries?

Recent privacy scholarship has begun to focus on the role of internet intermediaries (Google, Facebook, ISPs, etc.) and appropriate means to regulate their growing power. The general impression is that intermediaries have gotten too strong and too secretive, and that we need effective means to monitor and potentially control their behavior. Take them down a notch, so to speak, or at least make them more accountable.

This is useful and important work, and I don’t disagree with the specific concerns about Google, etc., that have been raised by Frank Pasquale, James Grimmelmann, and others.

There may be a problem with a general attack on intermediaries, however. What if, in some circumstances, privacy could best be protected by stronger, more secretive intermediaries: entities with a legal privilege to protect the information in their care, legal obligations to those entrusting them with that information, and the ability to share that information with others only selectively, in filtered and sometimes obfuscated ways?

Would it be worthwhile to create the legal architecture needed to effect such stronger intermediaries?
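
To make the question concrete, here is a hypothetical sketch of what such an intermediary’s interface might look like. The FiduciaryIntermediary class, its methods, and the income example are my inventions for illustration; nothing here describes an existing system or a specific proposal from the scholarship:

```python
import random

# A hypothetical "strong intermediary": it holds the raw datum under an
# obligation to the data subject and answers only coarse, filtered
# queries rather than handing the datum over. All names are illustrative.

class FiduciaryIntermediary:
    def __init__(self, subject_id, income):
        self._subject_id = subject_id
        self._income = income               # raw figure never leaves this class

    def attest_income_at_least(self, threshold):
        """Selective sharing: a yes/no attestation, not the raw figure."""
        return self._income >= threshold

    def income_band(self, band_width=20_000, jitter_frac=0.1):
        """Obfuscated sharing: a coarse band, jittered so a value near a
        band edge does not reveal which side of the edge it sits on."""
        jitter = random.uniform(-jitter_frac, jitter_frac) * band_width
        low = int((self._income + jitter) // band_width) * band_width
        return (low, low + band_width)

broker = FiduciaryIntermediary("alice", income=87_500)
print(broker.attest_income_at_least(50_000))   # True: enough for a lender
print(broker.income_band())                    # e.g. (80000, 100000)
```

The design point is simply that outsiders receive attestations or bands while the raw information stays put, held by an entity with enforceable duties to the data subject.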

Unraveling Privacy as Corporate Strategy

The biometric technologies firm Hoyos (previously Global Rainmakers Inc.) recently announced plans to test massive deployment of iris scanners in Leon, Mexico, a city of over a million people. They expect to install thousands of the devices, some capable of picking out fifty people per minute even at regular walking speeds. At first the project will focus on law enforcement and improving security checkpoints, but within three years the plan calls for integrating iris scanning into most commercial locations. Entry to stores or malls, access to an ATM, use of public transportation, paying with credit, and many other identity-related transactions will occur through iris-scanning & recognition. (For more details, see Singularity’s post with videos.) Hoyos has the backing to make this happen: on October 12th they also announced new investment of over $40M to fund their growth.

There are obviously lots of interesting privacy- and tech-related issues here. I’ll focus on one: the company’s roll-out strategy is explicitly premised on the unraveling of privacy created by the negative inferences & stigma that will attach to those who choose not to participate. Criminals will automatically be scanned and entered into the database upon conviction. Jeff Carter, Chief Development Officer at Hoyos, expects law-abiding citizens to participate as well, however. Some will do so for convenience, he says, and then he expects everyone to follow: “When you get masses of people opting-in, opting out does not help. Opting out actually puts more of a flag on you than just being part of the system. We believe everyone will opt-in.” (For the full interview, see Fast Company’s post on the project.)
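
The arithmetic behind Carter’s prediction is simple Bayesian updating. Here is a back-of-the-envelope sketch; the 5% base rate of people with something to hide, and the assumption that such people never enroll voluntarily, are my illustrative numbers, not Hoyos’s:

```python
# Why refusal becomes a flag: Bayes' rule on who stays out of the system.
# Base rate and behavioral assumptions are illustrative only.

def p_hiding_given_optout(base_rate, optin_rate_of_rest):
    """P(has something to hide | stayed out), assuming hiders never
    enroll voluntarily while a fraction `optin_rate_of_rest` of
    everyone else enrolls for convenience."""
    stay_out_hiding = base_rate                          # all hiders stay out
    stay_out_clear = (1 - base_rate) * (1 - optin_rate_of_rest)
    return stay_out_hiding / (stay_out_hiding + stay_out_clear)

for q in (0.10, 0.50, 0.90, 0.99):
    print(f"opt-in at {q:.0%}: P(hiding | opted out) = "
          f"{p_hiding_given_optout(0.05, q):.2f}")
# The posterior climbs from about 0.06 to 0.84 as voluntary enrollment
# spreads: the more who opt in, the louder the signal of opting out.
```

On these made-up numbers, once 99% of the law-abiding enroll, mere refusal implies something to hide with probability around 84%; that is the unraveling pressure the roll-out strategy counts on.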

In a forthcoming article, I’ve written at length about the unraveling effect and why it now poses a serious threat to privacy. This biometric deployment is one of many examples, but it most explicitly illustrates that unraveling has moved beyond unexpected consequence to become corporate strategy.
