Configuring the Networked Self Symposium: Reflections on the Self as Cultural Product, Never Fading Echoes and Digital Footprints

4 Responses

  1. Ted Striphas says:

    It’s quiet around here, Hector, and it doesn’t deserve to be, so I thought I’d kick off the conversation on your excellent post. I hope you don’t mind.

    You raise a very interesting — and, indeed, vexing — issue here: how do you do for privacy online what CC has done for copyright? What’s interesting to me is the nature of the fix you’re proposing. It’s not strictly legal, nor is it strictly technological. It’s both.

    What this signals is how, increasingly, people “working in the twilight zone of law, culture, and technology,” as I put it in my post, need to be vigilant in both recognizing and responding to the complexity of the current conjuncture. Legal fixes alone aren’t enough; nor are technical fixes; nor are fixes that only target markets; nor, for that matter, is play. To create enduring change — change that sticks, as it were — we need to think about forms of intervention that operate across all manner of these domains, but also, then, in ways that respect the specificity of these domains. In essence, what I’m talking about is a coordinated strategy for managing privacy online.

    Implicit in your post, then, is an intense call to interdisciplinarity — a kind of interdisciplinarity I see in Cohen’s book, but less so, unfortunately, in the conversations occurring in this symposium. (We’ll see how the rest of the week goes.) It’s an issue I pointed to in the appeal to Maslow I made in my post: people have a tendency to see solutions to problems in the terms their tools dictate to them. I’ll have to admit that cultural studies has been no exception to this criticism, although it’s also precisely why I do the type of work that I do.

    Anyway, thank you for the big picture perspective, Hector. You’ve given me a lot to think about. How DO you forget in an information age?

  2. Julie Cohen says:

    Hector, sorry for the delay. I’ve been puzzling over the problem of user hacks for privacy for some time, and so far all I’ve come up with is that it is really hard. The CC strategy of inserting a property regime with flipped defaults, a solution proposed by some in the legal/policy world around 10 or so years ago, doesn’t seem likely to work well, since users are contract takers in so many contexts. I agree with Ted that successful interventions will need to operate across multiple domains.
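
    [Editorial note: to make the “flipped defaults” mechanism concrete, here is a minimal sketch. Creative Commons attaches a machine-readable grant to a work that reverses copyright’s default of “all rights reserved”; a privacy analogue would attach a machine-readable reservation to personal data that reverses the default of “collected and retained indefinitely.” The field names and policy vocabulary below are entirely hypothetical, not any real standard (compare ccREL, the machine-readable layer on the CC side).]

    ```python
    import json
    from dataclasses import dataclass, asdict

    # Hypothetical vocabulary: these field names and terms are illustrative,
    # not part of any real standard (CC's analogue is ccREL).
    @dataclass
    class PrivacyGrant:
        subject: str          # who the data is about
        allow: list           # uses affirmatively granted by the subject
        retain_days: int      # flipped default: deletion unless retention is granted
        resell: bool = False  # flipped default: no onward transfer

        def to_header(self) -> str:
            """Serialize for embedding in, say, page metadata or an HTTP header."""
            return json.dumps(asdict(self))

    def permitted(grant: PrivacyGrant, use: str) -> bool:
        # Default-deny: any use not expressly allowed stays with the subject.
        return use in grant.allow

    grant = PrivacyGrant(subject="alice", allow=["login", "billing"], retain_days=30)
    print(grant.to_header())
    print(permitted(grant, "ad_targeting"))  # False: the default is flipped
    ```

    [The hard part is exactly what is identified above: unlike a CC license, which cooperating platforms have some incentive to honor, nothing obliges a data collector to run the `permitted` check — users are contract takers.]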

    I wonder whether one way to pry open this problem might be to scrutinize the supposed rationality of systems like Klout and interrogate the benefits they are supposed to generate/enable.
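
    [Editorial note: one way to make that scrutiny concrete is to notice how much of such a score is stipulated rather than measured. The toy “influence” score below is invented for illustration (Klout’s actual model was proprietary); the point is that every constant is an editorial judgment dressed up as measurement.]

    ```python
    # Toy "influence" score. Every constant below is a value judgment,
    # not a measurement -- which is the point of interrogating such systems.
    def influence_score(followers: int, retweets: int, replies: int) -> float:
        W_REACH, W_AMPLIFY, W_ENGAGE = 0.5, 0.3, 0.2  # why these weights?
        reach = min(followers / 10_000, 1.0)          # why cap at 10k followers?
        amplify = min(retweets / 100, 1.0)            # why is amplification worth
        engage = min(replies / 50, 1.0)               #   more than conversation?
        return 100 * (W_REACH * reach + W_AMPLIFY * amplify + W_ENGAGE * engage)

    print(influence_score(followers=2_000, retweets=40, replies=10))  # ~26.0
    ```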

    I wonder whether another way might be to take the destruction of information much more seriously. But this returns me to the seeming impossibility of the durable privacy hack: user hacks directed toward information destruction run the risk of serious legal liability and even criminal conviction.
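
    [Editorial note: one technique that takes destruction seriously without touching anyone else’s servers is cryptographic erasure: store the data only in encrypted form and “forget” it by destroying the key, so every copy of the ciphertext, wherever it has propagated, becomes unreadable at once. Research systems like Vanish explored this idea. A minimal sketch using the Python `cryptography` library:]

    ```python
    # pip install cryptography
    from cryptography.fernet import Fernet, InvalidToken

    plaintext = b"personal data the subject may later want forgotten"
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(plaintext)

    # While the key exists, any copy of the ciphertext is recoverable.
    assert Fernet(key).decrypt(ciphertext) == plaintext

    # "Destruction" here means destroying only the key, which the user holds;
    # ciphertext sitting on third-party servers is never altered.
    key = None

    try:
        Fernet(Fernet.generate_key()).decrypt(ciphertext)
    except InvalidToken:
        print("cryptographically erased: the ciphertext is now unreadable")
    ```

    [Because the user destroys only a key in her own possession, this may sidestep some of the liability exposure that hacks aimed at destroying information on other people’s systems would invite, though whether it actually does is itself a legal question.]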

  3. Hector Postigo says:

    Finally back from my travels. Sorry it took me a bit to get back to the discussion.

    Yes, it is a call for interdisciplinarity and more. I think this will address Julie’s response too. But a bit of history first, so you can see where I’m coming from. I was a grad student in STS at RPI, where Langdon Winner has been teaching for over three decades, and you can imagine he was a strong influence. I spent a lot of time thinking about his central arguments regarding the politics of technology, specifically whether the soft determinism in his concepts was something we needed to return to. At that time many of us grad students in STS were dealing with what we thought was a sort of discursive saturation in the “social constructionist” perspective on technology. We felt we’d run aground on theory, and that the way forward was to take a long hard look at how technology actually does structure some very social things. We weren’t envisioning a hard determinism, but something more akin to iterative processes where technological structure “hardens” over time. It wasn’t quite SCOT either, since we were acutely aware of the critiques against it. Julie, I think, gets at the process of “hardening” in her book. But we went somewhat further: some of us thought that it’s not enough to think through processes; we must design them (and technologies) too. So it was a bit of soft determinism with an interventionist, action-oriented research agenda we concocted.

    In The Whale and the Reactor there is a chapter on the discourse of risk-benefit analysis in decisions about technological implementation. Activists are cautioned not to engage in that discourse because it’s a “stacked deck,” meaning that its rationale does not include the kinds of values and epistemologies that those opposing nuclear power or industrial pollution can use to ground their arguments. I think Julie’s point against liberal economic/legal philosophy is grounded in this type of sentiment. My take is that it’s important, but it’s time to go further. It’s not enough to argue for epistemological shifts (which, if the disciplinary discourses Ted points out are any indication, might be the actual impossibility); it’s time we started taking counter-architectures seriously as a research goal. The interdisciplinarity I imagine is a research program where legal scholars, media theorists, designers, and programmers come together to discover, imagine, and implement. Transgressive design is part of that, and liability is certainly a concern.

    But I have to ask: what would music use and consumption be today without the .mp3 format and Napster circa 1999? Would video UGC be what it is today without YouTube pre-blanket licensing? What would our current discussions about user participation/creativity/situated users be like without DeCSS, iTunes hacks, jailbreaks, wikis, WikiLeaks, and other technologies/practices not designed by governments or corporations (in fact made illegal by them)? We are uniquely positioned as new media scholars (law, STS, communication, design) to make our theories a material reality. My question is: what’s taking us so long?

    Lastly, I would generally disagree with the statement that “users are contract takers.” I would reframe and say they are contract ignorers. They click through EULAs and TOSs as fast as humanly possible. Now some might say that’s pretty much the same as agreeing, since they are “clicking through.” An ethnographer might say something else, however. If we are to believe in the situated user, then we have to ask when the contract becomes real for the user: not when they are clicking through, but rather when they bump into it later on, during consumption/production. When they realize some personal data no longer belongs to them, when they see that their expected uses are no longer available, when copy/paste is disabled, etc. The questions we should ask are not why the user didn’t read the contract carefully, or how it should be better presented, but rather how architectures of participation are occluding the contract, not through some technological sleight of hand but through their epistemological grounding…through what they communicate, writ large, about participation? I mentioned CC and the like in my post because it shows that legal scholars were able to look through architecture to find the essence of the legal rationale and convert it. The fact that we haven’t been able to do the same doesn’t mean it’s impossible, just really hard, as Julie says. I think technological counter-architectures are a possible way to open up discourse…to put into practice what will later have to be reverse-engineered into policy. Maybe I’m a determinist?

  4. Julie Cohen says:

    In the context of copyright, the term “contract ignorers” is right on target. In the privacy context, which involves large amounts of personal information mandatorily furnished to various types of service providers (not just online merchants but also banks, health care providers, and others, and also social networks), you can ignore the contract all you want, but doing so won’t restore control over the information. So I think “contract takers” is a fair characterization with respect to personal information. The privacy context also differs from the copyright context in that it involves proprietary algorithms of various kinds (search, sorting) that are used to perform various operations on the information, and by extension on the people to whom the information pertains. The question then becomes what sort of hack would reach that far. Jonathan Zittrain has an old article on “Trusted Privication” in which he envisioned a DRM-type solution for personal information, which I suppose is one possibility, though I’m reasonably confident in the ability of Big Data to perform the DeCSS-equivalent hack right back.
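
    [Editorial note: as a rough sketch of what a “Trusted Privication”-style wrapper might look like, here is the sticky-policy idea: personal data travels inside an encrypted envelope together with a machine-readable policy, and a trusted runtime releases the plaintext only for policy-compliant uses. Everything below (names, policy format) is hypothetical, and nothing in it prevents a recipient from copying the plaintext once released, which is exactly the DeCSS-shaped hole noted above.]

    ```python
    # pip install cryptography
    import time
    from cryptography.fernet import Fernet

    def seal(data: bytes, policy: dict, key: bytes) -> dict:
        """Bind data to a machine-readable policy (a 'sticky policy' envelope)."""
        return {"policy": policy, "blob": Fernet(key).encrypt(data)}

    def unseal(envelope: dict, key: bytes, purpose: str) -> bytes:
        """A trusted runtime releases plaintext only for policy-compliant uses."""
        policy = envelope["policy"]
        if purpose not in policy["purposes"]:
            raise PermissionError(f"purpose {purpose!r} not licensed")
        if time.time() > policy["expires"]:
            raise PermissionError("envelope has expired")
        return Fernet(key).decrypt(envelope["blob"])

    key = Fernet.generate_key()
    env = seal(b"medical record",
               {"purposes": ["treatment"], "expires": time.time() + 86400},
               key)

    print(unseal(env, key, purpose="treatment"))  # released for a licensed use
    try:
        unseal(env, key, purpose="marketing")
    except PermissionError as exc:
        print("blocked:", exc)  # but nothing stops copying after release
    ```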

    On the topic of risk-benefit analysis and whether it’s a stacked deck, I didn’t mean to suggest that the deck is inevitably stacked. I do think that aspects of contemporary risk-management practice are stacked in favor of the more-is-better view and produce architectures and practices that reflect the hardening you describe. (Side note: Ulrich Beck’s Risk Society is a really interesting read on these issues.) What would a counter-architecture for risk management look like? Counter-architectures can be thought of (maybe?) as existence proofs that demonstrate the possibility not only of disrupting existing orthodoxies but also (and more importantly) of disrupting them in a sustainable way. How does one demonstrate the possibility of disrupting existing risk-management practice sustainably?