Harvard Law Review Symposium on Privacy & Technology

15 Responses

  1. Orin Kerr says:

    Dan, maybe I’m missing something, but I don’t understand what the alleged paradox is. People are only so-so at managing their privacy for the reasons you mention. And paternalism is not a good solution, because being so-so at managing your own privacy usually yields better results than having some elite decide your privacy for you. So we’re stuck with an empirical question: what are the best strategies that will optimize the benefits for users? I don’t see that as a paradox, and I don’t think the problem is that consent is “under-theorized.” Rather, it seems to me that it’s just an empirical question that we don’t know the answer to.

  2. Bhavishya says:

    Hi,

    Any idea if there will be a broadcast of the same on the internet?

    I’d love to attend something like this. I would’ve but I’m on a different continent.

    Bhavishya

  3. Daniel Solove says:

    Orin,

    I think your comment is not really responsive to my piece, and perhaps is just responding to the abstract. Did you read the full piece?

    1. I acknowledge in my piece that paternalism is not a good solution. So you’re critiquing me for not recognizing the dangers of paternalism when in fact I do recognize them in the piece. Part of the paradox is that paternalism, in an effort to resolve the difficulties with consent, eliminates consent. So the paradox already incorporates your point, which is why I find it odd that you criticize it.

    2. My arguments demonstrate why the problem is more than just people being so-so at managing their privacy. My argument is that it is virtually impossible for people to manage their privacy. And my argument is also that privacy is more than just having people manage it.

    So, yes, I do think you’re missing something.

  4. Orin Kerr says:

    Dan,

    Yes, I read the article before I commented. But you misunderstand my comment: Obviously, you do recognize the problems with paternalism. Rather, my concerns are that (1) I don’t see how recognizing the flaws with paternalism creates or implies a paradox, and (2) it seems like this is just an empirical question, not a problem with lack of theory.

  5. Daniel Solove says:

    I think then you’re misinterpreting my argument. The problems I raise are not empirical ones. I think I convincingly demonstrate that it is impossible, at the time of data collection, for a person to make a sensible judgment about the future privacy implications. My reason is that the implications are unknown, so I don’t see why empirical testing would be required.

    The flaw in paternalism I note is that it undermines consent. The paradox is that the privacy self-management model’s goal is to promote consent, yet it fails. The only apparent viable solutions are paternalistic ones, and they override consent. The paradox is that to deal with problems of consent, the solution overrides consent.

    I really don’t see the empirical issue here or how it would be measured.

    Basically, my argument is that at the time of data collection, people are being asked to consent to things where there is too much unknown to make that consent meaningful. This isn’t empirical; it is just logical.

    And my argument is not that people are so-so at managing privacy. My argument is that managing privacy in the way envisioned by the self-management model cannot be done in a way that will achieve the goals of the model. Alternative solutions are paternalistic and aim for the model’s goals but fail because they paradoxically undermine those goals in their zeal to achieve them.

    This suggests not that the goals of the model are bad, but that there will be a certain degree of futility in achieving them. Maybe there can be some improvement, but I think that my structural arguments suggest that there are just some very hard limitations.

    So I suggest that we might find it more fruitful to pursue other goals when it comes to privacy protection.

  6. Daniel Solove says:

    Plus, I do mention empirical evidence to support some of my critiques of the privacy self-management argument in the “cognitive problems” section of the paper.

  7. A.J. Sutter says:

    Any answer to Bhavishya’s question? I’m also on a different continent.

  8. Daniel Solove says:

    I’ve asked the HLR editors about a webcast and await their reply.

  9. Orin Kerr says:

    Dan,

    I don’t mean to be pedantic, but I don’t see how it’s a paradox if self-management or paternalism fail to achieve their goals. A theory’s failure to work isn’t a paradox. It’s just a problem with the theory.

    On the broader issue, the ideal solution is to have people be perfectly informed about exactly what will happen to their data, and then have people choose what they want to happen in light of that perfect knowledge. That ideal solution is impossible, however, as perfect knowledge is unattainable. So we’re stuck trying to find the most workable alternative that most closely approximates the ideal. Which alternative approaches are best seems to be an empirical question, as it hinges on how much we can inform people, how the different cognitive biases might work in different factual contexts, and the like. That is, to devise the best realistic way forward, the primary questions are how different strategies and legal rules might actually work in the field to best approximate the ideal of perfect knowledge, and therefore best enable decisions to match what people would want for themselves if they had perfect knowledge. Or so it seems to me.

  10. Daniel Solove says:

    Orin,

    I guess our dispute is turning on what the definition of “paradox” is. The paradox is not just that self-management fails to achieve its goals. The paradox is that those who want to have the law empower people to make meaningful decisions about their privacy turn toward paternalism because self-management fails, but paternalism also denies the very goal they are aiming for. I’ve tried to describe this several times, and I don’t think I can convince you because we have different definitions of paradox.

    I agree with your ideal knowledge paragraph, but I still don’t think it is responsive to my arguments. Of course no situation is ideal, and few have perfect knowledge. That’s not my point or expectation, or else my argument could be applied to nearly anything. My argument is that with privacy, knowledge is particularly absent and difficult to obtain at the time the decision is made. The decision might as well be a random guess! In other words, yes, all decisions are made with imperfect knowledge, but there are various levels, from educated guesses to speculation in the dark. My point is that privacy is speculation in the dark.

    The section on structural problems is different from the one on cognitive problems. The cognitive problems are empirical issues and can be addressed by studying how we can inform people, cognitive biases, etc. I discuss these in depth. But the structural problems I distinguish because they apply regardless of the cognitive problems. The structural problems occur because even if people receive a privacy notice written in the most ideal way imaginable, there’s simply no way to explain the consequences when they aren’t fully known. There is no ideal way to notify because too much remains unknown.

    Moreover, the problem of scale is an argument that there are too many entities for people to adequately manage data. I point to the number of entities people interact with that collect their data. I think it is obvious that few people, if any, could adequately self-manage their data at each and every entity that gathered it.

    You seem very focused on the view that it all just boils down to providing better notice, etc. But my point is that the problems transcend this. For structural reasons, the model is doomed. We know empirically that it doesn’t work, and I show evidence of this (nobody reads the policies or understands them). But I argue structurally that these problems can’t be fixed. The model is incapable of achieving the goal of having people really self-manage their privacy. The model might be valuable because it achieves other goals. It just won’t achieve the goal of self-management.

    I demonstrate in the cognitive section why it fails, using empirical evidence. I demonstrate in the structural section why, even if all the cognitive issues are addressed, the model still won’t work. My point is that privacy law keeps pursuing the goal of self-management because it is a laudable one, but it is also not a feasible one. There is just too much data collection, too many entities, too much uncertainty, and decisions must be made too early for people to have any real sense of the consequences.

  11. Orin Kerr says:

    Dan,

    Thanks for the response. I was understanding your argument to be that having people manage their privacy runs into limits; there are circumstances in which it is hard to make work, for the reasons you mention. That position naturally prompts an empirical question of when it works and when it doesn’t. But from your comments here, it sounds like you are arguing that people actually have zero capacity to ever manage their privacy just as a matter of logic: The idea can never work under any circumstances. Is that your argument? If so, it’s pretty dramatic, although I confess I don’t think you have made the case for it.

    Finally, on the paradox question, I agree it comes down to definitions of what a paradox is. I understand a paradox to be a self-contradictory statement, or at least a statement that appears to be self-contradictory.

  12. Daniel Solove says:

    Orin,

    I don’t want to say it can never work, but in many cases the model will not work. Maybe there will be some amazing new technology or approach that will enable people to manage across all the different entities that gather data and will provide enough information to make the management more than a guess in the dark. So there are situations and contexts in which it can work, but as a general overall strategy for regulating privacy, it will fail.

    A paradox is more than a self-contradictory statement. That is just one of the definitions; a paradox is broader than that. Here’s the Merriam-Webster Dictionary on paradox:

    1: a tenet contrary to received opinion
    2a: a statement that is seemingly contradictory or opposed to common sense and yet is perhaps true
    b: a self-contradictory statement that at first seems true
    c: an argument that apparently derives self-contradictory conclusions by valid deduction from acceptable premises
    3: one (as a person, situation, or action) having seemingly contradictory qualities or phases

    Examples of PARADOX
    It is a paradox that computers need maintenance so often, since they are meant to save people time.

  13. Orin Kerr says:

    Dan,

    Last round, I promise. It sounds like we agree that the strategy works in some cases and not in others, and that when it works and when it doesn’t is an empirical question. As for whether it works as an “overall strategy,” I would think it depends on what the alternatives are: If we’re trying to get as close to an ideal as we can, we need to find the least bad strategy rather than say what works or doesn’t work in some abstract sense.

    Finally, I still don’t understand why you think there is a paradox here, even on the broadest definition you name. But I’ll drop it.

  14. Daniel Solove says:

    The paradox is that, in order to achieve a goal, the purported solution undermines that very goal.

    Regarding the empirical issue, I still think we’re talking past each other. Of course, in some situations, the costs/benefits are more apparent than in others. In many situations, however, the costs/benefits are not apparent. And, as I argue, we can’t just look at each individual situation because data transfers and data aggregation are often involved, so the costs/benefits must be assessed more collectively. At time 1, it might make sense for me to reveal fact 1. At time 2, I might decide to reveal fact 2. What is hard to determine is that at time 3, fact 1 and fact 2 might be combined to reveal fact 3, and I might be harmed by fact 3. But I might have no idea that fact 1 + fact 2 would yield fact 3.

    Now let’s take this and multiply it by thousands of pieces of data. So over the course of the past 10 years, let’s say you’ve given out 54,205 pieces of data. Some have been transferred, some combined, but everything’s still fine. Then you give out fact 54,206, a relatively innocuous fact, but it gets combined with some of the other facts you provided, and presto, fact 54,207 is produced, and that proves harmful to you. This new fact is produced because a clever computer scientist came up with an algorithm just a few months prior that was able to deduce fact 54,207 from some of the other facts. Of course, fact 54,207 could instead be quite beneficial to you, or to society as a whole; maybe it leads to a cure for cancer. The point is that it really becomes impossible for a person to make meaningful judgments about what facts to give out.
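
    To make the combination point concrete, here is a minimal sketch in Python. Everything in it is hypothetical and invented purely for illustration (the facts, the apps, and the inference rule); it is not drawn from the article. It simply shows how two disclosures that look harmless on their own can later be combined by an analytic technique that did not exist when the disclosures were made.

        # Hypothetical sketch: each disclosure looks innocuous at the moment it is made.
        fact_1 = {"year": 2010, "disclosed_to": "pharmacy app",
                  "data": "bought prenatal vitamins"}
        fact_2 = {"year": 2012, "disclosed_to": "fitness app",
                  "data": "stopped logging morning runs"}

        def later_invented_inference(facts):
            """Stands in for an analytic technique devised years after the disclosures."""
            texts = [f["data"] for f in facts]
            if any("vitamin" in t for t in texts) and any("stopped" in t for t in texts):
                return "inferred health condition"  # the unforeseen "fact 3"
            return None

        # Neither disclosure, taken alone, supports this inference; together they do,
        # and at the time of consent there was no way to anticipate it.
        print(later_invented_inference([fact_1, fact_2]))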

    Some argue: Stop the data aggregation and analytics! But some useful and good things can come from them. The privacy self-management model wants to bless the creation of fact 54,207 because you consented to giving up the facts so many years ago. It is that legitimacy that I’m challenging.

    I am not arguing for paternalism or for self-management — instead, I am arguing that we need something else to handle this kind of situation.

    Plus, we give out data so frequently, over the course of so many transactions, that it is nearly impossible to do this kind of analysis writ large. It might work for just a few instances, but my argument is that it doesn’t scale well. Moreover, the work needed to manage privacy doesn’t scale well either.

    So whether the model works in isolated instances may be determined empirically, but once we factor in future data combinations and problems of scale, it becomes increasingly hard for the model to work.

    And in some cases, there is too much unknown to make a meaningful decision about costs/benefits.

    So your argument keeps focusing on each individual transaction, when my argument is focused on problems caused by the totality of all the individual transactions.

    So when you say that “when it works and when it doesn’t is an empirical question,” I say: So what? That’s not responsive to my argument that it doesn’t work because there are simply too many situations. And for that there is empirical evidence to suggest that very few people actually engage in privacy self-management in most contexts. I really don’t care about this one or that one when my argument is broader.

    Think of it this way: Suppose it were possible to self-manage in each instance. Doing so thousands of times each year is just not feasible. And not only with each specific site or company: each and every time you visit a site or buy a product, the privacy policy might have changed, so you must check it again. There are just too many points at which we provide data these days to manage them all. Maybe we can manage a few, and yes, we can try to determine which ones. But, to reiterate, that’s not my argument! I’m looking at the whole picture.
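
    To get a rough sense of the scale problem, here is a back-of-the-envelope sketch. Every number in it is an assumption supplied for illustration, not a figure from the article; plug in your own estimates and the conclusion barely changes.

        # Hypothetical inputs: how many data-collecting entities a person encounters
        # in a year, and how long one reading of a privacy policy takes.
        entities_per_year = 1000        # assumed: sites, apps, stores, services
        minutes_per_policy = 10         # assumed: time to read a single policy
        rereads_per_policy = 1          # assumed: one re-check after a policy update

        hours_per_year = entities_per_year * minutes_per_policy * (1 + rereads_per_policy) / 60
        print(f"Roughly {hours_per_year:.0f} hours per year spent just reading policies")
        # -> Roughly 333 hours per year, before any actual "management" happens.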

  15. Daniel Solove says:

    For those who are asking, I have learned that the conference will not be live streamed, but it will be videotaped and posted to the conference site later.