Are People Really Harmed By a Data Security Breach?

16 Responses

  1. Jim Harper says:

    Serwin’s article is good, and a good touchstone for your commentary, Daniel, on a subject I know you’ve been thinking about for a long time. Legally cognizable harm is the elephant in the room when it comes to privacy. We’d all like people to enjoy privacy at the level they prefer, but if there is not harm, why are we calling on the state to police behavior?

    A couple of thoughts that came to mind as I read your piece might help sharpen the issues:

    You’ve said that companies were negligent or even reckless, but a negligence cause of action lies — as was burned into my brain — when there is a duty, a breach of that duty, causation, and damages. Being “negligent” about something that doesn’t cause a harm is not negligence. It is simply indifference to a priority held by Dan Solove and many others. A segue into the question of…

    What are “adequate security practices”? Perfection in data security would be nearly as bad as total failure. A perfectly secure database is unplugged, encased in concrete, and sunk to the bottom of a deep ocean from a secret location on the surface. Which is to say, it’s useless. You believe — and I don’t know to tell you you’re wrong — that *more* data security would be better. But what if greater investment in data security drives higher costs to consumers without driving their data security or privacy up by an equal or greater amount? Excess security would lower overall consumer welfare, which would be bad.

    I believe that holding data should obligate one to a duty of care toward the data subject. I’m most inclined to disagree with the third category of cases noted above, where a person’s reasonable steps to mitigate likely harm aren’t compensated.

    I’d like to see negligence cases succeed at a rate, and on facts, that drive data holders to optimal security practices. But I don’t know what success rate produces that, and I don’t think equating data holding to engaging in inherently hazardous activity gets you to the right place. It gets you to super-optimal security, which is sub-optimal consumer welfare.

  2. Daniel Solove says:

    Jim — In many cases, the security practices are egregiously bad. Unencrypted data. Employees taking home millions of records on portable devices. And so on. There is a lot of knowledge in the security field about what data security practices are better and worse. Of course, there is no such thing as perfect security, and there may be debates over what “adequate security” means, but I think that in many cases, it is clear that the data security isn’t adequate at all.

    Negligence doesn’t require perfection but the following of reasonable industry standards. Many companies aren’t doing this.

    With regard to negligence, I think we’re arguing past each other. When I said that companies can be negligent or reckless, I was speaking about their degree of fault. There is, of course, a difference between satisfying the fault standard of negligence and having a cause of action in negligence, which is what you’re referring to. The fault standard of negligence involves deviating from a reasonable standard of care; liability on a negligence cause of action requires duty, breach, causation, and damages. Anyway, I think the argument is over semantics. I was referring not to a cause of action but to the fact that companies can deviate from a reasonable standard of care and not be liable.

  3. An excellent post and discussion, thank you.

    One extra thought: in March the UK ICO published a report that includes a discussion of the value of personal information. It describes (Vol. 1, p. 8) how that value may look different to the individual than it does from three other perspectives.

    The Privacy Dividend – the business case for investing in proactive privacy protection
    http://www.ico.gov.uk/upload/documents/library/data_protection/detailed_specialist_guides/privacy_dividend.pdf

    (I was a co-author of the ICO report together with Dr John Leach)

  4. Bruce Boyden says:

    I think the debate boils down to this: Is dread a harm?

  5. Ken Rhodes says:

    Daniel, the clarification of a reasonable standard for “degree of negligence” is certainly critical, but there is another prior issue that I think is more “black and white.”

    In my home state (Virginia), which is not way out in left field on this issue, it is against the law to expose another individual to the risk of HIV infection without prior disclosure:

    Any person who, knowing he or she is infected with HIV, has sexual intercourse…without having previously disclosed the existence of his or her HIV infection to the other person shall be guilty of a class 1 misdemeanor.
    –Va. Code Ann. § 18.2-67.4:1

    The relevance is this: the law in this instance recognizes culpability not only for the infliction of AIDS, but for the *risk* of it. Yet in the situation you’ve described above, the law quite specifically requires that the damage *occur* in order to justify compensation, not merely the exposure to the risk of damage.

    I think if these types of risks were subject to the same treatment as others (AIDS, the publication of nude photos you mentioned, etc.) then we might see a momentary rush on the courts, but that would subside very quickly once the courts then turned to the issue you’ve explicated–a reasonable standard of diligence on the part of the data holders.

    If the holders of the data are subject to a standard of diligence, then an occasional accident that happens in spite of diligence can be seen as just that: an unforeseen accident with no corresponding negligence.

    But that will only happen when the law recognizes the requirement to avoid, not only damage through negligence, but negligent exposure to risk.

  6. Ken Rhodes says:

    @Bruce: I don’t think so. Rather, I think it boils down to: Is risk a harm?

  7. Ryan Calo says:

    Great post, Dan. You know my view: people are harmed to the extent that (1) they experience distress worrying about the possibility of their information being used against them and (2) their information is actually used against them. I believe the court should recognize category (1) for the reasons you stated. Details here: http://ssrn.com/abstract=1641487

    This leads me to disagree, though, with your safety deposit box analogy. Why would I be harmed by the loss of a key to a deposit box unless or until I want something in it?

    Ryan

  8. Dissent says:

    Distress could be very time-limited if the consumer (only) has to cancel a credit card or debit card and/or change autopay settings to insert a new card number.

    But suppose that because of the experience, they now find themselves generally anxious about future breaches involving other entities and so they start spending time checking their bank statements every day or are afraid to use their new card as they would normally use it – and, as a result, do not enjoy the same quality of life that they had prior to the breach. Is that “harm?” The courts would seemingly say “no,” but if courts acknowledge lasting psychological impact as a result of other kinds of negligence, why not this kind?

  9. Daniel Solove says:

    Ryan — If the safety deposit boxes were property, wouldn’t the one with the lost key be worth a lot less? Isn’t this diminution in value a harm?

    You ask: “Why would I be harmed by the loss of a key to a deposit box unless or until I want something in it?”

    In other contexts, such as harm to property, we don’t require plaintiffs to prove that they will use the property in order to be damaged. Suppose you crash into my car. The only damage is that my heated seats won’t work. But I’ve never used my heated seats and I don’t know if I ever will. I’m still harmed, right? I can still recover for the loss of my heated seats, and I don’t have to prove I’ll be using them anytime soon.

    A problem often arises with identity theft or a credit report error: courts say a person isn’t harmed unless he or she actually tries to get a loan and is denied. Suppose you’re a victim of an error in a credit report that causes your score to be very low. At the moment, you’re thinking of buying a new house, but you decide that until your credit is fixed, you had better not do anything. Indeed, trying to do anything would only be a waste of time and money until your credit report is fixed. Have you been harmed? Courts often want people to apply for credit and be denied, but this seems like a pointless hoop to make people jump through. They are harmed regardless of whether they apply for credit or not, because their freedom to obtain a loan is diminished. Their entire calculus of decisions is affected.

  10. Daniel, nice post. In the stolen credit card context, unless credit is damaged, I think it is difficult to establish any actual harm to the consumer. Consumers can be liable at most for $50 by law, and that amount is routinely waived by the issuing banks. Oftentimes, the consumer is issued a new card, which cuts off the chance of future fraudulent charges. What “harm” is left in this context, then? If you open the door to any increased risk of harm, you open up litigation floodgates that will be crippling relative to the inconvenience suffered by the credit cardholder. Not to mention, if “risk of harm” is cognizable harm, how many new torts have you created (does a reckless driver swerving in and out of traffic increase one’s risk of harm)?

    The analysis may be different if other PII is involved (e.g. SS#), but in the credit card context this seems like the right decision.

    Finally, since this is about the relative societal cost-benefits of the fluidity of data transactions versus potential harm to consumers, isn’t this best left to the legislature? If we, as a society, believe that risk of future harm is worthy of recompense, then let’s pass a law.

    Another parting thought that may reveal some inconsistencies in various courts’ risk-of-harm analysis. If risk of harm is not legally recognized, wouldn’t the same rationale apply if a data subject were subject to identity theft? Let’s say a PII breach occurs that allows an ID thief to open a credit account in a data subject’s name, and the ID thief racks up thousands in purchases. Our data subject discovers the ID theft and expends time and effort fixing his or her credit record. However, the data subject is never denied credit and, after the record is fixed, enjoys the same credit rating he or she had before. Has there been any harm in this case? Or has the data subject merely reacted to a significantly increased risk of harm? Now, I think that most courts would find cognizable harm if actual ID theft occurred post-breach (and I think the Tri-West case indicated just that). So how can you square the existence of harm when ID theft has occurred but has not adversely impacted the data subject, except for the time and effort spent eliminating risk?

    Thanks,
    Dave

    P.S. I have a breakdown of the Hannaford court’s reasoning in my recent blogpost on the topic: http://tinyurl.com/2aock7j

  11. Ryan Calo says:

    Dan, thanks, I see where you’re coming from. Certainly if my car loses resale value because the heater doesn’t work, or if I can no longer sell my safety deposit box because I’ve lost the key, then I’ve been harmed. No question. And certainly if someone uses my information against me and steals my identity, thereby lowering my credit score and cutting off loan options, then I’ve suffered a harm–and a privacy harm at that.

    My point is much more modest: it’s not a privacy harm (at least) if I neither feel anxiety around the data loss nor have it used against me. For instance, suppose I change my name and social security number and, because of horrible negligence on the part of a data custodian, my old name and social security number get out. Then there is only a privacy violation.

    I also agree that courts should recognize the subjective harm associated with having one’s information “out there,” whether or not it gets abused. They should compensate it, just as a court might compensate a person who is threatened but not actually struck (cf. assault without battery). Or, at the very least, the responsible companies should offer credit monitoring and a guarantee that they will help should some bad actor misuse the leaked data.

    Anyway, great post!

  12. Doug DePeppe says:

    Hello Dan,
    I believe that the economic harm doctrine, which is preventing plaintiff recovery in these data breach litigation cases, will not permanently insulate companies from litigation risk (which, in turn, enables data holders to avoid implementing reasonable security measures). The organized crime element in cyberspace, using enterprise hacking botnets like Zeus/Zbot, is engaged in fraudulent activities involving the international transfer of funds from the banking accounts of small businesses through ‘money mules’ to their bank accounts overseas. This criminal scheme results in clear monetary losses to the small business. In several cases now being litigated, those small businesses are suing the banks with some causes of action sounding in tort, specifically the lack of reasonable security practices of the bank. It would not seem likely that the economic harm doctrine would prevent these cases from proceeding.

    My point is that the current Internet dynamic will likely change the macro environment, where suddenly a number of cases establish precedent for imposing liability on data holders under a negligence theory (lack of reasonable security).

    I tend to agree with you that the law has to change to address a societal problem: poor security exists within a macro cybercrime environment, and the market is not adequately addressing the risk to society. Much as the Paisley Snail case stands as a marker for law enabling a needed remedy during societal change (in that case, the changed transactional relationships brought about by the Industrial Revolution), the prevailing Internet dynamic presents a ripe opportunity for the institution of law to begin rebalancing risks and remedies.

  13. clarinette02 says:

    Thanks Dan for your great post and this online symposium.
    I am modestly adding my personal view, which I have been thinking about for some time.
    In terms of data flow, the closest analogy to my mind is the automobile. Obviously, driving has advantages and inconveniences.
    We have driving ‘codes’ and security measures. There are accidents; there are fines and insurance companies to compensate damages.
    From a European perspective, in some countries, like France, car insurance is compulsory.
    Privacy is recognized as a fundamental right, protected by Article 8 of the European Convention on Human Rights.
    The ease of broadcasting, collecting, and creating databases has brought a rise in issues with available data traffic.
    The number of incidents in which a breach of privacy has caused harm should, in my view, encourage us to think about a code of practice for digital data traffic.
    I have in mind the case of the lady who sued the phone company she held responsible for her broken marriage after it passed on to her husband the log of her ‘private’ conversations with her lover. http://www.telegraph.co.uk/news/worldnews/northamerica/canada/7738371/Woman-to-sue-phone-company-after-husband-discovered-affair-through-bill.html
    Or the cases of medical information leaked either to deny compensation or to reveal medical details, as with Michael Jackson (“UCLA hospital fined over privacy breaches that sources say involve Michael Jackson’s records”): http://www.pearltrees.com/#/N-s=1_839086&N-f=1_839086&N-play=1&N-u=1_72898&N-p=6513517
    How many laptops or USB drives with confidential data have been lost? http://blog.dataleakprevention.eu/

    According to a recent study by the Ponemon Institute, ‘actual breach incidents worldwide last year’ cost an average of $3.43 million per organization.
    http://www.pearltrees.com/#/N-s=1_839086&N-f=1_839086&N-play=1&N-u=1_72898&N-p=5315290

    These incidents of breach of privacy have all caused harm of varying degrees.

    Coming back to the initial analogy, my suggestion is to evaluate the data subject’s right to compensation according to the harm suffered and the attitude towards the risk:

    - data can be collected with or without the consent, or even the knowledge, of the data subject;
    - the data subject may have suffered an immediate or a potential harm;
    - negligence by the data collector/processor in securing the data can aggravate its liability and therefore warrant higher compensation.
    These are some elements for measuring the degree of liability.

    The EU data protection reform is considering creating a harmonized data breach penalty and an obligation of notification.

    I wonder whether, much like the driving code, a data handling code could create a set of rules and a grid of liability to compensate the harm and prejudice suffered by a data subject from intrusion into his or her privacy, or worse.

    Based on this, a fine could be imposed for non-compliance with the principles of secure data handling, in combination with individual compensation guaranteed by an insurance fund policy.

  14. Omer Tene says:

    Great discussion. I think there’s definitely harm, whether or not ID theft occurs. You lose your key holder with your home, office, and car keys: even if no one ever breaks in, you’re harmed (trust me – it happens to me often). Someone loses it for you – someone harmed you. Moreover, US law overemphasizes ID theft in privacy matters. That is the result of security breach notification legislation and of there never having been a distinct “data protection” cause of action. Privacy isn’t all about ID theft, as Dan explained thoroughly in his taxonomy and elsewhere.

  15. David Paul says:

    The point that you graze close to is this:
    If I have the door locks compromised in my home due to negligence…and I go out and PAY a locksmith to replace the locks, I have suffered an ascertainable loss. This is what happened in the Providence case in Oregon, and we will fix it at the Supreme Court. Many folks paid for credit monitoring, and the theory/proof is strong on this point.
    Good discussion, in general.

  16. Doug DePeppe says:

    Following up on my earlier comment: this Washington Post article provides background on the likely sea change that will soon occur, forcing banks (and perhaps entities outside the financial sector) to implement cybersecurity processes to better secure online banking.
    “Cyberthieves Use Human Money Mules for Risky Work”

    There is too much money being stolen from business accounts – accounts which are not insured – for it to persist without litigation. And, there’s obvious harm here.

    The fundamental problem, in my judgment, is that banks have various security controls and regimes in place that derive from a static compliance-related mindset. In the cybersecurity era, more dynamic controls are needed. The banks are functioning in a Maginot Line era while the threat is a mobile, agile invader.
