
Can You Sue If a Computer Reads Your E-mail?


20 Responses

  1. Paul Ohm says:

    Is it likely that the ISPs would deploy a no doubt imperfect technology that blocks packets in this way without any type of accounting whatsoever? Isn’t it much more likely they would keep a log of the traffic that had been blocked, so that they could investigate future complaints for example?

    This is critical to the wiretap question, because once ISPs start keeping logs that preserve the “substance, purport, or meaning” (the definition of “contents” under section 2510) of messages on the network, your analysis might not apply.

    I’d say the same thing about Gmail. I’ve always assumed that Google has been keeping statistics about the contextual Gmail ads they display. In fact, their advertisers probably demand it. Those statistics themselves might constitute wiretaps. That’s why Google is wise to try to deal with this through consent.

    Based on what I’ve heard so far, I’d advise the ISPs to think long and hard about wiretap liability before deploying these filters.

  2. Cathy says:

    I tend to disagree with you on whether such interception runs afoul of the wiretap act. I wrote my note on whether these fingerprinting devices could be used by universities and ultimately concluded “no.”

    Catherine R. Gellis, CopySense and Sensibility: How the Wiretap Act Forbids Universities from Using P2P Monitoring Tools, 12 B.U. J. Sci. & Tech. L. 340 (2006), available on SSRN or my blog.

    I’m with you that the definitions of “interception,” et al. are a mess, but later cases (see e.g. US v. Councilman) seem to want to try to apply the general fourth amendment protection principles more broadly. Which is good news, because otherwise you end up with a situation where traditional telephonic communications would have protections but ones made over the Internet wouldn’t be (see, e.g., VoIP – it’s clear that if you called someone with a traditionally-switched telephone network you’d have protection, so why shouldn’t you also have privacy in your identical voice calls that happen to be packet-switched over the Internet?)

    Also, see Deal v. Spears, 980 F.2d 1153, 1158 (8th Cir. 1992). Some business owners suspected an employee was an accomplice in a robbery of their business and decided to listen in to all of her phone calls, regardless of if they related to their business interests, and the court called foul on that. The business couldn’t listen to everything, as once ascertaining that the call did not relate to business purposes they no longer had any right to eavesdrop.

    Also see U.S. v. Jones, 542 F.2d 661, 673 n.24 (6th Cir. 1976) (“…there is a vast difference between overhearing someone on an extension and installing an electronic listening device to monitor all incoming and outgoing telephone calls”). Default 24/7 monitoring of the content of every packet transmitted would therefore seem to be inconsistent with anything that might be permissible under the act.

  3. student says:

    Assuming nonsentient computers, who cares if a computer reads your email and never tells anyone about it?

    Consider the exploit discussed at New cracks in Google mail (Dan Goodin, The Register, 28 Sep 2007).

    Are you saying that this doesn’t violate the wiretap act if no one actually collects the diverted email?

  4. Frank says:

    Fascinating post. I just have one tangential recommendation of a resource that might be of interest:

    Chopra and White, Privacy and Artificial Agents, or, Is Google Reading My Email?, at

    http://www.sci.brooklyn.cuny.edu/~schopra/choprawhite497.pdf

    I also vaguely recall Larry Lessig’s discussion of the “worm” in Code which harmlessly inspected computers. From an Amazon review: “What about a computer worm that can search every American’s PC for top-secret NSA documents? It sounds obviously unconstitutional but the worm code can’t read your letters, bust down your door, scare you or arrest anyone innocent. If you’re not guilty, you won’t even know you were searched.”

  5. Orin Kerr says:

    Very interesting post, Bruce.

    The difficulty, it seems to me, is that the point of the monitoring for copyrighted material would be to act on the contents. That is, the results of the filter are presumably given to a person, who is alerted as to the presence of a copyrighted file and can take action on that. If I’m right about that, it sure seems like an intercept to me. I don’t see a difference between (a) having a person listen in to a call, as in a traditional telephone tap, and (b) having a computer listen in and then indicate to a person the contents of the communication. Indeed, all wiretapping of electronic communications is a form of (b); the computer “listens” to the zeros and ones and then reports back when particular strings signaling different letters and numbers are found.

  6. Bruce Boyden says:

    Wow, thanks everyone for these comments.

    Paul, you’re right that it all depends on the construction of the system. I’m sure content owners would prefer that the filter not only blocked traffic, but sent a follow-up e-mail: “Dear Mr. Lucas, 198.222.0.5 just tried to download the Empire Strikes Back!” Naming the infringing file would probably come too close to the “substance, purport, or meaning” of the communication, however. But I don’t think a filter system would need to transmit any information at all in order to be useful (again, assuming the practical difficulties can somehow be overcome). Also, I don’t think logging an IP address, plus an indication that a file was blocked, would be acquisition of the “substance, purport, or meaning” of a message, so probably at least that could be done, for whatever good it would do. An IP address seems more like a telephone number than the content of the communication.

    Re: Gmail, I don’t think I agree there either. I don’t see how a record that 54 unnamed people sent e-mails containing the word “catfish” today acquires the “substance, purport, or meaning” of any communication. Obviously if you start stringing those results together and identifying them with particular messages you might at some point get the contents of a message, but I think Google could maintain at least some records without consent.

    Cathy, just to be clear, I’m not arguing for any difference between traditional telephone and VOIP. Sanders and Pascale both involved regular phone lines. I think that hooking up a wire that doesn’t lead to a speaker or some other way of producing human-audible content is not “acquisition” of a wire communication, either. And there have been cases that have held 24/7 monitoring to be permissible in some circumstances, at least under the business extension exception — see Arias v. Mut. Cent. Alarm Serv., Inc., 202 F.3d 553 (2d Cir. 2000). Since I’m maintaining that automated scanning is not even acquisition, of course, any limits on the business extension exception would be inapplicable.

    Student, the Google exploit as I understand it would forward a copy of a message to some other location — that’s like making a recording of a telephone call. Most courts have held that even unlistened-to recordings are “acquisitions,” and as I mentioned in the post, that strikes me as a logical conclusion. So the Google exploit would be an intercept, or perhaps a violation of the Stored Communications Act, 18 U.S.C. § 2701. As well as a violation of the Computer Fraud & Abuse Act, 18 U.S.C. § 1030.

    Frank, along the same lines as the Lessig worm hypo is a very interesting note written by a friend of mine, Michael Adler, Cyberspace, General Searches, and Digital Contraband: The Fourth Amendment and the Net-Wide Search, 105 Yale L.J. 1093 (1996).

  7. Bruce Boyden says:

    Orin, I agree with you that your (a) and (b) are pretty similar. The parallel I’ve been drawing is between a “tap” that goes nowhere, either to an inoperable speaker or perhaps to an empty room, and a computer scan that does *not* report the contents to any human. I thought you were about to say that the computer taking action was enough to make it an acquisition; that would be a distinction between the two cases, but I don’t think blocking would equal acquiring.

    I admit that my entire analysis misses the point if the only way to implement such a filter is to have the results reported to a human. The NYT Bits post doesn’t help us out too much here, since it’s pretty vague. But I had been assuming that the most feasible way to implement network-level filtering, given the speed and amount of traffic, would be to have some sort of automated process to detect and block certain files.

  8. student says:

    The Google exploit as I understand it would forward a copy of a message to some other location — that’s like making a recording of a telephone call.

    You didn’t answer the question I asked, though. I probably phrased it badly, and didn’t provide enough context.

    It’s usual for a provider to close email dropboxes when they’re discovered.

    Suppose that the exploit code directed mail to a Yahoo dropbox. The malicious code is discovered in the wild (say it was used for a domain hijacking). Yahoo is notified, closes the dropbox.

    But that doesn’t necessarily mean that the exploit isn’t still spreading (Google has by now patched this particular vuln). Nor does it necessarily mean that users’ Gmail accounts are clean (users have been urged to check their Gmail filters for this exploit).

    So, I probably shouldn’t have compressed all that into “if no one actually collects the diverted email?” I’ll try again: What if the dropbox is closed?

  9. CDeBoe says:

    I agree with Orin Kerr. The key is whether an action is taken on the intercepted material, not whether a human takes the action. When I send and receive email, I give consent only for transmission, not for the ISP to add, delete, or alter the transmission. If I send a racy email to my wife and my ISP accidentally sends it to 100,000 people, isn’t the ISP going to be accountable for that, even though the problem was a computer setting rather than a human’s deliberate action?

    Further, if I remember my copyright law class correctly (which I may not, it’s been 20 years), any information fixed in a tangible medium of expression is copyrighted. So the photo I download is copyrighted, whether it’s from an ad agency or my brother. And how about if I use that photo as the background image in a spreadsheet? What if I photoshop it? How is my ISP going to tell?

  10. clazy says:

    Who is the ISP to enforce copyright law? Do they have any standing to decide that some giant corporation owns a copyright rather than me? Don’t I at least have the right to contest their claim?

    As for the key issue being the meaning of acquisition, it seems to me that the key issue would be what is content, and to my mind, any information at all relating to the email would comprise content, including whether it carries a file that appears to be copyrighted.

  11. Paul Ohm says:

    There doesn’t seem to be a lot of disagreement here. Liability depends on what the ISP does with the packets that match the signatures. At one extreme end (do nothing) there is no liability. At the other extreme end (send a nasty letter to the user) there is clear liability and no immunity.

    But the devil is in the details, and that’s why your original post, which assumes away the practical complexities, could have given non-experts the misimpression that the ISPs were without risk here. The risk seems pretty significant, and if I were advising the ISPs, I would tell them to act very, very cautiously.

  12. Stephen says:

    It seems to me that whatever program is put in place to listen to or scan messages is acting as an agent of a human. It doesn’t seem legitimate to allow a program to scan communications and report whether criminal activity was discussed, even if it doesn’t report the content of the message.

    If a human was listening to phone calls and only reported that the participants were discussing a burglary, without recording or repeating the actual conversation, we’d still find that a breach.

    So, any program that does that should be considered a breach. Just because they are scanning for marketing information doesn’t make it okay. Embarrassing marketing info could be used coercively by an unethical firm.

  13. Gene Hoffman says:

    What is the substantive difference between:

    The network filter device flags this packet as copyrighted material and blocks its transmission.

    And

    The network filter device flags this packet as (Tiananmen Square/Supportive of the opposition party/A petition to redress grievances) and blocks its transmission?

    As such, it seems pretty clear that performing even an automated action without human intervention is to acquire some essence of the communication and to do something “actionable” outside the intent of the sender.

    -Gene

  14. Bruce Boyden says:

    Student, I’m not sure I understand your question. Is the question whether there’s wiretap liability for the person making use of an exploit, if there’s no dropbox? I.e., the forwarded messages all bounce or something. That to me seems like the wire attached to a phone line that doesn’t lead anywhere productive — so under my analysis, no, that wouldn’t be a wiretap (assuming for the moment the Wiretap Act applies and not Section 2701). Naturally, any messages successfully received in the dropbox prior to its closing WOULD be “intercepted,” and under the majority of court decisions, that would be true even if the hacker never read them. And in any event the hacker is likely liable under the CFAA no matter what the situation is with the dropbox, just for exploiting the flaw.

    Paul, not to get all worked up about it, but it sounds from your second paragraph like you think my initial post was too glib. I don’t see how. In any event, in case it wasn’t clear, I reiterate my warning in the post to “any telecommunications company to be wary before proceeding here,” particularly given, as I discussed, the confused state of the law on this point and the paucity of cases supporting the distinction I want to make. It’s also worth noting that, as we discussed last weekend, even where the law is clear courts screw up the ECPA all the time. Certainly any telecom people reading this exchange should note that I’m responding to Orin’s post, and they’d be idiots to ignore his conclusion on the matter.

    Second, you’re right that I did assume some practical difficulties away, but I’m not sure why that would give rise to any misimpression that proceeding here would be “without risk.” For one thing, I explicitly assumed that network-level filtering is feasible. If it’s not, then Orin’s post and my post and the original Bits blog post are all just idle speculation, and it’s trivially true that proceeding is without risk because no one will proceed. Plus, I’m not an expert on the technology, but it doesn’t strike me as intuitively obvious that network-level filtering involving humans would be any *more* feasible than automated filtering. In any event, as I said in the post, the situation I intended to analyze was the one where there is “automated filtering,” and “no contents from the communication are recorded or transmitted to humans.” If that’s not how network-level filtering would actually be constructed, then I agree my analysis does not apply, but I think that’s obvious. And if it *is* how it would be constructed, then I think ISPs *should be* “without [Wiretap Act] risk”, subject to all of the appropriate caveats about untested arguments, the vagaries of litigation, and statutes and risk factors not discussed in this post (e.g., public relations).

  15. mrsizer says:

    The interesting technical issue (I’m not a lawyer): ISPs _already_ do this – they must. They read the packets to various levels in order to route them – or throw them away.

    What’s the difference between:

    a) sending packets to the “bit bucket” based on IP data (e.g. try sending packets from a 192.168.0.0 network address to a valid destination – they will vanish)

    b) throwing them away based on protocol (e.g. “we don’t allow ftp”)

    c) throwing them away based on content type (e.g. “we don’t allow transmission of photos”)

    d) throwing them away based on content value (e.g. “we don’t allow porn – and we’ll analyze your pictures to find it”).

    It’s all the same thing. It’s just a matter of how much analysis you’re doing on the packets (although trying to analyze content value would probably require re-assembling them, and they might not all be going through your network).
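    The escalating levels (a)–(d) above can be sketched as a single filter function that looks progressively deeper into each packet. This is a purely illustrative Python sketch; the field names, policies, and signature check are hypothetical, not drawn from any real ISP system:

```python
# Illustrative sketch of the four filtering levels (a)-(d) above.
# All field names are hypothetical; a real router inspects binary headers.

PRIVATE_PREFIX = "192.168."          # (a) RFC 1918 source address: not routable
BLOCKED_PROTOCOLS = {"ftp"}          # (b) protocol-level policy
BLOCKED_CONTENT_TYPES = {"image"}    # (c) content-type policy

def content_matches_signature(payload: bytes) -> bool:
    # (d) deep content analysis; stands in for fingerprint matching.
    # A real system would have to reassemble the stream first, as noted above.
    return b"FORBIDDEN" in payload

def filter_packet(packet: dict) -> str:
    """Return 'drop' or 'forward', checking the shallowest fields first."""
    if packet["src_ip"].startswith(PRIVATE_PREFIX):          # level (a)
        return "drop"
    if packet["protocol"] in BLOCKED_PROTOCOLS:              # level (b)
        return "drop"
    if packet.get("content_type") in BLOCKED_CONTENT_TYPES:  # level (c)
        return "drop"
    if content_matches_signature(packet["payload"]):         # level (d)
        return "drop"
    return "forward"
```

    Each rule differs only in how far into the packet it looks, which is the point: the mechanism is identical at every level, and only the depth of analysis changes.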

  16. mrsizer says:

    P.S. I did get the distinction between simply throwing stuff away and “intercepting”.

  17. Ted McClure says:

    Having spent some time in the intelligence business before and after law school, I’m puzzled why there is any confusion over the word “acquire” in this context. We used it to mean “obtain [a flow of information] so as to be able to monitor it.” Whether action was ever taken or whether any human ever sensed it was not relevant. If we were intercepting voice radio transmissions, we “acquired” the signal as soon as we could detect it clearly enough to translate it. We used “acquire” similarly for radar and telemetry intercepts, electronic countermeasures, imagery, and by extension visual observation.

    In the wiretap context, this means that when the signal is diverted, when the recording is made, and when the recording is listened to are not relevant. The question is, when is the information in the signal able to be meaningfully monitored? The answer for internet monitoring is as soon as the IP packets can be read.

    I suspect the difficulty with this expression arose from the different experiences of those who drafted the statute (who probably had some familiarity with the law enforcement and intelligence communities) and those who have been called upon to apply it in the real world.

  18. student says:

    First, I should note for the record that there is no evidence — none — that the cross-site request forgery (XSRF) written up in The Register‘s September article is the actual exploit used to inject the malicious filter used in the domain hijacking written up in the December article. Instead, that appears to be pure speculation by the victim. The Register’s John Leyden agreed with that guess, and reported that that particular injection vector had been closed by Google. But, actually, all we really know is that that particular XSRF vulnerability was one feasible way for a third party to install a Gmail filter. And we know that the domain-hijack victim discovered a Gmail filter intercepting his email.

    In short, it’s just a guess that Google has patched the XSRF vulnerability exploited in the wild. I repeat that users have been urged to check their Gmail filters.

    Is the question whether there’s wiretap liability for the person making use of an exploit, if there’s no dropbox? I.e., the forwarded messages all bounce or something. That to me seems like the wire attached to a phone line that doesn’t lead anywhere productive — so under my analysis, no, that wouldn’t be a wiretap [...].

    That answered my question—at least kinda, sorta.

    Let me step back. What I was hoping was that you would apply your understanding of the wiretap act to one class of hypothetical Gmail incidents “where there is ‘automated filtering,’ and ‘no contents from the communication are recorded or transmitted to humans.’”

    To continue along that line:

    Internet email does not guarantee instantaneous delivery. In fact, it doesn’t guarantee delivery at all. (E)SMTP is simply a best effort service.

    RFC 2821 documents 4yz “Transient Negative Completion repl[ies]”, colloquially known as “Try Again” responses.

    Take the Gmail filter exploit, and suppose again that the Yahoo dropbox hasn’t been discovered, but instead that there is a temporary error preventing delivery. (Perhaps the Yahoo email quota has been exceeded.)

    Would you say the email interception violates the wiretap act during the time the email isn’t being delivered to the dropbox because of a temporary error condition?
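    As background for the 4yz hypothetical: RFC 2821 groups SMTP reply codes by their first digit, which a minimal sketch can classify as follows (the retry note reflects the RFC’s best-effort queuing model, not any particular provider’s behavior):

```python
def classify_smtp_reply(code: int) -> str:
    """Classify an SMTP reply code by its first digit (RFC 2821, sec. 4.2.1)."""
    first = code // 100
    if first == 2:
        return "success"
    if first == 3:
        return "intermediate"       # more input expected (e.g. 354 after DATA)
    if first == 4:
        return "transient failure"  # 4yz "try again" (e.g. 452 mailbox full)
    if first == 5:
        return "permanent failure"  # 5yz: do not retry
    return "unknown"
```

    On a 4yz reply the sending server queues the message and retries later, so delivery to the dropbox is deferred rather than refused — which is what makes the hypothetical interesting.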

  19. fishbane says:

    Just to say this upfront, I am not a lawyer, but rather a techie with a serious interest in the law.

    I realize that the law around this area is opaque and complicated, and to some extent based on analogizing new forms of communication to older forms.

    Just to take a different tack, how does, for instance, my router refusing to forward packets based on a signature not implicate me in the same way that I would be implicated in, say, setting a trap that harms someone?

    Setting aside contracts for now, if I boobytrap a door that then harms someone, I am liable for that harm, because my intention was to harm someone who did something (open the door) that I didn’t want them to do.

    If a person has a legitimate interest in their communications arriving at the destination, it seems to me that the intention of a person/carrier that interferes by installing a mechanism that selectively disrupts that communication is what is important, not that a human wasn’t directly involved in choosing whether or not to pass that packet. They preemptively made the decision, with deterministic results.

    Obviously, I’m not trying to compare the seriousness of dropping BitTorrent downloads with wiring a shotgun to a door handle, but the human agency involved in both does seem comparable to me.

    I forget where, but I saw a similar argument that a motion sensor on a video camera did not constitute surveillance, because a human was only involved once motion was detected. Since motion was considered suspect, at that point surveillance was justified. This strikes me as incredibly facile reasoning — obviously, the intent is to surveil, and a legal fiction that “only” a machine is watching until something suspicious happens simply begs expansion.

  20. A.J. Sutter says:

    It’s amazing to me that everyone is so tightly focused on the technical legal issues without questioning, even in passing, the social values implied by broad surveillance for copyright-violative material. (Clazy’s comment at 2008/01/11/13:56 comes close, but ultimately is focused more on the question of burden of proof.) Namely, that it’s OK for the interests of copyright owners to be deemed superior to the privacy interests of millions of individuals. Seems to me that if current law does permit such indiscriminate scanning, that should be fixed. And if it’s such a close call, then the protections for individuals should be strengthened.
