Computer Crime Law Goes to the Casino


23 Responses

  1. Orin Kerr says:

    James, glad you’re focusing on these issues. They’re great fun to play with, I think.

    You’re right that this is messy and fact-laden in the marginal cases. But isn’t that equally true of the analogous trespass concepts in the physical world? For example, is it a physical trespass to enter someone else’s home? What if the door is wide open? What if the door is wide open and the home is having an open house because it’s for sale? What if it’s Halloween night? What if a homeowner invites you in thinking you’re a door-to-door salesman, but actually you plan to rob the home?

    These are the kinds of questions that courts grapple with in trying to distinguish permitted entry from entry without permission in the context of physical trespass laws. And they’re pretty fine distinctions. But we don’t generally say that the difficulty of these issues renders trespass statutes inherently problematic. We don’t throw up our hands and say that no one knows when it’s okay to go inside someone else’s house. Instead, we just recognize that we’re dealing with the marginal cases, where the lines may be blurry, and that lines have to be drawn between entries that are okay and entries that aren’t. So we look for sensible ways to draw those lines.

    Given how much more complex the ways people use computers are than the ways they enter homes — and how much newer the problem is — it shouldn’t surprise us that there are equally (or even more) hard and fact-specific questions that come up when trying to draw lines between entries to computer systems that are okay and those that aren’t.

    As for Kennison v. Daire, it’s worth noting that no one claimed that using the ATM was unauthorized access. Rather, the issue was whether the bank had authorized the issuing of the money to the defendant. The charge was theft after the computer had been accessed — that is, the taking of property belonging to another — not unauthorized access to the computer.

  2. Orin Kerr says:

    Oh, and I blogged my thoughts on US v. Kane back in 2012, when the magistrate judge’s opinion came out: http://www.volokh.com/2012/10/15/magistrate-judge-concludes-that-fraud-scheme-using-video-poker-machine-falls-outside-the-computer-fraud-and-abuse-act/

  3. Thanks, Orin. I’m in broad agreement with you. We do want “sensible ways to draw these lines.” And I think your diagnosis of the CFAA’s core problem — that “without authorization” has come to bear more and more weight as the other elements have been broadened out of existence — is absolutely right. What I want to suggest, though, is that your code-based reading of “authorization” is (1) only a partial solution, and (2) a way of importing some social judgments about what kinds of conduct should be punished.

    Given that, Congress could do three things that would significantly help the courts draw fair and sensible lines. First, it could do as you suggest and pay more attention to harm elements of the CFAA, particularly in grading offenses. Second, it could do as David Thaw suggests and tighten up the mens rea elements, which would help mitigate many of the notice concerns. And third, it could say more about which forms of “access” are problematic. That may be my biggest critique of Cybercrime’s Scope: your broad reading of “access” means that “authorization” has to do more work, but some of the work you ask it to do might be clearer and easier to implement under the heading of “access.”

    As for Kennison, yes, it wasn’t strictly an unauthorized-access case. But for precisely the reasons you suggest when talking about consent to trespass, authorization was a critical question in the case. And that authorization was conveyed, if at all, through the code of the ATM. So it might be more accurate to say that the use of computers changes (and complicates) how authorization can be provided to a range of conduct, not just that it’s access to computers themselves that matters. Indeed, similar questions come up for all kinds of other computer-mediated access: access to copyrighted works under the DMCA, access to stored communications under the SCA. And without too much stretching, they also come up in trespass to chattels, browsewrap contract formation, implied copyright licenses under robots.txt, etc. Part of what I want to do when I get to the article is complicate the story of computer-mediated consent — by server owners and by users — in all of these contexts. It’s one of the three or four Big Issues that define the field of Internet law.

  4. Orin Kerr says:

    James,

    I entirely agree it’s only a partial solution, and that it imports some social judgments. In particular, I think it’s important to have narrower liability for felonies under the Act. See here: http://www.volokh.com/2013/01/20/proposed-amendments-to-18-u-s-c-1030/

    I tend to disagree with David Thaw’s focus on mens rea, as most of the CFAA already uses the highest and narrowest mens rea standard, that of intent. You can’t really tighten up the standard from intent, at least unless you want to use a willful mens rea (which would be a bad idea here for many reasons). In my view, you need to change the underlying element of what the person is intentionally doing.

    As for your idea that it would be clearer and easier to do some of the work under the access prong rather than the authorization prong, can you fill in why you think it is easier and clearer that way? As you know from my article, I couldn’t come up with a good way to limit access, and I instead ended up concluding that it was really all best understood as an authorization problem. But maybe that’s wrong; would love to hear your take at some point (whether here or in a future article).

  5. David’s point that “intent” should require that “the actor’s intent be specifically that their actions would violate the given restriction” strikes me as well-taken because it helps shift the focus to what the computer owner actually communicated to users about permission, rather than what the computer owner meant to permit.

    As for “access,” your contrast between password-protected webpages and obfuscated URLs strikes me as a question that might be clearer under access rather than authorization. One difference is that the former is two-step — the user goes to two webpages — and the latter is single-step — the user goes to one webpage. That’s potentially a plausible line at which to draw an “access” threshold.

    Another way of putting it is that there is an essential nexus between an “access” and the lack of “authorization” for that particular access. If we were more willing to say that an unauthorized access required crossing an access threshold from an authorized side to an unauthorized side, that would help narrow the ambiguities significantly. It would immediately eliminate cases about the impermissible use of information after an initially authorized access, for example.

  6. Great post. That must have been some Twitter conversation, 140 characters at a time.

    I share the concerns and thoughts — as I noted recently with respect to scraping, for example. I wonder whether we can get any benefit from the DMCA anti-circumvention provision (which has its own issues). There, a common defense is that the measures were not effective protection, but that defense is usually rejected because the measures are effective in the ordinary course of usage.

    So, the question under that standard would be whether the ordinary course of usage would allow the access/use. I think that probably helps the gamblers, but not the ATM withdrawal. It also might allow for different parsing of things like URL guessing.

  7. Orin Kerr says:

    James writes:

    ************
    David’s point that “intent” should require that “the actor’s intent be specifically that their actions would violate the given restriction” strikes me as well-taken because it helps shift the focus to what the computer owner actually communicated to users about permission, rather than what the computer owner meant to permit.
    ************

    I disagree. The problem is that this already *is* the intent standard. If you believe that violating TOS is a crime, then the intent standard requires that the person knows that they are violating the TOS and acts intentionally to do so. Actual notice is already required. But who cares? Actual notice that you’re violating a TOS has nothing to do with any actual harms.

    James next writes:

    *************
    As for “access,” your contrast between password-protected webpages and obfuscated URLs strikes me as a question that might be clearer under access rather than authorization. One difference is that the former is two-step — the user goes to two webpages — and the latter is single-step — the user goes to one webpage. That’s potentially a plausible line at which to draw an “access” threshold.

    Another way of putting it is that there is an essential nexus between an “access” and the lack of “authorization” for that particular access. If we were more willing to say that an unauthorized access required crossing an access threshold from an authorized side to an unauthorized side, that would help narrow the ambiguities significantly. It would immediately eliminate cases about the impermissible use of information after an initially authorized access, for example.
    ********

    I disagree. First, it doesn’t clarify things just to switch doctrinal boxes: it just takes a conclusion and rearranges it. Position A is this: “A person is authorized to visit a public webpage, but he is not authorized to then enter in a password belonging to someone else.” Position B is this: “A person doesn’t access a computer when he visits a public webpage, but he does access it without authorization if he then enters in a password for someone else’s account.” What’s the difference? You have to deal with the line drawing in one box or another. And as I argue in the NYU article, making the access prong do the work then sets up hard puzzles outside the Web context. For example, does sending a virus “access” the computer? You need to come up with definitions of access for each kind of Internet application, which seems pretty complicated.

  8. You’re starting to convince me on the intent point. I would say I want to go back and look at how some of the cases parse out “intent,” but in view of whom I’m having this conversation with, I’m inclined to take your word for it.

    I agree that from a functional perspective, the pile of dirt has to end up under one rug or the other. But it’s the same pile of dirt. Any complications that ensue from needing different definitions of access for different applications will also ensue from trying to determine the meaning of “authorization” for different applications. That’s the hidden issue with the Morris “intended function” test: the process of determining what the finger program’s “intended” function is (an “authorization” question) is isomorphic to the process of determining how the program ordinarily works (an “access” question). I think it’s clearer to call it the latter, because in this subset of cases the focus tends to be on how the program works when users access it in what the DMCA calls “the ordinary course of its operation.”

  9. Orin Kerr says:

    I should add that there is very little caselaw on how the intent standard applies in this setting. United States v. Carlson, 209 Fed. Appx. 181 (3d Cir. 2006), is probably the leading case on what intent means in the CFAA, but it’s dealing with intent in 1030(a)(5)(A), not in the context of intentional unauthorized access. But the issue is rarely litigated because what intent means is entirely dependent on what authorization means. If the authorization line is TOS, then the intent standard requires notice of the TOS. If the authorization line is breaching code-based restrictions, then the intent standard is notice of breaching code-based restrictions. If the authorization line is doing whatever the computer owner doesn’t like, then the intent standard is notice of doing whatever the computer owner doesn’t like. The meaning of intent is all about what authorization means; the latter essentially governs the former.

    Re your point about Morris, I look at it differently. Gaining access contrary to the way the program ordinarily works is not an access issue; it’s a classic authorization issue. Consider a physical analogy. Imagine someone enters a home by jumping down the chimney, Santa Claus style. They land at the bottom, dust themselves off, and are arrested for trespass. “Trespass?!”, they respond, “But I entered through the open chimney! I was invited to enter!” That would seem ridiculous to us because entering a home through a chimney is contrary to the intended function of a chimney. An open chimney is a way for smoke to exit a house, not a way for people to enter. But this is largely a question of social understandings and ordinary usage: If a Martian landed on earth and heard this dispute, he might think that it is a very fine distinction indeed to say it is authorized to enter through an open door but not an open chimney. But we find the line intuitive because we intuitively understand that authorization to enter a home is partially about social expectations as to what ways of entering a home are intended ones. I see that as a question of authorization, not access. Entering a home by jumping down the chimney and ending up in the living room is very much still an access into the home; it’s just an access that is unauthorized.

  10. Thanks, Orin, the chimney analogy is extremely helpful. I think it clears up how close our points of view are, and where we still disagree.

    It appears we’re in complete agreement on the idea that there’s a meaningful difference between entering via the chimney and entering via the door. We agree that the difference is small from an external Martian perspective. We agree that this puzzle can be resolved by taking an internal perspective that understands that there are different social expectations about doors and chimneys. My point in the post is that this difference can be understood in terms of what chimneys and doors communicate to visitors.

    Here’s what I see as the sticky part for your theory of “authorization.” When you say that only “the circumvention of code-based restrictions” should count as unauthorized access, you introduce a second element to the test. First, we have to decide whether the access was authorized or unauthorized. Second, we need to decide whether the access was unauthorized because it involved the circumvention of code-based restrictions. It’s this second element that strikes me as an access test in disguise. Someone whose theory of unauthorized access encompasses violating terms of service or other contract- or word-based restrictions doesn’t have to draw such a line.

    So, in your example, not every civil trespass is a crime. The offline architecture-based equivalent to your code-based test would be a law that criminalizes only breaking and entering, not entering without permission. We can say that someone who dives down the chimney lacks authorization, but what makes him a dangerous criminal is not the lack of authorization per se but the entry via the security hole in the roof of the house. Yes, a court can infer lack of authorization from the means of entry, but if that’s the only permissible source from which the court is allowed to infer lack of authorization, it’s also an access test.

  11. Orin Kerr says:

    James, I think you’ve lost me with your second paragraph. What is the “second element” of the test? To clarify, in my view, when you hook up a computer to a network and use a publicly-accessible platform to let others communicate with your machine, access to that open area of the computer is presumptively authorized. You necessarily authorize the access to that data by setting up the machine so that others can use it. On the other hand, access becomes unauthorized when you erect a code-based restriction designed to prevent a user from gaining initial or additional access, and the user manages to circumvent it. Bypassing the code-based restriction renders that access to that data unauthorized. So yes, it’s akin to a breaking and entering idea in the physical world, but it’s still fundamentally a question of authorization because use of open architecture (like a public URL) necessarily makes access authorized.

  12. Bruce Boyden says:

    “To clarify, in my view, when you hook up a computer to a network and use a publicly-accessible platform to let others communicate with your machine, access to that open area of the computer is presumptively authorized.” That’s sort of the whole question, isn’t it? That is, if a page owner can fairly be characterized as “letting” members of the public communicate with the machine, and the page itself is fairly characterized as “open,” then access is authorized pretty much by definition. On the other hand, if the page is “hidden,” and access “restricted” to those in possession of the magic word (the “nonpublic” URL, let’s say), then that sounds more like unauthorized access. But the “public/nonpublic,” or “restricted/open,” distinction just seems to reduce to a difference of opinion over whether unauthorized access can constitute something other than circumvention of a scheme of limited-distribution passwords entered in text boxes and backed by a properly implemented refusal to respond to other requests.

  13. James and Orin: Your list of hypotheticals is incomplete.

    The bugs you describe aren’t really the bugs that hackers exploit. The principle you describe is that computers do what the programmers say, but not necessarily what they want. That’s the source of a lot of ambiguity in the law.

    But the bugs that hackers exploit work on a different principle. They don’t simply access data; they cause code written by the hacker to run on their victim. Specifically, I’m talking about “buffer overflows” and “SQL injection”. You know how you are constantly patching Windows, Adobe, or Java? Those are usually patching buffer-overflow bugs. You know the major website breaches that hit the news? Those are usually SQL injection bugs.

    A typical buffer-overflow bug, when used in a URL, looks something like this:
    http://example.com?id=NNNNNNNNNNNNNNNNNNNN%u9090%u6858%ucbd3%u7801%u9090%u6858%ucbd3%u7801%u9090%u6858%ucbd3%u7801%u9090%u9090%u8190%u00c3%u0003%u8b00%u531b%u53ff%u0078%u0000%u00

    That jumble of data in the URL is x86 machine code. The original programmers can see where the buffer overflow exists, but they can’t predict what happens next, because it depends upon the code the hacker has injected.
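    To make that concrete, here is a minimal sketch of the bug class in C. The function, names, and sizes are invented for illustration; they are not from any real program. The point is that the programmer’s source code says only “copy the id parameter and print it” — nothing in it says “execute bytes supplied by the visitor”:

        #include <stdio.h>
        #include <string.h>

        /* Hypothetical vulnerable handler, for illustration only. A real
           exploit also depends on details (stack layout, absence of
           mitigations) that appear nowhere in this source code. */
        void handle_id(const char *id_param)
        {
            char buf[16];          /* fixed-size buffer */
            strcpy(buf, id_param); /* no length check: input longer than 16
                                      bytes overwrites adjacent stack memory,
                                      including the saved return address */
            printf("id = %s\n", buf);
        }

        int main(void)
        {
            handle_id("12345");    /* benign input: works as intended */
            /* A long attacker-supplied string ending in machine code -- like
               the tail of the URL above -- would overwrite the return address
               and, on an unprotected machine, start running the attacker's
               bytes instead of the programmer's. */
            return 0;
        }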

    This gives us a clear line between authorized and unauthorized access. A public website that authorizes everyone to “access data” clearly still does not authorize anyone to “run code” on the server. When a hacker creates a URL like the one above, they are intentionally/purposefully doing something they know is not authorized.

    What you are discussing confuses us coders/hackers/nerds. We have a clear idea of the line between authorized and unauthorized. I feel like we are the Martians in your discussion above. We come to Earth, and see that you guys have come up with a completely different set of rules about what is authorized and unauthorized. Your decisions seem arbitrary to us.

  14. lucia says:

    Robert David Graham

    This gives us a clear line between authorized and unauthorized access.

    This gives us a clear line in a specific instance where a clear line is obvious. But the example doesn’t clarify the line for other situations.

    Consider this semi-hypothetical (semi because it springs from an action someone is actually doing; see Samuel Clay’s comment — “… but Craigslist is rate limiting NewsBlur. The reason insta-fetch works is because I obscure some things (and force a cache buster). I may just hard-code a cache buster for Craigslist since that’s the only thing that’ll fix it.” — at https://getsatisfaction.com/newsblur/topics/craigslist_failing_to_update_automatically_insta_fetch_is_fine )

    I’m not entirely sure what Clay is doing and for all I know the coding is getting around a bug or something. But it does motivate my hypothetical:

    1) Person A runs a public-facing utility (e.g. Craigslist) that ordinarily people can read (e.g. the feed and the website). Loading that feed to read is “authorized”; loading the website is authorized. In fact, Craigslist wants people to do both.

    2) A writes TOS which include an “UNAUTHORIZED ACCESS AND ACTIVITIES” section describing what is unauthorized (e.g. http://www.craigslist.org/about/terms.of.use ). To the uninitiated non-legal scholar, non-computer nerd, it appears these TOS describe a huge range of things as unauthorized. For example:

    Any copying, aggregation, display, distribution, performance or derivative use of craigslist or any content posted on craigslist whether done directly or through intermediaries (including but not limited to by means of spiders, robots, crawlers, scrapers, framing, iframes or RSS feeds) is prohibited. As a limited exception, general purpose Internet search engines and noncommercial public archives will be entitled […]

    […] Circumvention of any technological restriction or security measure on craigslist or any provision of the TOU that restricts content, conduct, accounts or access is expressly prohibited. […]

    3) A second party, “B”, signs up for a service from “C”. The service from C is to provide B with content from A in some format that B evidently prefers.

    4) A third party, say “C”, operates a system that visits A’s site, copies the content from a page, and saves it to C’s server. That page might be a “feed” or it might be “the regular old site”. The copied material is then displayed to B, who ‘subscribed’ and who may or may not have paid a fee to view the material in the format C presents. The material might in fact be displayed to anyone and everyone who loads the proper page at C’s site.

    5) Now suppose it turns out that B, who is C’s customer, complains that content from A is not appearing expeditiously. C explains that it appears A may be “rate-limiting” B’s visits, but B has attempted to implement methods to get around this and intends to implement more.
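    (For concreteness, since everything below turns on the rate limiter: a rate limiter is just bookkeeping on A’s side of the connection. Here is a minimal sketch in C of the simplest fixed-window kind — the limit and window are numbers I invented for illustration, not anything Craigslist actually uses. Evading it means making requests that this bookkeeping fails to count or fails to refuse.)

        #include <stdio.h>
        #include <time.h>

        #define MAX_REQUESTS 10   /* invented limit */
        #define WINDOW_SECS  60   /* invented window */

        struct client {
            time_t window_start;
            int count;
        };

        /* Returns 1 if the request should be served, 0 if refused. */
        int allow_request(struct client *c, time_t now)
        {
            if (now - c->window_start >= WINDOW_SECS) {
                c->window_start = now; /* new window: reset the counter */
                c->count = 0;
            }
            if (c->count >= MAX_REQUESTS)
                return 0;              /* over the limit: refuse */
            c->count++;
            return 1;
        }

        int main(void)
        {
            struct client c = { time(NULL), 0 };
            for (int i = 1; i <= 12; i++)     /* 12 rapid requests: the
                                                 last two get refused */
                printf("request %2d: %s\n", i,
                       allow_request(&c, time(NULL)) ? "served" : "refused");
            return 0;
        }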

    Since this is about authorized access, I won’t ask questions like “Is C’s copying to his server and display from his server a copyright violation?” or “Is B’s request a copyright violation?” Instead, I want to know:

    Given these facts, in your view:

    * If the rate limiter did not kick in to limit visits, would you consider B authorized to visit for the purpose of copying and displaying to his subscriber C?

    * Once any ordinary user with modest coding skills (say D) encounters the rate limiter, would they be authorized to make bumbling efforts to visit more frequently than the rate limiter ordinarily permits?

    * Does the fact that a user like B has more advanced coding skills and can efficiently evade the coding restriction in a non-bumbling way change the answer to the previous question? That is: are users who can overcome rate limitations authorized to overcome something like a rate limiter?

    * Without regard to the rate limiter, if A’s TOS prohibit people from using a service like B to view their content, is C authorized to request that B access A’s content for the purposes of displaying it to C? Is the answer to this affected by whether or not B read the TOS? Or by whether A detected behavior they didn’t like, discovered B’s identity, and specifically informed them of the TOS?

    Note that none of these involve uploading a script to A’s machine, turning A’s machine into a zombie drone, or injecting code into A’s machine. But I’d say, as a non-attorney and a non-expert coder, that it’s pretty clear that if a written TOS says a particular behavior is not authorized, that behavior is not authorized. If a coding restriction has been put in place to inhibit a behavior, that behavior is not authorized. If the restriction has become apparent to B and, moreover, they seem to recognize that it is intended to be a restriction and not just a bug, then B knows the sort of access associated with that behavior is not authorized by A. Moreover, in my opinion, some sort of legal penalties for B’s behavior are warranted.

    As for C who merely subscribed to B’s service: I’m a bit perplexed. I would say that C’s behavior is not authorized by A. But given the way a subscription might be set up and described by B, C may easily be unaware of A’s TOS, not understand them, have no idea what actions B is taking and so on. So, I don’t think C ought to find himself brought up on charges.

    But it seems to me that on VC I read some people, who represented themselves as computer nerds who know all about this, insist that B’s access remains authorized if it is possible for a skilled programmer to code around a measure that would otherwise block their access, either at a particular instant in time or fully. (I don’t remember the name of the commenter who took this position, but s/he was quite insistent.)

    So, getting back to your point: as I see it, while a bright line may exist for cases involving uploading scripts that take over a server, it doesn’t exist in all cases. Do you think your bright line helps above? If so, can you describe why it’s helpful?

  15. Orin Kerr says:

    Robert, a buffer overflow attack is a classic circumvention of a code-based restriction; it’s one of the standard examples of what we’re talking about. It works by exploiting a flaw that allows the actor to insert code where it is not supposed to go, allowing the actor to execute code he is not supposed to execute and thereby gain access to information that he is not supposed to have. In the language of the Morris decision, it gains access by using a program in a way contrary to its intended function.

  16. Orin, yeah, but I’m trying to refute the claim in the first paragraph that “all CFAA cases are hard”. Cases involving buffer overflows are easy.

    I was also trying to refute the idea that there is only one kind of computer code. Programmers write in high-level languages like “C”, but the machine executes machine code. What’s interesting about a buffer overflow is that the programmer does not know, and cannot know, how the machine interprets the programmer’s code so as to allow execution of the hacker’s code (and likewise cannot predict the hacker’s code). You cannot interpret a buffer overflow as the programmer granting access, because the programmer doesn’t have enough knowledge about the machine code that results from the high-level code.

    I was also trying to refute his interpretation of your post that guessing URLs is always legal. I think virtually all guessing of URLs should be legal (because it’s too confusing to figure out what “authorization” means), except for very narrow cases of buffer-overflows, SQL injection, and password guessing — cases where the hacker intentionally (in the mens rea sense) gains unauthorized access.

  17. Orin Kerr says:

    Ah, got it, Robert. Thanks for the explanation, and I’m glad we agree.

  18. Robert, I agree that buffer overflows are at the easiest end of the spectrum of CFAA cases. My point in saying that all CFAA cases are hard is that they all depend on a series of assumptions about how programs work and what programmers intended to allow. It is possible to imagine a buffer overrun attack that the programmer intended to allow — as part of a security course, for example — but it is just overwhelmingly unlikely in most cases, for reasons that depend on the facts. It’s precisely facts like the ones you list that help us reach this conclusion; my post was designed to help bring them out into the open.

  19. Good questions, Lucia. I’m a bit confused by the relationship between B and C in your example, since C seems to be loading the content from A, but B is the one taking the steps to get around the rate-limiter. But these cases are in a much harder part of the spectrum, because there is an attempt to reduce access by code but not a fully comprehensive one. These are hard cases on anyone’s theory of the CFAA, because the line has to be drawn somewhere. I’m not familiar with this very insistent VC commenter, but the “Could a skilled programmer defeat this system?” test isn’t and couldn’t be the law, because then even buffer overruns might be considered authorized.

  20. jon stanley says:

    With some irony…we might have come full circle to the very first EF Cultural opinion….and the concept of the “reasonable expectations of the website owner”

  21. David Thaw says:

    I am jumping into the thread a bit late, so apologies for re-opening an earlier point. I would like to turn back to James’ and Orin’s exchange re: intent.

    Orin writes (responsive to James):

    ————————————-

    James writes:

    ************

    David’s point that “intent” should require that “the actor’s intent be specifically that their actions would violate the given restriction” strikes me as well-taken because it helps shift the focus to what the computer owner actually communicated to users about permission, rather than what the computer owner meant to permit.

    ************

    I disagree. The problem is that this already *is* the intent standard. If you believe that violating TOS is a crime, then the intent standard requires that the person knows that they are violating the TOS and acts intentionally to do so. Actual notice is already required. But who cares? Actual notice that you’re violating a TOS has nothing to do with any actual harms.

    ————————————-

    With respect to Orin’s position, I and (I think) the Ninth Circuit disagree. In Nosal (en banc), Chief Judge Kozinski addresses the 1030(a)(2)(C) distinction, describing it as “the broadest provision [of the CFAA], which makes it a crime to exceed authorized access of a computer connected to the Internet without any culpable intent. Were we to adopt the government’s proposed interpretation, millions of unsuspecting individuals would find that they are engaging in criminal conduct.”

    I do not find the Ninth Circuit’s language interpreting the current (a)(2)(C) intent as equivalent to the standard I propose (“specifically that [they expect and desire] their actions would violate the given restriction” — restating James’ formulation of my proposal). Quite the contrary — I read this language to be the court asserting that (a)(2)(C) has a nearly tautological intent element — if you engaged in the action, you therefore intended any/all possible results therefrom.

    This is, in my mind, a rather absurd result for an intent standard. It obliterates many of the (important!) intent distinctions drawn in the criminal law. For example, in crimes-against-persons, the difference between: 1) swinging my arm with the intent of slamming shut my (heavy) car door and *accidentally* striking someone’s face; and 2) swinging my arm with the intent of striking someone in the face (to cause them physical injury).

    Kozinski’s opinion then goes on to note:

    Minds have wandered since the beginning of time and the computer gives employees new ways to procrastinate, by gchatting with friends, playing games, shopping or watching sports highlights. Such activities are routinely prohibited by many computer-use policies, although employees are seldom disciplined for occasional use of work computers for personal purposes. Nevertheless, under the broad interpretation of the CFAA, such minor dalliances would become federal crimes. While it’s unlikely that you’ll be prosecuted for watching Reason TV on your work computer, you could be. Employers wanting to rid themselves of troublesome employees without following proper procedures could threaten to report them to the FBI unless they quit. [6] Ubiquitous, seldom-prosecuted crimes invite arbitrary and discriminatory enforcement.

    [6] Footnote six is particularly important because it shows that this employer-response threat is not hypothetical: see Lee v. PMSI, Inc., No. 8:10–cv–2904–T–23TBM, 2011 WL 1742028 (M.D. Fla. May 6, 2011). The fact that this case was dismissed does not, in my mind, at all lessen the probability that aggressive employers will (and do) engage in such threats.

    Actual notice is, to me, at the *core* of intent. In the Drew opinion, and if I recall correctly, in Orin’s Minn. L. Rev. piece on Vagueness Challenges to the CFAA, the concept of “fair notice” is essential to surviving a void-for-vagueness challenge. The Federal Trade Commission (and others) have repeatedly criticized lengthy Terms of Use, Privacy, and other Policies that are beyond the practical readability of the average user. At the same time, there is good empirical work (which I cite in my J. of Crim. L. & Criminology piece, and am happy to post links to here) on layered notices and other methods of providing effective notice to users upon which a theory of criminal liability might be based. Civil liability may still result from the underlying “deep” terms of the full contract — I do not (in this work) take a position on that point — but criminal liability, in my mind, requires a higher degree of notice.

    Finally, I note that the intent-based approach to CFAA reform still leaves a substantial amount of “wiggle room” for unusual results, which is why the second element of my proposal requires that the act in question *also* be either:

    1) in furtherance of something on a list of activities Congress specifically has identified as impermissible; or

    2) in furtherance of another act otherwise criminalized by state or federal law (essentially “glomming on” to the state statutes, as my colleague Rebecca Bolin suggested).

    The full draft of the proposal/paper is linked on my website (www.davidthaw.com) for folks who are interested.

  22. Orin Kerr says:

    David,

    You are misreading Judge Kozinski’s Nosal opinion.

    When Kozinski writes that 1030(a)(2) “makes it a crime to exceed authorized access of a computer connected to the Internet without any culpable intent,” I’m pretty sure he means just that there is no requirement beyond the intentional unauthorized access. He’s comparing (a)(2) to (a)(4), which has the added elements that the unauthorized access must have an intent to defraud and must “further[] the intended fraud.” The government argued in its briefs in Nosal that the Court didn’t need to get into the overbreadth of (a)(2) because Nosal involved (a)(4), which required intent to defraud. In that passage, Kozinski was just noting that the same language applies to another part of the statute that does not require intent to defraud. It’s true that Kozinski uses the phrase “intent,” but the comparison to (a)(4) suggests that he just means that there isn’t an intent-to-defraud requirement. Thus, critically for our discussion, he’s not making a comment on what intent means when it is the mens rea associated with the unauthorized access prong.

    Oh, and it might be of interest to readers (if there still are any) that the government moved to dismiss the CFAA counts in the Kane video poker case that is discussed in the main post. The court then dismissed the counts on the government’s motion.

  23. lucia says:

    since C seems to be loading the content from A, but B is the one taking the steps to get around the rate-limiter.

    Here’s how it works
    • C joins B’s service and clicks “subscribe to A”.
    • B then goes and collects A’s content, makes copies and stores those copies on B’s servers. B makes these visible to the public.
    • C then visits B’s server, where he loads B’s copies of A’s content. So, C is reading content original to A (possibly copyrighted content), but that content was fetched by B and stored by B.

    If you think of B as being something like “feedreader”, then C could be a person who subscribed to the feed. A is the content originator (e.g. Craigslist, NY Times, etc.). But bear in mind: more than the “feed” is being collected, and the TOS at least seem to say this shouldn’t be done. (Let’s assume for the hypothetical that the TOS really say this shouldn’t be done.)

    There can be many “C’s” in this system, and they don’t necessarily know the details of what B does.