The Sorcerer’s Apprentice, Or: Why Weak AI Is Interesting Enough

5 Responses

  1. Woody says:

    Great post, Ryan. It caused me to think about bots and the capability of AI to make contractual decisions. One of the reasons most people don’t read terms of use agreements is that they are selectively enforced. The likelihood that a user will suffer any kind of consequence for breach of the agreement is usually quite low.

    However, what if someone developed and deployed AI that could comb a website (or other websites) in search of a terms of use violation? For example, patrolling for unauthorized pseudonyms or copyrighted content? Couldn’t software be used to detect language commonly used in flame wars or bullying and automatically suspend or terminate user accounts? (Aren’t these types of searches already part of some websites’ regular administration?)

    Do you think the mass deployment of AI as a contractual agent to enforce terms of use is realistic? Or would it bring too much consumer attention to the terms that are actually under the hood? We’ve already seen an emphatic reaction to the G+ account terminations.

  2. Ryan Calo says:

    Thanks, Woody. It sounds like you have the makings of a new article! Have you met Harry Surden at Colorado? He is a former Stanford CodeX fellow who has been thinking about automating law and compliance.

  3. Miriam A. Cherry says:

    Enjoyed your post as well, Ryan. I have not yet read Robopocalypse, but may do so based on your rec. I tend to be somewhat of a tech optimist though (you’ve already heard how much I like my GPS). I hope you’ll do a future post on the effects of human-computer interaction. Best, Miriam

  4. Ryan Calo says:

    Thanks, Miriam! I think you’ll find the book entertaining at a minimum.

    Concurring Opinions has invited me back for a second month so I will be sure to write something on human-computer interaction. Thanks for the request!

  5. Ray Renteria says:

    Thanks for taking the time to write this, Ryan. Enjoyed it! I’ve got a few more books on my reading list now!

    I also couldn’t help but think about an Austin, TX company when I read your comment exchange with “Woody” above. The company, CSIdentity, develops AI agents that patrol chat rooms and message boards to solicit stolen data (and thus identify data fencers), using the same lexicon they pick up from the dialogue of the hackers.

    I’ve also posited that we will not be able to discern humans from agents on our Twitter lists or among our social network contacts. The more impressionable humans will be at the whim of referrals and recommendations from agents. I’d like to read your thoughts on that topic sometime, too.

    Thanks again!

    –Ray

    (here’s an article about CSIdentity on SFGate http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2011/09/18/BUL81L57IL.DTL)