
Author: Danielle Citron

Is the Net Impeding Our Intellectual Life (or Something Else)?

Recent books and articles contend that the Internet has made us narcissistic, shallow, and uncreative. See here, here, and here. According to critics, search engines produce easy answers, discouraging independent and critical thinking. They also provide access to bogus information, confirming prejudices and fostering stupidity and extremism. These arguments seemingly build on the work of many thoughtful scholars, such as Neil Postman, who authored Amusing Ourselves to Death, and Benjamin Barber, who wrote Consumed.

In Wired, David Wolman takes this argument to task, characterizing these critics as modern-day Chicken Littles. Just as the telephone did not extinguish letter writing and modern transportation did not ruin community life, the Internet will not stunt intellectual life in the twenty-first century. Wolman argues that digital technologies, in fact, give us more opportunity to become engaged in the world of ideas. Wikipedia and Wiktionary demonstrate a bona fide hunger for learning and accurate information. And irrationality and prejudice cannot be blamed on technology—it was there long before the emergence of the Internet and will remain long after we have moved on to another communications medium.

The Internet’s overall impact on our intellectual life is surely debatable. But recent reports suggest that it is having a positive effect on our family lives, bringing us in closer contact with our loved ones than ever before. As the Washington Post notes today, the Pew Internet and American Life Project released a report, described as the first of its kind, that finds our family lives richer as a result of Information Age technologies. The report notes that 25 percent of adults said that cellphone calls, emails and text messages, and other forms of online communication made their families closer. Another 60 percent of responding adults said that the technologies had no impact on their family lives, and only 11 percent said the technology had a negative effect. Forty-seven percent of the adults said cellphones and the Internet had improved family communication. Barry Wellman, an author of the report and a sociology professor at the University of Toronto, explained that the communication innovations allow families to “know what each other is doing during the day” and do not “cut back on their physical presence with each other.” The findings were based on a nationally representative poll of 2,252 people, which explored technology use and profiled a group of 482 adults with children.

Political Transitions and Agency Rulemaking

According to The New York Times, Attorney General Michael Mukasey has issued new guidelines that allow FBI agents to use intrusive investigative techniques even when there is no clear reason to suspect an individual or group of wrongdoing. Under the new rules, agents may engage in lengthy physical surveillance, covertly infiltrate lawful groups (much in the way that the Maryland State Police were recently chastised for doing, see this post), and conduct pretext interviews in which agents lie about their identities while questioning a subject’s associates and friends, all based merely on a generalized “threat.” The new rules permit the FBI to use these techniques on people “identified in part by their race or religion and without requiring even minimal evidence of criminal activity.” AG Mukasey promises that these investigations will not violate the Constitution. This “trust us” approach will no doubt provoke concern about further erosions of civil liberties, especially in light of the FBI’s long history of abusing its power to spy on civil rights groups.

These new rules will likely be part of a flurry of regulatory activity in the final months of the Bush Presidency. Anne Joseph O’Connell has written a terrific article in the Virginia Law Review, entitled Political Cycles of Rulemaking: An Empirical Portrait of the Modern Administrative State, which highlights the uptick in agency policymaking in the period just before and after Presidential transitions. In a study that is the first of its kind, O’Connell surveyed a database of agency rulemaking from 1983 to 2003 and found that rulemaking is not as ossified as previously believed, particularly during political transitions. As the end of the Bush Presidency nears and a new administration approaches, we can expect more rules like the ones recently adopted by AG Mukasey.

Decisions About Ohio Voters Left to State’s Secretary of State (For Now)

Section 303 of the 2002 Help America Vote Act (HAVA) requires states to verify voter registration applications against government databases like those for driver’s licenses or Social Security records. Recently, Ohio election officials found that 200,000 out of the 600,000 applicants did not match the names in Social Security databases, but that most of the nonmatches involved new voters, not duplicate registrations, and resulted from problems with the databases themselves. Ohio Republican officials responded to this finding by seeking a temporary restraining order that would require Ohio Secretary of State Jennifer Brunner to provide those names to local election officials, who would then insist that the identified voters cast provisional ballots rather than regular ones and ask partisan poll workers to challenge their votes on Election Day. On Tuesday, the Sixth Circuit affirmed a district court’s TRO granting the relief sought by GOP officials. But yesterday the Supreme Court, in a per curiam order, reversed the Sixth Circuit, finding that the district court likely lacked jurisdiction over actions to enforce Section 303 of HAVA brought by private litigants. The Court declined to address “whether HAVA is being properly implemented.”

These developments raise a number of important questions. First, can a public actor, such as Attorney General Michael Mukasey, enlist the courts to answer the question of whether Ohio’s Secretary of State correctly implemented HAVA? Orin Kerr at the Volokh Conspiracy thoughtfully suggests that such an actor should not be permitted to do so, as the Supreme Court’s per curiam decision follows the logic of Bush v. Gore that “[w]hen elections are close, or a winner must be named in a recount, courts should stay out and let the state election boards function without judicial interference.” Second, if a court addresses the issue on the merits, does Section 303 of HAVA support the district court’s original TRO? Daniel Tokaji answers in the negative, explaining that HAVA’s matching requirement aimed to accelerate procedures at the polls, somewhat like an E-Z Pass lane at highway toll plazas, by allowing voters to skip showing identification if they had already been screened through database checks; it was not meant to determine eligibility, deter voter fraud, or raise added barriers for voters by forcing some to vote provisionally. Richard Hasen further explains that “any effort to use the list to purge the rolls at this point could violate the federal provision that prohibits systematic voter removal purges within 90 days of a federal election.” Third, are the automated decisions flagging individuals as failing to match Social Security and DMV records reliable? The answer there is unquestionably no. As Wendy Weiser of the Brennan Center for Justice at NYU Law School explains, nonmatches result from faulty information in databases and typographical errors by government officials, not voter ineligibility.
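To make concrete why nonmatches so often reflect data problems rather than ineligible voters, here is a minimal sketch in Python of the kind of strict field-by-field comparison a verification system might run. The function and the sample records are hypothetical; they are not drawn from Ohio’s actual matching system.

    # Hypothetical sketch: a strict comparison that treats any clerical
    # discrepancy between the registration and the Social Security record
    # as a "nonmatch."
    def strict_match(registration, ssa_record):
        return (registration["last_name"] == ssa_record["last_name"]
                and registration["first_name"] == ssa_record["first_name"]
                and registration["dob"] == ssa_record["dob"])

    registration = {"first_name": "Katherine", "last_name": "O'Brien", "dob": "1984-07-02"}
    ssa_record = {"first_name": "KATHERINE", "last_name": "OBRIEN", "dob": "1984-07-02"}

    # An eligible voter is flagged purely because of differences in case and
    # punctuation introduced when the two records were keyed in.
    print(strict_match(registration, ssa_record))  # False -> counted as a "nonmatch"

Under logic like this, a clerk’s typo or a formatting difference between databases produces exactly the same flag as a genuinely suspect registration.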

A final concern about voter registration that the Ohio case does not directly raise, but is no less important, is the erroneous removal of voters from the rolls by automated matching programs that cannot distinguish between similar names. As I highlighted in my article Technological Due Process, recently published by the Washington University Law Review, data-matching systems employ crude algorithms that can lead to mistaken results. Unfortunately, individuals who show up at the polls on November 4 may find their names removed from the voting rolls because their names are similar to those of people who are in fact ineligible due to a move, death, or criminal conviction.
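The opposite failure mode is just as easy to produce. The sketch below is invented for illustration; the similarity rule, threshold, and names are not taken from any state’s purge software. It shows how a matching rule loose enough to tolerate data-entry variation will also treat two different people as the same person.

    import difflib

    # Hypothetical sketch: a crude similarity score used to "match" a
    # registered voter against a list of ineligible persons (people who
    # have moved, died, or been convicted of a felony).
    def crude_match(voter_name, ineligible_name, threshold=0.9):
        score = difflib.SequenceMatcher(None, voter_name.lower(),
                                        ineligible_name.lower()).ratio()
        return score >= threshold

    # John A. Smith is an eligible voter; John B. Smith is not. The
    # algorithm cannot tell them apart, so the eligible voter risks being
    # purged from the rolls.
    print(crude_match("John A. Smith", "John B. Smith"))  # True

Exact matching over-flags eligible voters; fuzzy matching over-purges them. Either way, the cost of the error falls on the voter unless a human reviews the result.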

Sometimes You Just Cannot Sue

According to BBC News, the suit entitled Ernie Chambers v. God has met its maker. Nebraska state senator Ernie Chambers sued God in state district court, seeking a permanent injunction to prevent “death, destruction and terrorisation.” The complaint alleged that God had threatened the plaintiff and the people of Nebraska and had inflicted widespread death and destruction “upon millions of the Earth’s inhabitants.” The court dismissed the case because of insufficient service of process: because the defendant has no address, legal papers cannot be served. The court apparently rejected the plaintiff’s argument that, “since God knows everything, God has notice of the lawsuit.”

Want to See My Rhinoplasty?

Doctors are increasingly offering discounts on elective surgeries or free Botox injections in exchange for a patient’s agreement to post videos of the surgery or before-and-after shots along with an endorsement of the treating physician. YouTube is littered with videos of Lasek surgeries, breast augmentations, and nose jobs. According to The New York Times, patients have taken the discounts on the belief that sharing videos of their transformed eyes or noses tells others nothing about them and thus cannot hurt them. But that assumption is certainly worth rethinking. An employer may be less than thrilled that a Google search of a potential or current employee produces a chronicle of that employee’s plastic surgery. And some suggest that our irises can reveal certain medical conditions, such as hypertension. The grocery store bonus card that tracks the groceries you buy seems quaint by comparison with this trend.

Sentence Reduction: A New Remedy for Prosecutorial Misconduct

Typically, remedies for prosecutorial misconduct are all or nothing: convictions and pleas are reversed or dismissed, on the one hand, or the abusive behavior is treated as harmless error and nothing is done about it, on the other. But on September 24, 2008, Judge Bennett of the Northern District of Iowa eschewed this binary choice in United States v. David Dicus, reducing the defendant’s sentence for the prosecutor’s breach of the plea agreement regarding a sentence enhancement, rather than treating withdrawal of the guilty plea (or specific performance of the breached provision) or no response at all as the only available options. The court refused to ignore the misconduct, even though the sentencing court did not in fact impose the sentence enhancement, because “it would do nothing to deter prosecutorial misconduct or to give defendants an incentive to raise prosecutorial misconduct claims.” In deciding to remedy the misconduct by reducing the defendant’s sentence to the low end of the advisory sentencing guidelines range, the court relied on Sonja Starr’s compelling new piece, Sentence Reduction as a Remedy for Prosecutorial Misconduct (forthcoming in the Georgetown Law Journal in 2009). Starr’s article is ground-breaking and makes an important contribution to the development of the law in this area. In it, she argues that sentence reduction would be both an effective deterrent to prosecutorial misconduct and an important corrective and expressive remedy.

COINTELPRO in a Digital World

In a move reminiscent of the FBI’s infiltration of political advocacy groups in the 1960s and early 1970s, the Maryland State Police engaged in covert surveillance of groups opposed to the Iraq war and capital punishment. According to a report recently released by former Maryland Attorney General Steven Sachs, Maryland troopers secretly attended meetings of anti-death penalty and anti-war activists in 2005 and 2006. At one meeting, a small group of activists met at a church to call a death-row inmate for whom they provided emotional support. This activity, and others like it, prompted the Maryland State Police to include group members in state and federal criminal intelligence databases. Unfortunately for the activists, the state database, known as Case Explorer, offered only a limited drop-down menu of categories for entering names, which ensured that users of the system would classify the individuals as terrorists.

News of the covert surveillance and the individuals’ inclusion in these databases as terrorists came to light this summer when the Maryland State Police responded to a public records request pursued by the ACLU. Maryland Governor Martin O’Malley commissioned former AG Steven Sachs to investigate the matter. Sachs’s report explains that Maryland State Police commanders never bothered to ask whether the groups posed a reasonable threat to public safety before commencing covert surveillance of them. On the contrary, the investigations determined that the groups were not violating the law. According to The New York Times, the Maryland State Police are now tracking down 53 “innocent individuals to let them know they were entered as suspected terrorists” in the state and federal databases for their involvement in peaceful protest. In legislative hearings in Annapolis, Maryland this week, the former Maryland State Police superintendent insisted that the program was a legitimate surveillance of “fringe people” who wanted to “disrupt the government.”

To be sure, the surveillance itself raises serious concerns about chilling protected political expression. But it also demonstrates the profound power of automated systems, whose design forces important decisions to be made about individuals. By requiring police to categorize individuals as some form of “terrorist,” the system’s design effectuated an important decision about those individuals, one that could have a serious impact on their reputations and lives if that information were released. The digitization of such designations has a lasting, generative power, far beyond the FBI files of the COINTELPRO era, which could not be shared with the ease of today’s networked computer systems.
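A toy sketch illustrates the design point. The category list below is invented for illustration; the Sachs report describes Case Explorer’s menu only in general terms. The structural problem is that when a data-entry form offers nothing but terrorism-related labels, every person entered through it becomes, in the database, a terrorism suspect.

    # Hypothetical sketch: a drop-down menu whose only options are
    # terrorism-related forces a terrorism label onto every subject.
    ALLOWED_CATEGORIES = [
        "Terrorism - anti-government",
        "Terrorism - environmental extremist",
        "Terrorism - other",
    ]

    def add_subject(database, name, category):
        if category not in ALLOWED_CATEGORIES:
            raise ValueError("category not available in the drop-down menu")
        database.append({"name": name, "category": category})

    intelligence_db = []
    # There is no accurate option for a peaceful anti-death-penalty activist,
    # so the user must pick a terrorism label or enter nothing at all.
    add_subject(intelligence_db, "Jane Doe (peace activist)", "Terrorism - other")
    print(intelligence_db)

The decision that matters here, labeling someone a terrorist, is effectively made by the form’s designer long before any trooper sits down at a keyboard.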

Deja Vu Blues

The saying “the more things change, the more they stay the same” is an unfortunate truth when it comes to our voting machines. In 2000, optical scanning machines in certain New Mexico counties counted a straight-party vote without distributing the votes to each of the individual party candidates. For instance, if a voter filled in the oval for a straight-party Democratic ticket, the scanner would record the ballot as cast but would not allocate votes to Presidential candidate Al Gore and the other Democratic candidates. Fast-forward to 2008: election officials in Santa Fe, New Mexico report that testing of their optical scanning machines revealed that a glitch in the memory cards prevented the tabulating machines from counting votes in the Presidential, Senate, and House races when voters marked their ballots to vote a straight-party ticket. Had the error not been caught, all of the county’s tabulating machines would have been affected. Although the memory cards there have been re-burned and fixed, concerns remain about optical scanning machines in the rest of the country. As e-voting and information security expert Peter Neumann noted during his presentation for Columbia University’s Computer Science Distinguished Lecture series this Monday, we should worry about the accuracy and security of the upcoming election precisely because it is computer scientists themselves who say that computers are unsuitable for voting.
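A stripped-down sketch of the failure mode described above may help. The tabulation logic, party slate, and ballots are hypothetical; this is not Sequoia’s or any other vendor’s actual firmware. The point is that a machine can faithfully record that a ballot was cast while never distributing the straight-party mark to the individual races.

    # Hypothetical sketch of straight-party tabulation and the reported glitch.
    PARTY_SLATE = {
        "DEM": {"President": "Dem candidate", "Senate": "Dem candidate"},
        "REP": {"President": "Rep candidate", "Senate": "Rep candidate"},
    }

    def tabulate(ballots, distribute_straight_party):
        totals = {"ballots_cast": 0, "President": {}, "Senate": {}}
        for ballot in ballots:
            totals["ballots_cast"] += 1      # the ballot is recorded as cast either way
            choices = dict(ballot.get("races", {}))
            party = ballot.get("straight_party")
            if party and distribute_straight_party:
                # correct behavior: a straight-party mark becomes a vote in each race
                for race, candidate in PARTY_SLATE[party].items():
                    choices.setdefault(race, candidate)
            for race, candidate in choices.items():
                totals[race][candidate] = totals[race].get(candidate, 0) + 1
        return totals

    ballots = [{"straight_party": "DEM"}, {"races": {"President": "Rep candidate"}}]

    # With the glitch (no distribution), the straight-party voter's choices vanish
    # even though the ballot itself is counted:
    print(tabulate(ballots, distribute_straight_party=False))
    # {'ballots_cast': 2, 'President': {'Rep candidate': 1}, 'Senate': {}}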

Secrecy in Voting (But Not the Good Kind)

A New Jersey Superior Court judge is currently presiding over a case brought by the Rutgers Constitutional Litigation Clinic that seeks to decommission New Jersey’s electronic voting machines. (New Jersey mostly uses direct-recording electronic (DRE) machines manufactured by Sequoia Voting Systems, one of the country’s top e-voting machine vendors.) The lawsuit contends that because the DRE machines fail to produce a voter-verified paper audit trail, there is no way to know whether the machines, in fact, record the votes as cast. In June, the judge issued a protective order requiring Sequoia to turn over its source code to plaintiffs’ expert Andrew Appel and assuring the company that its trade secrets would be protected. The order provided that Appel could publish his report 30 days after its delivery to the court. On September 2, Appel and his team of computer scientists delivered the report to the court, assuming that they could publish it on October 2.

On Freedom to Tinker (where fantastic guest blogger Paul Ohm is now blogging permanently), Appel reports that the judge has now changed her mind. On September 24, the judge, ruling from the bench, told plaintiffs that they cannot release the report or discuss its findings.

Because the court ensured that the report would not reveal the company’s trade secrets, this eleventh-hour reversal does not appear to be based on a desire to protect the company’s legitimate interests. Instead, this new veil of secrecy seems intended to delay the bad news that New Jersey’s voting machines are troubled until after the election. (Doesn’t it seem unlikely that the judge would delay the report’s release if it gave the machines a glowing review?) Perhaps the judge reversed course in the hope that keeping the report secret would avert a crisis of confidence in the state’s voting apparatus. But such a crisis seems inevitable. And, more importantly, New Jersey voters deserve to be told that the machines may not count their votes accurately. That would allow them to decide for themselves whether to cast their votes as absentees, which would be counted on optical scanning machines rather than DREs. Further, as Andrew Appel argues, the Governor and members of the New Jersey legislature need to read the report so they can protect their constituents’ right to vote. Hiding the report’s findings can only cast doubt on the legitimacy of the returns in November.
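To see what is at stake in a paper trail, consider a minimal sketch of the sort of spot audit it makes possible. The routine, numbers, and candidate labels here are illustrative assumptions, not New Jersey’s actual procedures or anything from the sealed report.

    import random

    # Hypothetical sketch: hand-count a random sample of voter-verified paper
    # records and place it next to the machine's electronic tally.
    def spot_audit(electronic_tally, paper_records, sample_size, seed=0):
        random.seed(seed)
        sample = random.sample(paper_records, min(sample_size, len(paper_records)))
        hand_count = {}
        for vote in sample:
            hand_count[vote] = hand_count.get(vote, 0) + 1
        # A real audit would compare the two statistically; here we simply
        # report them side by side so a discrepancy is visible.
        return {"machine": electronic_tally, "hand_sample": hand_count}

    paper_records = ["A"] * 60 + ["B"] * 40       # what voters verified on paper
    electronic_tally = {"A": 45, "B": 55}         # what a misbehaving machine reported

    print(spot_audit(electronic_tally, paper_records, sample_size=20))

A DRE without a paper trail leaves nothing to sample, so a discrepancy like the one above could never be detected, which is precisely the plaintiffs’ complaint.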

Extreme Case of Automation Bias

According to the cognitive systems engineering literature, human beings view automated systems as error-resistant. In other words, we trust a computer’s answers even when evidence suggests that we should doubt them. Our automation bias was on full display on Monday night, when a New York man drove onto railroad tracks because his GPS told him to do so. Luckily, the man and his passengers got out and escaped injury before a train hit the car. A Metro-North spokesperson told reporters: “You don’t turn onto train tracks even if there are little voices in your head telling you to do so. If the GPS told you to drive off a cliff, would you drive off a cliff?” If this train incident and another like it nine months ago provide any guidance, the answer may tragically be yes.
