Archive for the ‘Technology’ Category
posted by Daniel Solove
I’m pleased to share with you my new article in Harvard Law Review entitled Privacy Self-Management and the Consent Dilemma, 126 Harvard Law Review 1880 (2013). You can download it for free on SSRN. This is a short piece (24 pages) so you can read it in one sitting.
Here are some key points in the Article:
1. The current regulatory approach for protecting privacy involves what I refer to as “privacy self-management” – the law provides people with a set of rights to enable them to decide how to weigh the costs and benefits of the collection, use, or disclosure of their information. People’s consent legitimizes nearly any form of collection, use, and disclosure of personal data. Unfortunately, privacy self-management is being asked to do work beyond its capabilities. Privacy self-management does not provide meaningful control over personal data.
2. Empirical and social science research has undermined key assumptions about how people make decisions regarding their data, assumptions that underpin and legitimize the privacy self-management model.
3. People cannot appropriately self-manage their privacy due to a series of structural problems. There are too many entities collecting and using personal data to make it feasible for people to manage their privacy separately with each entity. Moreover, many privacy harms are the result of an aggregation of pieces of data over a period of time by different entities. It is virtually impossible for people to weigh the costs and benefits of revealing information or permitting its use or transfer without an understanding of the potential downstream uses.
4. Privacy self-management addresses privacy in a series of isolated transactions guided by particular individuals. Privacy costs and benefits, however, are more appropriately assessed cumulatively and holistically — not merely at the individual level.
5. In order to advance, privacy law and policy must confront a complex and confounding dilemma with consent. Consent to collection, use, and disclosure of personal data is often not meaningful, and the most apparent solution – paternalistic measures – even more directly denies people the freedom to make consensual choices about their data.
6. The way forward involves (1) developing a coherent approach to consent, one that accounts for the social science discoveries about how people make decisions about personal data; (2) recognizing that people can engage in privacy self-management only selectively; (3) adjusting privacy law’s timing to focus on downstream uses; and (4) developing more substantive privacy rules.
The full article is here.
Cross-posted on LinkedIn.
posted by William McGeveran
The privacy scandal of the week involves Bloomberg terminals, reporters, and Wall Street traders. It started making the rounds of the financial press in the last couple of days and today reached the New York Times, which led its story by declaring that a “shudder went through Wall Street” in response to the revelations. But as with many of the periodic Facebook privacy scandals, this one is only surprising if you haven’t been paying attention. And it distracts the press and the public from more serious matters.
The story, in a nutshell: a Bloomberg terminal like the one in the picture sits on every trading desk. It is the central platform for managing a constant stream of information about market activity, financial news, economic data, and much more. By making this very expensive equipment a necessity, Michael Bloomberg (now New York’s mayor, of course) built a multibillion-dollar empire and made himself fabulously wealthy.
From the beginning, company employees have been able to look up individual Bloomberg subscribers and scrutinize their most recent activity in the system. That may make some sense for sales and technical personnel (although even then it probably ought to have been more anonymized than it seems to have been). Unfortunately, that access also extended to journalists at the many news outlets that have been added to the Bloomberg corporate family over the years. And these reporters appear to have mined that data routinely for tidbits that might have helped with their stories.
Don’t get me wrong, this is not an example of good privacy practices. But it ain’t exactly the allegations of pervasive bribery, eavesdropping, and hacking by journalists in the employ of Rupert Murdoch. Quartz has a pretty good explanation of the data that was available. Primarily, it boils down to the last time a person logged in, the “functions” used (essentially, what general categories of information services were accessed, such as reports of corporate bond trades), and the transcript of any online customer service chats. Crucially, Quartz notes, “Employees can see how many times each function was used but not further details, like which company’s bonds were being researched.” In other words, a lot of it resembles information that many web sites, including news sites, can already glean about most of their customers, particularly those who are logged in. At most, Bloomberg journalists might have obtained some slight lead that would send them on the hunt for more solid information, much as a tip from a source might. In the incident that brought the practice to light, for example, a reporter surmised that a Goldman Sachs partner might have left the firm because he stopped using his Bloomberg terminal.
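The privacy-relevant design choice here is aggregation: Quartz reports that employees could see how often a function was used, but not the underlying query. Bloomberg's actual systems are not public, so purely as a hypothetical sketch in Python, a usage log built on that principle simply never stores the sensitive detail:

```python
from collections import Counter

class TerminalUsageLog:
    """Hypothetical sketch: record which broad 'functions' a subscriber
    uses, as counts only, plus last activity, discarding query details."""

    def __init__(self):
        self.function_counts = Counter()
        self.last_login = None

    def record(self, function_name, timestamp, query=None):
        # Store only the category and a count; the query itself
        # (e.g., which company's bonds were researched) is dropped.
        self.function_counts[function_name] += 1
        self.last_login = timestamp

log = TerminalUsageLog()
log.record("corporate_bond_trades", "2013-05-10T09:00", query="XYZ Corp bonds")
log.record("corporate_bond_trades", "2013-05-10T09:05", query="ABC Inc bonds")
log.record("news_search", "2013-05-10T09:10")

print(log.function_counts["corporate_bond_trades"])  # 2 — counts are visible
print(log.last_login)                                # most recent activity only
```

Even in this minimized form, the log still reveals the two facts the reporter in the Goldman Sachs incident relied on: whether and when someone last logged in.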
posted by Ryan Calo
As if we don’t have enough to worry about, now there’s spyware for your brain. Or, there could be. Researchers at Oxford, Geneva, and Berkeley have created a proof of concept for using commercially available brain-computer interfaces to discover private facts about today’s gamers. Read the rest of this post »
April 14, 2013 at 12:57 am | Posted in: Bioethics, Civil Rights, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical), Technology, Uncategorized
posted by Deven Desai
It dawns on me that Turing tests may have a role in the future of education and MOOCs. In short, can one create a Socratic-style system that automates probing what a student knows? A combination of gamification (not a great word) and machine learning might allow a system to press a student to express more than "I memorized X" and move to explaining why in a discussion. If I understand the simple idea of a Turing test, one should not be able to tell that the other side of a conversation is a machine. It should be a discussion. That is what a professor does with the Socratic method. There would likely be a wall of sorts, where the student has no more questions or perhaps the machine determines that some level of mastery is in place. To me, a key reason to press questions is to see whether students can explain why their claim or understanding is correct. When they can do that, they may at last "own" the idea and then do something with it. Insofar as the key is to keep questioning, this approach will hit a different wall, where a person may need to engage with the student. In addition, when a student asks something the teacher has not considered, a "does not compute" response will likely be a letdown. Assuming one solves that personal dimension, such a moment would be a signal to shift to other resources, including instructors, to go deeper into the issue. Otherwise we are left with a system in which passing tests equals knowledge. As Erika Christakis put it, we have:
a broken system built on the dangerous misconception that testing is a proxy for actual teaching and learning. Somehow, along the path of good intentions, testing stopped being seen as a diagnostic tool to guide good instruction and became, instead, the instruction itself. It’s as if a patient were given a biopsy, learned she had cancer and was then told that no further medical treatment was necessary. If that didn’t sound quite right, we could just fire the doctor who ordered the test or scratch out the patient’s results and mark “cured” in the file.
Although I am leery of easy solutions, I think a system that prods students to see what they know, and then sends them to a teacher to gain further insight and evaluate what they grasp, would be great. It might be a step away from a system that asks students to jump through a hoop and receive a star or treat for performing a trick, without knowing why the words or ideas coming from them matter or how to apply those words and ideas to new contexts. That ability to apply, I think, is what separates knowledge from inert data.
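None of this machinery exists yet, but the core loop is easy to state. As a purely hypothetical sketch (the reason-detection below is a crude keyword check standing in for real machine learning), the "keep asking why" logic might look like:

```python
REASON_MARKERS = ("because", "since", "therefore", "so that")

def socratic_press(answers, max_rounds=3):
    """Hypothetical sketch of the 'keep asking why' loop: press the
    student until an answer contains a reason, or the machine runs out
    of rounds and hands off to a human instructor."""
    for round_num, answer in enumerate(answers[:max_rounds], start=1):
        if any(marker in answer.lower() for marker in REASON_MARKERS):
            return ("mastery-signal", round_num)   # student explained *why*
    return ("refer-to-instructor", min(len(answers), max_rounds))

# A student who only recites a fact gets pressed and then referred on;
# one who gives a reason is treated as starting to "own" the idea.
print(socratic_press(["The holding is X.", "I memorized it.", "Um."]))
print(socratic_press(["The holding is X because the statute requires notice."]))
```

The interesting design question is the hand-off: the refer-to-instructor branch is exactly the wall described above, where a person needs to take over.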
posted by Deven Desai
3D printing and its related technologies are general-purpose technologies that can train kids for the future. I saw an example of that yesterday when I was able to visit La Jolla Country Day School, where sixth- to eighth-grade kids on spring break were learning basic 3D modeling and design. Last week they worked on How to Make Musical Electronics. In the 3D modeling program, Ann Worth, an MIT School of Architecture graduate, guided the youngsters as they manipulated files of their heads so that at the end of the program they could print them. I also watched a video of two girls who had been shown how to make an amplifier and oscillator for their iPhones. Brendan Bernhardt Gaffney of UCSD was their instructor. The kids talked about trial and error, vectors and faces, and circuit boards with energy and joy. How often does that happen? If Katie Rast and her co-visionaries at FabLab San Diego have their way, much more often.
Despite some "nerds are cool" ideas, we still hear that kids are turned off by math and science and that there is a lack of good Science, Technology, Engineering, and Math (STEM) education. New programs may change all that. By taking an old idea like shop class and updating it, a FabLab (short for Fabrication Lab) offers the chance to make learning about programming, engineering, and geometry part of the joy of creation. Kids are willing to engage with formulas; start, fail, and restart projects; and work rather hard at their projects, because there is fun in it and an outcome for them. The spring break program I visited is called Science, Technology, Engineering, Arts, and Math, or STEAM. The University of California, San Diego and FabLab SD worked together to offer the classes (which to me is a tech transfer moment that is quite important).
In the 3D modeling program, the kids started with a series of photos, which were uploaded to 123D (a suite of 3D modeling apps by Autodesk). That service knits the images together into a file that the kids then download. In many cases there are holes in the images. As they made models of their heads, they laughed at the holes in their heads. They then used a program called Blender to learn about filling the gaps. That meant some kids were telling me about vectors, others about textures, and all showed off as they pulled, stretched, and edited files to create the proper rendering of their heads. After that, they grabbed files for the bodies. A range of animal bodies will be virtually sliced up to make the new creatures to which the heads will attach. When asked what they might do next, these folks talked about how metals, glass, and other materials would be awesome so they could make really functional items. Some talked about being able to have a home printer that could make solar cells to power other printers. When told that these ideas were already being pursued, eyes popped out of their heads, and then grins covered their faces at thoughts of what's next (and I think a little pride at predicting where the technology could go).
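The "holes in their heads" the kids were patching have a precise meaning in mesh terms: in a triangle mesh, an edge shared by two triangles is interior, while an edge used by only one triangle lies on a hole or on the mesh border. As an illustrative sketch (not how 123D or Blender actually implement their repair tools), detecting those boundary edges takes a few lines of Python:

```python
from collections import Counter

def hole_boundary_edges(triangles):
    """Return edges that belong to exactly one triangle: these trace
    the holes (or outer border) that a tool like Blender helps fill."""
    edges = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted((u, v)))] += 1
    return sorted(e for e, count in edges.items() if count == 1)

# Two triangles forming a square over vertices 0-3: the four outer
# edges are boundary edges; only the shared diagonal (0, 2) is interior.
square = [(0, 1, 2), (0, 2, 3)]
print(hole_boundary_edges(square))  # [(0, 1), (0, 3), (1, 2), (2, 3)]
```

Filling a hole then amounts to adding triangles until no such single-use edges remain, which is roughly what the kids were doing by hand as they pulled and stretched their head scans.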
The skills learned in these programs will persist even as the machines and software are superseded. Who knows? If I had had access to this sort of tech training combined with math and science education, I might have stuck with that path. Even if I hadn't, I'd have a greater ability to play with and understand the technology that surrounds us. In short, congratulations to La Jolla Country Day School, UCSD, FabLab SD, Ann Worth, Brendan Bernhardt Gaffney, and Katie Rast for pursuing ways to make STEM fun for kids. The ideas here remind me of Julie Cohen's work about play and its importance in her book, Configuring the Networked Self: Law, Code, and the Play of Everyday Practice. As Rast said on a panel at SxSW, computer labs were often seen as saviors for education, especially in low-income areas, but they often gathered dust. The key is to have maker spaces that work for the group's context. A lab need not have the latest technology. If the technology is connected to people in meaningful ways, then the magic can happen. I agree. The magic of playing with technology, understanding what you can do with it, and seeing new possibilities will fire the desire to learn and create. As Neil Gershenfeld (a leader in the Maker and Fab movement) put it, this is a liberal, as in liberating, art. But don't take my word for it. As one kid told me at lunch, adults' brains are not as good at learning as kids' brains, and kids like showing what they can do. Now that is education.
Bartelt’s Dog and the Continuing Vitality of the Supreme Court’s Tacit Distinction between Sense Enhancement and Sense Creation
posted by Albert Wong
Last Term, in an amicus brief in United States v. Jones, 565 U.S. __, several colleagues and I highlighted the Supreme Court’s long, albeit not always clearly stated, history of distinguishing between sense-enhancing and sense-creating technologies for Fourth Amendment purposes. As a practical matter, the Court has consistently subjected technologies in the latter category to closer scrutiny than technologies that merely bolster natural human senses. Thus, the use of searchlights, field glasses, and (to some extent) beepers and airplane-mounted cameras was not found to implicate the Fourth Amendment. As the Court explained, “[n]othing in the Fourth Amendment prohibit[s] the police from augmenting the sensory faculties bestowed upon them at birth with such enhancement as science and technology” may afford. 460 U.S. at 282 (emphasis added). In contrast, the Court has held that technologies that create a new capacity altogether, including movie projectors, wiretaps, ultrasound devices, radar flashlights, directional microphones, thermal imagers, and (as of Jones) GPS tracking devices, do trigger the Fourth Amendment. To hold otherwise, as the Court has stated, would “shrink the realm of guaranteed privacy,” leaving citizens “at the mercy of advancing technology.” 533 U.S. at 34-36.
In fact, of the landmark cases involving technology and the Fourth Amendment during the past 85 years (from United States v. Lee, 274 U.S. 559, in 1927 to Jones in 2012), only in one instance did the Supreme Court appear to deviate from this distinction between sense enhancement and sense creation. In that case, United States v. Place, 462 U.S. 696, and its successors, City of Indianapolis v. Edmond, 531 U.S. 32, and Illinois v. Caballes, 543 U.S. 405, the Court held that the use of trained narcotics-detection dogs (more apparently similar to using a new capacity than merely enhancing a natural human sense) did not implicate the Fourth Amendment. In our amicus brief in Jones, we rationalized Place, Edmond, and Caballes by arguing that dogs were unique, being natural biological creatures that had long been used by the police, even in the time of the Framers. Further, we argued, a canine sniff, unlike the use of, say, a wiretap or a thermal imager, “discloses only the presence or absence of narcotics, a contraband item.” 462 U.S. at 707 (emphasis added). Still, the apparent ‘dog exception’ was rankling. Read the rest of this post »
March 31, 2013 at 11:35 am | Posted in: Anonymity, Constitutional Law, Privacy, Privacy (Electronic Surveillance), Privacy (Law Enforcement), Supreme Court, Technology, Uncategorized
posted by Frank Pasquale
The media attention to Sheryl Sandberg’s Lean In has been extraordinary. Two reviews should not be missed. First, from Kate Losse, a former Facebook insider (employee #51) who felt exploited by the company:
[Why does Lean In focus] on the problem it does: women’s presumed resistance to their careers rather than companies’ resistance to equal pay[?] Why not focus on renovating the pay structure so that women aren’t denied raises[?] . . . The faster my career accelerated at Facebook, the more my financial returns diminished, until my workload was being elevated but not my salary or equity. Leaning in, then, starts to look like it can benefit companies more than it benefits workers. . . Women in tech are much more likely to be hired in support functions where they are paid a bare minimum, given tiny equity grants compared to engineers and executives, and given raises on the order of fifty cents an hour rather than thousands of dollars. . . . [W]hat if women, even in a company like Facebook, are still paying a gender penalty that nothing but conscious, structural transformation can cure?
posted by Frank Pasquale
An emerging, “solutionist” narrative about drones goes something like this:
Yes, we should be very worried about government misuse of drones at home and abroad. But the answer is not to ban, or even blame, the technology itself. Rather, we need to spread the technology among more people. Worried that the government will spy on you? Get your own drones to watch the watchers. Fearful of malevolent drones? Develop your own protective force. The answer is more technology, not regulation of particular technologies.
I’d like to believe that’s true, if only because technology develops so quickly, and government seems paralyzed by comparison. But I think it’s a naive position. It manages to understate both the threats posed by drones, and the governance challenges they precipitate.
Read the rest of this post »
posted by Deven Desai
In the words of Portlandia, innovation is over. Or as another era of hipsters might say, innovation is dead anyway (Swingers). Take a look at the posturing of the European Publishers Council and Google over the recent German bill to force search engines to pay for material longer than a snippet.
“As a result of today’s vote, ancillary copyright in its most damaging form has been stopped,” Google said in a statement. “However, the best outcome for Germany would be no new legislation because it threatens innovation, particularly for start-ups. It’s also not necessary because publishers and Internet companies can innovate together, just as Google has done in many other countries.”
Translation: Insert "resistance is futile" jokes as needed, but you will work with us and win! We all will win, because we innovate and belong to the Church of Innovation (located somewhere south of San Francisco and north of San Jose).
“With the right legal conditions and the technical tools provided by the Linked Content Coalition, it will be easy to access and use content legally,” the European Publishers Council said in a statement (PDF) on Friday. “This will mean that publishers will have the incentive to continue to populate the internet with high-quality, authoritative, diverse content and to support new, innovative business models for online content.”
Translation: We have no idea what is next. But please give us more time, protection, and money. We promise we will come up with something new.
Confession: Have I invoked innovation? Of course. It is seductive. It is too seductive. Pam Samuelson is a fan of Orwell's Politics and the English Language, as is Neil Richards, and as am I. I must confess that I have sinned. I slipped away from Orwell's mandate and went with the easy, meaningless word. I hate when that happens. I will try to stop.
Of course, what other word or words would say more is the next struggle. The German law says only a snippet is allowed. Right. What’s a snippet? Someone says innovate. I say, “Right. What’s innovate?” I hope to find out. If I am lucky, I may be like Bill Cosby’s Noah and come up with an answer no one else thought of. Hmm is that innovat… Khannn!!!!
Enjoy the clip
posted by Ryan Calo
I got the chance to testify at a hearing of the full Senate Judiciary Committee about the domestic use of drones yesterday. The New York Times has this coverage and, for aficionados of torts, I talk about intrusion upon seclusion with Senator Dick Durbin in this clip from NBC News. Should you get a chance to watch the hearing in full, Senator Al Franken’s thoughts at the end were particularly vivid. My written and oral comments were similar to those outlined in my previous post: privacy law places few limits on the use of drones for surveillance, but we should be very careful in crafting any drone-specific legislative response. It happens that, about when I was testifying, my students were taking a final where one of the questions involved a drone filming a private party. I feel they had fair notice that this might be on the exam.
posted by Deven Desai
I know that Silicon Valley gets all the hoopla for the way knowledge and industry can thrive, but look a bit north and you will find that similar things happened in the wine industry. That industry just lost a leader. James Barrett was the head of Chateau Montelena when its Chardonnay beat French wines in a taste test that changed the wine industry. He died yesterday. (Stag’s Leap’s Cabernet Sauvignon won the red category). The story (embellished but fun) was told in the film Bottle Shock.
Barrett was an attorney (Loyola L.A., ’51) who became a winemaker. Reports say he fell in love with wine. He followed a dream. I would bet that his legal training helped with the business. Regardless, he and others in Napa changed the wine industry. Part of that success came from using science and research from U.C. Davis to guide the wine making process. The vineyard also employed Mike Grgich who went on to run a rather good vineyard on his own. As Barrett said about the success, “Not bad for some kids from the sticks.”
Technology, lawyers, and new approaches to a business that has made a huge amount of money and that happens to bring joy to those who imbibe wine. What’s not to love? I, for one, will raise a glass to Barrett and hope that other kids from the sticks are inspired to try and do likewise in whatever field they love.
posted by Deven Desai
Scientists have run into a "technical, not biological" problem in trying to resurrect a once-extinct frog. Popular Science explains that the:
gastric-brooding frog, native to tiny portions of Queensland, Australia, gave birth through its mouth, the only frog to do so (in fact, very few other animals in the entire animal kingdom do this–it’s mostly this frog and a few fish). It succumbed to extinction due to mostly non-human-related causes–parasites, loss of habitat, invasive weeds, a particular kind of fungus.
Genetic material from specimens kept in simple deep freezers was inserted into eggs from another frog. The embryos grew. The next step is to get them to full adulthood so the young can pop out of their mother's mouth like before. Yes, these folks are talking to those interested in bringing back other species.
As for this particular animal, the process reminds me a bit too much of Alien, which still scares the heck out of me.
the gastric-brooding frog lays eggs, which are coated in a substance called prostaglandin. This substance causes the frog to stop producing gastric acid in its stomach, thus making the frog’s stomach a very nice place for eggs to be. So the frog swallows the eggs, incubates them in her gut, and when they hatch, the baby frogs crawl out her mouth.
Science. Yummy. Oh here is your law fodder. What are the ethical implications? Send in the clones! (A better title for Attack of the Clones, perhaps).
posted by Deven Desai
Just as Neil Richards's The Perils of Social Reading (101 Georgetown Law Journal 689 (2013)) is out in final form, Netflix released its new social sharing features in partnership with that privacy protector, Facebook. Not that working with Google, Apple, or Microsoft would be much better. There may be things I am missing. But I don't see how turning on this feature is wise, given that it requires you to remember not to share and so makes sharing a bit leakier than you may want.
Apparently you have to connect your Netflix account to Facebook to get the feature to work. The way it works after that link is made poses problems.
According to SlashGear, two rows appear. One, called Friends' Favorites, tells you just that. Now, consider that the algorithm works in part by your rating movies. So if you want to signal that odd documentaries, disturbing art movies, or guilty pleasures (a category that may range from The Hangover to Twilight) are of interest, you should rate them highly. If you turn this on, are all old ratings shared? And cool! Now everyone knows that you think March of the Penguins and Die Hard are 5 stars. The other row:
is called “Watched By Your Friends,” and it consists of movies and shows that your friends have recently watched. It provides a list of all your Facebook friends who are on Netflix, and you can cycle through individual friends to see what they recently watched. This is an unfiltered list, meaning that it shows all the movies and TV shows that your friends have agreed to share.
Of course, you can control what you share and what you don’t want to share, so if there’s a movie or TV show that you watch, but you don’t want to share it with your friends, you can simply click on the “Don’t Share This” button under each item. Netflix is rolling out the feature over the next couple of days, and the company says that all US members will have access to Netflix social by the end of the week.
Right. So imagine you forget that your viewing habits are broadcast. And what about Roku or other streaming devices? How does one ensure that the "Don't Share" button is used before the word goes out that you watched one, two, or three movies about drugs, sex, gay culture, how great guns are, etc.?
As Richards puts it, "the ways in which we set up the defaults for sharing matter a great deal. Our reader records implicate our intellectual privacy—the protection of reading from surveillance and interference so that we can read freely, widely, and without inhibition." So too for video and really any information consumption.
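Richards's point about defaults can be made concrete. In this hypothetical sketch (not Netflix's actual code), the only difference between the two objects is the default value, yet the forgotten title leaks in one and stays private in the other:

```python
class ViewingHistory:
    """Hypothetical sketch of why sharing defaults matter: under an
    opt-out default, anything you forget to exclude is broadcast;
    under opt-in, forgetting is private by default."""

    def __init__(self, share_by_default):
        self.share_by_default = share_by_default
        self.overrides = {}  # title -> True/False

    def watch(self, title):
        # Sharing follows the default unless the viewer remembers
        # to press something like a "Don't Share This" button.
        self.overrides.setdefault(title, self.share_by_default)

    def dont_share(self, title):
        self.overrides[title] = False

    def shared_titles(self):
        return sorted(t for t, shared in self.overrides.items() if shared)

opt_out = ViewingHistory(share_by_default=True)
opt_out.watch("Sensitive Documentary")
opt_out.watch("Die Hard")
opt_out.dont_share("Die Hard")
print(opt_out.shared_titles())  # ['Sensitive Documentary'] — the forgotten one leaks

opt_in = ViewingHistory(share_by_default=False)
opt_in.watch("Sensitive Documentary")
print(opt_in.shared_titles())   # [] — forgetting stays private
```

The design lesson is that the burden of memory falls on the viewer only in the opt-out case, which is exactly the failure mode the post worries about.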
posted by Deven Desai
Some day we might do away with pretextual traffic stops, because some day autonomous vehicles will be common. At ReInvent Law Silicon Valley, David Estrada of Google X made the pitch for laws to give autonomous vehicles a bright future. He went to core reasons such as fuel sustainability and faster commutes. He also used the tear-jerking commercial that showed the true benefit of enabling those who cannot drive to drive. I have heard that before. But I think David also said that the cars are required to obey all traffic laws.
If so, that has some interesting implications.
I think that once autonomous vehicles are on the road in large numbers, the police will not be able to claim that some minor traffic violation required pulling someone over and then searching the car. If a stop is made, then, much as in the Tesla testing dispute, the car will have rich data to verify that it was obeying the law.
These vehicles should also alter current government income streams. Such shifts are often not obvious at the start, but they hit home quickly. For example, when cell phones appeared, colleges lost the income from the high rates they charged for a phone in a dorm room. That had turned out to be a decent revenue stream. If autonomous vehicles obey traffic laws, income from traffic violations should go down. Cities, counties, and states will have to find new ways to make up that revenue stream. Insurance companies should see much lower income as well.
I love to drive. I will probably not like giving up that experience. Nonetheless, consider reduced traffic accidents, fewer drunk drivers, and more mobility for the elderly and the young. Imagine a car that handled shuttling kids from soccer, ballet, music, and so on, picking you up, dropping you home, and then gathering the kids while you cooked a meal (yes, should I have kids, I hope to cook for them). The time efficiency is great. Plus, one might subscribe to a car service so that the $10,000-$40,000 car does not spend most of the day in disuse. Add to all that a world where law enforcement is better used and insurance is less needed, and I may have to give in to a world where driving myself is a luxury.
posted by UCLA Law Review
Volume 60, Discourse
Edifying Thoughts of a Patent Watcher: The Nature of DNA
Dan L. Burk
David H. Kaye
posted by Frank Pasquale
Last month, I noted some important innovations in teaching, while striking a cautionary note about massive, open online courses (MOOCs). But for those who prefer MOOC-thusiasm, Tom Friedman’s recent column delivers:
You may think this MOOCs revolution is hyped, but my driver in Boston disagrees. You see, I was picked up at Logan Airport by my old friend Michael Sandel, who teaches the famous Socratic, 1,000-student “Justice” course at Harvard, which is launching March 12 as the first humanities offering on the M.I.T.-Harvard edX online learning platform. When he met me at the airport I saw he was wearing some very colorful sneakers.
“Where did you get those?” I asked. Well, Sandel explained, he had recently been in South Korea, where his Justice course has been translated into Korean and shown on national television. It has made him such a popular figure there that the Koreans asked him to throw out the ceremonial first pitch at a professional baseball game — and gave him the colored shoes to boot!
Friedman spends much of the remaining column arguing that universities need to a) get rid of “sage on a stage” lecture courses, while substituting in for them b) sages on YouTube like Sandel. The critical link to Education 2.0: intensive, individualized assessment & problem solving. So in Friedman’s ideal world, philosophers like Sandel would teach all the intro “Ethics” or “Justice” courses for millions, while local adjuncts would apply them to particular dilemmas (such as: should columnists disclose if they are “heirs to a multi-billion-dollar business empire”?).
The irony here is twofold. Read the rest of this post »
posted by Frank Pasquale
Gary King and Maya Sen have argued that traditional universities “can build on our tremendous advantage in research to improve teaching and learning.” In a recent article entitled “How Social Science Research Can Improve Teaching,” they give more details:
We marshal discoveries about human behavior and learning from social science research and show how they can be used to improve teaching and learning. The discoveries are easily stated as three social science generalizations: (1) social connections motivate, (2) teaching teaches the teacher, and (3) instant feedback improves learning. We show how to apply these generalizations via innovations in modern information technology inside, outside, and across university classrooms. We also give concrete examples of these ideas from innovations we have experimented with in our own teaching.
I don’t think all the ideas they propose in the piece could work in a law school context, but several seem well worth trying. I have found, for instance, that teaching a course in Health Data Analysis & Advocacy with a professor from my university’s math department has been a good “stretch” exercise for all involved. In other courses, I’ve tried to introduce students to various online communities that encourage learning about health law. (I’ve found that Twitter may well be the best place to keep track of what’s going on in the law and policy of health information technology.) The King/Sen paper offers many more ideas for promoting new kinds of learning, particularly for those willing to buck the MOOC trend with FASOCs (focused and small online courses).
posted by Frank Pasquale
What would a world of totally privatized justice look like? To take a more specific case: what if, in a Reputation Society, intermediaries unbound by legal restrictions could sort people as wheat or chaff, credit-worthy or deadbeat, reliable or lazy?
We’re well on our way to that laissez-faire nirvana for America’s credit bureaus. While they appear to be bound by the FCRA and a slew of regulations, enforcement is so wan that they essentially pick and choose which bits of law they want to follow and which they’d like to ignore. That, at least, is the inescapable conclusion of a brief but devastating portrait of the bureaus on 60 Minutes. Horror stories abound regarding the bureaus, but reporter Steve Kroft finds their deeper causes by documenting an abandonment of basic principles of due process:
Read the rest of this post »
posted by Deven Desai
Anyone interested in where legal practice may be headed should check out ReInvent Law Silicon Valley 2013 on March 8 at the Computer History Museum in Mountain View, CA (disclosure: I am a speaker). The conference is devoted to law, technology, innovation, and entrepreneurship in the legal services industry. Dan Katz gave an excellent talk at the mid-year AALS conference. He talked about how automated systems, machine learning, and more are defeating outsourcing and changing the face of legal practice. I nodded as what he said mapped to what I learned while I was at Google. In 2008 I started writing about problems with the structure of legal education. Those issues are now with us in full force. I think Dan and this project get to issues within the legal industry that may make the “what about firm jobs” question obsolete (which it may already be for a host of reasons) but present opportunities going forward.
Here is how he sums up the idea:
At all price points, the legal services market is rapidly changing and this disruption represents peril & possibility. This meeting is about the possibility … about the game changers who are already building the future of this industry. This is a 1 day event featuring 40 speakers in a high energy format with specific emphasis on technology, innovation and entrepreneurship. It will inspire you to consider all of the possibilities.
In that Silicon Valley way, it will be a blitz of 40 speakers covering LegalTechStartUp, Lawyer Regulation, Business of Law, Quantitative Legal Prediction, Design, 3D Printing, Driverless Cars, Legal Education, Legal Information Engineering, New Business Models, Lean Lawyering, Legal Supply Chain, Project Management, Technology Aided Access to Justice, Augmented Reality, Legal Process Outsourcing, Big Data, New Markets for Law, Virtual Law Practice, Information Visualization, E-Discovery, Legal Entrepreneurship, Legal Automation … and much more.
Tickets are Free but registration is required.
Please feel free to sign up today.
posted by Mary Anne Franks
It would be one thing if the only people defending the practice of non-consensual sexual activity were the easily identifiable misogynists, the ones who always come crawling out of the gutters to inject their poorly spelled and exclamation-point-filled victim-blaming screeds into any discussion of rape, sexual harassment, or gender inequality. But the victim-blaming rhetoric that has surfaced in the conversation about revenge porn is also coming from seemingly reasonable people – people who think deeply about other social and legal issues and who even seem to have some sympathy for the victims.
Let me take as one example a recent post in Forbes by someone I respect, Professor Eric Goldman. The post is titled “What Should We Do About Revenge Porn Sites Like Texxxan?” and the answer, apparently, is nothing. Prof. Goldman characterizes revenge porn as “distasteful,” likens it to the “bad etiquette” of checking out the price of a colleague’s home on Zillow, and concludes with this recommendation: “for individuals who would prefer not to be a revenge porn victim or otherwise have intimate depictions of themselves publicly disclosed, the advice is simple: don’t take nude photos or videos.”
The first thing that strikes me about Prof. Goldman’s discussion of revenge porn (and this is true of many discussions of the issue) is the failure to note its gendered dimensions. This is despite the fact that empirical evidence so far indicates that revenge porn is primarily produced and consumed by men and primarily targets women. Revenge porn belongs to that class of activities that includes rape, domestic violence, and sexual harassment – that is, the class of activities overwhelmingly (though of course not solely) perpetrated by men and directed overwhelmingly (again, not solely) at women. Like those activities, one major effect of revenge porn is to limit women’s freedom to live their lives: it punishes women and girls for engaging in activities that their male counterparts regularly undertake with minimal negative (and often positive) consequences. Read the rest of this post »