Author: Steven Bellovin

The Roots of Sharing

I’d like to thank Danielle Citron and everyone else at Concurring Opinions for inviting me to be part of this symposium.  I’m honored to be here again.

I want to start by discussing the culture of sharing in engineering, and in particular in the computer field; it far antedates Richard Stallman and the FSF.  I can point to some classic science fiction stories from the 1940s and early 1950s: George O. Smith’s Venus Equilateral collection.  The heroes are engineers—this is pure “wiring diagram” science fiction, where you feel that you almost have enough information to build the circuits described—and the villains, other than the laws of physics, are businessmen (and, I fear, lawyers).  That engineers will cooperate, even when they work for different (and rival) companies, is one of the themes of the stories.  Trying to understand, and to make things work in the face of an uncooperative Nature, took precedence over the mere commercial interests of the “suits”.

That attitude has certainly carried over to the real world.  In the IBM mainframe world, SHARE (which dates to the mid-1950s) has long served as a forum for computer programmers to exchange not just tips and success stories but also source code, both original and patches or additions to the code from IBM.  (IBM itself ran a library to distribute contributed software; these packages came from both its own employees and its customers.)  There were other organizations similar to SHARE, such as DECUS for users of Digital Equipment Corporation’s machines and Usenix for Unix users.  (The latter was originally known as the Unix User’s Group, until the prospect of commercialization led Bell Labs’ lawyers to insist on their trademark rights to “Unix”, circa 1980.)  In those early years, it was common wisdom that one should always bring a reel of mag tape to a Usenix meeting, in order to bring back useful software contributed by others.  As networking took over, the form of cooperation changed, but cooperation and sharing remained important goals.  Indeed, one of our primary goals when we came up with the idea for Usenet in 1979 was to provide an online forum for sharing and self-help.

This culture—the one in which Stallman came of age professionally—was born in part of economic necessity; changing economics threatened it.  SHARE et al. arose because there were too few computers in the world to support much of an independent software industry.  IBM and its rivals all bundled software with their machines not so much from a desire to be monopolists as because the very expensive hardware was useless without software.  It would have been like selling cars without seats: in principle, you could buy them elsewhere, but in fact there weren’t any other manufacturers, because there were too few “cars” to make it worthwhile.

As Coleman notes, IBM switched to an unbundled model circa 1969.  This was certainly in response to a threatened antitrust suit, but it was also a response to the changing market.  Not only were there nascent software vendors, there were also companies making clones or near-clones of IBM mainframe computers.  By 1980, software had acquired a value independent of the hardware it ran on.  That in turn led companies to restrict access to their source code, which threatened not so much the ideology of sharing as the practical ability to act on it.  Things were not necessarily completely locked down; IBM, for example, still sold source code to its operating systems, albeit in a form that for technical reasons wasn’t nearly as useful.  (For reasons I’ll discuss in a later post, it is less than clear that the change in copyright law was a major factor.)

Stallman’s Manifesto, then, had two components.  The first was an attempt to preserve a culture in the face of economic and technological change.  By itself, that can be seen as a quixotic enterprise; trying to hold back the tides of time is rarely successful.  Stallman took it a step further, though: he elevated sharing to a moral principle and effectively hacked copyright law to enforce his views via the GPL.  It is an interesting question to what extent the open source software movement, built around the same culture of sharing but without the mandatory aspects of the GPL, would have thrived without the free software movement.  To give one very specific example, BSD systems (also Unix-like systems, derived from the so-called Berkeley Software Distributions that built on Bell Labs’ originals) are considered by many (including myself) to be technically cleaner; however, their development was hindered by lawsuits, internecine conflicts, and personality clashes.  The net result was that Linux won mindshare and market share—but could BSD have survived at all without the free software movement providing philosophical cover for the culture of sharing?  It is an interesting, albeit perhaps unanswerable, question.

A New Threat to Generativity

The symposium is over, but when I saw an important news item about a new threat to generativity, Danielle graciously urged me to post one last message to this blog.

A big player — one of the very biggest, Intel — has embarked on a new strategy, including a major corporate acquisition, that poses major threats to generativity.  Specifically, according to a news report on Ars Technica, Intel is planning to add hardware support for “known good only” execution.  That is, instead of today’s model of anti-virus software, which relies on a database of known-bad patterns, Intel wants to move to a hardware model where only software from known-good sources will be trusted.  For a number of reasons, including the fact that it won’t work very well, this could be a very dangerous development.  More below the fold.
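
To make the distinction concrete, here is a minimal sketch in Python contrasting the two policies: a blocklist of known-bad signatures versus an allowlist of known-good ones.  The hash databases and function names are invented for illustration and have nothing to do with Intel’s actual design.

    import hashlib
    from pathlib import Path

    # Hypothetical databases; real products rely on signed, constantly updated feeds.
    KNOWN_BAD_HASHES: set[str] = {"0" * 64}   # placeholder malware signature
    KNOWN_GOOD_HASHES: set[str] = set()       # hashes of vetted, vendor-approved software

    def sha256_of(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def blocklist_allows(path: Path) -> bool:
        # Today's anti-virus model: run anything not known to be bad.
        return sha256_of(path) not in KNOWN_BAD_HASHES

    def allowlist_allows(path: Path) -> bool:
        # "Known good only": refuse anything not affirmatively trusted.
        return sha256_of(path) in KNOWN_GOOD_HASHES

The asymmetry is the crux: a blocklist fails open for anything new, while an allowlist fails closed, and failing closed for software from unknown sources is precisely what makes such a scheme hostile to generativity.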

Future of the Internet Symposium: Identity

Zittrain’s book mentions en passant that, unlike closed, proprietary services, the Internet has no authentication; he also suggests that this is tied to the alleged lack of consideration for security by the Internet’s designers.  I won’t go into the latter, save to note that I regard it as a calumny; within the limits of the understanding of security 30 years ago, the designers did a pretty good job, because they felt that what was really at risk — the computers attached to the net — needed to protect themselves, and that there was nothing the network could or should do to help.  This is in fact deeply related to Zittrain’s thesis about the open nature of the Internet, but I doubt I’ll have time to write that up before this symposium ends.

The question of identity, though, is more interesting; it illustrates how subtle technical design decisions can force certain policy decisions, much along the lines that Lessig set forth in Code.  We must start, though, by defining “identity”.  What is it, and in particular what is it in an Internet context?  Let me rephrase the question: who are you?  A name?  A reputation?  A fingerprint?  Some DNA?  A “soul”?

Tolkien probably expressed the dilemma best in a conversation between Frodo and Tom Bombadil in The Lord of the Rings:

‘Who are you, Master?’ he asked.

‘Eh, what?’ said Tom sitting up, and his eyes glinting in the gloom. ‘Don’t you know my name yet? That’s the only answer. Tell me, who are you, alone, yourself and nameless?’

We are, in some sense, our names, with all the baggage appertaining thereto.  For some web sites, you can pick an arbitrary name and no one will know or care if it’s your legal name.  For other purposes, though, you’re asked to prove your identity, perhaps via the oft-requested “government-issued photo ID”.  In other words, we have a second player: an authority who vouches for someone’s name.  This authority has to be mutually trusted — I’m not going to prove my identity to Mafia, Inc., by giving them my social security number, birthdate, etc., and you’re not likely to believe what they say.  Who is trusted will vary, depending on the circumstances; a passport issued by the government of Elbonia might be sufficient to enter the US, but MI-6 would not accept such a document even if it were in the name of James Bond.  This brings up the third player: the acceptor or verifier.
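
To make the three roles concrete, here is a small, purely hypothetical sketch in Python (using the third-party cryptography package): an authority signs a statement vouching for a name, and a verifier accepts the claim only if it already trusts that authority.  The names, keys, and message format are all invented.

    # pip install cryptography  (third-party package; the scenario below is invented)
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The vouching authority holds a signing key; verifiers know its public key.
    authority_key = Ed25519PrivateKey.generate()
    TRUSTED_AUTHORITIES = {"Dept. of State": authority_key.public_key()}

    def issue_credential(issuer: str, subject: str) -> tuple[bytes, bytes]:
        """The authority vouches for a name by signing a statement about it."""
        statement = f"{issuer} vouches for {subject}".encode()
        return statement, authority_key.sign(statement)

    def verify_credential(statement: bytes, signature: bytes) -> bool:
        """The verifier accepts the claim only if a trusted authority signed it."""
        issuer = statement.decode().split(" vouches for ")[0]
        public_key = TRUSTED_AUTHORITIES.get(issuer)
        if public_key is None:        # e.g., Mafia, Inc., or the government of Elbonia
            return False
        try:
            public_key.verify(signature, statement)
            return True
        except InvalidSignature:
            return False

    stmt, sig = issue_credential("Dept. of State", "James Bond")
    print(verify_credential(stmt, sig))   # True: trusted issuer, valid signature

On the open Internet, the sticking point is that table of trusted authorities: there is no single list that every verifier shares.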

When dealing with closed, proprietary networks, the vouching authority and the acceptor are one and the same.  More to the point, the resources you are accessing all belong to the verifier.  The Internet, though, is inherently decentralized.  It is literally a “network of networks”; no one party controls them all.  Furthermore, the resource of most interest — end-systems — may belong to people who don’t own any networks; they just buy connectivity from someone else.  Who are the verifiers?

A biometric — fingerprints, DNA, retina prints, even “soul prints” — doesn’t help over the net.  The verifier simply sees a string of bits; it has no knowledge of where they’re from.  You may authenticate yourself to a local device via a biometric, but it in turn will just send bits upstream.
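
A tiny, invented sketch of why: by the time a “fingerprint” reaches a remote verifier, it is just a string of bytes, indistinguishable from a replay of the same bytes captured earlier.  (The template and matching rule here are made up; real systems are fancier, but the verifier still sees only bits.)

    import hashlib

    # What a naive remote verifier stores: a hash of the enrolled "biometric template".
    enrolled = hashlib.sha256(b"hypothetical-fingerprint-template").hexdigest()

    def remote_verify(received_bits: bytes) -> bool:
        # The verifier only ever sees bits; it cannot tell a live finger from a replay.
        return hashlib.sha256(received_bits).hexdigest() == enrolled

    captured = b"hypothetical-fingerprint-template"   # the same bits, sniffed or copied once
    print(remote_verify(captured))                    # True: a replay authenticates just as well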

Because of this decentralized nature, there is no single verifying party.  I somehow have to authenticate to my ISP.  In dial-up days, this was done when I connected to the network; today, it’s done by physical connection (e.g., the DSL wire to my house) or at network log-in time in WiFi hotspots.  My packets, though, will traverse very many networks on the way to their destination.  Must each of them be a verifier?  I can’t even tell a priori what networks my packets will use (see the previous discussion on interconnection agreements); I certainly don’t have business relationships with them, nor do I know whom they will consider acceptable identity vouchers.

This isn’t just a performance issue, though I should note that verifying every packet in the core of the Internet was well beyond the state of the art 30 years ago, and may still be impossible.  It is an architectural limitation, stemming from the decision in the late 1970s to avoid a centrally-controlled core.

The design of the Internet dictates that you are only strongly authenticated to your local host or site.  Anything beyond that is either taken on faith or is done by end-to-end authentication.  That, though, is exactly how the Internet was designed to operate, and it doesn’t assume that any two parties even have the same notion of identity.  My identity on my phone is a phone number; my login on my computers is “smb”; my university thinks I’m smb2132; Concurring Opinions knows me by my full name.  Which is correct?  Any and all — the Internet is too decentralized for any one notion of identity.  Had the designers created a more centralized network, you might indeed be able to authenticate to the core.  But there is no core, with all of the consequences, good and bad, that that implies.

(This is my last post of the symposium.  I’ll be offline for a few days; when I come back online, I may add a few comments.  I’ve very much enjoyed participating.)

Future of the Internet Symposium: Why Appliances Won’t Work

I commented earlier that I doubted that a “banking appliance” layered on top of a generic PC would indeed be secure.  Examining that statement sheds light on the limitations of computer security.

We first must define what we mean by “secure”.  The usual computer science definition is the so-called “CIA triad”: Confidentiality, Integrity, and Availability.  That is, private data should stay private, no unauthorized changes should be made to anything, and you should always be able to do your banking when you want to, regardless of what the malware is doing.  I don’t think that appliances can do this.

It is clear from the start that overlay appliances cannot preserve availability; if nothing else, malware on the base operating system can delete any files used by the appliance.  Perhaps those files are encrypted, but that doesn’t protect them from deletion or from being overwritten with garbage.  This is probably a minor concern, though; empirically, there have been few recent attacks on the availability of desktop machines because it’s harder to make money that way.  There have been a few, though, notably programs that encrypt people’s files but offer to decrypt them if a ransom is paid.
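
Here is a minimal sketch, with invented file names, of that availability point: even if the appliance’s data file is encrypted so that the host operating system cannot read it, any process with filesystem access can still overwrite or delete it.

    import os
    from pathlib import Path

    appliance_file = Path("banking-appliance.dat")     # invented name
    appliance_file.write_bytes(os.urandom(1024))       # stands in for encrypted appliance state

    # Malware on the host OS cannot decrypt the contents...
    opaque = appliance_file.read_bytes()               # confidentiality may well hold

    # ...but it can destroy them anyway: overwrite with garbage, or simply delete.
    appliance_file.write_bytes(b"\x00" * 1024)         # integrity and availability are gone
    appliance_file.unlink()                            # or just remove the file outright
    print(appliance_file.exists())                     # False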

Future of the Internet Symposium: The Roles of Technology and Economics

I’m delighted to have this opportunity to participate in this symposium.  I’m a computer scientist, not a law professor; most of my comments will tend to be at the intersection of technology and public policy.

When reading Jonathan Zittrain’s book — and I agree with his overall thesis about generativity — it’s important to take into account what was technically and economically possible at various times.  Things that are obvious in retrospect may have been obvious way back when, too, but the technology didn’t exist to do them in any affordable fashion.  While I feel that there are a number of sometimes-serious historical errors in the early part of the book — for example, AT&T, even as a monopoly, not only leased modems but also modified its core network to support them; data networking was not solely a post-Carterfone phenomenon — the more serious problems stem from ignoring this perspective.  I’ll focus on one case in point: the alleged IBM control of mainframes.
