Colour codes for "unremarkable" instantiations:
reduced before consonant ― unreduced before vowel
Colour codes for "remarkable" cases:
unreduced before consonant ― unreduced, then changed to reduced, before consonant ― reduced, then changed to unreduced, before consonant
(I haven't included "an", since the pronunciation is entirely unremarkable. "Consonant" includes [j], as in "United States". Reduced vowels: [ðə]/[ə] ― unreduced vowels: [ði:]/[ej].)
[I'd like to talk about the interaction between DRM and] public policy, but I'm not going to come at that from the ordinary direction of saying what public policy should be about DRM. I want to talk instead about the impact that DRM has on the public policy process related to other issues. That is, my argument will be that DRM not only is a public policy issue in itself, but has a significant negative impact on the public policy debate. Basically this stems from the fact that DRM strategies tend to take devices, whether they are computers or media players, and turn them into black boxes: black boxes that users are not supposed to, or allowed to, analyze or examine or understand. This goes under a lot of different euphemistic names. Sometimes it's called a secure execution environment; sometimes people say that the device is an appliance, although that's also a misnomer, since it's not like any normal appliance you might have in your house; sometimes it's called the robustness requirement. But all of these things really mean that the technology is supposed to be a black box; you're not supposed to be able to look inside it. And this black box effect tends to grow over the scope of the system. For example, with a computer system you might say that only the part that deals with the media has to be a black box, but the boundaries of that black box tend to grow, because there's concern that the content will be grabbed off of the video card or the audio card, that it will be grabbed off of the disk, that it will be grabbed as it goes across the system's I/O bus, and so on. And the result is that the entire device tends to get turned into a black box. There's a combination of technology and law that's used to try to make these devices into black boxes. The devices are engineered in a way that armors them, so that it's technically difficult to analyze or understand what's happening inside the device.
The use of a particular kind of black box design may be mandated by law; that's essentially what the tech mandates in the Hollings Bill would do. And possibly the black box nature of the systems is backed by laws like the DMCA, which tend to ban analysis or tinkering or discussion related to the device. So as a result of all of this, DRM and the things that come with DRM turn technological devices into black boxes. Now, the other side of this has to do with the interaction between technology and public policy. There are a lot of important policy questions that depend in an intimate way on understanding technology, and understanding of the technology is an important input to making reasonable public policy decisions. And this is especially true right now with respect to the things that are at stake with DRM. And so I'm going to argue that bans on understanding technology tend to cripple the public debate about these issues.
Now there are lots of examples of issues in which this is true, and I want to give you three examples. But just to raise the degree of difficulty a little bit, and hopefully help convince you that there are many, many examples, I'm going to use examples that other people have already mentioned at the conference. The first one was mentioned by Dave Farber this morning: the Total Information Awareness Program. This is obviously a public policy issue that's very much at the forefront now. I managed to get a copy of their logo off another website, since they've taken it down; the logo's not too popular. I imagine the name Total Information Awareness is likely to get changed to something like Next Generation Secure Information Awareness. So here's the public policy issue with TIA. Law enforcement and intelligence communities in the United States want to mine commercial databases. They want to do it for good reasons: to catch people who would like to blow us up. But there is a significant privacy issue involved here. The advocates of TIA say that we shouldn't worry too much about abuses by rogue agents, by rogue law enforcement personnel, because methods like DRM, methods designed to prevent misuse of information or violations of policy, will prevent them. Is this true? Well, if you want to know, then you need to understand the black boxes; you need to understand the efficacy of DRM technology, whether it's going to work. You need to be able to take a skeptical look at this technology and understand how much we can count on it. And that's an important factor in any public policy decision that one might make about TIA.
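[A minimal, hypothetical Python sketch of the kind of "methods designed to prevent misuse of information" that TIA's advocates invoke: every database query is checked against a policy and logged. None of the names or rules below come from any actual TIA design; they are assumptions for illustration. The point is that whether such a gate can be bypassed, or its log altered, is exactly the question that requires looking inside the black box.]

```python
# Hypothetical policy-enforcement gate, for illustration only.
import datetime

AUDIT_LOG = []

def policy_allows(agent, query):
    """Toy policy: an agent may only pull records for their own open cases."""
    return query["case_id"] in agent["open_cases"]

def run_query(agent, query, database):
    """Check the policy, log the attempt, and refuse disallowed queries."""
    allowed = policy_allows(agent, query)
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent["name"],
        "case_id": query["case_id"],
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError("query outside the agent's open cases")
    return [row for row in database if row["case_id"] == query["case_id"]]

db = [{"case_id": "C-1", "name": "suspect A"},
      {"case_id": "C-2", "name": "citizen B"}]
agent = {"name": "agent_x", "open_cases": {"C-1"}}
print(run_query(agent, {"case_id": "C-1"}, db))  # permitted and logged
try:
    run_query(agent, {"case_id": "C-2"}, db)     # refused, but still logged
except PermissionError as err:
    print("blocked:", err)
```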
My second example comes from Bob Blakley's talk this morning: it's the Girls Gone Wild video or, in particular, the attempt to block it. Another public policy issue before us has to do with blocking and filtering technology, for example to block porn. There are products out there that claim to block pornographic websites, and they claim not to block non-porn content. Is this true? Should we use this technology in schools and libraries and homes and so on? The advocates of this technology claim that we shouldn't worry about overblocking because their block list, the list of sites to block, is accurate. Is that true? If we want to know whether or not it's true, we need to be able to open up their black box and see what their block list actually is. There's a lawsuit being brought by Ben Edelman, a researcher at Harvard, about this very issue under the DMCA. We need to look inside that black box in order to understand the accuracy of the block list, and again, that's an important input to the public policy decision that needs to get made.
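[Another editorial sketch, again hypothetical: all of the host names below are invented, using the reserved .test domain, and no real vendor's list is shown. It illustrates why overblocking claims can't be evaluated from the outside: you can only measure a list's accuracy by comparing it against sites known to be legitimate, which requires seeing the list itself.]

```python
# Invented block list and test sample, for illustration only.
BLOCK_LIST = {
    "porn-site-1.test",
    "porn-site-2.test",
    "breast-cancer-support.test",  # a hypothetical overblocking error
}

KNOWN_LEGITIMATE = {"breast-cancer-support.test", "city-library.test"}

def is_blocked(host: str) -> bool:
    """The filter's behavior as users see it: block anything on the list."""
    return host in BLOCK_LIST

overblocked = sorted(h for h in KNOWN_LEGITIMATE if is_blocked(h))
print("overblocked:", overblocked)
# -> overblocked: ['breast-cancer-support.test']
```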
My third example comes from a question that Barbara Simons asked yesterday about electronic voting. This is an electronic voting machine, all electronic. You walk up to it, you push some buttons on the front, and it records your vote. At the end of the election, it spits out a count of how many votes were cast for each candidate, or at least we hope it does that. After the Florida 2000 election, there was a big push toward different voting technology, in particular computerized voting machines. Counties all over the place are looking at that. Santa Clara County, California is in the middle of a decision process, and my own county, Mercer County, in New Jersey, is also in the throes of a decision about whether to go ahead with computerized voting, or what kind of computerized voting. And you face a lot of tradeoffs there. There's no doubt that direct recording electronic machines, the all-electronic machines, are convenient to use, and at the end of the election you get a count really fast. The big problem, though, is the risk of fraud. How do you know that the election result is right? How do you know that there hasn't been some sort of horrible mistake? Of course, this is a problem that goes back a long time in elections, but we change it when we move to an electronic system; we change the kind of failure modes that we face. The advocates of these technologies, mostly the vendors, say: don't worry about tampering; we use methods to seal the devices so that outsiders can't tamper with them, so that people at the polling place can't tamper with them. They claim to use methods that prevent even their own engineers from changing what the machines do. Is that true? Do the technologies they use actually prevent tampering? How difficult is it to tamper? Is it even possible to do that? We need to understand the black boxes that they're building, and we need to understand black box technologies in general, to be able to evaluate that.

In all three of these cases, all three of the policy examples, we need to answer basic technological questions, about black boxes in general and about specific black boxes, in order to make good public policy decisions. Given more time, I could go on and on and talk about other examples. There are lots of examples related to technology policy, to regulation of spectrum, and so on; examples related to defense, and so on. But rather than go on, what I'll do is stop here and just point out that in my view this is a serious problem. People don't understand enough about technology. Technology, God knows, is hard enough to figure out. What we don't need to do is make it harder. Thank you.