According to an article in the 20 Mar 2005 issue of *The Star-Ledger* (Newark, NJ), New Jersey's Essex County Jail has experienced another failure of its touchscreen-based physical access control system.
http://www.nj.com/news/ledger/essex/index.ssf?/base/news-6/1111297930265190.xml
Essex County Executive Joseph DiVincenzo sounds like he's never encountered a competent systems engineer: "These things happen. This is not the first time it's happened. ... It's happened a couple of times already, and it's not going to be the last time either." Police Benevolent Association Local 157 president Joe Amato has a more practical view: "Modern technology at its finest. Who needs it? ... An old-fashioned turnkey operation would have just been fine, but we spent millions for a high-tech computer-controlled jail that isn't worth the contaminated dirt that it's built on." One wonders how well the system will function once the inmates get their hands on it.
Recently in France a number of failures of "cruise control" systems, especially on recent Renault models, have been reported, some causing serious accidents (including a fatal one). It is generally reported that the car stays at its set speed and, no matter what the driver does, including cutting the ignition and braking, continues at that speed. More surprising, it is also reported that the brakes become ineffective (the brake pedal resists pressure). Since the cruise control is presumably run by a microprocessor, I could imagine that this microprocessor might "hang" due to some software problem, so that everything it controls simply stays as it is, especially in newer cars where fuel injection is completely electronically controlled (no mechanical link between the gas pedal and the fuel injection controls). However, I have difficulty believing that the same microprocessor would control the brakes and make them ineffective. I wonder whether somebody on this board has some insight into how the electronic controls of modern cars are designed, and especially whether a single component's failure (such as a common microprocessor) could affect multiple functions (e.g., acceleration and brakes).
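One standard defense against exactly this failure mode in embedded controllers is a watchdog timer: the control loop must periodically "pet" the watchdog, and if it stops doing so (e.g., because the software has hung), the watchdog forces a reset that returns the actuators to a safe default. A minimal sketch of the pattern, simulated in Python (purely illustrative; not drawn from any actual automotive firmware):

  import threading
  import time

  class Watchdog:
      """Forces a reset if not 'petted' within the timeout."""
      def __init__(self, timeout, on_expire):
          self.timeout = timeout
          self.on_expire = on_expire
          self.timer = None

      def pet(self):
          # Called by the control loop on every healthy iteration.
          if self.timer:
              self.timer.cancel()
          self.timer = threading.Timer(self.timeout, self.on_expire)
          self.timer.daemon = True
          self.timer.start()

  def reset_to_safe_state():
      # In a real ECU this would be a hardware reset that drops
      # throttle and cruise control back to a safe default state.
      print("WATCHDOG EXPIRED: resetting controller to safe state")

  wd = Watchdog(timeout=0.5, on_expire=reset_to_safe_state)
  for _ in range(3):
      wd.pet()            # loop is alive; watchdog stays quiet
      time.sleep(0.1)     # normal control-loop work
  time.sleep(1.0)         # simulate a hang: no pets, so the reset fires

A controller without such a mechanism (or one whose watchdog is petted from an interrupt that keeps running even when the main loop is wedged) would behave exactly as described above: its outputs simply freeze at their last commanded values.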
[Source: Amy Schatz <Amy.Schatz@wsj.com>, *Wall Street Journal*, 25 Mar 2005, A4; PGN-ed]
http://online.wsj.com/article/0,,SB111172077661889592,00.html?mod=todays_us_page_one

A new government report says officials in the Department of Homeland Security didn't do enough to keep airline-passenger data secure when using it to test a traveler-screening program. DHS's Inspector General says the Transportation Security Administration gathered 12 million passenger records from February 2002 to June 2003 and used most of them to test the Computer Assisted Passenger Prescreening System, or CAPPS 2, which was designed to check passenger names against government watch lists. Passengers weren't told their information was being used for testing.

TSA officials shelved CAPPS 2 last year amid complaints that it was an invasion of passenger privacy. The agency has replaced it with a similar system, called Secure Flight, which is being tested and is expected to debut in August. The report raises concerns because Secure Flight ultimately will gather private information, such as names, addresses, travel itineraries, and credit-card information, on anyone who takes a domestic flight. That effort could be slowed by a Government Accountability Office study due Monday, which is expected to be critical of TSA's efforts to develop passenger-privacy protections.

The report said TSA "did not ensure that privacy protections were in place for all of the passenger data transfers" and noted that "early TSA and [CAPPS 2] efforts were pursued in an environment of controlled chaos and crisis mode after the Sept. 11 attacks." Investigators also found that TSA provided inaccurate information to the media about the agency's use of real passenger records for CAPPS 2 testing and wasn't "fully forthcoming" to the agency's own internal privacy officer during an investigation into the matter. "Although we found no evidence of deliberate deception, the evidence of faulty processes is substantial," investigators said.
By Jacqueline Emigh, eweek.com, 23 Mar 2005

After uncovering a security weakness in a radio-frequency identification tag from Texas Instruments Inc., researchers from RSA Security Inc.'s RSA Laboratories and The Johns Hopkins University are now eyeing future exploits against other RFID products in the interests of better security, one of the researchers said this week. Meanwhile, TI will keep making the compromised RFID tag in order to meet the needs of applications more sensitive to speed and pricing than to privacy, according to a TI official.

The Johns Hopkins University Information Security Institute and RSA first publicized their findings about the RFID security hole in January. In a paper posted at www.rfidanalysis.org, the researchers claim that by cracking a proprietary cipher, or encryption algorithm, in one of TI's DST (digital signature transponder) RFID tags, they were able to circumvent the tags' built-in security enough to buy gasoline and turn on a car's ignition. The researchers from Johns Hopkins and RSA reverse-engineered and emulated the 40-bit encryption over two months.

[The full article is on IP:] IP Archives at: http://www.interesting-people.org/archives/interesting-people/
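For context, a 40-bit key space is tiny by modern standards: 2^40, or roughly 1.1 trillion candidate keys, is well within reach of exhaustive search. A back-of-the-envelope sketch in Python (the keys-per-second rates below are illustrative assumptions, not the researchers' measured figures):

  KEYSPACE = 2 ** 40  # ~1.1e12 candidate keys for a 40-bit cipher

  # Illustrative search rates (assumptions, not measured figures).
  rates = {
      "single PC, software": 1e6,        # keys/second
      "one FPGA":            1e7,
      "16 parallel FPGAs":   16 * 1e7,
  }

  for platform, rate in rates.items():
      hours = KEYSPACE / rate / 3600
      print(f"{platform:>22}: ~{hours:,.1f} hours to exhaust the key space")

Under these assumed rates, even a modest parallel-hardware attacker exhausts the entire key space in hours, which is consistent with the general lesson that a proprietary cipher with a 40-bit key offers little real protection.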
Police foil 220-million-pound 'keyboard hacker' raid on bank, *Times Online*, 17 Mar 2005
http://www.timesonline.co.uk/article/0,,2-1529429,00.html
The note in RISKS-23.79 regarding CPOE [Computerized Physician Order Entry] systems prompts me to point to our website, www.ctlab.org, where some of the papers cited by the JAMA articles by Koppel et al. and Garg et al., and by the accompanying editorial, may be found.

The finding that CPOE is a source of new forms of failure is not surprising. We have, indeed, predicted this for at least a decade, as the editorial by Wears and Berg points out. Nor is it surprising that some continue to claim that most medical "mistakes" are "caused by humans". Although this notion of error has been thoroughly debunked over the past twenty-five years, the idea is deeply ingrained. The scientific understanding of the nature of human performance, technology, and complex systems and their failures traces back to the aftermath of the Three Mile Island nuclear event in 1979. Woods, Norman, Rasmussen, Hollnagel, Senders, Moray, Wreathall, and many others spent fifteen years understanding the relationships between failure and success in domains including aviation, nuclear power generation, and, more recently, healthcare. [There is a useful bibliography in the short paper "A Brief Look at the New Look in Complex System Failure, Error, and Safety", which can be found at www.ctlab.org.]

When the patient safety movement began in the 1990s, our hope was that healthcare could avoid getting caught up in the sterile business of error attribution and counting and quickly move to the modern view of failure and success. Several scientists, notably David Woods, spent a great deal of time and effort with groups like the National Patient Safety Foundation in an effort to 'jump start' healthcare's work on safety. We achieved only a partial success --- the healthcare world did 'discover' error and become fascinated by it, but, after a decade, most of the leadership now understands that the pursuit of 'error' is unproductive and a mistaken goal.

The JAMA papers and editorial are correct in their assessment of the current state of Clinical Healthcare Information Technology (CHIT). What is missing from the JAMA paper on CPOE, and also from the editorial by Wears and Berg, is a clear understanding of why current CPOE is so badly suited to the task of improving safety. Neither the paper's authors nor the editorial writers are able to look deeply into the design features of these systems or the work that they are supposed to support. Such close examination reveals, as RISKS readers will already have anticipated, that it is the failure to produce USER-centered design that is the root cause of the poor performance of CHIT.

The complex activity network that produces patient care is perhaps the most difficult place to insert interactive computing aids, and the designers of these systems have done little to understand the patterns of work that occur there or the kinds of support that would be helpful; the paper by Patterson et al. in J Am Med Informatics Assn 2002;9:540-53 provides a detailed study. The result is TECHNOLOGY-centered systems that generate failures because they are so ill-suited to the work at hand. Of course the designers of these things were certain that they were making user-centered designs, but the actual results are thoroughly technology centered. As David Woods said, "the road to technology centered systems is paved with user centered intentions." We know, in principle and through demonstration, what it takes to make good CHIT.
As Nemeth et al. point out in a recent issue of IEEE Transactions on Systems, Man, and Cybernetics (Part A, vol. 34, 2004), what is needed is a detailed, calibrated understanding of the actual task requirements of the work domain and the tradeoffs and strategies that workers use to meet these demands. There are excellent examples of this sort of approach available but, like all good design, it takes time, money, and more than a little sophistication to do it.

The rush to eliminate "human error" from medicine has led an eager and somewhat naive group to insist that new CHIT be put in place to forestall error by practitioners. Fueled by folk models of human error, this insistence has produced a whole lot of CHIT that will be the source of a steady stream of interesting failures over the next decade. It is unsettling and disappointing to realize that the efforts to produce really good CHIT are going to require a great deal more time, effort, and money than has been budgeted. But RISKS readers will recognize that this too is a common experience with large systems.

Many hospitals are already deeply involved in buying and installing new CHIT, and the strong government pressure to continue this effort is likely to continue. We can only hope that a parallel effort to understand the technical work of healthcare will be undertaken so that, in time, it will be possible to make better, more useful, more user-centered technology.

Richard Cook, MD, Cognitive Technologies Laboratory, University of Chicago
In RISKS-23.79, Bob Morrell once again wants to blame the human for error in complex medical systems. Geesh, I thought that RISKS readers knew better. Yes, people do make mistakes, but as I and many others have repeatedly pointed out, in complex systems there is seldom a single point of failure, so trying to assess "the" cause of an error is counterproductive. Yes, it feels good to be able to blame some person or thing, but this is what I have called the "blame and train" philosophy. It fails to fix the complex underlying causes.

If there really is a single point of failure, especially one that repeats over time, the proper response is to make the system insensitive to this problem. If we know that a system component is noisy or error-prone (a transducer, say, or a noisy transmission line), we take care to design the system to be tolerant of those problems: we use error-correcting codes, or redundancy, or we change the procedures. This is common practice with mechanical and electronic components, but almost never with people. When people err in this fashion, we punish them, which does nothing to get at the real cause. We know people transpose digits, confuse complex directions, and make other well-known and simple errors. Therefore it is a system error not to have designed the system to be tolerant of these problems.

Morrell gives the following example: "The most common mistake, at its core, was raw human misunderstanding: conceptual misunderstanding leading to misinterpretation of medical data (surgeons who thought the higher the bacterial MIC number, the better the antibiotic, when the reverse is true...)". Gee, what stupid surgeons -- at least that is what we are supposed to believe.

Even this simple example is open to question. These surgeons sound incompetent: why couldn't they remember that higher MIC numbers are bad? Well, how many arbitrary little rules do surgeons have to remember? Note that the human default is that high numbers are good (and that "up" maps to "higher," "more," "larger," "louder," etc., all of which usually are interpreted as "good"). In general, larger numbers are taken to mean better (hence all the jokes about excellent golf and bowling scores). So assuming that a high MIC is better makes sense. To understand whether this was surgeon stupidity or a system problem, I would ask how many such rules had to be learned, how consistent they were, and how frequently this one came into play.

Indeed, what is the meaning of an MIC number? A quick Internet search reveals these two definitions of MIC (from very different sources):

Definition 1: "The MIC of a drug is defined in broth as the lowest concentration that prevents visible turbidity of the broth following the overnight incubation of 10^5-10^6 colony-forming units (CFU)/ml (obtained during the log phase of growth)."

Definition 2: "The lowest concentration of antimicrobial agent that inhibits the growth of the microorganism is the minimal inhibitory concentration (MIC). The MIC and the zone diameter of inhibition are inversely correlated (Fig. 10-5). In other words, the more susceptible the microorganism is to the antimicrobial agent, the lower the MIC and the larger the zone of inhibition. Conversely, the more resistant the microorganism, the higher the MIC and the smaller the zone of inhibition."

(I am tempted to say: case closed. Quick: is high MIC good or bad? Rule of thumb: any definition that has to contain the phrase "in other words" is a definition in trouble.
In this case, after reading the "in other words" phrase, I still don't know. I think this means that a high MIC number is good for the organism but bad for the physician trying to kill it. I still have no idea how this translates into the MIC rating for an antibiotic.)

Folks, there are major system errors here. Don't be so quick to blame the people, even if surface evidence indicts them. The problems are rich, complex, and deep. MIC is perhaps a wonderful term for scientists, but it is a bad term to be used by practitioners. I sympathize with the surgeons. We need systems thinking, and a deep understanding of the complex context in which medicine is practiced, before we can assess blame and before we can start to fix the problem, whether with technology or not.

The RISK here is enormous. Well-meaning people claim that technology will fix the problem of medical errors. Nonsense. Technology is a tool, and whether it is effective or even more damaging depends upon how it is deployed. Thinking there is a single source of error -- and therefore a single problem to be solved -- will lead us to even worse problems.

Don Norman, Nielsen Norman Group, Northwestern University, email@example.com, www.jnd.org
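[To make the inverted scale concrete: MIC is the lowest drug concentration that inhibits growth, so a *lower* MIC means a *more* effective antibiotic against that organism. A tiny sketch in Python (the breakpoint value is hypothetical, for illustration only; real clinical breakpoints vary by drug and organism):

  def interpret_mic(mic_ug_per_ml, susceptible_breakpoint=4.0):
      # Lower MIC = less drug needed to inhibit the organism = better.
      if mic_ug_per_ml <= susceptible_breakpoint:
          return "susceptible"
      return "resistant"

  print(interpret_mic(1.0))   # susceptible: the drug works at low doses
  print(interpret_mic(16.0))  # resistant: it takes too much drug

Note that this mapping runs directly against the "higher is better" default Norman describes.]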
IE appears to be insecure in part because of flawed logical thinking by its development team. There is currently a debate of sorts in the news between Mitchell Baker ("chief lizard wrangler" of the Mozilla Foundation) and Dave Massy (head developer of Internet Explorer) over which web browser is more secure. In a recent ZDNet article (also covered on Slashdot; see links at end), Baker points out that, since IE is tightly coupled to the Microsoft Windows operating system, it is bound to be less secure than Mozilla, which is well separated from its host OS. Dave Massy's reply is very interesting (link at bottom):

>The issue of not being part of the OS is an interesting one though that
>is frequently the subject of misunderstanding. IE is part of [Microsoft
>Windows] so that parts of the SO [sic] and other applicaaitons [sic] can
>rely on the functionality and APIs being present. IE in turn relies on OS
>functionality to do it's [sic] job. To be clear there are no OS APIs that
>IE uses that are not documented on MSDN as part of the platform SDK and
>available to other browsers and any other software that runs on Windows.

Dave is making a flawed argument:

Premises:
 - IE uses a documented interface to the OS.
 - The OS interface is available to other software on the OS.
Conclusion:
 - The complexity of our interface is irrelevant to security.

The argument is wrong for two reasons: there is a false hidden premise (that the OS is bulletproof), and the argument itself is invalid (even if the hidden premise were true, the conclusion would not follow); a small sketch of this second point appears after the references below. One need only read back issues of RISKS to find case after case of complex, unanticipated failure modes in complicated interfaces, each element of which was thought to be secure. That lesson is at least 30 years old -- I am thinking of the stories about hidden data channels in Multics.

This is of interest to RISKS readers because it is a stunning example of poor design by flawed logic: even if the IE coding were flawless at the subroutine level (we can bet that it isn't), Dave's stated attitude toward interface security would doom it to be susceptible to attack.

References:
http://news.zdnet.com/2100-9588_22-5630529.html
http://blogs.msdn.com/dmassy/archive/2005/03/22/400689.aspx
http://slashdot.org/article.pl?sid=05/03/24/1352211&tid=113&tid=154
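To make the invalidity concrete, one can brute-force the truth table: there is an assignment in which all the premises (including the hidden one) are true and the conclusion is false, so the conclusion does not follow. A small sketch in Python (the propositional encoding is my own simplification of the argument, not anything from the cited posts):

  from itertools import product

  # Propositions (a deliberately crude encoding):
  #   d = IE uses only documented OS interfaces
  #   a = those interfaces are available to all other software
  #   b = the OS itself is bulletproof (the hidden premise)
  #   s = IE's tight coupling to the OS is irrelevant to security
  for d, a, b, s in product([True, False], repeat=4):
      if d and a and b and not s:
          print("Counterexample: all premises true, conclusion false,"
                " so the argument is formally invalid.")
          break

The counterexample exists because nothing in the premises constrains the conclusion at all: the security relevance of the coupling is logically independent of whether the interfaces are documented and shared.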
In RISKS-23.80, Arthur T. writes of a shortcoming of using tinyurl.com as a substitute for typing long URLs: you do not know where a tinyurl will take you. The links created by Makeashorterlink <http://www.makeashorterlink.com> first take you to a page displaying the URL that you are about to be redirected to, giving you the opportunity to bail out if you don't want to go there.

D.F. Manno firstname.lastname@example.org
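A minimal sketch of such a preview-then-redirect handler, using only Python's standard library (the short-code table, port, and page text are hypothetical; this is not Makeashorterlink's actual implementation):

  from http.server import BaseHTTPRequestHandler, HTTPServer

  # Hypothetical table mapping short codes to destination URLs.
  LINKS = {"/abc123": "http://www.example.com/some/very/long/url"}

  class PreviewHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          target = LINKS.get(self.path)
          if target is None:
              self.send_error(404)
              return
          # Show the destination instead of silently redirecting, so
          # the user can bail out before visiting it.
          self.send_response(200)
          self.send_header("Content-Type", "text/html")
          self.end_headers()
          page = ("<p>You are about to be redirected to:</p>"
                  f"<p><a href=\"{target}\">{target}</a></p>")
          self.wfile.write(page.encode())

  HTTPServer(("", 8000), PreviewHandler).serve_forever()

The design choice is simply to spend one extra click in exchange for letting the user inspect the real destination, something a bare HTTP 302 redirect never offers.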
US Bank has a Visa product targeted at teens (or rather, their parents), called VisaBuxx. It looks and acts like a standard Visa-logo debit card, but is more like a prepaid phone card: you pre-load it with value, and it's not directly tied to any bank account. Their web site and marketing literature talk about being able to easily add value to the card by transferring money online from an existing US Bank checking account. Unfortunately, the system leaves a lot to be desired.

The usbank.com website has a link for the VisaBuxx program. When you click on it, you're redirected to another site, visabuxx.com, which is apparently run by a company called "WildCard Systems". In order to transfer money from your US Bank checking account to the card, you have to provide WildCard Systems with your checking account number and routing information and authorization to pull funds from the account, or give them your own debit card number. While WildCard Systems may be honorable and trustworthy, the risks in this are so obvious that it's painful. Meanwhile, the Terms of Service published on the site go to great lengths to explicitly disavow any responsibility for anything bad that might result from the use of the site.

The correct way for the bank to have implemented this would have been to provide the ability to associate the card with your existing Internet banking identity, and then let you log in through the bank's website and tell the bank to send money from an account to the card, rather than allowing the card operators to pull money from your account. Having the ability to provide account data to the VisaBuxx website is useful for non-US Bank customers, but as a legitimate US Bank customer I shouldn't be forced to do it. I find it mind-boggling that financial corporations still can't see the obvious when it comes to protecting customer account data. When dealing with an official bank product I should NEVER have to tell the application anything about my accounts.
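The push-versus-pull distinction can be sketched as follows (all names here are hypothetical, purely to illustrate the two designs; this is not US Bank's or WildCard Systems' actual API):

  class ThirdPartyCardOperator:
      """Stand-in for the external card site."""
      def __init__(self):
          # In the pull design, the operator ends up holding account
          # credentials it should never have needed.
          self.stored_credentials = {}

  class Bank:
      def __init__(self):
          self.accounts = {"checking-001": 500.00}

      # PULL (the VisaBuxx design): the customer hands the third party
      # account and routing data, and the third party can then draw on
      # the account at will. The risk lives wherever that data is stored.
      def authorize_pull(self, operator, account_id):
          operator.stored_credentials[account_id] = "routing+account data"

      # PUSH (the suggested design): the customer, already authenticated
      # to the bank, tells the bank to send a fixed amount to the card.
      # No account data ever leaves the bank.
      def push_to_card(self, account_id, card_id, amount):
          if self.accounts.get(account_id, 0) >= amount:
              self.accounts[account_id] -= amount
              print(f"Sent ${amount:.2f} from {account_id} to {card_id}")

  bank = Bank()
  bank.push_to_card("checking-001", "visabuxx-card-42", 50.00)

In the push design the only identifier the card operator ever sees is the card's own, which is exactly the property being asked for here.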
[As a member of the PITAC and a co-author of the report, I strongly encourage people to take time to read this and think about how to help carry out the recommendations. --spaf]

PRESIDENT'S INFORMATION TECHNOLOGY ADVISORY COMMITTEE RELEASES NEW REPORT
CYBER SECURITY: A CRISIS OF PRIORITIZATION

Vital to the Nation's security and everyday life, the information technology (IT) infrastructure of the United States is highly vulnerable to disruptive domestic and international attacks, the President's Information Technology Advisory Committee (PITAC) argues in a new report. While existing technologies can address some IT security vulnerabilities, fundamentally new approaches are needed to address the more serious structural weaknesses of the IT infrastructure.

In Cyber Security: A Crisis of Prioritization, PITAC presents four key findings and recommendations on how the Federal government can foster new architectures and technologies to secure the Nation's IT infrastructure. PITAC urges the Government to significantly increase support for fundamental research in civilian cyber security in 10 priority areas; intensify Federal efforts to promote the recruitment and retention of cyber security researchers and students at research universities; increase support for the rapid transfer of Federally developed cyber security technologies to the private sector; and strengthen the coordination of Federal cyber security R&D activities.

To request a copy of this report, please complete the form at http://www.nitrd.gov/pubs/, send an e-mail to email@example.com, or call the National Coordination Office for Information Technology Research and Development at (703) 292-4873. Cyber Security: A Crisis of Prioritization can also be downloaded as a PDF file by accessing the link at http://www.nitrd.gov/pubs/.

About PITAC: The President's Information Technology Advisory Committee (PITAC) is appointed by the President to provide independent expert advice on maintaining America's preeminence in advanced information technology. PITAC members are IT leaders in industry and academia representing the research, education, and library communities, network providers, and critical industries, with expertise relevant to critical elements of the national IT infrastructure such as high-performance computing, large-scale networking, and high-assurance software and systems design. The Committee's studies help guide the Administration's efforts to accelerate the development and adoption of information technologies vital for American prosperity in the 21st century.

Contact: Alan S. Inouye, 1-703-292-4540, <firstname.lastname@example.org>
I'm pleased to announce "EEPI" (http://www.eepi.org), a new initiative aimed at fostering cooperation in the areas of electronic entertainment and its many related issues, problems, and impacts. I've teamed with 30+ year recording industry veteran Thane Tierney in this effort to find cooperative solutions to technical, legal, policy, and other issues relating to the vast and growing range of electronic technologies that are crucial to the entertainment industry, but that also impact other industries, interest groups, individuals, and society in major ways.

There are many interested parties, including record labels, film studios, the RIAA, the MPAA, artists, consumers, intellectual freedom advocates, broadcasters, manufacturers, legislators, regulators, and a multitude of others. The issues cover an enormous gamut from DVDs, CDs, and piracy issues to multimedia cell phones, from digital video recorders to Internet file sharing/P2P, from digital TV and the "broadcast flag" to the Digital Millennium Copyright Act (DMCA) and "fair use" controversies. Working together, rather than fighting each other, perhaps we can all find some broadly acceptable paths that will be of benefit to everyone.

For more information, please see the EEPI Web site at http://www.eepi.org. A moderated public discussion list and an EEPI announcement list are now available at the site. Public participation is cordially invited. Thank you very much.

Lauren Weinstein email@example.com or firstname.lastname@example.org or email@example.com +1 (818) 225-2800 http://www.eepi.org http://www.pfir.org/lauren http://lauren.vortex.com http://www.pfir.org http://www.vortex.com