I was delighted by Don Norman's piece in RISKS 9.15, since I too feel that RISKS could and should be even better, and I very much liked the illustrative examples he gave of the sorts of discussions that it would be nice to see in RISKS. (Peter Neumann's comment, stating that RISKS by its very nature attracts stories of failures rather than successes, is also true, but somewhat misses the point.) I have in the past tried on occasion to stimulate discussion of general issues, rather than just of the details of specific incidents, but without much success. Let me try again, on the subject of risk assessment this time, using some points that I made during a panel session at the recent IFIP Working Conference on Dependable Computing Systems for Critical Applications.

Ideally, deployment of any potentially risky computer-based system will be preceded by the sort of careful assessment of the risks involved that is typical in a number of engineering disciplines. There are a number of well-established techniques for such risk assessment, such as Failure Modes and Effects Analysis, Event Tree Analysis, and Fault Tree Analysis. As I understand it, all of these involve enumeration and consideration of possibilities, and identification of dependencies, which are then represented in some sort of graph structure; but none of the ensuing analyses takes account of the possibility that this graph structure will itself be incorrect. To my mind, this makes these techniques of limited value for systems employing computers running large suites of software.

In making this statement, I have three characteristics of such systems in mind: (i) their great logical complexity, and hence the danger of their harbouring potentially risky design faults; (ii) the largely discrete nature of their behaviour, which means that concepts such as "stress", "failure region", and "safety factor", which are basic to conventional risk management, have little meaning; and (iii) the almost ethereal nature of software, which makes it much more difficult to identify appropriate components and to understand their interactions (both planned and accidental) than with many physical systems.

It is of course good practice to design software with a highly modular structure, and to try to isolate critical parts of the software in as small and simple a subsystem as possible. However, with a really complex system such structuring is again essentially ethereal, and by no means simple. There will therefore remain a significant and unquantifiable likelihood that such modularisation and isolation is itself faulty. I thus believe that the graph structure which purports to represent, at any significant level of detail, the internals of (especially the software in) a complex computing system is likely to be wrong, in that it will not represent any residual design faults properly, so that such faults will remain an (unquantified) contributor to overall system risks. The question of whether we will ever be able either to guarantee that a complex computing system is entirely free of design faults, or alternatively to quantify the likely impact of residual design faults, is moot.
The point I am trying to make here is that in present circumstances I see no alternative to basing the assessment of the risks of the overall system on a worst-case scenario for the behaviour of the computing system, derived from the physical capabilities of its interfaces to the outside world rather than from mere hopes about its internal activities - and to taking appropriate precautions external to the computing system. Yet, unfortunately, one sees these days the opposite trend: namely, the increasing use of computing systems in situations where no surrounding system has any ability to mask the failures of the computing system. It would be nice to learn the reactions to the above comments of people who have tried applying conventional risk assessment and management techniques to systems involving really complex software. Brian Randell, Computing Laboratory, University of Newcastle upon Tyne +44 91 222 7923
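To make the fault-tree point above concrete, here is a minimal sketch of how such an analysis computes a top-event probability, assuming independent basic events with purely hypothetical failure rates. The computed figure can only reflect the faults that the analyst enumerated; a residual design fault that never became a node in the tree contributes nothing to it, however large its real contribution.

    # Minimal fault tree evaluation sketch: independent basic events,
    # hypothetical probabilities. AND gates fail only if all inputs
    # fail; OR gates fail if any input fails. (Illustrative only;
    # real FTA tools also handle common-cause and repeated events.)

    def p_and(*probs):
        # Probability that all independent inputs fail.
        result = 1.0
        for p in probs:
            result *= p
        return result

    def p_or(*probs):
        # Probability that at least one independent input fails.
        result = 1.0
        for p in probs:
            result *= (1.0 - p)
        return 1.0 - result

    # Hypothetical basic-event probabilities (per demand):
    sensor_fault   = 1e-4
    actuator_fault = 1e-5
    software_fault = 1e-3   # but who supplied this number, and how?

    # Top event: "system fails to shut down on demand".
    top = p_or(software_fault, p_and(sensor_fault, actuator_fault))
    print(f"P(top event) = {top:.2e}")

    # Randell's point: a design fault the analyst never enumerated is
    # not a node in this tree, so it contributes nothing to the
    # computed figure - however large its real contribution.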
Last evening I visited a local hospital ICU to check on the status of a patient I had treated and transported earlier in the day (I am a certified paramedic, in my copious spare time). The patient was on a state-of-the-art mechanical respirator. While I was discussing the patient's case with the attending physician at the patient's bedside, the respirator failed, accompanied by a visual and auditory fanfare of alarms. The physician quickly reset the device and we continued our discussion. Moments later the device once again failed and had to be reset (it should be noted that a number of different events can cause alarms in these devices, and they occasionally need adjusting and service). By the third occurrence of this failure I had noticed a pattern: each time I heard radio traffic on my portable radio (in the 800 MHz band), the respirator failed. We were not transmitting, mind you, only receiving, but this seemed to be enough to set off the alarms. Just then we received a call, and when my partner keyed his mike to transmit, several other respirators in adjoining rooms also went into alarm, prompting a flurry of activity to reset them. Needless to say, our radios are now banned from the unit. Unfortunately, I had to leave abruptly and did not have time to research the model or manufacturer, but I will follow up when the opportunity presents itself. Perhaps it is time to require TEMPESTing of medical equipment... -ear
'Business Week' magazine for Sept. 4, 1989 features a cover story entitled IS NOTHING PRIVATE? The teaser line says "Computers hold lots of data on you — and there are few limits on its use". The article focuses mainly on financial data, especially credit records. It includes some good sidebar pieces:

  * THE RIGHT TO PRIVACY: THERE'S MORE LOOPHOLE THAN LAW (reviewing existing privacy laws)
  * NEVER MIND YOUR NUMBER — THEY'VE GOT YOUR NAME (guarding your Social Security Number is almost pointless today)
  * THE SCOOP ON SNOOPING: IT'S A CINCH (it's easy for almost anyone to get almost anyone else's credit records)

In that last piece, the reporter posed as an employer checking out potential job candidates. He was required to produce almost no verification. He then requested, among other things, the credit record for one Dan Quayle (at an Indiana address taken from an old "Who's Who in the Midwest"). This raised no alarms. It turned up an "a.k.a. J. Danforth Quayle" with a Washington-area address, who charges more at Sears than at Brooks Brothers and has a big mortgage. It gave his credit card numbers, etc. The Vice President's office was not amused. "We find the invasion of privacy aspect of the credit situation disturbing. Further controls should be considered," said a spokesman.
Donald J. Weinshank, in a recent posting to RISKS (via Tom Thomson), asks if the law is all we can appeal to in teaching ethics. The answer, at least in U.S. taxpayer-supported institutions, must be "Yes". The law *IS* the current ethical consensus. To go beyond the law in a course taught at a taxpayer-supported institution is to begin to mix state and religion, which we in America have agreed through lawmaking that we won't do. A teacher in an institution receiving U.S. taxpayer support is legally obliged to state from the front of the classroom that if it isn't illegal, it is by definition ethically correct.
The following story appeared on the front page of THE SEATTLE TIMES, Monday, September 4, 1989: WORK IN US: HIGH DEATH RATE --- Associated Press and United Press International ... The National Safe Workplace Institute found that more workers are killed on the job in this nation than in most other industrialized countries. ... A US worker is 36 times more likely to be killed than a Swede, and nine times more likely to be killed than a Briton, it said. The report said one out of 14 workers will be killed or seriously injured at work. ... [Such a big difference is surprising to me. Is this for real? It is often noted that European safety regulations are generally more stringent than those in the US. Does this contribute to the reported difference, or is the effect dominated by some other factor - e.g., proportionately more people employed in hazardous industries in the US? This report may seem only marginally related to RISKS; at COMPASS '88 last year someone cited a study that said about two percent of industrial accidents were attributable to control system failures. - JJ] - Jonathan Jacky, University of Washington
Does anyone have any experience with the RISKs of digitized faces such as are found on uunet.uu.net:/faces? I want to gather digitized faces of our staff members and students to make them publicly available. I would like to be forewarned of any problems that other people have encountered. --russ (nelson@clutx [.bitnet | .clarkson.edu])|(firstname.lastname@example.org)| (Russell.Nelson@f360.n260.z1.fidonet.org)|(BH01@GEnie.com :-)
Quite regularly I receive mail from sources in the U.S. whose computer systems are obviously unable to handle European-style postal codes (ones not fitting the two-letter-plus-five-digit scheme). Coming in the mail today was something new: a mass mailing from a Northern California company, directed at my street address in Switzerla, ND 08008 (my four-digit Swiss postal code, zero-extended). I still wonder whether the typist who entered my address that way acted out of ignorance or in a flash of genius... Interestingly enough, the U.S. Post Office people in California must have believed that "Switzerla" is somewhere in North Dakota. I don't think that "Bulk Rate U.S. Postage PAID" mailings are usually transported overseas... All other mail of this type that I receive has some extra postage added. This should be investigated: how about writing a letter to those friends of yours in Germa, New York, or those in Brita, Indiana... :-) Michael Franz, Computersysteme, ETH Zurich, Switzerland +41-1-256'22'23
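One can guess at the mechanism. Here is a hypothetical sketch, assuming an entry system that insists on the US "CITY, ST ZIP" schema: the last two letters of the country name get peeled off as a "state", and the foreign postal code is zero-extended to five digits. The function name and the postal codes are made up for illustration.

    # Hypothetical reconstruction of the address mangling. The
    # entry schema allows only "CITY, ST ZIP", so the country name
    # is split and the postal code padded to fit.

    def us_normalize(place: str, postal_code: str) -> str:
        # Treat the last two letters of the place name as a state
        # abbreviation, and zero-extend the code to five digits.
        city, state = place[:-2], place[-2:].upper()
        zip5 = postal_code.zfill(5)      # "8008" -> "08008"
        return f"{city}, {state} {zip5}"

    print(us_normalize("Switzerland", "8008"))  # Switzerla, ND 08008
    print(us_normalize("Germany", "5300"))      # Germa, NY 05300
    print(us_normalize("Britain", "1234"))      # Brita, IN 01234

Note that the same split reproduces Franz's joke addresses exactly: Germany lands in New York and Britain in Indiana.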
Recently, my beloved (and elderly) Porsche 924 Turbo expired at the roadside. It has an electronic engine management system, and after some initial investigation (including discovering that the hardware doesn't match the manual!), I realised that the problem was (a) in the EMS, and (b) beyond my ability to diagnose. I had it towed to the nearest dealer. After a few days, I rang the dealer to find out what had happened. They said, "We can't find out what's wrong with it." I rang back the following day, and the day after - same story. Then I went there. The service manager sheepishly admitted that their diagnostic computer was faulty; after several attempts to diagnose my car, they'd tried it on one of the new 944s in the showroom, and it had diagnosed that as faulty too. They'd had to return the diagnostic computer to Porsche UK Headquarters for replacement before they could fault-find my car. Motto: "Who will diagnose the diagnostics?" Hugh Davies. (P.S. Oh yeah, the fault: new EMS, H.T. module and other bits and pieces. $1600. Sigh.)
The discussion to date has been long on complaints but short on workable suggestions. I would like to put forward a system which was in use in Canada back when I freelanced. Each bid was assigned points. There were points for having been in business a while (stability), points for having completed past contracts successfully (competence), and points for having the necessary employees in hand.

Note, by the way, that bids are not necessarily just dollar values. They may also contain specific plans, which differ in both the proposed method and the proposed result. These can be evaluated and given merit points. For example, a bidder may have seen a way to reduce costs, or may have argued that a danger to public safety must be reduced, or whatever. These proposals may try to justify a lower bid or a higher bid. The proposals might be judged to have low merit because, e.g., the review board wasn't willing to cut costs in the proposed manner, or didn't want to buy that much public safety.

Since this was the government, they could also assign points for "agenda" items: Canadian content, number of jobs created, minority and regional issues, being a small business, development of a useful technology, whatever. I never learned the exact details, but then, I am putting this forward as a workable general idea, not as some sort of precise formula. For example, it is possible to give security points to computer purchases. Bids proposing MSDOS would lose all of these points. However, if the product didn't require much security, then MSDOS might still be in the winning bid. Don
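A minimal sketch of such a point-based bid evaluation follows, using entirely hypothetical categories, weights, ratings, and bids (Don never learned the exact details, and no precise formula is claimed here).

    # Minimal sketch of point-based bid evaluation. All categories,
    # weights, and bids are hypothetical illustrations.

    WEIGHTS = {
        "stability":  10,   # years in business (capped)
        "competence": 25,   # past contracts completed successfully
        "staffing":   15,   # necessary employees already in hand
        "merit":      20,   # quality of the specific proposal
        "agenda":     10,   # e.g. jobs created, small business
        "security":   20,   # lost entirely by, say, an MSDOS bid
    }

    def score(bid: dict) -> float:
        # Each category is rated 0.0-1.0 by the review board,
        # then multiplied by its weight and summed.
        return sum(WEIGHTS[c] * bid.get(c, 0.0) for c in WEIGHTS)

    bids = {
        "cheap but insecure": {"stability": 0.3, "competence": 0.5,
                               "staffing": 0.8, "merit": 0.6,
                               "agenda": 0.4, "security": 0.0},
        "costlier, proven":   {"stability": 0.9, "competence": 0.9,
                               "staffing": 0.7, "merit": 0.7,
                               "agenda": 0.5, "security": 0.8},
    }

    for name, bid in sorted(bids.items(), key=lambda kv: -score(kv[1])):
        print(f"{name}: {score(bid):.1f} points")

    # Price could enter as another weighted category, or as a divisor
    # (points per dollar); the description above leaves that open.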
A friend of mine who has his own business has this quote from John Ruskin (the 19th Century English social critic) on his office door: "It's unwise to pay too much ... but it's worse to pay too little. When you pay too much, you lose a little money ... that is all. When you pay too little, you sometimes lose everything, because the thing you bought was incapable of doing the thing it was bought to do. The common law of business balance prohibits paying a little and getting a lot. It can't be done. If you deal with the lowest bidder, it is well to add something for the risk you run. And if you do that, you will have enough to pay for something better." Bill Anderson, Xerox Corp.