I had not intended to get involved in the RISKS Forum discussions, but despite my great respect for John McCarthy's accomplishments, I just cannot let his latest message (RISKS-1.8, Sept. 8) pass without comment.
Some important points that need to be considered:
John McCarthy suggests that people ignore the risks of not using technology. I would suggest that it is not that these risks are ignored, but that they are known and we have learned to live with them, while the risks of using new technology are often unknown and may involve great societal upheaval in learning to adapt to them. Yes, wood smoke may cause lung cancer, but note that recent studies in Great Britain show that the incidence of prostate cancer in men who work in atomic power plants is many times that of the general population. To delay introducing technology in order to assure that greater risks are not incurred than are currently borne by the population seems justifiable. Yes, delay may cause someone's death, but the introduction may cause even more deaths and disruption in the long run.
The solution is to develop ways to assess the risks accurately (so that intelligent, well-informed decision-making is possible) and to develop ways to reduce the risk as much as possible. Returning to the topic of computer risk, citizens and government agencies need to be able to make informed decisions about such things as the safety of fly-by-wire computer-controlled commercial aircraft, or the choice between computer-controlled Air Traffic Control with human assistance and human-controlled Air Traffic Control with computer assistance. To do this, we need to be able to assess the risks and to accurately state what computers can and cannot do.
Forums like this one help to disseminate important information and promote the exchange of ideas. But we also need to start new initiatives in computer science research and practice. I have been writing and lecturing about this for some time. For example,
If, using some of these techniques (or despite their use), it is determined that the software would be more risky than conventional systems or is above a minimum level of acceptable risk, then we can present decision makers with these facts and force them to consider them in the decision-making process. Just citing horror stories or past mistakes is not enough. We need ways of assessing the risk of our systems (which may involve historical data presented in a statistically proper manner) and ways to decrease that risk as much as possible. Then society can make intelligent decisions about which systems should and should not be built for reasons of acceptable or unacceptable risk.
The question of responsibility for non-use of computers is largely meaningless in terms of law unless the dangers of non-use were known to substantially increase the probability of greater harm. In the case of your three short examples:
Again, a risks-of-computers organization can only present its case in court and to the public and, so long as no malfeasance is involved, cannot be held responsible for its failure to predict future consequences. There are far more important “unsymmetric” relationships than that of the press vs. the legal system that pertain to issues of responsibility, namely, that of past vs. future and known vs. unknown. I feel that you are correct in pointing out how computer people would do well to apply their expertise to solving problems of society. In this case the moral imperatives are quite clear.
The problem with a forum on the risks of technology is that while the risks of not using some technology, e.g. computers, are real, it takes imagination to think of them…
You raise an interesting point that deserves more discussion. However, as one who is concerned that the major problem arises from an uncritical acceptance of technology, I strongly disagree with your suggestion that the scales are stacked AGAINST those who are “pro-technology”. The reason that anyone adopts any given technology is that it provides benefits that he can see; the problem is getting those individuals to see that there are costs as well. In other words, the bias in the system is towards acceptance of technology, not rejection of it. It is this general bias that “risks” people are trying to correct.
… Is a risk-of-computers organization that successfully sues to delay a use of computers either MORALLY or LEGALLY LIABLE if the delay causes someone's death? Is there any moral or legal requirement that such an organization prove that they have formally investigated whether their lawsuit will result in killing people? As the above examples indicate, the present legal situation and the present publicity situation are entirely unsymmetric.
There are such precedents; organizations can be held liable for using technology that is not as up to date as is “generally applicable”. On the more general point, it is much harder to establish liability as the result of someone's inaction as compared to the result of someone's action; this does happen, but it is harder to prove.
The harm caused by tape-to-tape batch processing as opposed to on-line systems.
I like this example; let's have more discussion of it. There was a time when on-line systems were totally unreliable, but I think that time has passed.
Shouldn't computer professionals who pretend to social responsibility take an interest in an area where their knowledge might actually be relevant?
This is a cheap shot unworthy of pioneers in computer science. More than anyone else, computer professionals are the ones who are in the best position to assess the limits as well as the promises of technology. You once said that the responsibility of the scientist is to take his science wherever it may lead. I agreed with you then, and I agree with it now. But science is part of a social and cultural context, and cannot be separated as cleanly as one might imagine. If it is proper for the developers of science and technology to propose new applications of their work (because they see these potential applications as beneficial to society), it is also appropriate for them to suggest possible consequences of their use as well.
While I would prefer not to waste technological resources on such a thing, I would see some truth in the argument that the non-technological solutions also have a clear risk of failure.
Dave
Assuming, contrary to fact, that putting a faster clock in your AT would cause it to catch fire, the RISKS contributors are quite wrong about who would be sued. According to the new legal doctrine of bursae profundae, it isn't the poor BBOARD operator that would have to pay but rich IBM. It wouldn't take much of a lawyer to figure out that IBM should have anticipated that someone might do this and warned against it or even somehow made it impossible.
Changing the subject, consider the effect of the court decision that NOAA was liable for not fixing the buoy. Up to now weather predictions have been offered as a non-guaranteed service with the user taking responsibility for the extent to which he relies on it. The court has said that this is no longer possible. Any institution with “deep pockets” cannot safely offer information on a user-responsibility basis. What if Stanford University had negligently failed to replace a stolen book from its medical library, and someone died who would have been saved had his doctor found the book? Stanford's lawyers should advise Stanford to deny access to its medical library to practicing physicians.
Another example of “good news/bad news” is the use of computerized axial tomography (CAT scans) as a substitute for exploratory surgery in the head and body.
Brint
My primary complaint about your otherwise interesting table is that it assumes independent failure modes. I think it is much more likely that the effects of coupled failures are larger. In particular, given the failure of one platform, the probability that others fail as well is higher than independence would suggest.
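The difference coupling makes is easy to quantify with invented numbers (nothing below comes from the table under discussion; the probabilities are chosen purely for illustration): if each of two platforms fails with marginal probability 0.01, independence puts the joint failure probability at 0.0001, but a common cause that raises the conditional probability of a second failure to 0.5 puts it at 0.005, some fifty times higher.

```python
p_single = 0.01        # marginal failure probability of each platform (invented)
p_joint_indep = p_single * p_single  # both fail, assuming independence
p_cond_coupled = 0.5   # P(second fails | first fails) under a common cause (invented)
p_joint_coupled = p_single * p_cond_coupled  # both fail, with coupling

print(p_joint_indep)    # about 0.0001
print(p_joint_coupled)  # about 0.005
print(p_joint_coupled / p_joint_indep)  # coupling inflates the joint risk ~50x
```

The point is only that a table built on the independence assumption can understate the joint risk by a large multiplicative factor.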
… The example of squirreled control characters and escape characters that do not print but cause all sorts of wonderful actions was popular several years ago, and provides a very simple example of how a message can have horrible side-effects when it is read.
But if I have designed my operating system to display only what it receives, how is this possible?
The example of squirreled non-printing characters is of course just one example; that one was relatively easy to fix once it was recognized. But until you have discovered a flaw, you are vulnerable. And you never know how many flaws remain undiscovered.
Sure, you may be smart, but large operating systems generally have many security flaws. You probably aren't smart enough to design and build a system that has none. (In fact, your solution is not quite good enough by itself — without some other assumptions on the rest of your system.)
Peter
Sure, you may be smart, but large operating systems generally have many security flaws. You probably aren't smart enough to design and build a system that has none. (In fact, your solution is not quite good enough by itself — without some other assumptions on the rest of your system.)
I certainly recognize that operating systems have security flaws in them, having been an OS hacker myself at one time. But for the problem of messages (as opposed to programs) doing bad things to my system, I guess I never figured out a way of doing that.
My naive model is that I have a special program that intercepts the raw bit stream that comes in from my communications port. It then translates this into ASCII, and then prints it on my screen.
If this is all that my program does, I can't see what harm can be done.
Now, if my program writes the message to disk, then I can see potential problems. So I simply count the characters that are sent to me over the line, and save exactly that many characters on disk.
What am I missing?
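Herb's model can be sketched concretely. The sketch below is illustrative only: the function name and the choice of which bytes count as “safe” are assumptions of this sketch, not taken from any message in the thread. It passes printable ASCII and ordinary whitespace through to the display and drops every other byte, so an embedded escape sequence never reaches the terminal; note that the bytes following a stripped ESC still print as harmless visible text rather than being executed.

```python
def display_safely(raw: bytes) -> str:
    """Translate an incoming byte stream to ASCII, passing only
    printable characters and ordinary whitespace to the screen.
    ESC (0x1B) and all other control bytes -- the vehicle for the
    'squirreled control character' attack -- are dropped."""
    out = []
    for b in raw:
        if 0x20 <= b <= 0x7E or b in (0x09, 0x0A, 0x0D):  # printable, TAB, LF, CR
            out.append(chr(b))
        # all other bytes (ESC, BEL, DEL, ...) are silently discarded
    return "".join(out)

# A message carrying an escape sequence plus a BEL: with ESC gone,
# the rest of the sequence appears as inert visible characters.
msg = b"hello\x1b[2J\x07world\n"
print(display_safely(msg))  # shows: hello[2Jworld
```

Of course, as the reply below Herb's question argues, such a filter only seals one hole: if a flaw elsewhere in the operating system lets an attacker modify or bypass the filter itself, its correctness is moot.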
Herb, Indeed your model is a bit naive, in that you are looking at the problem much too narrowly, and assuming that everything else works fine. Suppose that your special program would work as you wish in intercepting the raw bit stream. Suppose, however, that there is an operating system flaw that lets me change your special program to do what I wish! (There are many examples of how this might happen.) Now your program can be effectively bypassed. The point is NOT whether you can seal off one hole, but rather that you are dealing with the tip of an iceberg and there may be titanic holes you don't even know about. Besides, as I said earlier, the message squirreling is only one example — and hopefully completely cured. So I hope you don't reply that you could use seals to guarantee that your program is unchanged. That would still miss the broader point. PGN