Forum on Risks to the Public in Computers and Related Systems
ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator
Volume 12: Issue 52
Monday 21 October 1991
Contents
The Future is Here (Amos Shapir)
The RISKS of Geraldo (Andy Hawks)
Re: Police raid wrong house -- for second time (Amos Shapir)
Re: TRW (Bob Colwell, Anthony DeBoer, Steve Hollasch)
Re: buggy software (Mark R Cornwell, James B. Shearer, Byron Rakitzis,
  Richard Hanlon, Stephen G. Smith, Bob Wilson, David Parnas, David Chase)
Licensing Software Engineers (Christopher E Fulmer)
Info on RISKS (comp.risks)
The Future is Here
amos shapir <amos@cs.huji.ac.il>
Thu, 17 Oct 91 17:45:36 +0200
The Israeli Broadcasting Authority (IBA) is an independently budgeted government agency, financed by a special tax (sometimes called, for historical reasons, a "TV license fee"). Since it does its own collection, it's sometimes even more zealous than the IRS. The following is a true incident that happened to a friend of mine.

She received a notice from the IBA to pay back taxes due. The strange thing about it was that it was sent to an address she had moved into just ten days before, and was sure nobody but the landlord knew about. While she was in the IBA's office to settle the matter, she found out that, due to a recent unification of government databases, the following information about her was retrieved at the touch of a clerk's terminal key:

 - Her new address (probably from the city's municipality);
 - The type of each TV she'd owned since 1984 (dealers and importers are
   required to report that to the IBA);
 - The dates on which she'd left and entered the country during that time
   (probably from the border police passport control records).

All this was just the information necessary for the specific case at hand; that Big Computer probably knows a lot more about us. In short, the future is here, and it looks more Orwellian than Orwell could have ever imagined.

Amos Shapir, The Hebrew Univ. of Jerusalem, Dept. of Comp. Science.  +972 2 585706
The RISKS of Geraldo
Andy Hawks <ahawks@isis.cs.du.edu>
Wed, 16 Oct 91 18:41:23 MDT
I'm sure many of you saw or have heard/read about Geraldo Rivera's "Now It Can Be Told" program, which featured a show on hackers a couple of weeks or so ago. By airing this program, it appears that Geraldo (or actually the producers/editors of the show) has put at least one military computer at risk.

One segment of the program featured a "home video" of Dutch teenagers hacking. This home video would occasionally focus in on the computer screen as the hackers hacked. As reporter Krista Bradford describes what is going on, the screen shows:

 | quit
 | 221 Goodbye.
 | rugrcx> telnet tracer.army.mil
 | Trying 192.33.5.135....
 | Connected to tracer.army.mil
 | Escape character is '^]'.
 |
 | Xenix K3-4 (tracer.army.mil)
 |
 | login: dquayle
 | Password:_

Then we learn that previously, the hackers had gained superuser privileges on the system. As Krista Bradford is describing the superuser access, we see the computer screen again and the hackers are attempting to log in to the same site with the 'sync' login (so this is apparently how they gained superuser access). Later in the show (about a minute or so after the hackers have gained superuser privileges), Emmanuel Goldstein (2600) states that the hackers proceeded to create a new account. The account they create is 'dquayle' (Dan Quayle) and has superuser privileges. Then the screen focuses in on the new record in /etc/passwd for 'dquayle', and Mr. Goldstein tells us that the new account has no password (the screen focuses in on: "dquayle::").

Thus, anyone with telnet access could have repeated this same process, logging in to this tracer.army.mil site with the username 'dquayle' (and no password), and would have gained superuser access. It is obvious that in this situation, whoever allowed the show to be aired in its final form had no knowledge of the Internet; otherwise this blatant "how to hack" security breach would have been omitted. Thanks, Geraldo, for showing all of us how to hack into military computers.

(Note: I avoided sending this in for submission earlier to prevent other hackers from repeating the same experiment. Hopefully, tracer.army.mil has now had enough time to plug the obvious hole.)
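  [For readers unfamiliar with the file flashed on screen: in the
  traditional Unix /etc/passwd format, the second colon-separated field
  is the password hash, and an empty second field means no password is
  required at all. Below is a minimal sketch in Python of how such
  accounts can be spotted; the sample entries are invented for
  illustration, not taken from the broadcast:

     # Flag accounts that require no password; an empty password
     # field combined with uid 0 is an open door to superuser access.
     SAMPLE_PASSWD = [
         "root:hcQ7NngDv2q1g:0:0:Superuser:/:/bin/sh",
         "sync::0:1:sync:/:/bin/sync",
         "dquayle::0:0:added by intruders:/:/bin/sh",
     ]

     for line in SAMPLE_PASSWD:
         user, passwd, uid = line.split(":")[:3]
         if passwd == "":
             tag = " (superuser!)" if uid == "0" else ""
             print("no password: %s, uid %s%s" % (user, uid, tag))
  ]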
Re: Police raid wrong house -- for second time
amos shapir <amos@shum.huji.ac.il>
Thu, 17 Oct 91 17:39:41 +0200
[Quoted from referenced article by dbenson@yoda.eecs.wsu.edu (David B. Benson)]

> The officers didn't leave until Dean Krussel showed them
> Callahan's letter. "This thing just won't go away," he said.

This seems to be a Law of Nature in computer systems: nothing ever goes away. Many databases keep each datum as an initial entry enhanced by a set of update records (I suspect some even run through the whole update process every time they're rebooted). Every now and then a system crashes, someone loads a wrong backup tape, etc., and voila! Your magazine is being sent to an address you left 12 years ago, or your house is being raided by the police... :-(

Amos Shapir, The Hebrew Univ. of Jerusalem, Dept. of Comp. Science.  972 2 585706
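  [A minimal sketch in Python of the failure mode described above,
  assuming the initial-entry-plus-update-records scheme Amos suspects;
  the data is invented:

     # A record is an initial entry plus a log of update records;
     # rebuilding state replays the log. Restore from a backup that
     # lost the latest updates and the old address quietly returns.
     initial = {"address": "12 Old Street"}
     updates = [("address", "7 New Road")]    # the move, years later

     def current(initial, updates):
         record = dict(initial)
         for field, value in updates:
             record[field] = value
         return record

     print(current(initial, updates))  # {'address': '7 New Road'}
     print(current(initial, []))       # bad restore: '12 Old Street' is back
  ]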
TRW in chaos?
<colwell@ichips.intel.com>
Thu, 17 Oct 91 09:08:57 -0700
No darn wonder the general public thinks the techies are clueless and screwing
up the world. In OMNI's August 1991 issue, Kenneth Hey says (pg. 84):
"While human misuse of technology has reached bothersome levels [personally,
biased and twisted thinking like this article bothers me a lot more], technology
can also assume a worrisome life of its own. TRW computer designers expressed
surprise when a large network of computers they created began exhibiting
'strange, unpredictable' behavior. During these periods, the system could not
perform specific tasks as requested. TRW suspected 'chaos', an uncontrollable
but natural mathematical phenomenon, which mysteriously attacks complex
computer systems. Scientists at the Xerox Palo Alto Research Center conducted a
series of experiments and discovered that, indeed, large aggregates of
connected computers can exhibit unpredictably wild oscillations and unstable
behavior, generating unwanted actions in the system. The reality of computer
instability -- that is, the real potential for chaotic behavior -- has raised
professional concerns about the appropriate level of computer dependence for
military, corporate, and informational systems."
Maybe the TRW system was just busy writing this article for OMNI...nah, it
would have turned out a lot better. Anybody from TRW care to shed any light on
what this person was talking about?
503-696-4550
Bob Colwell, Intel Corp, 5200 NE Elam Young Parkway, Hillsboro, OR 97124
Re: TRW misreports local taxes (Seecof, RISKS-12.50)
Anthony DeBoer <adeboer@gjetor.geac.com>
Thu, 17 Oct 1991 09:28:57 -0400
Why is it that inaccurately negative credit reporting doesn't (or shouldn't) constitute libel under the law? It would seem that if someone told a third party all kinds of horrible things about me that weren't true, at a certain point a line would be crossed and they'd be liable for damages. What is TRW's defense for this?
TRW Credit Reports
Steve Hollasch <hollasch@kpc.com>
Thu, 17 Oct 91 11:00:34 PDT
In RISKS 12.51, Rob Spray gave the procedure necessary to obtain a free
credit report from TRW (TRW recently ``decided'' to send free credit reports to
people who wish to see their file, as maintained by TRW).
Included in the description of the procedure to obtain this report is the
following list of information that TRW needs before it will send it to you:
- Full name
- Spouse's first name
- Addresses with zip codes for last five years
- SSN
- DOB
It seems like a catch-22 that people who are concerned about privacy are
required to send this information in order to check their records. On thinking
about this list, though, it also becomes apparent that TRW must request some
private information to verify that the requester is authentic.
Is it reasonable to assume that TRW already has all this information? If
this is true, then my recent attempts to keep my SSN private seem rather
futile. Why bother keeping something private when it is available in a public
database (albeit for sale)? What information is available to those who
subscribe to this service?
Steve Hollasch, Kubota Pacific Computer, Inc., Santa Clara, California
Re: buggy software (RISKS-12.50) and Steinmetz
Mark R Cornwell -- Mind Tools Corp <cornwell@rock.concert.net>
Thu, 17 Oct 91 13:32:26 -0400
In recent exchanges between David Parnas (12.45,50) and James Shearer (12.49)
the issue of who should assume the risk of software bugs has arisen. Let me
say at the outset that I dislike the term "bug". I will use the term "error".
It is not necessary to make a blanket judgement about who should assume the
risk of programming errors. A software license can be structured so that
either the client or the vendor assumes this risk. The decision is best left
to them. The public then has recourse against the party who has agreed to
assume the risk.
That said, I think that it would be better for the software profession if
standard practice were that the vendor assume the risk of his programming
errors. I know of no better way to create an incentive for vendors to provide
quality software. If software vendors chose to be accountable for programming
errors in their products, they might be willing to try more "crackpot" ideas in
place of a process that few believe is working well.
Last night I was at a gathering of local entrepreneurs, speaking with an
independent software developer. He is president of his own corporation. He
writes programs. He described his work with XWindows and Motif on the latest
workstations in manufacturing. He asked me what kind of mathematics I studied
and I started to tell him about correctness proofs of programs. When I told
him that a program could be viewed as a function from states to states he was
fascinated. He said he had never thought of programs like that before, but the
idea appealed to him.
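  [A minimal sketch in Python of the states-to-states view described
  above; the toy program and composition operator are illustrative,
  not Cornwell's:

     def inc_x(s):
         # the program "x := x + 1", viewed as a function on states
         t = dict(s)           # treat states as values; don't mutate
         t["x"] = t["x"] + 1
         return t

     def seq(p, q):
         # sequential composition "p; q" is function composition
         return lambda s: q(p(s))

     prog = seq(inc_x, inc_x)   # "x := x + 1; x := x + 1"
     print(prog({"x": 3}))      # {'x': 5}
  ]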
I felt like Steinmetz must have felt talking to an early engineer who built
electric power systems. "You can think of the current plotted against time as
a sine wave".
"Very interesting, I'd never thought to look at it that way before."
--Mark Cornwell
[OK. Time to blow the whistle on this
subject, after the following messages... PGN]
Re: buggy software (RISKS-12.49,12.50)
<jbs@watson.ibm.com>
Wed, 16 Oct 91 20:22:22 EDT
In reply to my post suggesting buggy software is not a major threat to
the republic, David Parnas writes in part: "We could use the "mature adult"
excuse to get rid of all of these regulations, but we would all be worse off
for doing so. Your apartment could be destroyed because one of your "mature
adult" neighbours bought an appliance that was not properly designed. Your
child could be injured because one of your "mature adult" neighbours bought a
car with defective brakes. Further, every time you bought one of those
products you would have to determine its safety for yourself, whether you knew
enough to do so or not."
And what, pray tell, terrible disaster will befall me when my neighbor
buys buggy software for his PC? Also, if Mr. Parnas believes that the
regulation of autos and electrical appliances means there is no need for a
buyer to consider their safety, he is sadly mistaken. Tens of thousands of
people are killed using autos every year. This is somewhat more than are
killed using computer software.
David Parnas also writes: "Those who object to the suggestion that
software products should be subject to safety requirements and that software
manufacturers should be held responsible for the results of any negligence seem
to believe that we are asking for special treatment of software. Au contraire!
We are asking that software be treated like other products, produced by
registered or licenced engineers, and that software manufacturers be treated
like other manufacturers. Now, because of the supposedly non-physical nature
of software, programmer's products seem to have special exemption. If cars
were as buggy as the software on the market today, the automobile manufacturers
would have long ago been sued into bankruptcy."
Mr. Parnas appears to have things backward. So far as I know, software
is currently treated like any other product under the Uniform Commercial Code and
other general laws regulating commerce. Mr. Parnas is asking that software be
subject to additional special regulation like that imposed on certain hazardous
products such as cars despite the fact that in most cases software defects pose
no equivalent direct danger.
James B. Shearer
Re: buggy software (Parnas, RISKS-12.50)
Byron Rakitzis <byron@archone.tamu.edu>
Wed, 16 Oct 1991 23:47:55 -0500
Were it not for the hauteur in this posting, I would have let it go by. But let me just state: there are a number of us who believe that the product liability laws have gone way past any reasonable point. What used to be governed by contract law is now covered by the law of torts in the US, with the pervasive motif being "it's the rich guy's fault".

Please keep the hysterics to a minimum and try to assess what product liability laws have to offer: some added factor of safety at a huge cost. Witness the skyrocketing costs of medical insurance. Witness the fact that pharmaceutical companies are most reluctant to release new products, and for example have all but halted research on contraceptives.

Another snag is that liability is not determined by experts; it is determined by a JURY (at least in these United States). How is a jury going to make a reasonable decision on the alleged defectiveness of FooNix? And if some such laws come to pass, what software developers in their right minds will market products like FooNix? Will we all not be somewhat the poorer for this?

No doubt the intentions of such laws are noble: to protect the ignorant (your implication, not mine) public from being duped by unscrupulous business. However, such protection comes at a huge cost in liberty, money, and time, and this cost should not be so callously dismissed.
Self-regulation for buggy software instead of government regulation
<d3e198@bsip54.pnl.gov>
Thu, 17 Oct 91 12:45:56 -0700
Although I believe that both Mr. Kempe (RISKS-12.51) and Mr. Parnas (RISKS-12.50, 51) have good points, I believe Mr. Thomas (RISKS-12.50) points out a greater risk: that our profession may be painting itself into a corner. After the recent failures of computer systems (AT&T's phone outage delaying airline travelers, the stock market problem (I can't remember when it happened), plane crashes, etc.), it would take only a headline like "Computer Glitch Causes Nuclear Power Plant Meltdown, XXX People Evacuated, State Declared Off-Limits for Next 10,000 Years" or "Computer Glitch Causes Missile Launch, XXX People Killed" before the world's public would demand a reckoning (witch-burning comes to mind).

Instead, perhaps our profession should try to become more _self_ regulating. If we clean up our own act _before_ the government can step in, then it will be possible to set the regulations ourselves, instead of leaving them to the *all knowing* committees of governmental bureaucracy. Just as doctors are mostly self-regulating, so should the computer science field be. For example, as Mr. Massey suggests, the term "software" is used as a conglomerate label for a large, diverse genre of products. I feel that the computer science field itself should break it apart and do the categorizing. If we do not, someone else surely will, and will probably not do it in a way that most of the rest of us like.

Computers now control major parts of our lives, e.g., airline safety, automobiles, medical systems, nuclear power plants, etc. (Whether this control is good or bad, or whether the manner in which they do the controlling and interacting with humans is good or bad, is another topic.) They can easily do massive damage through negligence in the software or design, or through hardware failure. This risk of disrupting (even endangering) our lives creates a great need for regulation, as Mr. Parnas suggests. The public has a reasonable expectation of safety and reliability, and the only way to meet this expectation is through some sort of standardization. But as Mr. Kempe points out, pointing a gun at a programmer's head does NOT necessarily produce good code. Forcing people to do things usually ends in failure. The government is definitely not the body to do any regulating. I further agree that software engineering is an art.

The U.S. court system has been treating software like books with regard to copyright law, and many license agreements liken their software to books. [Therefore you must treat this software just like a _book_, with the following single exception. ... make archival copies of the software for the sole purpose of backing up your software and protecting your investment from loss.]

To avoid the problem of *Big Brother* watching the programmers, and to avoid taking the art out of software engineering, a self-adopted code of *ethics* can satisfy both the regulations _and_ the art. If one feels pride in following such a code, then there is a greater chance that one will continue to follow it willingly. It becomes a matter of personal integrity. This system is not foolproof (consider how many doctors are sued for malpractice each year), but nobody stops going to the doctor out of a belief that all doctors are quacks. Self-adopted codes would of course need validation. This could be provided with minimal intrusion by the government, e.g., by making it a law to provide free bug fixes (there's ALWAYS at least one more bug).

From there on, the economic laws of supply and demand based on quality should enforce adherence to the regulations. For example, if company X develops a reputation for having very reliable software the first time around, then people will tend to buy X's software, just as is done in the market for other products.

Richard Hanlon
Re: buggy software
Stephen G. Smith <sgs@grebyn.com>
Thu, 17 Oct 91 16:38:40 -0400
In RISKS DIGEST 12.51, skewes@CAD.MCC.COM (Ernesto Pacas-Skewes) writes:

>I'm just saying that registration and license like so many other
>things aren't always what they seem.

Ain't it the truth! In particular, I see registration, licensing, etc., as an attempt by the companies that write software to limit their liability. By having a "licensed professional" sign off on their software, they hope (IMHO!) to be able to say "This software was produced according to standard industry practice" when they produce the next Therac-25.

Unfortunately, there is no "standard industry practice" that will provide an assurance of good software, other than the blanket "good engineering practice". The main causes that I've seen for bad software are management issues, rather than technical issues. In particular:

1. Software always seems to be produced under *extremely* high stress. Only medicine and (possibly) law require high performance under higher-stress conditions. This is usually caused by assorted forms of bad scheduling.

2. Many software managers fancy themselves techies, despite the fact that they may never have written a line of production code. The urge to micromanage seems to be irresistible.

3. Specifications (IMHO, the most important part of the project) are often confusing, conflicting, or incomplete. "Formal verification" against a bad (English!) specification is, at best, a waste of time.

So how can we improve management? I dunno. Most of the outfits that I've worked for were strictly "top-down" -- directives come down from the top and status information goes up. Statements from the troops like "No way can this get done on time" tend to get lost.

Steve Smith, Agincourt Computing   sgs@grebyn.com   (301) 681 7395
Re: Bart Massey on "Buggy Software"
Bob Wilson <wilson@math.wisc.edu>
Thu, 17 Oct 91 16:21:11 CDT
Bart Massey believes you could make a case for licensing software production for medical software, etc., for which he sees a "threat of bodily harm", but he goes on to say that "it is clear that" no safety supervision is needed for computer games or word processors, because they can make no such threat. (Of course "it is clear that" is a dangerous phrase to dangle in front of a mathematician...)

He may be right about a threat of BODILY HARM, but there are other very real threats. RISKS has had several examples of potential harms from word processors, to take one of his examples. Many of us surely write memos or letters we need to go back and tone down: the original, if published, might be dangerous at least to our economic health. We have seen here in the last several years examples of commercial word processors that would retain in the disk version of your document what you had deleted from the printed version, or for that matter what might have been in a disk block unrelated to the present document. Those contents are not hard to look at (see the sketch after this message), if your employer sends somebody in to see what you have been saying about your boss or the company. The controversy over Prodigy and whether it was "stealing" copies of things from your system represents a RISK in a system frequently used for game playing. Regardless of whether Prodigy was doing it, the RISK is there for some other communications-related software to exploit.

I don't like the idea of licensing software engineers, but I think it is too simplistic to think the only dangerously RISKy software is that used for medical instruments, nuclear power systems, and vehicle controls. Those make nice examples because they are so far beyond question, but by the same token they also get more scrutiny. That scrutiny may not be enough, but it surely doesn't mean we can ignore the RISKS in more mundane applications.

Bob Wilson, Math Dep't, Univ. of Wisconsin
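  [A minimal sketch in Python of how easily such residue can be read:
  scan a document file's raw bytes for printable runs, much as the
  Unix strings(1) tool does. The file name is hypothetical:

     import re

     def printable_runs(path, minlen=8):
         # Report every run of printable ASCII in the raw file,
         # including blocks the word processor no longer displays.
         data = open(path, "rb").read()
         for m in re.finditer(rb"[ -~]{%d,}" % minlen, data):
             yield m.group().decode("ascii")

     for run in printable_runs("memo.doc"):
         print(run)
  ]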
Re: buggy software (RISKS-12.51)
David Parnas <parnas@qusunt.eng.McMaster.CA>
Fri, 18 Oct 91 17:42:39 EDT
Bart Massey claims there is no risk of harm from buggy word-processing software. If that software is used to produce the manual for a dangerous device, and deletes important warnings, there is a risk. I agree with Mr. Massey's attempts to draw lines, and I think it is obvious that there are shades of grey in this area, but the analysis is more difficult than it looks.

I am aware of all the weaknesses of licensing and registration raised by Ernesto Pacas-Skewes. I sometimes have to drive a car and observe what licensed drivers do with their properly registered (but inadequately equipped) cars. Nonetheless, when given a choice, I prefer drivers who passed a driving examination to those who never were able to do so.

Perhaps we should stop arguing about whether some regulation is ever needed and start to think hard about what should be regulated, how it should be regulated, and who should do the regulating. When we do, I think we will find useful ideas in other fields of engineering.

Dave Parnas   parnas@sscvax.cis.mcmaster.ca
Re: buggy software (Parnas, RISKS-12.51)
<jbs@watson.ibm.com>
Thu, 17 Oct 91 19:58:31 EDT
I suspect "variety of external pressures" means "competition", "if the
market were better controlled" means "if the competition was put out of
business" and "users were better informed about products" means "users stopped
worrying about minor factors such as cost, per- formance, timeliness and
function".
James B. Shearer
re: buggy software (bart, RISKS-12.51)
David Chase <David.Chase@eng.sun.com>
Thu, 17 Oct 91 17:20:54 PDT
> ... There is probably some intermediate class of
> software applications where a UL-like oversight body would be the
> appropriate answer.
This is a mere "difference of opinion", but I have certain expectations even of
games and word processors. In particular, I expect that there are no bugs in
the program which might result in the destruction of unrelated data stored on
the same computer. (At this point, of course, the OS vendor and the games
vendor engage in heated finger-pointing, and the customer is left grumbling.)
This is a far cry from the reliability I expect from an airplane, or an EFT
point-of-sale-terminal, but it is more than none at all, and I'd rather not
verify it for myself. Furthermore, past experience indicates that
"verification" is difficult and time-consuming, and not something that a
customer is interested in doing.
There is the second problem of "the use of this software". Software and
physical devices are both put to unintended uses. I can't think of any really
juicy software examples just this instant, but once upon a time someone I know
did use a bicycle cone wrench in place of a 70-amp slow-blow fuse (that had
blown).
David Chase, Sun
Licensing Software Engineers (still more)
Christopher E Fulmer <fulme-ce@lea.csc.ncsu.edu>
Fri, 18 Oct 91 13:26:51 -0400
With all of the discussion going on about the RISKS associated with not having a mechanism for licensing software engineers, it seems to me that we are ignoring a few basic points.

1. Licensing software engineers has the effect of reducing the number of people who are involved in software engineering. While this can be good (in the case of eliminating those who are incompetent), it also has the effect of putting the decision of "who is competent" in the hands of some governing authority, and that authority may have ulterior motives for making its decisions. The authority would have to be made up of practicing software engineers (who else is qualified to judge?), who may very well desire to keep the field small by making the standards tougher, thus increasing their own marketability. (Dry cleaners have managed to do this by pushing through "certification of dry cleaners" laws in some states.)

2. The market does tend to push out poorly-designed products. However, for some products, it may not be desirable to wait for the market to decide. After all, Audi's sales dropped after the problems with "instant acceleration" were found by real people, not before.

3. A mechanism which is typically used by government and commercial bodies is the "contract model," wherein certain specifications are set up, and products which do not meet those specifications are not paid for.

So, perhaps the solution to all this is not for the government to license software engineers, but instead to set up minimal specifications for safety-critical applications, or perhaps to provide for the independent licensing of the products, and not of the people who design them. In addition to solving the problems posed by #1 and #2 above, this also solves problems caused by bad software written by good people. Heck, certified engineers make mistakes all the time. What makes us think that programmers are different?

In conclusion, it's my opinion that we're trying to solve the problem of poor software quality by looking at the people who wrote it, instead of looking at the product of their work. Poor programmers, given enough time, can write perfectly good software, just as fantastic programmers in a rush can write terrible software. So, it's essential that we gauge the quality of the work, and not the quality of those who produced it.

Chris Fulmer   fulme-ce@lea.csc.ncsu.edu
