Forum on Risks to the Public in Computers and Related Systems
ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator
Volume 3: Issue 53
Wednesday, 10 September 1986
Contents
Hardware/software interface and risks - Mike Brown
More on Upside down F-16s - Mike Brown
"Unreasonable behavior" and software - Gary Chapman
Re: supermarket crashes - Scott Preece
Info on RISKS (comp.risks)
Hardware/software interface and risks
Mike Brown <mlbrown@nswc-wo.ARPA>
Tue, 9 Sep 86 09:48:31 edt
In RISKS 3.51 Bill Janssen writes of errors made in failing to consider the interaction of the hardware and the software under design. This failing was all too common in the writing of assembly and machine code during the early days of programming. Discrete-wired machines often had op codes that were not generally well known (i.e., the computer designers kept them secret). Interestingly, these unknown op codes were carried along when more modern machines emulated the original discrete design. An excellent example is the old IBM 4-PI CP-3 and the IBM 360. The hardware/software interface within the machine can create significant problems from both a software safety and a software security standpoint. Software designers will have to have an increasingly detailed knowledge of the total system to produce the safe, secure software that critical systems will require.
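One defensive response to such interface surprises, shown here only as a minimal sketch with an entirely hypothetical operation and test values (nothing below is taken from the 4-PI or the 360), is to probe the behavior the software depends on at start-up rather than assume that the hardware, or an emulation of it, matches the documentation:

    /* Illustrative sketch only: the routine name and test values are
     * hypothetical.  The idea is that software which depends on exact
     * hardware (or emulator) behavior can verify that behavior at
     * start-up instead of assuming it. */

    #include <stdio.h>
    #include <limits.h>

    /* Stand-in for an operation normally performed by the hardware or by
     * an emulated instruction whose corner-case behavior may be
     * undocumented. */
    static int hw_saturating_add(int a, int b)
    {
        long long r = (long long)a + b;
        if (r > INT_MAX) return INT_MAX;
        if (r < INT_MIN) return INT_MIN;
        return (int)r;
    }

    /* Start-up self-test: exercise the documented corner cases and refuse
     * to run if the machine does not behave as the software assumes. */
    static int interface_self_test(void)
    {
        if (hw_saturating_add(INT_MAX, 1) != INT_MAX) return 0;
        if (hw_saturating_add(INT_MIN, -1) != INT_MIN) return 0;
        if (hw_saturating_add(2, 3) != 5) return 0;
        return 1;
    }

    int main(void)
    {
        if (!interface_self_test()) {
            fprintf(stderr, "hardware/software interface check failed\n");
            return 1;
        }
        printf("interface assumptions verified; proceeding\n");
        return 0;
    }

The point of the sketch is only that interface assumptions can be checked explicitly instead of being left implicit in the code that relies on them.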
More on Upside down F-16s
Mike Brown <mlbrown@nswc-wo.ARPA>
Tue, 9 Sep 86 10:00:00 edt
In RISKS 3.52 Jon Jacky writes:

> ...it sounds like the right solution is to remind the pilots not to attempt
> obviously destructive maneuvers. ...if you take the approach that the
> computer is supposed to check for and prevent any incorrect behavior, then
> you have saddled yourself with the task of enumerating every possible thing
> the system should not do.

Perhaps a solution is to remind the pilots not to attempt obviously destructive maneuvers; however, relying on procedures to eliminate or reduce the risk of hazards is the least acceptable approach. Pilots are human and as such are prone to making errors. Look at the safety records for general aviation and the Navy: both are dismal, and the accidents are often reported to be due to pilot error. It's fine to tell the pilot "Lower your wheels before you land, not after," but we still have gear-up landings. We should not concern ourselves with checking for and preventing any incorrect behavior, but we should preclude the behavior that would result in damage to or loss of the aircraft or the pilot. Nor do we need to anticipate every possible mistake the pilot can make; all we need to do is identify the hazardous operational modes and prevent their occurrence.

Mike Brown, Chairman
Triservice Software Systems Safety Working Group
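As an illustration of preventing hazardous operational modes rather than enumerating pilot errors, here is a minimal C sketch; the states, commands, and names are hypothetical and not drawn from any actual flight program:

    /* Illustrative sketch only: one way to express "identify the hazardous
     * operational modes and prevent their occurrence" is a small interlock
     * that vetoes commands in hazardous states, rather than trying to
     * enumerate every possible pilot error. */

    #include <stdio.h>

    struct aircraft_state {
        int weight_on_wheels;   /* 1 if the aircraft is on the ground */
        int inverted;           /* 1 if the aircraft is rolled inverted */
    };

    enum command { CMD_RAISE_GEAR, CMD_RELEASE_STORES };

    /* Return 1 if the command may proceed, 0 if it is inhibited because it
     * would put the system into a known hazardous mode. */
    static int interlock_permits(enum command c, const struct aircraft_state *s)
    {
        switch (c) {
        case CMD_RAISE_GEAR:
            return !s->weight_on_wheels;   /* never retract gear on the ground */
        case CMD_RELEASE_STORES:
            return !s->inverted;           /* never drop stores while inverted */
        }
        return 0;  /* unknown commands are inhibited by default */
    }

    int main(void)
    {
        struct aircraft_state s = { 1, 0 };  /* on the ground, upright */

        if (!interlock_permits(CMD_RAISE_GEAR, &s))
            printf("gear retraction inhibited: weight on wheels\n");
        return 0;
    }

The interlock vetoes a command whenever the system is in a state where honoring it is known to be hazardous, which is a far smaller enumeration than every incorrect thing a pilot might do.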
"Unreasonable behavior" and software
Gary Chapman <chapman@russell.stanford.edu>
Tue, 9 Sep 86 14:28:24 pdt
Jon Jacky wrote:
I detect a notion floating about that software should
prevent any unreasonable behavior. This way lies madness.
Do we have to include code to prevent the speed
[of an F-16] from exceeding 55 mph while taxiing down
an interstate highway?
I certainly agree with the thrust of this. But we should note that there is
plenty of evidence that coding in prohibitions on unreasonable behavior will
be required, particularly in the development of "autonomous" weapons that
are meant to combat the enemy without human "operators" on the scene.
Here's a description of a contract let by the United States Army Training and
Doctrine Command (TRADOC), Field Artillery Division, for something called a
"Terminal Homing Munition" (THM):
Information about targets can be placed into the munitions
processor prior to firing along with updates on meteorological
conditions and terrain. Warhead functioning can also be
selected as variable options will be available. The
introduction of VHSIC processors will give the terminal homing
munitions the capability of distinguishing between enemy and
friendly systems and finite target type selection. Since
the decision of which target to attack is made on board the
weapon, the THM will approach human intelligence in this area.
The design criteria is pointed toward one munition per target
kill.
(I scratched my head along with the rest of you when I saw this; I've always
thought if you fire a bullet or a shell out of a tube it goes until it hits
something, preferably something you're aiming at. But maybe the Army has
some new theories of ballistics we don't know about yet.)
As Nancy Leveson notes, we make tradeoffs in design and functionality for
safety, and how many and what kinds of tradeoffs are made depends on ethical,
political and cost considerations, among other things. Since, as Jon Jacky
notes, trying to prohibit all unreasonable situations in code is itself
unreasonable, one wonders what sorts of things will be left out of the code
of terminal homing munitions? What sorts of things will we have to take into
account in the code of a "warhead" that is supposed to find its own targets?
What level of confidence would we have to give soldiers (human soldiers--we
may have to get used to using that caveat) operating at close proximity to
THMs that the things are "safe"?
I was once a participant in an artillery briefing by a young, smart artillery
corps major. This officer told us (a bunch of grunts) that we no longer needed
"forward observers," or guys attached to patrols to call in the ranges on
artillery strikes. In fact, said the major, we don't need to call in our
artillery strikes at all--his methods had become so advanced that he would
just know where and when we needed support. We all looked at him like he had
gone stark raving mad. An old grizzled master sergeant, who had been in the Army
since Valley Forge I think, got up and said, "Sir, with all due respect, if I
find out you're in charge of the artillery in my sector, I will personally come
back and shoot you right between the eyes." (His own form of THM "approaching
human intelligence", no doubt.) (I wouldn't be surprised if this major wrote
the language above.)
What is "unreasonable" behavior to take into account in coding software? The
major's or the sergeant's?
-- Gary Chapman
Re: supermarket crashes
"Scott E. Preece" <preece%ccvaxa@GSWD-VMS.ARPA>
Mon, 8 Sep 86 09:54:06 cdt
> From: mogul@decwrl.DEC.COM (Jeffrey Mogul)
> I don't often shop at that market, partly because the markets I do use
> have cashiers who know what things are rather than relying on the
> computer. Some day, just for fun, I might mark a pound of pecans with
> the code number for walnuts, and see if I can save some money.
----------
Does the word "fraud" mean anything to you? Even if your pet cashier can tell at sight a pound of pecans from a pound of walnuts, I don't see any reason to assume he would know what the correct price of either was, or even which was more expensive on a particular day. The cashier is just as dependent on price stickers in a price-marked store as the scanner is on the UPC label in a scanner store.

If I were designing a cash register, I'd make sure it could retain the current session through a power outage (no re-ringing the stuff already in the bags), but I don't think I'd require it to work while the power was off. Personally, if I were in a store when the power went out, I would leave quickly.

If power loss is COMMON in the area where the store is built, the designers should work around it (perhaps by providing battery-powered scanners or emergency backup power); in my neighborhood I think it's reasonable to write it off as a minor inconvenience -- the speed and efficiency of the scanners when the power is on is a more than reasonable trade for the inconvenience during the tiny part of the time it isn't.

-- scott preece, gould/csd - urbana
uucp: ihnp4!uiucdcs!ccvaxa!preece
arpa: preece@gswd-vms
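One way to read "retain the current session through a power outage" is to journal each item to non-volatile storage as it is rung up and replay the journal when power returns; the following minimal C sketch assumes a hypothetical journal file and record format, not any actual register design:

    /* Illustrative sketch only: journal each item as it is rung up so a
     * session can be rebuilt after a power loss instead of re-ringing the
     * items already in the bags. */

    #include <stdio.h>

    #define JOURNAL "session.journal"   /* hypothetical journal file */

    /* Ring up one item: append it to the journal and flush before the sale
     * total is updated, so a power loss cannot lose an already-bagged item. */
    static int ring_item(const char *upc, long cents)
    {
        FILE *j = fopen(JOURNAL, "a");
        if (!j) return -1;
        fprintf(j, "%s %ld\n", upc, cents);
        fflush(j);          /* a real register would also fsync() the file */
        fclose(j);
        return 0;
    }

    /* After power returns, replay the journal to rebuild the session total. */
    static long recover_session(void)
    {
        char upc[32];
        long cents, total = 0;
        FILE *j = fopen(JOURNAL, "r");
        if (!j) return 0;
        while (fscanf(j, "%31s %ld", upc, &cents) == 2)
            total += cents;
        fclose(j);
        return total;
    }

    int main(void)
    {
        ring_item("012345678905", 299);    /* hypothetical UPC, $2.99 */
        ring_item("036000291452", 1249);
        printf("recovered session total: $%.2f\n", recover_session() / 100.0);
        return 0;
    }

A production register would also have to guard against torn records and clear the journal when a sale completes, but the design choice is the same: the durable copy is written before the customer's view of the sale is updated.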
