The RISKS Digest
Volume 6 Issue 72

Thursday, 28th April 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Yet another skunk in the squirrel story
Rick Jaffe
Garbage ($20) in, garbage ($20) out
Joel Kirsh
Re: KAL 007
Steve Philipson
Civil aviation risks
Jon Jacky
Re: Creating alternatives to whistleblowing
John Gilmore
Re: textual tampering
John Gilmore
DoD (and the rest of us) protecting ourselves against viruses
John Gilmore
Re: Computer Viral Center for Disease Control?
Prentiss Riddle
Re: Fault tolerant systems…
Hugh Davies, Andrew Klossner
Info on RISKS (comp.risks)

Yet another skunk in the squirrel story

Rick Jaffe <umix!oxtrap!rsj@rutgers.edu>
Wed, 27 Apr 88 14:02:29 edt

I hadn't previously seen this particular risk relating to the story of “the squirrel that skunked NASDAQ”.

(from “SIAC Preps Net for DP Backup Site”, _Network World_, vol. 5, no. 17)

“Unfortunately, when NASDAQ switched data centers, it learned that most of its largest customers didn't have communications lines connecting them with the alternate site.”


Garbage ($20) in, garbage ($20) out

Joel Kirsh <KIRSH@NUACC.ACNS.NWU.Edu>
Wed, 27 Apr 88 15:00 CDT

(without permission from The Chicago Tribune, April 27th:)

NEW YORK (AP) "… Because some hapless employee loaded a canister of $20 bills into the slot for $5 bills, the First Federal Savings and Loan Association of Rochester's branch at 1st Avenue and 14th Street launched an accidental exercise in income redistribution.

“Although the cash machine panel has a 24-hour telephone for reporting problems … the response was … ‘one or two calls,’ according to bank spokesman Robert Nolan.”

“Instead, a line of eager card holders quickly formed at the machine. …”

“Nolan said the machine's records would show who used it and how large a withdrawal each person requested. He said customer accounts would be charged for the amount overpaid.”

“…But it was unclear whether the bank would be able to prove that all the bills in the $5 slot were really $20s.”

“…Overpayments like Sunday's are said to be extremely rare.”

“‘It's much more common for the reverse to happen - a customer is shortchanged,’ said John Love of Bank Network News, an industry newsletter.”

[If the Post Office has automatic stamp dispensers that can discriminate between $1s, $5s, etc., why don't ATMs have a similar test at the output? JK]


Re: KAL 007 (RISKS-6.70)

Steve Philipson <steve@ames-aurora.arpa>
Wed, 27 Apr 88 11:15:32 PDT

The article in RISKS 6.70 by Clifford Johnson sent me reeling. I don't have direct access to any primary sources of information on the KAL007 incident, but this story sounds like bunk to me. Here's an example of a major error:

To this day there has been no public congressional investigation into the KAL007 incident, even though the Air Force irregularly destroyed radar tapes of the flight, and even though Japanese tapes of the incident, et alia, strongly indicate that the course of KAL007 was deliberate. A statutorily required investigation by the National Transport Safety Board was inexplicably cancelled, documents lost, and gag orders placed on all civilian employees.

Let's begin with part of the last sentence. “statutorily required investigation by the [NTSB] was inexplicably cancelled”. To quote NTSB Part 830.1 Applicability:

This part contains rules pertaining to:
(a) Notification and reporting aircraft accidents and incidents and certain other occurrences in the operation of aircraft when they involve CIVIL AIRCRAFT OF THE UNITED STATES wherever they occur, or FOREIGN CIVIL AIRCRAFT WHEN SUCH EVENTS OCCUR IN THE UNITED STATES, ITS TERRITORIES OR POSSESSIONS. [emphasis added]

Thus the KAL 007 incident does not even require a report. To my knowledge, there is no US statute requiring investigation of military actions against, or accidents involving, aircraft of US manufacture. As for “radar tapes”, it seems unlikely that such tapes would have been useful, as the flight was outside the coverage range of both US and Japanese ground radars.

The rest of the article proceeds with various claims that run counter to information printed in a host of reliable publications, including the New York Times and Aviation Week. Johnson refers to _Shootdown_ by R.W. Johnson, who provides “astonishing” evidence that KAL007 was on an espionage mission. This certainly is astonishing, as all other available information leads away from this conclusion.

What we had here was a civilian aircraft blundering into airspace that is a military espionage playground. The Soviets appear to have demonstrated incompetence in shooting down a civilian aircraft when they were after a US military intelligence aircraft.

What has all this to do with RISKS? If we classify a massive error as a deliberate act, we dismiss the need to investigate why the error occurred, and remove all possibility of discovering and/or correcting any problems. The “deliberate act” explanation is a variation on “pilot error”. If an accident is simply hand-waved away as “pilot error”, we lose the opportunity to understand what in the system allowed that error to occur, and we do nothing to decrease risk and the possibility that the error will occur again. The really interesting thing to come out of the investigation of this incident is the multiplicity of ways that such an error could occur. It has given us much food for thought in designing systems that are safer.


Civil aviation risks (not computers, interesting anyway)

Jon Jacky <jon@june.cs.washington.edu>
Wed, 27 Apr 88 09:13:48 PDT

Here is a story about manufacturing defects in commercial airliners and how they were discovered and fixed. It is excerpted from

FAA, BOEING AND PROBLEM-SOLVING by Polly Lane, SEATTLE TIMES Sun Apr 17 88

“Maintenance being performed on an American Airlines 767 in the carrier's Tulsa maintenance center was fairly routine, until a mechanic discovered that cargo fire-extinguisher lines were crossed. The swapped lines meant trouble. Should a pilot discover an in-flight fire in the rear cargo compartment, he would immediately trigger the extinguisher system - but it would go off in the front compartment instead.

The mechanic reported his find to a Boeing Co. representative at American's center and to the Federal Aviation Administration. The Boeing rep called Boeing officials here (in Seattle) later that day, March 3, and followed up with a telex the following morning, a Friday. By Friday afternoon, inspectors were looking at 767's on the assembly line at Everett to determine whether it was an isolated case … They found some repeat instances — they didn't say how many — during inspections the following week.

On March 9, Boeing reported the findings to the FAA. The next day, a week after the discovery in Tulsa, Boeing sent a service letter advising customers of the potential problem.

The FAA backed up Boeing's letter by issuing a telegram, known as an airworthiness directive, to owners and operators of 767's. After a worldwide check it was determined that 27 of the 190 767's in service had fire-extinguishing hoses that were swapped. …

The FAA telegram was the result of a system dictated by Federal law. … The directive to fix the 767 fire-extinguishing system was relatively urgent, but not serious enough for the FAA to ground the airplanes until corrections were made. That hasn't happened since 1979, after an American Airlines DC-10 crashed at Chicago, killing 275. …

In the case of the 767 fire-extinguishing system, Boeing changed the size of the hose connections so lines to the front and rear were different. The change would help prevent future mistaken connections. … Designers also suggested the lines be separated so there is no chance of a repeat misconnection. …“

(I know it isn't a computer-related incident, but I was impressed by several lessons:

  1. Mistakes can be made during assembly; it is not valid to assume that the product that is delivered is the one that was designed.
  2. Systems that are used infrequently are hiding places for latent errors.
  3. It is important to have in place a responsive error reporting and correcting system.)

- Jon Jacky, University of Washington


Re: Creating alternatives to whistleblowing [RISKS-6.65]

John Gilmore <hoptoad.UUCP!gnu@cgl.ucsf.edu>
Wed, 27 Apr 88 00:08:46 PDT

The week I left Sun Microsystems (years ago), I was the featured speaker at the regular weekly software meeting. I offered some suggestions to ‘dissidents’ who were having trouble with management. (Of course, since my efforts to be a dissident and remain at Sun had failed, perhaps nobody took them seriously.) If enough RISKS folks care, I will transcribe the relevant parts of the tape.

For me the ethical issues were around things like:

Note that the net itself forms a communications medium for whistleblowers; many people report problems they're having with a company's equipment to the net when they can't get satisfaction from the company in private discussions. Sun's fix for the TFTP security hole, and its installation of subnetting, were both done in response to publicity on the net.


Re: textual tampering

John Gilmore <hoptoad.UUCP!gnu@cgl.ucsf.edu>
Wed, 27 Apr 88 00:29:06 PDT
> In our copy of RISKS DIGEST 6.60, occurrences of "ments" have been replaced
> with “<newline><newline>w”.

This is a common problem when compressed text files are damaged in transit. Compress works by remembering common strings of bytes and replacing each with a 9-to-16-bit code. The decoding process uses the text as it is produced to rebuild a copy of the string table built by the encoding process. If one of the codes is altered, it changes the table entry involved, and future references to that code will be translated to bad data. Not all copies of the string will necessarily be affected, depending on where the encoding algorithm breaks the text into strings.
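
To make the failure mode concrete, here is a minimal LZW sketch (compress itself uses variable-width 9-to-16-bit codes and a CLEAR code, which this toy version omits; the function names and sample text are invented for illustration). The decoder rebuilds the encoder's string table from the text it has already produced, so one damaged code corrupts a table entry and garbles every later reference to it:

    def lzw_encode(data):
        table = {bytes([i]): i for i in range(256)}
        w, codes = b"", []
        for c in data:
            wc = w + bytes([c])
            if wc in table:
                w = wc
            else:
                codes.append(table[w])
                table[wc] = len(table)            # encoder learns a new string
                w = bytes([c])
        if w:
            codes.append(table[w])
        return codes

    def lzw_decode(codes):
        table = {i: bytes([i]) for i in range(256)}
        prev = table[codes[0]]
        out = [prev]
        for code in codes[1:]:
            if code in table:
                entry = table[code]
            elif code == len(table):              # standard "not yet defined" case
                entry = prev + prev[:1]
            else:
                raise ValueError("corrupt code")
            out.append(entry)
            table[len(table)] = prev + entry[:1]  # decoder rebuilds the same table
            prev = entry
        return b"".join(out)

    text = b"developments in requirements management " * 4
    codes = lzw_encode(text)
    assert lzw_decode(codes) == text
    codes[12] ^= 2      # simulate a single code damaged in transit
    # Depending on the damaged value, the decoder either rejects the stream or
    # silently emits the wrong string AND records a wrong table entry, so later
    # text that refers back to that entry comes out garbled as well.
    try:
        print(lzw_decode(codes))
    except ValueError:
        print("stream rejected")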

The "#! rnews 682" at the end appears because netnews is packaged into batches, separated by "#! rnews" lines containing a byte count. Since the RISKS article shrank because of the decompression problem, the beginning of the next article was grabbed [a RISK of counted byte strings]. The news software notices that there is no "#! rnews" after the article, but it has already processed the corrupted message; it skips forward looking for another "#! rnews".
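
The unbatching can be sketched as follows (the real unbatcher is part of the news software; the articles here are invented, but the byte-count slicing and the resynchronization follow the description above):

    def split_batch(data):
        """Split an rnews batch on its '#! rnews <bytecount>' header lines."""
        articles, pos = [], 0
        while pos < len(data):
            eol = data.find(b"\n", pos)
            if eol < 0:
                break
            header = data[pos:eol]
            if not header.startswith(b"#! rnews "):
                # Out of sync: skip forward looking for the next header,
                # as the news software does after a damaged article.
                nxt = data.find(b"#! rnews ", pos)
                if nxt < 0:
                    break
                pos = nxt
                continue
            count = int(header.split()[2])
            articles.append(data[eol + 1 : eol + 1 + count])
            pos = eol + 1 + count
        return articles

    art1 = b"Subject: RISKS DIGEST 6.60\n\nnew developments in requirements\n"
    art2 = b"Subject: next article\n\nunrelated text\n"
    batch = b"#! rnews %d\n%s#! rnews %d\n%s" % (len(art1), art1, len(art2), art2)

    # Simulate the damage: the first article shrinks after the byte counts were
    # computed, so the stale count swallows the start of the next header line.
    damaged = batch.replace(b"requirements", b"\n\nw")
    print(split_batch(damaged)[0])   # first "article" now contains '#! rnews'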

I have seen cases where uucp's checksums did not detect errors introduced by horrible phone lines, and the TCP-IP world has recently been full of horror stories about the UDP and TCP checksum algorithms, so this happens often enough that the pattern is recognizable.
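
To give one concrete example of how weak these checks can be: the UDP and TCP checksum is a 16-bit one's-complement sum of the data taken as 16-bit words, so any error that merely reorders whole 16-bit words is invisible to it. A small illustration, with made-up data:

    def inet_checksum(data):
        """One's-complement sum of big-endian 16-bit words, as used by UDP/TCP."""
        if len(data) % 2:
            data += b"\x00"                  # pad odd-length data with a zero byte
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
        while total >> 16:                   # fold the carries back in
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    good = b"PAY 0100 TO BOB."
    bad  = b"PAY BOB. TO 0100"               # same 16-bit words, different order
    assert good != bad
    assert inet_checksum(good) == inet_checksum(bad)   # the checksum can't tell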


DoD (and the rest of us) protecting ourselves against viruses

John Gilmore <hoptoad.UUCP!gnu@cgl.ucsf.edu>
Wed, 27 Apr 88 01:31:30 PDT

The first thing anybody who wants protection against viruses should do is to stop buying computers that don't have, or don't use, memory protection. There is NO protection in a system where main memory, the operating system, and I/O devices and drivers are all open to subversion by any random user program.

Of course any machine containing an 8088 or 8086 is wide open. Any 68000, 68010, or 68020 without an MMU, ditto. This cuts out all the existing micros except high end ones running Unix.

Note that even if you install an MMU in a Mac II, the MacOS will not use it; you have to run A/UX [Unix] to get memory protection.

Note that OS/2 is not a protected environment, since it runs MSDOS programs in “real mode”, even on an 80386. Real mode basically means full access to the bare metal. It is also easy to circumvent system security in protected mode; protected-mode virus programs can get permission to do I/O instructions by claiming to need high-speed access to a graphics board or other special hardware. At this point the system is wide open again; they could write some data out to a disk drive and then instruct the disk drive to read it back into any location in physical memory — say, over the interrupt vectors or the global memory protection table.

It may be possible to run a castrated version of OS/2 that does not permit I/O instructions and does not run MSDOS programs, but then why would you bother running it? It's just another incompatible, proprietary OS. Unix already runs well protected on the same hardware, there are plenty more applications for Unix than OS/2, and Unix provides the same programming and user environment from the 8088 all the way up to Amdahls and Crays.

This is not to say that operating systems that provide memory protection are secure; it's just saying that if you want security, memory protection is step #1, without which everything else is useless.


Re: Computer Viral Center for Disease Control? (RISKS 6.70)

Prentiss Riddle <ut-sally!im4u!woton!riddle@uunet.uu.net>
27 Apr 88 15:47:11 GMT

A computer virus CDC is not a bad idea. If it is ever implemented, let's hope that it is part of the private nonprofit sector, or at least in some relatively open part of the government well removed from the security agencies — otherwise the center will be subject to the real or imagined RISK that it is a front for computer “germ warfare” research. (Visions of another DES scandal readily come to mind.)

— Prentiss Riddle (“Aprendiz de todo, maestro de nada.”)
— Opinions expressed are not necessarily those of my employer.
— riddle%woton.uucp@cs.utexas.edu  {ihnp4,uunet}!ut-sally!im4u!woton!riddle

Re: Fault tolerant systems…

<"hugh_davies.WGC1RX"@Xerox.COM>
27 Apr 88 01:25:31 PDT (Wednesday)

I have read this story in several places in the UK computer press. Regrettably I have long since trashed the source material, but I'm fairly sure about it.

Tandem make a fault tolerant computer system which is very popular with financial institutions. It has a lot of redundant hardware, so that failure of one subsystem doesn't bring down the whole machine. One of the favourite ‘tricks’ whilst demonstrating this feature is to get a bystander to point at a (random) board in the machine and then pull it out, proclaiming ‘Look, it's still up!!!’.

Unfortunately, DP managers at customer sites were doing this to impress their friends (colleagues, bosses?). So the story goes, the machine was then dialling Tandem (by itself) to report the ‘failure’, resulting in a deluge of spurious fault reports at Tandem's HQ. The story continues that Tandem have now put in a timer to stop the machine dialling until the DP man has had a chance to plug the board back in.
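
If the story is accurate, the fix amounts to debouncing the fault report. A sketch of that idea follows; the delay value, class, and interface are invented for illustration and are not Tandem's actual mechanism:

    import threading

    REPORT_DELAY = 15 * 60                   # hypothetical grace period, seconds

    class BoardMonitor:
        """Hold the dial-home report long enough for a re-plugged board."""
        def __init__(self, dial_home):
            self.dial_home = dial_home       # callback that phones the vendor
            self.pending = {}                # slot number -> running timer

        def board_removed(self, slot):
            timer = threading.Timer(REPORT_DELAY, self.dial_home, args=(slot,))
            self.pending[slot] = timer
            timer.start()

        def board_reinserted(self, slot):
            timer = self.pending.pop(slot, None)
            if timer is not None:
                timer.cancel()               # board came back in time: no report

A demonstration pull followed by a prompt re-insert cancels the timer; a genuine failure still gets reported, just a little later.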

eugene@ames-aurora.ARPA asked about strange benchmarking type stories. When we first got our (well, perhaps I'd better not say) supermini, we were plagued with problems where random chunks of files would have their contents swapped, so you'd end up with things like ‘ekil sgniht htiw pu dne d'uoy’ - only hundreds (sometimes thousands) of bytes. The hardware men blamed the software and the software men blamed the hardware (as usual). After about 6 weeks of fixing files, we finally discovered we were running microcode for a machine without an FPP, and ours had an FPP. As soon as we corrected that, the problem went away. We never did discover what floating point arithmetic had to do with swapping bytes in files…

Hugh Davies, Rank Xerox, England.

Avoiding fault tolerance of broken floating point unit

Andrew Klossner <andrew%frip.gwd.tek.com@RELAY.CS.NET>
Tue, 26 Apr 88 16:25:01 PDT

“There was also provision for the PROM to contain a list of attached equipment; the boot ROM could then check to make sure that it had found everything that was supposed to be there. Unfortunately HP decided that the custom PROMs added too much to manufacturing cost.”

The engineers of the Tektronix 6130 workstation devised yet another solution to this problem. After the diagnostics (boot ROM and friends) finish looking over the system, they compare the list of attached equipment with the previous list, stored on disk. If they don't match, a message is printed and the system boot won't proceed until the operator keys an acknowledgement, at which point the disk list is updated.

The bad points are: you have to use other methods to be sure that everything works the first time you boot (when there is not yet an equipment list on disk); and, if the configuration changes (either because you unplugged something or because a component failed), the system won't reboot itself back to fully operational state after a power failure.
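
For illustration, the check might look something like this sketch (the file location, format, and function are invented; the real 6130 logic lives in the boot diagnostics). Both weak points show up directly in the code:

    import json, os

    INVENTORY = "/etc/equipment.list"        # hypothetical on-disk location

    def check_configuration(detected, prompt=input):
        """Compare the hardware the diagnostics found against the saved list."""
        found = sorted(detected)
        if not os.path.exists(INVENTORY):
            # First boot: nothing to compare against, so just record what's here.
            with open(INVENTORY, "w") as f:
                json.dump(found, f)
            return
        with open(INVENTORY) as f:
            stored = json.load(f)
        if found != stored:
            print("Missing since last boot:", sorted(set(stored) - set(found)))
            print("New since last boot:    ", sorted(set(found) - set(stored)))
            # Boot stalls here until an operator answers -- so an unattended
            # machine will not come back up by itself after a hardware change.
            prompt("Press return to accept the new configuration: ")
            with open(INVENTORY, "w") as f:
                json.dump(found, f)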

  -=- Andrew Klossner   (decvax!tektronix!tekecs!andrew)       [UUCP]
                        (andrew%tekecs.tek.com@relay.cs.net)   [ARPA]
