The RISKS Digest
Volume 17 Issue 25

Friday, 11th August 1995

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

"Computer" gets exam results wrong: Rounding error
Donald Mackie
The Royal Majesty revisited
Bob Frankston
Re: An unusual off-by-one error
William R. Ward
Re: Total surveillance on the highway
Steve Branam
Re: False zero reading possible on voltmeter
Keith Gershon via Mark Brader
Re: Warning on Using Win95
Nandakumar Sankaran and Mike Goldsman
Re: Oakland ATC Problem
Joel Runes and Paul M. Karagianis
Peter Ladkin
Tracy Pettit
Re: Australia next to ban PGP
Simson L. Garfinkel
Re: Australia next to ban (good crypto)
A. Padgett Peterson
IMA conference on the Mathematics of Dependable Systems: Program
Victoria Stavridou
ABRIDGED Info on RISKS (comp.risks) [See other issues for full info]

"Computer" gets exam results wrong: Rounding error

Donald Mackie <donald@iconz.co.nz>
Fri, 11 Aug 1995 22:16:23 +1200 (NZST)

The Australia and New Zealand College of Anaesthetists is the professional body regulating training and qualification in anaesthesia in this area. To become a Fellow of the College, trainees take two exams. Each is composed of three parts: a multiple-choice paper, essays, and an oral exam. The oral is held some weeks after the written papers. Candidates are told their results at the end of the day of oral examinations.

At a recent sitting of the primary exam a number of candidates failed. "Three or four" (out of an estimated sixty sitting the oral) were contacted a few days later to be told that they had, in fact, passed.

The "explanation" they were given was as follows. The exam format had changed. Instead of three essay questions, there were ten. The essays were weighted (as before) at 30% of the overall mark. A new computer system (not specified) was used to calculate the scores. "It" had rounded all figures down. The unlucky/lucky candidates marks had been rounded below the pass/fail threshold after division. Rounding was not a problem before, as the raw essay scores were merely added to the exam total (30 marks max = 30%).

We look forward to a fuller explanation in the next examiners' bulletin of how the College came to use such a foolish computer program.

Don Mackie MB ChB FRCA FANZCA Dept. of Anaesthesia Middlemore Hospital
Auckland, NZ +64 9 276 0168 Donald@iconz.co.nz CompuServe 100237,3534

The Royal Majesty revisited (RISKS-17.19)

<Bob_Frankston@frankston.com>
Fri, 11 Aug 1995 12:10 -0400

The Royal Majesty was in _The Boston Globe_ again today (11 Aug 1995). The explanation given for its being off course is that the GPS antenna failed and the alarm was not loud enough to alert the crew to switch to Loran. So it wasn't "our" fault. The software worked perfectly, and the software methodology worked perfectly.

Yeah, sure. The problem is that just churning out code is the trivial aspect of designing systems. A much, much harder problem is the ad-hoc integration of disparate systems. Software methodologies do little good unless we can deal with real-world interactions and unless the system design takes on the right responsibility. It seems that no system really took responsibility for navigation. Of course, the Captain is responsible, but that's moot if the subsystems are reliable enough that constantly watching a perfect system is counterproductive.

How do we design systems that can take responsibility for component failures, and how do we get the components to cooperate? We don't have any good answers. Ironically, if we do manage to get a handle on the integration and responsibility problems, much of the methodology used to build individual systems becomes less important, since, theoretically, the individual systems should become simpler and less critical.

What work is being done to deal with the issues that arise as systems interact? There is work on component integration and on building standard building blocks, but rigid integration is an issue. What happens when subsystems fail? The problem is that failures are not well-defined. How does one report a failure that isn't in the semantics of the client? The more serious problem with failures is that they tend to occur when things are going wrong (of course) and thus frustrate attempts towards neat solutions. The user of a GPS system might not even realize that there is a flag that can switch reporting between magnetic and true North. What happens when that flag flips? (*67 is another example; that's an implicit reference, but readers of RISKS should recognize the "toggle caller ID" problem.)
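
A toy sketch (Python) of the kind of silent-flag problem just described: a unit that can report bearings relative to either true or magnetic North, and a client written assuming one convention. The 15-degree westerly variation is an illustrative value only, not a figure from any chart.

  MAGNETIC_VARIATION_W = 15.0   # degrees west; hypothetical value for this locale

  def reported_bearing(true_bearing, report_magnetic):
      # What the unit puts on the wire, depending on an internal flag.
      if report_magnetic:
          return (true_bearing + MAGNETIC_VARIATION_W) % 360
      return true_bearing

  def plot_course(bearing):
      # The client was written assuming bearings are relative to true North.
      print("plotting course %05.1f degrees true" % bearing)

  plot_course(reported_bearing(75.0, report_magnetic=False))  # 075.0, as intended
  plot_course(reported_bearing(75.0, report_magnetic=True))   # 090.0, silently wrong
  # Nothing in the reported number says which convention it uses.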

As a long-time reader of RISKS, I see repeated complaints about how systems are not reliable, but I view reliability as a crutch. The problem is not assuring that all components are perfectly reliable, but rather delivering a sufficient service out of unreliable components. Any navigation system is an unreliable component.

Sufficient, in the case of the Royal Majesty, means first assuring that it doesn't sink, and second that it gets to the right destination on the right course. The rest are details. At least it didn't sink in the middle of the ocean.


Re: An unusual off-by-one error (Brader, RISKS-17.24)

William R. Ward <hermit@cats.ucsc.edu>
11 Aug 1995 01:48:16 GMT

) I emailed the list's maintainer, Alex Lopez-Ortiz, to point out what I
) assumed was a typo. He replied to say that actually the error was the
) result of a bug — he had used a program to remove LaTeX math formatting
) codes from an online copy of the paragraph, and it had apparently shifted
) each formula by one position in the text!

This is particularly interesting because the book has as one of its co-authors Donald Knuth, who, as we all know, was the originator of TeX, from which LaTeX is derived. Perhaps the program was getting back at its author in some way?

William R Ward Bay View Consulting Services 1803 Mission St. #339
Santa Cruz CA 95060 USA +1 408/479-4072 hermit@bayview.com

Re: Total surveillance on the highway (RISKS-17.23)

Steve Branam - Hub Products Eng 226-6043 <branam@dechub.lkg.dec.com>
Thu, 10 Aug 95 13:21:49 EDT

Phil Agre writes:

> As deployment of these systems accelerates, some of the transportation
> authorities have begun to recognize the advantages of anonymous toll
> collection technologies. For example, if you don't have any individually
> identifiable records then you won't have to respond to a flood of
> subpoenas for them.

This is an interesting point. It brings to mind the issue of regulation and legal responsibilities of database owners.

Are there any efforts in the US to make database owners responsible and legally liable for the accuracy of their data, the access to it, and the uses of it? Thus, a database owner might be subject to criminal or civil liability if it contained inaccurate information regarding an individual (regardless of the purpose of the database); if someone gained unauthorized access to it or abused authorized access (these might be regarded as failure by the owner to take adequate security measures); or if someone used that data to commit a crime.

One could imagine an all-encompassing scenario where a database record incorrectly lists a home address, a stalker gains unauthorized access to the record, goes to the wrong address, and murders the wrong victim in the dark. (Of course, this might happen just by using the phone book or other public records, but people know that those records are not always accurate and up to date, whereas they assume computerized records are.) The database owner might justifiably argue that they cannot be responsible for the inappropriate use of the data when it is used without their knowledge, but they should be liable on the other two counts. If, on the other hand, the data is knowingly made available to someone engaged in illegal activity, the owner should be liable.

Database owners would probably argue that such liability would be burdensome and would simply be bureaucratic invasion of their enterprise. The irony of this argument would probably never occur to them. However, there might be several benefits to society. First, a database owner who was not diligent in ensuring accuracy or protection of the data could be held accountable. How many people's lives have been disrupted, sometimes severely, because some municipal database couldn't handle someone with the same name? And the response of the database owner? Gee, we're sorry, maybe you should change your name. Second, it would be another charge to use against someone engaged in illegal activity, ensuring that they are punished even if they were able to evade other charges (remember why Al Capone went to jail?). Third, it could discourage database owners from storing such detailed data in the first place, in favor of anonymous systems. Make it a hassle for them if they can't justify it.

Right now, massive collection of incidental data takes place because it is so easy; it just costs storage space, which is cheap. There is no risk in it. Put some legal risk into it, and people will start being a little more careful, storing only what is truly necessary and keeping a closer eye on it. An analogy is a financial institution, where people voluntarily hand over access to and control of their funds. What bank would be silly enough to leave the cash lying out in the open? What bank would stay in business if it did not take adequate measures to protect that money? What bank would be permitted to knowingly launder money for illegal activities? If we simply replace money with information (which is generally already the case in the age of the plastic card!), we have a good initial model for responsibility and liability.

Far be it from me to propose greater bureaucracy, but the apparent total lack of responsibility of database owners cries out for regulation. I infer from several items about database abuse in the UK that databases there must be registered in some way with the government. I don't know whether that has worked out well, but it's not a bad idea when backed up with some teeth. A database can be a dangerous weapon.


Re: False zero reading possible on voltmeter

Mark Brader <msb@sq.com>
Fri, 11 Aug 95 02:31:08 EDT

[Here is a further forwarding from sci.engr.safety, where the item was posted by Keith Gershon (kdgershon@lbl.gov). I have cleaned up the formatting a bit, and deleted the quoting of the previous message, which Risks readers can retrieve from the archives. — Mark Brader]

Jeez, I was afraid of this...

I do not know this poster, but I originated the cautionary notice re Fluke meters to a very small group of electrical safety people, and it is spreading out of control. The info was VERY PRELIMINARY at the time, & I do not want to set off any kind of anti-Fluke stampede. The facts quoted below are accurate for the most part. We did discover this flukey defect (pun intended) in one of our labs. Fluke has NOT totally duplicated the condition at the factory, but they have been able to get the meters to freeze up under limited conditions. Meanwhile, we have sent the 2 meters in question back to the factory for analysis. I will post the results to this newsgroup. I would like to emphasize that thus far, Fluke has been very cooperative, straightforward and concerned. Anyone in the business knows that their meters are top notch.

By the way, if anyone out there uses these particular meters to test 500-1000 vDC, I would appreciate it if you could do the experiment described [in the previous message] and forward the results to me. I will compile and forward to Fluke. Thanks-

Keith Gershon Electrical Safety Engineer Lawrence Berkeley Laboratory
510-486-7067 kdgershon@lbl.gov

Re: Warning on Using Win95 (RISKS-17.24)

SankaranN <nandu@longs.att.com>
Thu, 10 Aug 1995 12:32:45 -0600 (MDT)

!From: bradsi@microsoft.com (Brad Silverberg, Sr VP Microsoft Corp)
!
!The FACTS: These stories are NOT TRUE.

When I started to read the article, I understood the above line to mean "The fact is that these stories are not true" and, without further qualification, took the word "these" to refer to the clarifications from Brad that followed. By the end of the third clarification, I was led to believe that Brad was actually confirming the rumors I had come across, at least until I read the rest of Brad's message and PGN's footnote.

Nandakumar Sankaran
[Also remarked on by cerebus@netcom.com (Mike Goldsman), who added: Hmmm. If these are NOT TRUE, then I assume that on-line registration is compulsory, the process automatically signs you up with MSN, and uploads a copy of every hard drive on your local network. No, I DIDN'T read the original article... Remember, there are those of us who are jumping into the middle of conversations. PGN]

Re: Oakland ATC Problem (RISKS-17.24)

<Joel_Runes@email.fpl.com>
Thu, 10 Aug 95 14:36:48 EST

In RISKS-17.24, Peter G. Neumann cited a _San Francisco Chronicle_ article that asserted that the Oakland ARTCC covers an area of 18-million square miles. This cannot be so, even if the airspace over the eastern Pacific Ocean is included. Of course the aircraft at greatest RISK were not evenly distributed. Aircraft in the terminal areas in Northern California and Reno were controlled by the Approach Control facilities or the Control Towers. For enroute aircraft, the matter of clear skies could have been more of a RISK than a help, since there are fewer aircraft (or none) purposefully flying under Visual Flight Rules when the weather is bad.

The Airman's Information Manual and Federal Aviation Regulations specify the procedures to be used for enroute aircraft when radio communication is lost. Those procedures are geared toward a problem with one aircraft rather than with the ARTCC, but, particularly with the abundance of military radar facilities in California and Nevada, most aircraft should have been able to establish contact with some other radar-equipped ATC facility.

[I thought about the seeming absurdity of the 18M-SqMile area when I first saw it in the Chron, and meant to flag it in RISKS-17.24 — but then forgot. "Paul M. Karagianis" <KARYPM@SJUVM.bitnet> noted that such a circle has a radius of over 2393 miles, and I observe that the Page 1 map appears to show a somewhat prolate shape with a pseudoradius of maybe 250 miles in the artist's conception, or something less than 200,000 square miles. Off by a factor of 90? Not bad at all. PGN]
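
For readers who want to check the arithmetic in PGN's note, a quick sketch (Python):

  import math

  claimed_area = 18e6                               # square miles, as printed
  print(round(math.sqrt(claimed_area / math.pi)))   # 2394: "a radius of over 2393 miles"

  plausible_radius = 250                            # miles, from the Page 1 map
  plausible_area = math.pi * plausible_radius ** 2
  print(round(plausible_area))                      # 196350: "less than 200,000 square miles"
  print(round(claimed_area / plausible_area))       # 92: "off by a factor of 90"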


Oakland Center Outage a RISK?

Peter Ladkin <ladkin@inrs-telecom.uquebec.ca>
Fri, 11 Aug 1995 17:22:36 -0400

In RISKS-17.24, Peter Neumann notes the outage at the Oakland En-Route Air-Traffic Control Center. He reports:

Fortunately, the weather over northern California was good, and en-route flights could operate visually.

A commercial flight is under en-route control, the least demanding flight regime, when it is not near its airport of take-off or landing. The air-traffic control system is designed to be safe in the event of total communications failure. Airplanes have altitude and routing requirements (`clearances'), information on future expected clearances, and precise procedures to follow, which together define the entire flight at any time. Flight rules require that the FAA guarantee that this space-time slot is free from conflict. Controllers lose significant efficiency under communications failures by losing the ability to modify clearances dynamically, but in principle safety is not compromised. It's not clear there would be any computer-related safety RISK.

In positive control areas at lower altitude covering airports of take-off and landing, there may be a risk at the exact moment of outage, since airplanes are often given partial routing clearance (e.g. `radar vectors') to sequence them for landing and take-off. No such areas were affected in this incident.

The rules allow flights to operate `visually' only below 18,000 feet, usually attained by a commercial jet within, say, 10 minutes of take-off. Flights into and out of SFO operate under positive control of the terminal area controller at and below 12,000 feet, and this control remained available (I don't have an SFO chart or regs handy — I believe the positive control extends to 12,000 feet but I may be mistaken.) Between 12,000 feet and 18,000 feet, the crew must be looking out of the window to spot and avoid other planes flying under `visual' rules and not under positive control. They *must* do this, comm failure or not: the en-route controller may issue `traffic advisories' but these are not definitive information about other traffic flying under visual rules. TCAS may help. These intermediate altitudes are thus potential `trouble spots' and some thought has been given to providing some positive-control-only airspace for all altitudes on the ascent/descent routes in the vicinity of major airports, so that commercial flights may fly entirely in positive-control-only airspace.
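
To make the altitude bands in the preceding paragraph concrete, here is a rough sketch (Python). The 12,000-foot terminal-control ceiling is Ladkin's own hedged recollection, and real airspace boundaries are of course far more complicated than a single altitude test.

  def coverage_during_outage(altitude_ft):
      # Rough bands for the SFO area as described above, during the outage.
      if altitude_ft >= 18000:
          return "positive control only: no visual-rules traffic permitted"
      if altitude_ft > 12000:
          return "trouble spot: possible visual-rules traffic, crews must see and avoid"
      return "terminal area: approach/tower positive control still available"

  for alt in (35000, 15000, 8000):
      print(alt, coverage_during_outage(alt))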

It may have been more `fortunate' had the weather been bad at the intermediate altitudes. Potential conflicts with visual-rules traffic do not arise where visual flight is not possible.

Of course, all this presumes that everyone follows the rules.

Peter Ladkin

Re: Oakland ATC Problem (RISKS-17.24)

Tracy Pettit <tnpetti@nppdnet.com>
Thu, 10 Aug 95 17:11:29 PDT

While this was triggered by the news of the Oakland air-traffic control center outage, I hope it can give RISKS readers something new to worry about as well.

Both the article I read in my morning paper and the item in RISKS regarding the outage at the Oakland air-traffic control center seemed to indicate that this was just another example of problems with the aged computers being used for air-traffic control.

In this particular case it would not matter what method or equipment was being used; the outage would have occurred anyway. The only way to avoid it would have been if the entire air-traffic control center were operated by strictly mechanical means, say with falling water turning wheels and moving levers. The outage was caused by a LOSS OF POWER (hence the loss of radar, radio contact, and probably lights as well is not surprising). The real problem is poor planning by the designers in failing to ensure a constant supply of electricity.

This hits home because my job is with a public power utility, keeping the computers going that keep the electricity on. The power grid across the country must constantly be kept in "tune", or it will fail. As people turn switches on or off, generators must change their output. As loads go up and down, capacitors and reactors must be switched in and out of the system to keep power flowing efficiently. (This is an extreme simplification of electric power system control.) Each company does this (or has it done by another utility) by monitoring the power system every few seconds using, you guessed it, computers.

While usually nondescript or at least unlabeled, power company buildings sometimes resemble small bunkers with some combination of fencing and electronic surveillance. In other cases the building may look normal from the outside, but once inside the security becomes quite apparent. There are two reasons for this. The first is that the control operation must continue no matter what the weather. Our building is designed to take a direct hit from a tornado without affecting the control-room personnel. The second reason is to keep the bad guys out. From within one of the control rooms of a larger utility, you could quite easily black out large areas of the immediate control area (to disable alarm systems and screw up police and other emergency communications?) and theoretically pull down the entire grid (eastern or western).

In the past the computer systems have been electronically separated from the outside world. As the older systems (ours is late-1960s hardware with 1970s software) are replaced, the tentacles of networking are getting hold. To ensure we do not lose the operation of these systems, we start with multiple computer systems, each of which will take over for the other(s) if a hardware or software problem arises. But now I come full circle back to the air-traffic control center. Our building has two separate power feeds, coming from substations and transmission lines located miles apart. Should both of these fail, there is a backup generator with many days of fuel. If this fails, there are two redundant battery UPSs (Uninterruptible Power Systems) that will each last several hours. In addition, internal switchgear (power distribution) and cooling systems are either redundant or sized such that the remaining units can handle the load of a failed unit. We believe we have minimized the risk of a power outage inside the building. This is expensive to build and maintain and comes out of your electric bill, but the alternative would be increased numbers and durations of power outages. Have you ever tried to live even a few hours, especially in extremes of weather, without electricity? Would you like to be flying into an airport without electricity? Why did the planners building the air-traffic control center not think a constant supply of power would be important?
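
A back-of-the-envelope sketch (Python) of why this layering matters. The unavailability figures are made up for illustration; real numbers would come from utility outage statistics and equipment MTBF data, and the independence assumption is optimistic, since storms and common-mode faults violate it.

  sources = {
      "feed A (substation 1)": 1e-3,   # fraction of time unavailable
      "feed B (substation 2)": 1e-3,
      "diesel generator":      1e-2,   # includes failure to start
      "battery UPS pair":      1e-3,
  }

  # If the failures were independent, the chance of losing every source
  # at once would be the product of the individual unavailabilities.
  p_total_loss = 1.0
  for p in sources.values():
      p_total_loss *= p

  print("all sources down simultaneously: %.0e" % p_total_loss)   # 1e-11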

And finally, why is it that when you have tons of machinery, a lot of highly volatile fuel, and a couple of hundred people going several hundred miles an hour 40,000 feet up in the sky (in direct opposition to gravity and friction), you are allowed to "remove your seat belt and move freely about the cabin", but when you are firmly planted on the ground, creeping along at less than 10 miles per hour, you are sternly warned to "keep your seat belt fastened until the plane has come to a complete stop"?

Tracy Pettit Systems Development Supervisor Nebraska Public Power District
tnpetti@nppdnet.com

Re: Australia next to ban PGP [unverified info ...]

Simson L. Garfinkel <simsong@vineyard.net>
Fri, 11 Aug 1995 07:45:53 -0400

>p 34: `the needs of the majority of users of the infrastructure for
> privacy and smaller financial transactions can be met by lower
> level encryption which could withstand a normal but not
> sophisticated attack against it. Law enforcement agencies could
> develop the capability to mount such sophisticated attacks.
> Criminals who purchased the higher level encryption products
> would immediately attract attention to themselves.'

This is, unfortunately, not true. The only way to tell strong encryption from weak encryption is to actually go about trying to break the encrypted message. Thus, this policy would imply that *all* encrypted messages would be broken, in order to separate out the weak from the strong.

Faced with this as an alternative, I would certainly prefer the Clipper chip, which at least promises good encryption to all.

Simson L. Garfinkel PO Box 4188 10 Spring Street Vineyard Haven, MA 02568
508-696-7222 simsong@vineyard.net House tour: http://vineyard.net/10spring

Re: Australia next to ban (good crypto) - (Anderson, RISKS-17.24)

A. Padgett Peterson, Information Security <padgett@tccslr.dnet.mmc.com>
Thu, 10 Aug 95 17:43:15 -0400

Australia's proposed crypto policy:

  1. Banks will get key escrow
  2. Other Australian residents will be forced to use weak crypto
>p 34: `the needs of the majority of users of the infrastructure for
> privacy and smaller financial transactions can be met by lower
> level encryption which could withstand a normal but not
> sophisticated attack against it.

Politicians love vague phrases like "the needs of the majority...". The RISK here is that you can make such blanket statements without any statistics, and if you choose the period properly (say 1950-1970) for determining the need, you may even be right.

> Criminals who purchased the higher level encryption products
> would immediately attract attention to themselves.'

This is the one that confuses me every time I see it (and I have seen it a lot), but it really seems to be popular among the clueless. WHY should good crypto immediately attract attention? Consider the following:

  1. Person has pgp. Encrypts file.
  2. Person then double encrypts with "approved" crypto.
  3. Sends file.
Now the only way anyone is going to notice is if they routinely decrypt *everything*.
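
A toy sketch (Python) of the layering described above. Neither cipher here is real: the repeating-key XOR is only a self-contained stand-in, with one layer pretending to be PGP and the other pretending to be the sanctioned, escrowed cipher. The point is only that an observer (or an escrow agent) sees nothing but the outer wrapper.

  import os

  def xor_stream(data, key):
      # Stand-in cipher: repeating-key XOR.  NOT real crypto; used for both
      # layers purely to keep the sketch self-contained.
      return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

  strong_key   = os.urandom(32)   # pretend this layer is PGP
  approved_key = b"escrowed"      # pretend this layer is the sanctioned cipher

  plaintext = b"meet at the usual place"
  inner = xor_stream(plaintext, strong_key)   # strong layer first
  wire  = xor_stream(inner, approved_key)     # then the "approved" wrapper

  # Stripping the approved layer (as an escrow agent could) just yields
  # more ciphertext, which to casual inspection looks like noise.
  print(xor_stream(wire, approved_key) == inner)   # True, but still opaque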

Also, this only works because a pgp-encrypted file advertises the fact. What if steganography is used? A message in the lower bits of a JPEG? A CD-ROM OTP? Navajo? Or does the proposal include a requirement that only messages understood by the gov be sent? The fact is that while it is fairly easy to separate plain text from "something else", it is very difficult to account for all the "something elses". IMNSHO, YAPBI.
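
For the steganography case, a minimal least-significant-bit sketch (Python) over raw pixel bytes, as in an uncompressed bitmap; naive LSB embedding would not survive JPEG's lossy recompression, so this only illustrates how little a carrier changes when it hides a message.

  def embed(carrier, message):
      bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
      if len(bits) > len(carrier):
          raise ValueError("carrier too small")
      out = bytearray(carrier)
      for i, bit in enumerate(bits):
          out[i] = (out[i] & 0xFE) | bit       # overwrite only the lowest bit
      return bytes(out)

  def extract(carrier, length):
      bits = [carrier[i] & 1 for i in range(length * 8)]
      return bytes(sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
                   for k in range(0, length * 8, 8))

  pixels = bytes(range(256)) * 4              # stand-in "image" data
  stego = embed(pixels, b"hi")
  print(extract(stego, 2))                    # b'hi'
  print(sum(a != b for a, b in zip(pixels, stego)))   # at most 16 bytes change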

Padgett

IMA conference on the Mathematics of Dependable Systems: Program

Victoria Stavridou <victoria@csl.sri.com>
Thu, 10 Aug 1995 11:35:30 -0700

Here is the programme of the 1995 IMA conference on the Mathematics of Dependable Systems (September, York, UK). If you would like to attend, please contact Pamela Bye at IMACRH@VAXE.ANGLIA-POLYTECHNIC.AC.UK .

2ND IMA CONFERENCE ON THE MATHEMATICS OF DEPENDABLE SYSTEMS

4th - 6th September, 1995 University of York, UK

PROVISIONAL PROGRAMME
[papers only; coffees, teas, sherries, etc. deleted by PGN]

MONDAY, 4TH SEPTEMBER, 1995

Invited Speaker: Mathematical Description, Specification, and Modelling of Software, D. Parnas (McMaster University, Ontario, Canada)
Consistent Composition and Refinement for Dependable Systems, L.E. Moser and P.M. Melliar-Smith (University of California, USA)
System Specification in VDM-SL, P. Mukherjee (Royal Holloway, University of London)
Key Management for Secure Communications, F. Piper (Royal Holloway, University of London)
Formal Verification of Fault-Tolerant Processors, G.M. Musyoka and G. Morgan (University of York)
Statecharts as Boolean Propositions, C. Brink, R. Harnett, *J. Peleska and M. Schrönen (University of Cape Town and *University of Kiel)

TUESDAY, 5TH SEPTEMBER, 1995

Formal Methods for Computer Security - A Question of Fitness for Purpose, C.T. Sennett (DRA, Malvern)
Regular Path Algebra Applied to Non-Functional Properties of Critical Software, A. Burns, R. Chapman and A. Wellings (University of York)
Safety Specification in Deontic Logic, N. Nissanke (University of Reading)
Algebraic Models of Microprocessors: The Correctness and Verification of a Simple Computer, N.A. Harman and J.V. Tucker (University of Wales, Swansea)
From Program Proving to Formal Design: Lessons Drawn from SACEM, M.P. Chapront (GEC Alsthom, France)

WEDNESDAY, 6TH SEPTEMBER, 1995

Limitations of Mathematics in Software Engineering, J.C. Knight (University of Virginia, USA)
A Methodology for Reliability Analysis of Fault-Tolerant Systems with Repairable Subsystems, O. Bridal (Chalmers University of Technology, Sweden)
Reliability Models for Hard Real-Time Systems, C.S. Perkins and A.M. Tyrrell (University of York)
Applying Space Based Modelling Techniques to Dependable Systems, S. Haines and T. Longshaw (DRA, Malvern)
The Limits to the Levels of Reliability that can be claimed for Software-Based Systems, B. Littlewood (City University, London)
