The RISKS Digest
Volume 17 Issue 46

Monday, 20th November 1995

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Village telephones cut due to “computer error”
Gordon Frank
Software failure in credit-card system authentication
Adrian Howard
Robotic justice?
George C. Kaplan
Robotic justice hoax!
George C. Kaplan
Software Pirate Nabbed in L.A. [Captain Blood]
Edupage
AOL Alerts Users to “Trojan Horse”
Edupage
A well-managed risk
Andrew Koenig
The little math error (1 in 1000)
Paul Bissex
Compendium of Commercial Fly-By-Wire Problems
Peter Ladkin
X–31 crash follow up
Martin Gomez and Andy Fuller via Steven Weller
Encryption vs. cryptanalysis difficulty scaling
Steve Witham
Faster computers will make security safer!
Adam=aba
Re: Writing solid code
Barton C. Massey
Peter da Silva
Roger E. Wolff
ABRIDGED info on RISKS (comp.risks)

Village telephones cut due to “computer error”

Gordon Frank <gordon@turing.cs.ucy.ac.cy>
Wed, 15 Nov 1995 15:59:43 +0200 (WET)

The following article was pointed out to me by one of my students, Elpidoforos Economou. (Source: Alitheia newspaper, Cyprus, 13 Nov 1995)

On Thursday 9 November a whole village (population 3000) had its telephones disconnected for nonpayment of their bills. According to the newspaper report, a “computer error” showing all subscribers in the village to be in default was to blame. Reconnection took place when the problem was discovered a few hours later.


Software failure in credit-card system authentication

"Adrian Howard" <adrianh@cogs.susx.ac.uk>
Thu, 16 Nov 1995 11:12:38 +0000 (GMT)

[The following culled from a brief article in the Nov 4th Guardian (UK broadsheet)]:

The UK's two largest credit authorisation systems (Barclay's PDQ and NatWest's Streamline) failed on Saturday Oct 28th, leaving retailers unable to verify customers' cards.

In Barclay's case more than 40% of transactions failed due to “a bug in the system's software”. For NatWest the problem was a huge backlog of calls [reason unstated in article] which delayed card authentication.

Both systems had contingency systems which allow retailers to telephone in authentication requests. However, due to the volume of sales, the lines rapidly became jammed.

Comments:

  1. It looks like the phone-based contingency system was set up with the failure of small parts of the network in mind, rather than the failure of the system as a whole. I wonder if this was an oversight, or a deliberate cost/risk decision.
  2. To me the most interesting comment was on the use of final-resort manual “zip-zap” machines for recording card details. According to Barclay's, many retailers had forgotten how to process paper-based transactions! Over-reliance on tech strikes again (along with improper training).
Adrian adrianh@cogs.susx.ac.uk +44 (0)1273 678367
http://www.cogs.susx.ac.uk/users/adrianh/

Robotic justice?

"George C. Kaplan" <gckaplan@cea.Berkeley.EDU>
Mon, 13 Nov 1995 17:38:36 -0800

A jaw-dropping item in the “Legal Grounds” column by Reynolds Holding in the 13 Nov 1995 San Francisco Chronicle:

A group of researchers at New York University Law School have been working on something called the “Solomon Project” to develop software to decide legal cases without bothering with juries.

Quoting Holding:

Oh, and the decision can't be appealed.

Gee, where do I start? Are polygraph (so-called “lie-detector”) results admissible in court? How about voice-stress analysis? Can it be fair to non-native English speakers? No appeals!?

A comment at the end of the column by trial consultant Karen Koonan sums it up: “It's pretty scary.”

George C. Kaplan gckaplan@cea.berkeley.edu 1-510-643-5651

Robotic justice hoax!

"George C. Kaplan" <gckaplan@cea.Berkeley.EDU>
Wed, 15 Nov 1995 08:44:42 -0800

My earlier message reported an item in the “Legal Grounds” column in the S.F. Chronicle about the “Solomon Project” from New York University Law School. This is a grand and implausible scheme to decide legal cases by computer without bothering with a jury.

A couple of calls to NYU revealed:

Looks as if it's a hoax.
George C. Kaplan gckaplan@cea.berkeley.edu 1-510-643-5651

Software Pirate Nabbed in L.A. [Captain Blood] (Edupage, 16 Nov 1995)

Educom <educom@elanor.oit.unc.edu>
Fri, 17 Nov 1995 02:26:00 -0500 (EST)

A big-time software pirate was arrested in Los Angeles last week and charged with two felony counts of fraud and trademark violations. Authorities seized an estimated $1 million in illegally copied software, high-speed duplicating equipment and $15,000 in cash. Thomas Nick Alefantes, who calls himself “Captain Blood,” allegedly sold his wares through advertising in trade publications and a mail order business. (Wall Street Journal 15 Nov 95 B10)


AOL Alerts Users to “Trojan Horse” (Edupage, 16 November 1995)

Educom <educom@elanor.oit.unc.edu>
Fri, 17 Nov 1995 02:26:00 -0500 (EST)

America Online issued a warning to its users about a destructive file attached to an e-mail message that has been circulating through its service and also over the Internet. The message itself is okay, but trying to run an attached “Trojan Horse” file called AOLGold or “install.exe” could crash a hard drive. (Atlanta Journal-Constitution 16 Nov 95 F7)


A well-managed risk

Andrew Koenig <ark@research.att.com>
Wed, 15 Nov 1995 10:43:18 EST

This note is intended as a break from the usual horror stories.

I just returned from Tokyo as a passenger on a non-stop flight from there to Newark. Twelve hours in the air in a Boeing 747–400.

Along the way I got a look at the flight plan and chatted with one of the flight crew about it. On this particular flight, we were filed with a destination of Detroit and with Cleveland as our alternate (All airline operations must state a primary and alternate airport; the alternate must be forecast to have weather substantially better than what is nominally required for landing. There must be enough fuel on board to fly to the primary airport, do a missed approach there, fly to the alternate, and land with 45 minutes of fuel still on board), with the intention of revising the flight plan enroute to have a destination of Newark and an alternate of Boston.

Why the revision? Because the fuel consumption estimates on which the flight plan is based have 10% padding. On a 12-hour flight, that means an extra 1.2 hours of fuel on board. That's a lot. So they plan to land in Detroit, which is an hour or so closer than Newark. Then, when they're over central Canada somewhere, they check again how much fuel they really have. If it's what they expect, the 10% padding is now a much smaller amount and, as expected, they have enough to continue to Newark. If, on the other hand, they can't prove that nothing is wrong, then they have a safe place to land with plenty of fuel on board.
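To put rough numbers on that arithmetic, here is a minimal sketch in C. Every figure (burn rate, leg times, reserve) is a hypothetical round number chosen for illustration, not actual 747–400 performance data:

    /* Hypothetical sketch of the dispatch arithmetic described above.
     * Every figure here is a made-up round number for illustration;
     * real 747 flight planning is far more detailed. */
    #include <stdio.h>

    int main(void)
    {
        double burn_per_hour      = 10.0;  /* tonnes per hour, hypothetical */
        double hours_to_primary   = 11.0;  /* Tokyo to Detroit, say */
        double hours_to_alternate = 0.5;   /* Detroit to Cleveland, say */
        double missed_approach    = 0.2;   /* allowance for a go-around */
        double reserve_hours      = 0.75;  /* 45-minute final reserve */
        double padding            = 0.10;  /* 10% padding on the trip estimate */

        double trip = burn_per_hour * hours_to_primary * (1.0 + padding);
        double required = trip + burn_per_hour *
            (missed_approach + hours_to_alternate + reserve_hours);

        printf("required at dispatch: %.1f tonnes\n", required);
        printf("padding alone: %.1f tonnes (about %.1f hours of flying)\n",
               burn_per_hour * hours_to_primary * padding,
               hours_to_primary * padding);
        return 0;
    }

With these made-up numbers the 10% padding alone is worth about an hour of flying, which is the margin the crew re-examines over Canada before re-filing for Newark.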

I think that's a nice example of risk management and foresight.

Andrew Koenig ark@research.att.com

The little math error (1 in 1000) (RISKS-17.44)

Paul Bissex <biscuit@well.com>
Mon, 13 Nov 1995 22:58:42 +0500

Glad to see mention of this — but it's 2,000 times the current level of tapping rather than 2,000% thereof (Freeh's “correction” notwithstanding).
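The two phrasings differ by a factor of 100, as a minimal sketch with an arbitrary, made-up baseline shows:

    /* The difference between "2,000% of" and "2,000 times", worked out
     * against an arbitrary baseline of 1,000 taps (illustrative only). */
    #include <stdio.h>

    int main(void)
    {
        double baseline    = 1000.0;                      /* hypothetical */
        double as_percent  = baseline * (2000.0 / 100.0); /* 2,000% of = 20x */
        double as_multiple = baseline * 2000.0;           /* 2,000 times */

        printf("2,000%% of the baseline:   %.0f\n", as_percent);   /* 20,000 */
        printf("2,000 times the baseline: %.0f\n", as_multiple);   /* 2,000,000 */
        return 0;
    }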


Compendium of Commercial Fly-By-Wire Problems

<ladkin@techfak.uni-bielefeld.de>
Sun, 12 Nov 1995 21:45:56 +0100

There has been much recent discussion in RISKS of accidents and incidents involving the commercial fly-by-wire aircraft, the Airbus A319/320/321/330/340 series and the Boeing B777. I've collected the RISKS discussions since September 1993, along with synopses and a little commentary, in a hypertext compendium, entitled ‘Incidents and Accidents with Fly-by-Wire Commercial Airplanes’, accessible through my home page

http://www.techfak.uni-bielefeld.de/~ladkin/

I intend that more RISKS and other discussion will appear as time goes by. Comments, additions and corrections welcome. If you have technical material that really should be linked, please let me know.

Peter Ladkin

X–31 crash follow up (Martin Gomez and Andy Fuller)

Steven Weller <stevenw@best.com>
Thu, 16 Nov 1995 17:42:09 -0800

I forwarded the X–31 crash info to a friend (Martin Gomez martin@hiflight.com) who copied me on the following correspondence.

Steven Weller +1 415 390 9732 stevenw@best.com

A friend forwarded your note on the X–31 crash, and it made me grind my teeth a little, so I thought I'd reply directly to you.

I am a private pilot, and I design flight control software for a living. Based on this experience, please allow me to disagree with a couple of points you made.

1) “It is apparent to me that this crash is the result of a bug in the flight control software and sensor hardware of the X–31 aircraft. The computer was unable to compensate for a loss of the airspeed indicator.”

That is a non sequitur. A “bug” is not the same as a missing feature. Microsoft Word 6.0 can't do my taxes for me, but that's not a bug. To me, a bug is when the computer does something other than what the designer intended. If the designer intended the computer to detect and compensate for the loss of a sensor, then you're right, it was a bug. My understanding, from talking to engineers on the X–31 project at Dryden, is that the flight computer was NOT meant to detect such failures. It therefore acted correctly, in the sense that it did what it was designed to do. Obviously, it would have been nice if it could detect that the airspeed sensor had failed, but its inability to do that is not a “bug” as you suggest.

“(a reading of zero is pretty impossible while the aircraft is flying)”

That's certainly true, but I don't think that's what happened to the X–31. First of all, this aircraft had a Kiel tube, not a pitot tube. No, I don't know the difference either! Assuming they work on the same principle (the measurement of dynamic pressure), then, like pitot tubes, they are subject to the same two forms of icing: 1) the total pressure port gets clogged, or 2) the static pressure port gets clogged. You may already know all this, but …

In case 1, the airspeed reading will not change as the airspeed changes. The air in the pitot tube (or Kiel tube, presumably) gets trapped. As the ALTITUDE changes, though, the static pressure will change, and the airspeed reading will decrease as the altitude decreases. In case 2, the indication will react correctly to changes in airspeed, but changes in altitude will have an odd effect: going lower will cause the airspeed to read higher, and vice-versa.

I heard that case 1 is what occurred to the X–31 that day. The pitot side iced up at altitude, and as the aircraft descended to return to base, the reading decreased, with the results you mentioned.
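To make case 1 concrete, here is a minimal sketch using the standard dynamic-pressure relation (indicated airspeed proportional to sqrt(2*(p_total - p_static)/rho)), with the total-pressure port clogged so p_total stays frozen at its value from altitude. The altitudes, pressures, and crude standard-atmosphere model are purely illustrative; none of this is X–31 data:

    /* Case 1 above, illustrated: the total-pressure port clogs at altitude,
     * trapping p_total, while static pressure keeps rising on the descent,
     * so the indicated airspeed bleeds off.  Toy numbers, not X-31 data. */
    #include <stdio.h>
    #include <math.h>

    /* crude ISA approximation of static pressure (Pa) at altitude (m) */
    static double static_pressure(double alt_m)
    {
        return 101325.0 * pow(1.0 - 2.25577e-5 * alt_m, 5.25588);
    }

    int main(void)
    {
        double rho0 = 1.225;  /* sea-level air density, kg/m^3 */
        /* p_total frozen at its value when icing occurred at 6000 m,
           flying with about 8 kPa of genuine dynamic pressure */
        double trapped_total = static_pressure(6000.0) + 8000.0;

        for (double alt = 6000.0; alt >= 4500.0; alt -= 500.0) {
            double q = trapped_total - static_pressure(alt);
            double ias = (q > 0.0) ? sqrt(2.0 * q / rho0) : 0.0;
            printf("altitude %5.0f m   indicated airspeed %5.1f m/s\n", alt, ias);
        }
        return 0;
    }

As the static pressure climbs during the descent, the apparent dynamic pressure shrinks and the indicated airspeed bleeds off toward zero, which is the behaviour described above.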

“We need to understand these differences before we use computers to take over the job of a human.”

That's true, too. I assume your article (I'm unaware of the context, since it was forwarded) is a warning to software designers to be careful what they assume, etc. You must realize, though, that “it was a software problem” is the perennial cry of the hardware engineer who asked for the wrong software, and got what he asked for.

This is the reason that my company hired a pilot to design flight control software… I usually know what the computer “should” do. In most cases, though, software designers have to assume the specs they are given are valid, and take all failure modes into account. Is it a software problem that the X–31 can't land vertically? Or that it carries less than one hour of fuel? If a pilot crashes the remaining X–31 because he tried to land it vertically, or to fly coast-to-coast, that's not a software problem, so neither is failure to detect a sensor error.

Martin Gomez

[Guilty as charged. I agree with you, the X–31 crashed because of a SYSTEM DESIGN error, not a software bug. It would have taken more than a software change to prevent the loss of control.

I have some flight experience (1.5 hours solo before marriage and a house changed my financial priorities) and I have spent a couple of years writing flight navigation and autopilot software, as well as flight simulator software. I believe that my private pilot training had a significant impact on my ability to write autopilot code. I'm glad to hear that companies are recognizing that pilots write better flight control software (and this hopefully will extend to other disciplines, as well).

I didn't think through the complete implications of pitot tube icing (although I did remember that often the effect of clogging the dynamic pressure sensor is to cause airspeed indication to remain constant). It is an interesting point that IAS would gradually decrease as the altitude decreased. This would make for an interesting control problem. My copy of the Private Pilot Manual (Jeppesen Sanderson, 1990) notes a third possibility: If the pitot tube is blocked and its drain hole remains open, the airspeed reading will drop to zero (usually gradually). This failure mode may or may not be applicable to a Kiel tube.

The root problem remains: an instrument critical to the operation of the aircraft can fail in a way that is not detected by the system. Clogged airspeed indicators are a common problem, and they appear to have caused the catastrophic failure of the X–31. I see a recurring problem of software designers (and system designers) getting inadequate requirements specifications.

The real point is that we are “pushing the envelope” in using computers to control complex systems. When the systems are dynamic (like flight control) they are difficult to specify and difficult to test. We learn from our mistakes. Some mistakes have bigger consequences than others: our X–31 pilot was pretty lucky! Presumably, the next spec for the X–31 flight control system will have a line that reads: “failure of the airspeed indicator shall not cause loss of control of the aircraft”, or “failure of ANY sensor shall not cause loss of control of the aircraft” (although I would NOT like to be on the team that had to test that latter requirement!).

Andrew C. Fuller, E-Systems, ECI Div. Box 12248, St. Petersburg, FL 33733
(813)381-2000 x3194 acfa@eci-esyst.com Fax:(813)381-3329

Encryption vs. Cryptanalysis difficulty scaling

Steve Witham <sw@truesoft.com>
Fri, 17 Nov 95 13:15:34 EST

Re: Faster computers will never make security safer! (Re: Palme, 17.45)

If you ignore issues other than the encryption (& assume it's RSA, say), and ignore what quantum computers might be able to do, then the difficulty of encrypting goes up with something like the cube of the size of the key, but the difficulty of cracking goes up with something like the exponential of the size of the key.

That's the whole reason encryption can work. If cracking were only a constant factor harder than encryption then the safety margin would never be enough. It has to actually scale differently—and (with the qualifications above) as far as we know it does.
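A toy numerical sketch of that difference in growth rates; the cost model (n^3 for the legitimate user, 2^(n/32) for the attacker) is deliberately arbitrary and the constants mean nothing, only the shapes of the curves matter:

    /* Toy model of the scaling argument above: legitimate use costs
     * roughly n^3, cracking costs roughly 2^(n/32).  The constants are
     * arbitrary; only the shapes of the growth curves matter. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        for (int n = 512; n <= 2048; n *= 2) {
            double user_cost     = pow((double)n, 3.0);
            double attacker_cost = pow(2.0, n / 32.0);
            printf("n = %4d   user ~ %.2e   attacker ~ %.2e\n",
                   n, user_cost, attacker_cost);
        }
        return 0;
    }

Doubling the key size multiplies the user's work by 8 but squares the attacker's.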

The question of building encryption into all communications involves another scaling issue though: the speed and cost of communications vs. processors. For quite a while we've had 10Mbps ethernets while processors kept getting faster. If this were to continue, pretty soon we'd all be happy to let the processor spend a small part of its time encrypting all traffic. But if communication speed/cost catches up again then we'll still be tempted to send at least some things in the clear.

—Steve

Faster computers will make security safer! (Re: Palme, RISKS-17.45)

<aba@atlas.ex.ac.uk>
Thu, 16 Nov 95 12:04:01 GMT
> The problem is that CPU time becomes less costly at the same ratio
> for the encrypter as for the encryption breaker.

This assumption is wrong.

Quite clearly an encryption algorithm that is as cheap to break as to encrypt is worthless — encryption systems are by definition designed to be harder to break than to encrypt.

However Jacob was comparing ratios, and a less intuitive though demonstrable fact is that while increasing CPU speeds helps both encryptor and breaker, the performance boost helps the encryptor more than the breaker. This is because the encrypt and break functions are of different complexity.

For example, with the RSA public key cipher used for key exchange, when small public exponents are used the encryption process is O(n^2) and decryption O(n^3), but the time to break is super-polynomial, since breaking involves factoring a product of two large primes.

So, some estimates of the time to break (factoring an RSA modulus using the GNFS), extracted from a post Bruce Schneier made on cypherpunks:

  # of bits   mips-years required to factor
  1024        3*10^11
  2048        3*10^20

And (real figures, from PGP) encrypt and decrypt times for the same key sizes, on a 100MHz R4600-based Indy workstation (a $5000 workstation):

  # of bits   encrypt   decrypt
  1024        0.06 s    1.2 s
  2048        0.11 s    7.7 s

So say for argument's sake that a particular encryption system was using 1024-bit keys, and that CPU speed increased 10-fold.

Now the encryptor could use 2048-bit keys in place of 1024-bit keys, with the same (or slightly better) real-time response.

The breaker, however, is faced with a 10^9-fold increase in the mips-years required to break, and only a 10-fold increase in CPU to speed the attack. The breaker is 10^8 times worse off, and the encryptor is marginally better off.
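The same arithmetic, worked through in a minimal C sketch using the figures quoted above (the GNFS break estimates and the PGP decrypt timings) plus the hypothetical 10-fold CPU speed-up:

    /* The ratio argument above, worked through numerically.  The break
     * estimates and decrypt timings are the figures quoted in this post;
     * the 10-fold CPU speed-up is the hypothetical from the example. */
    #include <stdio.h>

    int main(void)
    {
        double break_1024   = 3e11;  /* mips-years, 1024-bit modulus */
        double break_2048   = 3e20;  /* mips-years, 2048-bit modulus */
        double decrypt_1024 = 1.2;   /* seconds on the 100MHz R4600 */
        double decrypt_2048 = 7.7;   /* seconds on the 100MHz R4600 */
        double cpu_speedup  = 10.0;

        printf("encryptor: 2048-bit decrypt on the faster CPU: %.2f s "
               "(vs %.2f s for 1024-bit today)\n",
               decrypt_2048 / cpu_speedup, decrypt_1024);
        printf("breaker:   work up %.0e-fold, CPU back %.0f-fold, "
               "net %.0e times worse off\n",
               break_2048 / break_1024, cpu_speedup,
               (break_2048 / break_1024) / cpu_speedup);
        return 0;
    }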

As CPU speeds increase, the code breaker loses.

Adam

Re: Writing solid code (Beatty, RISKS 17.45)

Barton C. Massey <bart@past.cirl.uoregon.edu>
13 Nov 1995 20:40:48 GMT

In RISKS-17.45, Derek Lee Beatty comments about Microsoft Press's _Writing_Solid_Code_, specifically its claim that “assertions that check for programming errors […] can be removed from the shipped versions for better performance.” His comment is that this idea is “neither new nor so entertaining to RISKS readers, but it's sound advice.”

Ouch. Hopefully most RISKS readers reacted immediately to this statement.

I'm fond of the saying that “When your program's internal state becomes invalid, the best thing that can happen is that it crashes immediately.” Programmers make mistakes; failing an internal assertion will prevent a mistake from propagating through a program execution and leading to undetected severe execution errors. For example:

  • I'd rather have my bank's software crash than lose track of all my money.
  • I'd rather have my hospital heart-rate monitor crash than report that my heart is still beating when it has, in fact, stopped.
  • I'd rather have my word-processor crash than silently change my boss's name from “Bud” to “Bug” in my document.
In addition, a failed assertion can typically cause a program to “crash gracefully”, yielding information useful in fixing the bug and cleaning up enough internal state to prevent major disasters: buffers can be flushed, files can be closed, etc.
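For concreteness, the mechanism being debated is (in C) <assert.h>: defining NDEBUG for the shipped build removes every assert() at compile time. A minimal sketch, with a hypothetical withdraw() routine that is not from the book:

    /* Minimal sketch of the trade-off discussed above.  The withdraw()
     * example is hypothetical.  Building with -DNDEBUG strips the assert()
     * entirely, so the corrupted balance propagates silently; leaving it
     * in crashes immediately, close to the mistake, with file and line. */
    #include <assert.h>
    #include <stdio.h>

    static long balance_cents = 100000;  /* $1000.00 */

    static void withdraw(long amount_cents)
    {
        balance_cents -= amount_cents;
        /* internal invariant: this routine never drives an account negative */
        assert(balance_cents >= 0);
    }

    int main(void)
    {
        withdraw(150000);  /* a bad amount passed in by a buggy caller */
        printf("balance: %ld\n", balance_cents);  /* reached only if the check was stripped */
        return 0;
    }

Left in, the assertion stops the program at the point of corruption and reports the failed expression, file, and line; compiled out with -DNDEBUG, the bad balance propagates silently.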

There are probably a few assertions which belong only in development code, because they are so expensive to check that they make the product unusable. In my experience, this is surely not the common case. If this is “the Microsoft way,” please choose an alternate route.

Bart Massey bart@cs.uoregon.edu

Re: Writing solid code (Beatty, RISKS 17.45)

Peter da Silva <peter@nmti.com>
14 Nov 1995 22:47:50 GMT

Derek Lee Beatty's support of Microsoft's code development is amazingly naive. Remove assertions that check for programming errors? Rigorous testing cannot prove a program bug-free, and removing these tests just makes the bug harder to find. The bug, when it finally manifests, may end up being blamed on a completely separate product…

This is of course to Microsoft's advantage. It lets them make claims like “there are no bugs in Microsoft products that the majority of users want fixed”… like the way Microsoft Project for Windows crashes under NT 3.5, or the way the POSIX subsystem in NT hangs unexpectedly…

(no, I haven't reported these to Microsoft. Why should I? They're not going to fix it… we're going to have to upgrade to 3.51 just to get around this bug, for all that Bill Gates claimed “A bug fix is the worst possible reason to release an upgrade” or words to that effect).

Sun's maze of patches might be tough to figure out, but at least they admit they've got bugs and do their best to fix them. Bill Gates, ruler of the free world, remains in his state of denial.

(on another tack, the Java interpreter runs with no security enabled, for performance reasons. All the security is implemented in theorem provers and other test mechanisms prior to execution. This is much like the old Burroughs boxes, where the compiler guaranteed that no operation that violated the security policy was generated. It's much easier to do that than check the output of an untrusted compiler. I would much rather they implemented strict runtime checks at the interpreter level at the cost of some performance.)

Peter da Silva, Bailey Network Management, 1601 Industrial Boulevard, Sugar Land, TX 77487-5013 +1 713 274 5180 (NIC: PJD2)

Re: Writing solid code (Beatty, RISKS 17.45)

<R.E.Wolff@et.tudelft.nl>
Sat, 18 Nov 1995 17:56:01 +0100 (MET)

So now the book and Derek Lee Beatty go hand in hand giving bad advice.

Solid code keeps the checks for programming errors! You can never ever guarantee that no programming errors are going to surface in a production version. Keeping the code that checks for “programming errors” is going to catch lots of errors earlier on.

Moreover it is going to catch errors that aren't programming errors. Continuing with an inconsistent internal data representation is a bad idea. Whatever the cause.

Of course you might remove some of the ‘code to find bug xyz’. The internal consistency checks (this pointer shouldn't be “NULL”, this “switch” needs no “default”) should remain.
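A minimal hypothetical example of the kind of internal consistency check meant here: a switch over an enum that “needs no default”, where a default that fires anyway means the state is already corrupt and the safest course is to stop:

    /* Hypothetical example of an internal consistency check of the kind
     * meant here: the switch "needs no default", so a default that fires
     * anyway means the state is already corrupt, and it is safer to stop. */
    #include <stdio.h>
    #include <stdlib.h>

    enum state { IDLE, RUNNING, DONE };

    static const char *state_name(enum state s)
    {
        switch (s) {
        case IDLE:    return "idle";
        case RUNNING: return "running";
        case DONE:    return "done";
        default:
            fprintf(stderr, "impossible state %d: internal error\n", (int)s);
            abort();  /* stop before the bad value propagates further */
        }
    }

    int main(void)
    {
        printf("%s\n", state_name(RUNNING));
        return 0;
    }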

Roger
