The RISKS Digest
Volume 7 Issue 40

Thursday, 25th August 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Car engines become target for hackers
Jerome H. Saltzer
Re: IL car emissions testing process and enforcement errors
Will Martin
Re: Danger of Sensitive Car Electronics
Henry Schaffer
Automobile computer modifications
George Tomasevich
Statistical reliability estimation criticized
Jon Jacky
Can current CAD/simulation methods handle long-term fatigue analysis?
Gerry Kokodyniak
Boundary Cases
James Peterson
John Bruner
Mother's maiden name == arbitrary password
Walter Smith
Risks of EFT agreements
Doug Claar
Chile con backbones
Joe McMahon via Martin Minow from VIRUS-L
An item by Mark Garvin on SoftGuard and the Trojan horse "SUG"
from VIRUS-L
Info on RISKS (comp.risks)

Car engines become target for hackers (RISKS-7.39)

Jerome H. Saltzer <Saltzer@ATHENA.MIT.EDU>
Wed, 24 Aug 88 18:23:33 EDT
> False instructions will be detected and the owner told the change has
> invalidated the guarantee.

That approach has an interesting implication.  It seems to mean that in order
to do a new microcode release the manufacturer will have to make sure that
every service agency gets that release before any cars using the new release
roll off the production line.  In the past, delay in getting the latest update
to a service agency would simply mean that the service agency didn't have
enough information to do certain kinds of service on your new car.  Now, it may
mean that they will challenge your warranty.  I wonder if the high-level
executives who thought up that idea have checked it through with the people who
do microcode releases.
                                    Jerry


Re: IL car emissions testing process and enforcement errors

Will Martin — AMXAL-RI <wmartin@ALMSA-1.ARPA>
Thu, 25 Aug 88 9:52:38 CDT
When I first heard of this, it did seem a poorly designed system. It relates a
variable that can be greater than 1, the number of cars a person may own, to a
single item — the owner's driver's license.  Suppose you own three
cars. One of them fails the test, and you may decide to stop driving that one
for a while before getting it fixed.  Meanwhile, you drive the other(s), which
passed the test. The law would STILL suspend your driver's license due to that
one car's failure. How do they handle cars being restored or otherwise in a
perpetual state of disrepair, anyway? The emissions-test failure should be
related to the specific vehicle, not to the owner. (What about cars that are
owned by corporations or otherwise not tied to an individual, also?)

Again, there is an obvious way around this. The registered owners of private
cars in IL should be non-drivers. Put your car(s) in your child's name, or in
the name of your dog or a made-up name at your address. Then, the emissions
test failures would relate to a name which did not match any driver's license,
so the driver would not have his/her license suspended no matter the result of
the test. As long as the taxes on the car are paid, and the license plates
and/or stickers bought, it is still perfectly legal. I can see there being a
bit of extra effort when you sell the car, as you would have to transfer
ownership back to yourself before the sale took place, but that could probably
be done with a single notarized document.

When 30% or so of the computer-matches for owners of cars that failed the tests
come back with "no license on record", I guarantee the badly-designed law will
be changed! (Of course, no guarantee that it will be replaced with anything
better, given the history of incompetence in legislation through the ages...
:-) At least the people can strike back at the system for a while, using this
tactic.
                                        Will Martin


Re: Danger of Sensitive Car Electronics

Henry Schaffer <hes@uncecs.edu>
Thu, 25 Aug 88 09:31:36 edt
  Will Martin <wmartin@ALMSA-1.ARPA> mentioned a catalog "Electrical
Noise and Interference Control" with a cover picture of a car going
through a guard rail on a cliff.  The heading by the car is "ANOTHER
EMI INCIDENT?".   This is certainly a dramatic and provocative
picture.  The "About the Cover" explanation in the catalog (quoted
below) is interesting both for what it says and for what it 
implies:

  "The installation of microprocessors for controlling critical
automotive functions (engine control, braking, acceleration) has
dramatically increased the modern vehicle's vulnerability to
interference.  Even the malfunction of non-critical functions (power
sun roofs and windows, hood and trunk releases) has caused property
damage, injury and even death.

  "Although automotive travel has become safer, there are a growing
number of complaints related to "unintended' acceleration as recorded
by the National Highway Traffic Safety Administration.  Dr. Roger L.
McCarthy, P.E., of Failure Analysis Associates expects the trend to
increase since the "overwhelming majority of vehicles produced since
1981 have had computer-controled engines."  Complaints of braking
system failure, sometimes simultaneous with  unintended acceleration,
and the inability to confirm the validity of these complaints in
subsequent tests make the problem difficult to investigate or
substantiate.  Dr. McCarthy asserts, however, that "it would be
equally erroneous to dismiss all such complaints as untrue.  The
problems associated with detecting and reproducing all types of
transient electrical phenomena are well known, and phenomena such as
single event upsets, where a transient electromagnetic disturbance
changes the system state, only complicate things further."

  "The detection and measurement of, design against, and retrofit to prevent
electromagnetic interference is a science and a necessary developmental phase
of any successful product — commercial, industrial or military.  This catalog
is your guide to the latest in theory application and pragmatic approaches to
the control of electrical noise interference.
                                              --henry schaffer  n c state univ


Automobile computer modifications (Re: RISKS-7.39)

<att!homxb!twitch!grt@ucbvax.Berkeley.EDU>
Thu, 25 Aug 88 07:25:04 PDT
Virtually all issues of Porsche Panorama have ads for ROM modifications
for Porsches.  They usually claim some number or percentage increase
of horsepower, and they frequently contain disclaimers or warnings in
small print, especially with respect to street legality in California.
I drive at race tracks, but I don't know if people are using modified ROMs.
There are certainly some rather hot cars around.  The 944 has a switch that
tells the computer when full throttle is applied.  Someone told me that
I could get more power if I disconnected that switch.  A more ordinary
situation is drivability problems when the computer malfunctions.  Sometimes
I wish my car had a logic analyzer so I could figure out what is happening.
The computer costs $1400, so I do not want to carry a spare one just to avoid
getting stranded somewhere.

George Tomasevich, att!twitch!grt       AT&T Bell Laboratories, Holmdel, NJ


Statistical reliability estimation criticized (COMPASS '88 report)

Jon Jacky <jon@june.cs.washington.edu>
Wed, 24 Aug 88 16:34:21 PDT
> (Henry Spencer writes) ... there is no limit to the money that can be
> spent adding 9's to the end of 99.9999% reliability.

An important question is whether those extra 9's are meaningful at all.  One
sometimes hears statements like, "for safety critical systems the probability
of failure should be less than 10 ** -9" (ten to the minus nine power, or one
in a billion).  One even hears claims that the probability of failure of some
system _actually is_ less than 10 ** -9 per hour.  Is it meaningful to make
such requirements for computer systems, or to claim that such a requirement has
been met?  The apparent consensus at COMPASS '88 (a meeting devoted to the
safety and security aspects of computer systems held last June) was no, it is
not.

Sal Bavuso of NASA-Langley Research Center recalled that the 10 ** -9 figure
was derived from historical accident data for airframe failures: the
probability of things like wings breaking off was observed to be about 10 ** -8
per hour of flight, so it seemed reasonable to require that control system
failures not add significantly to the risk.  Mike DeWalt of the Federal
Aviation Administration pointed out that the 10 ** -9 figure was meant to
apply to things that break or wear out, where it is reasonable to expect
failures to appear randomly. He explained it was never intended to apply to
design errors, which are what software errors are.  John Cullyer of the British
Royal Signals and Radar Establishment also said that the 10 ** -9 figure was
meaningful when applied to analog computers composed of operational amplifiers,
which are used in some autolander systems, but is not applicable to digital
systems.

Douglas R. Miller gave a talk titled, "The Role of Statistical Methods in
Software Safety Assurance" (it does not appear in the COMPASS proceedings but
is available from the author at George Washington University).  To explain his
rather negative view of claims of very low failure probabilities, he postulated
a system in which the probability of failing a test was random and was
distributed in a Poisson fashion.  He did a simple derivation to determine how
many consecutive failure-free tests would be needed to establish with 99
percent confidence that the failure probability was less than some number. For
example, how many successful tests must be run to establish with 99 percent
confidence that the probability of failure is less than 1 in a billion? Common
sense suggests that it must be at least a billion, perhaps more. Miller derived
that you actually need around 4.61 billion, and presented the rule of thumb
that to obtain confidence that the probability of failure is less than 10 ** -N,
you need about 10 ** (N + 0.5) trials. He pointed out that in most cases
it is only practical to test up to around 10 ** 5 trials, which can only reveal
bugs that appear with frequency 10 ** -4.5 or greater.
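
A back-of-the-envelope version of that calculation (a sketch of the standard
reasoning, not necessarily Miller's own derivation) fits in a few lines of C:

    /* If the true probability of failure per test is p, the chance of
       seeing n consecutive failure-free tests is (1-p)^n.  To claim with
       confidence c that p < p0, that chance must fall below 1-c whenever
       p >= p0, so n >= log(1-c)/log(1-p0), about -log(1-c)/p0 for small p0. */
    #include <stdio.h>
    #include <math.h>

    double trials_needed(double p0, double confidence)
    {
        return log(1.0 - confidence) / log(1.0 - p0);
    }

    int main(void)
    {
        printf("%.3g\n", trials_needed(1.0e-9, 0.99));          /* ~4.61e9 tests */
        printf("%.3g\n", trials_needed(pow(10.0, -4.5), 0.99)); /* ~1.46e5 tests */
        return 0;
    }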

Miller said that people sometimes say that good engineering practices ensure
that the probability of failure is much less than 10 ** -4.5.  But, he said,
this is rather illogical if testing reveals any errors at all.  If the tests
reveal frequent bugs, why should you believe that your good engineering
practices have prevented the subtle ones?

Cullyer said, "Let's throw out the 10 ** -9" - and many in the audience
responded with enthusiastic applause.  Someone asked if he would accept a
failure probability of only 10 ** -4 or 10 ** -5 for nuclear weapons safety.
He responded, "In the weapons area there should be no room for probability.  If
something is unthinkable, don't let it happen.  You either certify it or you
don't - one or zero."

Nevertheless, statistical claims are very ingrained among some systems safety
practitioners.  Nancy Leveson of the University of California at Irvine
recalled a conversation with an engineer who kept pointing to a box on a fault
tree labelled "software failure."  He wanted to know what number to fill in
for the probability of that event.  Leveson tried to explain that there was no
meaningful number, but he persisted.  Finally she answered, "Just write 1.0."

(This is an excerpt from a report on COMPASS '88 that will appear in the
October issue of ACM SOFTWARE ENGINEERING NOTES).

- - Jonathan Jacky, University of Washington


Can current CAD/simulation methods handle long-term fatigue analysis?

Gerry Kokodyniak <kokody2%me.toronto.edu@RELAY.CS.NET>
Thu, 25 Aug 88 13:35:47 EDT
Henry Spencer <henry@zoo.toronto.edu> in RISKS DIGEST 7.38 states:
# As I understand it, metal fatigue in general is poorly understood, and there
# is really no way of calculating it.

Metal fatigue can be calculated with a reasonable amount of accuracy. 

#              ..................      The whole area is still very much
# rule-of-thumb engineering plus empirical testing.  There are rules that
# give a rough idea of the fatigue life of an airframe, after which a big
# safety margin is added (we're talking factor of 2, not 10%).  Even this
# is only a tentative number. 

Most aircraft designs use a 10% to 20% safety factor. A safety factor of
two would make an aircraft so heavy it would never leave the ground.
Adding a safety factor of two without making a component bulkier would
mean a shift towards high-strength alloys. As a general rule of thumb,
high-strength alloys tend to be more brittle and therefore less damage
tolerant. These "high strength" alloys could fail catastrophically due
to microscopic cracks, a very bad feature with regard to inspection.

The problem is not so much that fatigue behaviour is poorly understood as that
the actual loading conditions can never be accurately modeled.  Examples include
turbulent vs. smooth flights, poor engine maintenance, airlines allowing luggage
past the allowed weight limits, and rough landings by "rough" pilots.  Many
factors influence the actual loading conditions.  Because the load cannot be
modeled accurately, any technique, from F.E.M. to simply applying a stress
formula, will be off.  Regular inspections are meant to catch cracks (caused by
normal or abnormal loads) before they have a chance to propagate to dangerous
sizes, i.e., the critical crack length.
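
To give a feel for what the critical crack length means, here is a rough
damage-tolerance sketch in C that integrates the Paris crack-growth law from an
initial flaw up to the critical size.  Every constant in it is invented for
illustration; none are design values for any real alloy or airframe.

    /* Paris law:  da/dN = C * (dK)^m,   dK = Y * dsigma * sqrt(pi * a).
       Zero-to-peak loading is assumed, so the peak stress equals dsigma and
       the critical crack length is  a_c = (1/pi) * (Kic / (Y * dsigma))^2.
       Crude forward integration in blocks of 1000 cycles. */
    #include <stdio.h>
    #include <math.h>

    #define PI 3.14159265358979

    int main(void)
    {
        double C = 1.0e-11;     /* Paris coefficient, m/cycle (assumed)      */
        double m = 3.0;         /* Paris exponent (assumed)                  */
        double Y = 1.12;        /* geometry factor, surface crack (assumed)  */
        double dsigma = 100.0;  /* cyclic stress range, MPa (assumed)        */
        double Kic = 30.0;      /* fracture toughness, MPa*sqrt(m) (assumed) */
        double a = 0.001;       /* initial flaw size: 1 mm                   */
        double ac = (1.0 / PI) * pow(Kic / (Y * dsigma), 2.0);
        double cycles = 0.0, dN = 1000.0;

        while (a < ac) {
            double dK = Y * dsigma * sqrt(PI * a);
            a += C * pow(dK, m) * dN;       /* crack growth over dN cycles */
            cycles += dN;
        }
        printf("critical crack length %.1f mm, reached after about %.0f cycles\n",
               ac * 1000.0, cycles);
        return 0;
    }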

Without Computer Aided Engineering / Finite Element Methods, the space
shuttle would never have flown at all. The FEM was used in applications
from modeling fluid dynamic flow over the shuttle to stress analysis
to thermal modeling. FEM can be used in coordination with fracture
mechanics models to model cracks and to determine the damage tolerance
of the component being modeled.

One last point: CAE and FEM can be very powerful tools when coordinated with
fracture-mechanics data obtained from experiments. Many FEM codes allow one to
enter such data and have elements that model cracks and crack propagation very
well; used properly, these are powerful tools.
                                    Gerry Kokodyniak

Gerry Kokodyniak, Ph.D. Student, Dept of Mechanical Engineering, U. of Toronto
USENET: kokody2@me.toronto.edu               Structural Integrity Fatigue and 
BITNET: kokody2@ME.UTORONTO                    Fracture Research Laboratory
UUCP:   {linus,allegra,decvax,floyd}!utcsri!me!kokody2  (416) 978-6853


Boundary Cases

James Peterson <peterson%sw.MCC.COM@MCC.COM>
Tue, 23 Aug 88 17:37:33 CDT
Vol 7, Issue 38 contained two articles that are more related than they might
seem:  Tom Lane mentioned a problem with a water billing system that ignored
the wrap-around case of a meter that went from 998 to 2 and David Sherman
mentioned a Florida couple that got a $5,062,599.57 electric bill.

The wrap-around problem arises when the current meter reading is less than the
previous one.  In Tom's case it was because of the limited number of digits on
the meter.  His system was programmed to ignore it, possibly as an assumed
input error.  "Obviously" in this case, it should have been treated as wrap-around.

A few years ago, however, my meter was replaced.  The reading one month was
0456 and the next month it was 0002.  The billing program assumed wrap-around
and charged me for 9546 units — about 2000 times my normal usage.  Last month,
however, I had an actual misreading — the previous reading was 0680 and the
current reading was 0660 (the 0680 we suspect should have been 0630).  Service
has improved, however, because they caught that one somehow and simply sent me
a corrected bill with a credit for the misbilling.

The problem is how to identify a wrap-around as different from a misreading or
a new meter.  The only solution I can figure is to keep a history on-line of
what the recent periodic bills have been. When new bills are calculated, the
new bill is compared with the on-line history.  Bills which are way out of line
(like the Florida case) can be easily caught that way.  This is a simple sanity
check and seems to be basically what people do.
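
A minimal sketch of such a check in C, assuming a four-digit meter and an
arbitrary "more than five times the recent average" threshold (a real billing
system would tune both):

    #include <stdio.h>

    #define METER_MODULUS 10000L    /* four-digit odometer-style register */

    /* Usage implied by two successive readings, allowing for wrap-around. */
    long usage(long prev, long curr)
    {
        return (curr >= prev) ? curr - prev : curr + METER_MODULUS - prev;
    }

    /* Flag a bill for human review if it is way out of line with history. */
    int suspicious(long this_period, const long history[], int n)
    {
        long sum = 0;
        int i;
        if (n == 0)
            return 0;               /* start-up: no history to compare against */
        for (i = 0; i < n; i++)
            sum += history[i];
        return this_period > 5L * (sum / n);
    }

    int main(void)
    {
        long past[] = { 4, 5, 4, 6 };   /* recent periods' usage */

        printf("%ld %d\n", usage(994, 998), suspicious(usage(994, 998), past, 4));
        /* ordinary month: 4 units, not flagged */
        printf("%ld %d\n", usage(456, 2), suspicious(usage(456, 2), past, 4));
        /* replaced meter read as wrap-around: 9546 units, flagged */
        printf("%ld %d\n", usage(680, 660), suspicious(usage(680, 660), past, 4));
        /* misreading read as wrap-around: 9980 units, flagged */
        return 0;
    }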

Does anyone know if this scheme is used in periodic billings (like utilities or
charge cards)?

There can admittedly be problems — during start-up there is no past history to
work from (at one job, the operator entered my yearly salary as a monthly
salary and my first paycheck was $12,000) and sometimes things simply change
(like the change in credit card usage for the yearly vacation), but it would
seem to be a valuable way to catch a lot of the outlandish billing problems
that you see blamed on computers.
                                                    jim


Re: Another boundary case bug

John Bruner <nlp3!jdb@mordor.s1.gov>
Thu, 25 Aug 88 15:42:45 PDT
Tom Lane's problem with his water meter calls to mind a problem I had
about a year ago with mine.  My new home had a brand-new water meter
with a digital (odometer-style) readout.  The water bill for my first
month was over $400.

When I checked my meter it appeared to me that the meter reader had
misread it by a factor of ten.  I called the water company.  They
said yes, the bill did seem high, but they sent someone else out to
double-check it and I really was using that much water.  The water
consumption on my bill was given in units of CCF.  I asked if this
represented 100 cubic feet.  They didn't know.  (Recently I noticed
that they've added a line to the bill that does indeed define CCF
as 100 cubic feet, or about 750 gallons.)  I could not reason with
them about this — their attitude was that I clearly did not know
what I was talking about (!).

Finally I convinced them to send someone around to look at it with me
present.  With the company representative watching the meter I flushed
a toilet.  By my calculations the toilet consumed 4 gallons.  By
theirs it consumed 40.  I pointed out that the meter was clearly
labelled CUBIC FEET, that it read in units of 1 cubic foot, and that
to read it they needed to discard the last two digits.
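
The arithmetic, sketched in C with invented readings and an invented rate (one
plausible way a factor-of-ten error can arise is dropping only one trailing
digit instead of two):

    #include <stdio.h>

    int main(void)
    {
        long prev = 41200, curr = 42000;   /* register, whole cubic feet (made up) */
        double rate = 2.50;                /* dollars per CCF (made up)            */

        /* 1 CCF = 100 cubic feet, or about 748 gallons. */
        long correct_ccf = curr / 100 - prev / 100;  /* drop two digits:  8 CCF */
        long misread_ccf = curr / 10  - prev / 10;   /* drop only one:   80 CCF */

        printf("correct: %ld CCF (%.0f gal), $%.2f\n",
               correct_ccf, correct_ccf * 748.0, correct_ccf * rate);
        printf("misread: %ld CCF (%.0f gal), $%.2f\n",
               misread_ccf, misread_ccf * 748.0, misread_ccf * rate);
        return 0;
    }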

Finally I found out the problem: the older digital meters read 10's of
cubic feet, and the 10's digit was white-on-black, so all the meter
reader had to do was copy down the digits that were black-on-white.
In my case, though, because he didn't know what the billing units
were, he couldn't convert cubic feet to CCF.  All of the digits on my
meter were black-on-white, so he just guessed.  A considerable effort
on my part was required to undo the effects of his guess.

The solution to my problem: the water company replaced the meter with
one of the older ones.

John D. Bruner      Natural Language Incorporated
nlp3!jdb        1786 Fifth Street, Berkeley CA  94710   (415) 841-3500


Mother's maiden name == arbitrary password

Walter Smith <wrs@apple.com>
Wed, 24 Aug 88 20:14:23 PDT
Institutions have used "your mother's maiden name" as a password for years.
The wonderful thing is that you can *lie* with no ill effects.  When you
see "Mother's maiden name" on a form, think of it as "Password (must be
a last name)".
                      [This was also noted by several other contributors.  PGN]

The really frightening "factoid" coming into common use is the last N
digits of your social security number.  I've seen two or three touch-tone
operated banking systems that use it.  One of them even said something like
"Your account is secure from unauthorized access, because your personal
code number must be entered first.  The code is the last four digits of
your social security number."

- Walt
  Apple Computer Inc., 20525 Mariani Ave. MS 46-A, Cupertino CA 95014


Risks of EFT agreements

Doug Claar <dclaar%hpda@sde.hp.com>
Thu, 25 Aug 88 12:49:00 pdt
I recently received an application to sign up for our Credit Union's 
phone-based electronic funds transfer system. The application required
three items: my account number, my self-assigned PIN, and my signature
agreeing to be responsible for any transactions completed with my PIN!
To make matters worse, the application itself is a fold-in-half and 
mail thing, with pre-paid postage on one part of the outside, and big
advertisements of what is inside on the other side. Finally, by signing
the agreement, you agree to be governed by the Credit Union's rules
regarding EFT, which you are not given, but which will be sent later.
So many risks from one little application...

Doug Claar, HP Information Software Division
UUCP: { ihnp4 | mcvax!decvax }!hplabs!hpda!dclaar -or- ucbvax!hpda!dclaar
ARPA: dclaar%hpda@hplabs.HP.COM


Chile con backbones

Martin Minow THUNDR::MINOW ML3-5/U26 223-9922 <minow%thundr.DEC@decwrl.dec.com>
24 Aug 88 14:25
Anyone who has been running with the University of Chile as their closest
backbone server may have noticed bizarre things lately. There were some
problems; the newest node list changes the weights of the link to try to keep
North American mail from going to South America first (and getting delayed).
--- Joe M.


An item by Mark Garvin on SoftGuard and the Trojan horse "SUG"

<Neumann@csl.sri.com>
Thursday, 25 Aug 88 18:00:05 PDT
[A rather extraordinary message from ZDABADE%VAX1.CC.LEHIGH.EDU@CUNYVM.CUNY.EDU
appeared on VIRUS-L, describing Trojan horses (often named "SUG") that have
been promoted as SoftGuard/SuperLock UNPROTECTORS (lock-breakers).  The file is
rather long (about the size of a typical RISKS issue) and of particular
interest to those concerned with Trojan horses and legal implications.  The
full text can be FTPed from KL.SRI.COM stripe:<risks>risks-7.40softguard.  PGN]
