The RISKS Digest
Volume 21 Issue 28

Tuesday, 20th March 2001

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Aasta train crash might have been caused by a safety-critical error
Anton Setzer
Lax security found in IRS electronic filing system
Dave Stringer-Calvert
Dow Jones Industrial Average reported at 0.20
Lindsay F. Marshall
More on the importance of safeguarding private crypto keys
David Kennedy
Risks of self-induced false alarms
Graystreak
Using automation software without accounting for possible scenarios
Tony Yip
Another "secure" e-book seems unlikely
Moz
The risks of accidentally becoming a customer for life
Jim Youll
NSF study: "Internet Voting is no 'Magic Ballot'"
Terry Carroll
On-line elections
Sarr Blumson
Smart Bombs - Old Story
Bruce E. Wampler
Re: Smart bombs miss again
Richard Schroeppel
Christophe Augier
Pekka Pihlajasaari
Michael Nelson
Bill Stewart
Wm. Randolph Franklin
Info on RISKS (comp.risks)

Aasta train crash might have been caused by a safety-critical error

<Anton Setzer <a.g.setzer@swansea.ac.uk>>
Sun, 12 Nov 2000 01:09:06 GMT

  [Archive note: overstruck ringed "A" in "Asta" lost in transit.  PGN]

An extremely detailed report on the Aasta train crash in Norway,
4 Jan 2000, in which 19 people died is now available via
  http://odin.dep.no/jd/norsk/publ/rapporter/aasta/

Most of it is in Norwegian, but the summary (pages 275 ff., "12. Sammandrag"
in "Del 4") and the very detailed reports on the signalling installations
there, in appendices ("Vedlegg") 4 and 5, are available in English.

To remind you of the accident (which is described in detail in the summary):
the accident happened on a single-track line.  Two passenger trains, a fast
train and a small local train, were about to meet.  Both departed from
stations where trains can pass each other.  The fast train had a green
signal; the local train was supposed to have a red signal.  The trains
crashed between the stations.  What was tragic was that both trains were on
a collision course for four minutes, which was indicated by the train
control system, but the controllers didn't realize it in time and then
didn't have the correct mobile telephone numbers for the trains.

Some points I found interesting are:

On page 277 of the main report one can read:
"In the light of the above, the commission cannot state with certainty what
signals were showing on the northbound line at Rudstad station on 4 January
2000. From a technical point of view, it would seem highly likely that a red
exit signal was showing. At the same time, the design of the safety system
makes the potential for error so great that the commission cannot with
certainty exclude malfunction situations that may have produced a different
signal aspect."

The report by SINTEF (appendix 4, English version page 53 ff.) gives a long
list of known deficiencies.

Four incidents are listed in which a signal showed green although it should
have shown red (page 56 ff. in the report by SINTEF); one of them occurred
on 18 Apr 2000, after the train accident.  Three of them seem to indicate
that under certain circumstances a green light is erroneously shown for a
short time.  (One incident seems to have been caused by mechanical
problems.)

SINTEF apparently did a by-hand analysis of the accident and couldn't find
an error.

In appendix 5, SINTEF's report is assessed by Railcert and criticized
(section 6.1):

"We feel that Sintef's conclusions 2, 4 and 6 suggest that a technical
cause, related to the signaling installation, for the accident can be ruled
out. We support this conclusion only inasmuch as it applies to a steady
state, single cause failure. We do however stress the need to look beyond
such "simple causes".

"In fact Sintef's studies have revealed a number of deficiencies in the
design of NSB87 (and NSI-63) as well as serious hiatus in the collection and
safeguarding of possibly vital evidence immediately after the accident. A
number of known reports of anomalies in similar installations exist. Based
on these, we have been able to construct theoretical scenarios where the
behaviour of the signaling installations might at least have contributed to
the causes of the accident. These scenarios as well as effects of
combinations of several known deficiencies could neither been proven, nor
disproved by the evidence in hand, or the result of Sintef's analyses and
studies."

*******

When looking at the data I found the following interesting:

- Three of the incidents seem to involve the signal system showing a green
  light for a short time although it should show a red signal (one further
  incident involves a green light that remained stuck on).

- It is very strange that the local train, which according to the log must
  have driven past a red signal, left three minutes early.

- A train might drive past a red signal while moving.  In the situation in
  question, however, the local train was waiting at a station, and a train
  waiting in front of a red signal is unlikely to drive past it.

- The fast train and the local train left at almost the same time: The fast
  train passes the main exit signal at Rena at 13:06:15.  The local train
  leaves Rudstad platform at 13:06:17 and passes the main exit signal at
  Rudstad at 13:06:58.

From this I conclude that the following scenario might have happened:

- When switching a signal to green, the signalling system under certain
  circumstances erroneously shows a green aspect, for a short moment, on
  the opposite signal of the block as well.

- The driver of the local train interprets this as an indication that he
  should depart now, in order to pass the other train in time at the next
  station instead of at Rudstad.  The driver doesn't check the signal again,
  which has probably already switched back to red by the time he passes the
  exit signal.

- If this was the case, the accident was due to a software error.

Of course the above is highly speculative, and I haven't read the report in
detail (especially the Norwegian part).  I can equally imagine that the
driver of the local train behaved abnormally because of sleepiness, mental
problems, or glare from sunlight.

I think it would be very interesting to try to find the cause of this
accident, and to determine whether a software error led to the loss of 19
lives.

Anton Setzer, Computer Science, University of Wales Swansea, Swansea SA2 8PP
UK  http://www-compsci.swan.ac.uk/~csetzer/  +44 1792 205678 ext 4518


Lax security found in IRS electronic filing system

<Dave Stringer-Calvert <dave_sc@csl.sri.com>>
Fri, 16 Mar 2001 07:27:00 -0800

Even as the IRS was assuring taxpayers last year that electronic filing of
tax returns was secure, serious shortcomings existed that could have allowed
hackers to view and even change information on returns, a government
watchdog agency said. The General Accounting Office found no evidence that
hacking had occurred, but it said its investigators were able to gain
unauthorized access to the tax agency's electronic filing system, which will
handle a third of all federal returns this year. The GAO cited the IRS for
lax security controls and for not requiring encryption of electronic
returns. The report also said the IRS sent out $2.1 billion in refunds to
taxpayers whose returns were not properly authorized.

http://www.latimes.com/business/20010315/t000022659.html
http://www.msnbc.com/news/TECH_Front.asp
http://www.usatoday.com/life/cyber/tech/2001-03-15-e-filing-risks.htm
http://www.cnn.com/2001/US/03/08/taxes.electronic.filing/


Dow Jones Industrial Average reported at 0.20

<"Lindsay F. Marshall" <Lindsay.Marshall@newcastle.ac.uk>>
Mon, 19 Mar 2001 19:46:37 +0000 (GMT)

Oops!  Lindsay  http://catless.ncl.ac.uk/Lindsay

  (From DATEK online:)
  Monday, 19 March 2001, 4:53:47am
  ----------------------------------
  DJIA          0.20       -10031.08
  NASDAQ     1890.91          -49.80
  S&P 500    1150.53          -23.03

  [Source: Article by Kieren McCarthy, 19 Mar 2001, *The Register*:
  http://www.theregister.co.uk/content/28/17700.html; excerpted by PGN]


More on the importance of safeguarding private crypto keys

<David Kennedy CISSP <david.kennedy@acm.org>>
Tue, 20 Mar 2001 16:08:59 -0500

Cryptologists from the Czech company ICZ have detected a serious security
vulnerability of international magnitude.  http://www.i.cz/en/onas/tisk4.html

> A bug has been found in worldwide used security format OpenPGP. The bug
> can lead to discovery of user's private keys used in digital signature
> systems. OpenPGP format is widely used in many applications used
> worldwide, including extremely popular programs like PGP(TM), GNU Privacy
> Guard, and others. The bug detection comes on the right time, as Philip
> Zimmermann, the creator of PGP program, has left Network Associates,
> Inc. and aims to boost OpenPGP format in other products for privacy
> security on Internet.  From the scientific point of view, the discovery
> goes far beyond actual programs - it has wider theoretical and practical
> impact.

> A slight modification of the private key file followed by capturing a
> signed message is enough to break the private key.  These tasks can be
> performed without knowledge of the user's passphrase. After that, a
> special program can be run on any office PC. Based on the captured
> message,the program is able to calculate the user's private key in half a
> second. The attacker can then sign any messages instead of the attacked
> user.  Despite of very quick calculation, the program is based on a
> special cryptographic know-how.

> Similar vulnerabilities can be expected in other asymmetrical
> cryptographic systems, including systems based on elliptic curves.

DSA and RSA keys are reportedly equally vulnerable.
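
The press release gives no technical detail, but the general class of attack
(corrupt stored private-key material, then recover the key from a single
signature made with it) has a well-known precedent: the Boneh/DeMillo/Lipton
fault attack on RSA-CRT signatures.  The toy Python sketch below illustrates
that precedent only; it is not the ICZ/OpenPGP attack itself, and its
parameters are absurdly small.

  from math import gcd

  # Toy RSA key (insecure sizes, for illustration only; Python 3.8+).
  p, q = 10007, 10009
  n = p * q
  e = 65537
  d = pow(e, -1, (p - 1) * (q - 1))
  dp, dq = d % (p - 1), d % (q - 1)
  q_inv = pow(q, -1, p)

  def sign_crt(m, dp_used):
      # CRT signing; passing a corrupted dp models a tampered key file.
      sp, sq = pow(m, dp_used, p), pow(m, dq, q)
      return (sq + ((q_inv * (sp - sq)) % p) * q) % n

  m = 123456
  s_bad = sign_crt(m, dp ^ 1)    # one flipped bit in the stored exponent
  factor = gcd((pow(s_bad, e, n) - m) % n, n)
  assert factor == q             # the modulus factors: key recovered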

DMK Comment: A detailed report was supposed to be "released shortly" but has
not appeared so far.  The press release does not specify whether diddling
the private key results in any error messages.  I hope this does not spawn
another round of "PGP is cracked/cracking/crackable" media hysteria.  Key
management has always been critically important, and this would seem only to
add to the reasons why.  There are viruses that try to steal PGP's secret
key, and there are trojans that make it possible to steal it.  Storing keys
on shared/networked workstations has always been recognized as a problem
with PGP.  The comp.security.pgp FAQ includes "Can I put PGP on a multi-user
system like a network or a mainframe?"
<http://www.uk.pgp.net/pgpnet/pgp-faq/faq-03.html#3.18>

David Kennedy CISSP, Director of Research Services, TruSecure Corp.
http://www.trusecure.com


Risks of self-induced false alarms

<Graystreak <wex@media.mit.edu>>
Thu, 8 Mar 2001 13:57:54 -0500

http://washingtonpost.com/wp-dyn/articles/A38625-2001Mar7.html

FBI Director Louis J. Freeh said he and his wife had been baffled by a
series of false alarms from the security system in their Great Falls area
home. Fairfax County police responded each time, but no suspects had been
nabbed.

It seems that two of his six sons, then ages 5 and 4, had been amusing
themselves by making their 2-year-old brother run in circles in the basement
to set off the motion detector. "They would sit and watch for the police to
come," Freeh said.

[AW notes: no discussion of why the motion detector was on in the basement
while the children were home, nor why the police didn't adopt a "call before
responding" policy after some number of false alarms.]


Using automation software without accounting for possible scenarios

<Tony Yip <tonyy@chancery.com>>
Wed, 7 Mar 2001 12:48:36 -0800

From www.macfixit.com, 7 Mar 2001:

Many [Macfixit] readers sent us copies of a letter they received yesterday
from the Apple Store apologizing "for the delay in fulfilling" their Mac OS
X order. This seemed a bit odd, as Mac OS X won't ship until March 24. So
there can be no delay at present. Are they anticipating a delay starting
March 24? Or was the message sent in error, probably as the result of some
software that automatically triggered the mailing when it detected that the
order had not yet been fulfilled? We suspect the latter, as it makes more
sense.

Consistent with our theory, Evan Chaney writes: "I called the Apple Store
and talked to a sales rep who said he thinks this e-mail is invalid and that
he thinks it was sent just because the system sends out a backlog e-mail if
the product hasn't been shipped after 20 days. Apparently, it doesn't
account for pre-ordered products." We have no word from Apple on this as
yet.

The RISKS: automation is good, but you need to take all scenarios into
account, especially when you created the scenario yourself by accepting
orders long before the product is due to ship.
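
If MacFixIt's surmise is right, the fix is a guard the reminder job
apparently lacked: don't start the backlog clock before the product's
release date.  A minimal Python sketch (the field names are hypothetical):

  from datetime import date, timedelta

  def should_send_backlog_mail(order_date, shipped, release_date, today,
                               backlog_days=20):
      # No "sorry for the delay" mail for a product nobody could have yet.
      if shipped or release_date > today:
          return False
      start = max(order_date, release_date)
      return today - start > timedelta(days=backlog_days)

  # A Mac OS X pre-order, checked before the 24 March release date:
  print(should_send_backlog_mail(date(2001, 2, 10), False,
                                 date(2001, 3, 24), date(2001, 3, 7)))  # False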


Another "secure" e-book seems unlikely

<risky@moz.net.nz>
Mon, 12 Mar 2001 18:28:57 -0700

I just went through an SSL page to purchase an online book from
www.mightybooks.com, with slightly bizarre results. They use the "secure"
Acrobat Reader to deliver content, which is a known risk to them (the secure
format is anything but).

More concerning is their apparent use of a simple counter in their download
URL. My URL was of the form:
  https://shop.mightywords.com/servlet/com.mighty.download.
    HabitatRequestServlet?saleId=xxxx
where xxxx is a small integer.

Unfortunately I can't test the surmise that trying the next (or prior) few
integers might net me more books, since doing so requires me to have the
128-bit encryption "upgrade" installed on Windows 2000 (confusingly, their
FAQ claims that Service Pack 1 will also fix this problem, but I have that
installed).  Of further concern is that I could complete the entire sale
process on their secure site, only to fail at the actual download stage
because that stage requires a higher encryption level than the rest of the
sale.

There is no "download a trial document" link on the site (I looked!), so it
seems impossible to verify the problem without actually making a purchase
(or attempting a theft by plugging numbers into the URL above).
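
The standard mitigation for guessable sequential sale IDs is to issue
download tokens with enough randomness that enumeration is hopeless, and to
check them against the purchase record at download time.  A minimal Python
sketch (the names and URL are mine, not MightyWords'):

  import secrets

  sales = {}   # token -> (customer, document); checked again at download

  def create_download_url(customer_id, document_id):
      token = secrets.token_urlsafe(16)   # 128 random bits, not a counter
      sales[token] = (customer_id, document_id)
      return "https://shop.example.com/download?token=" + token

  print(create_download_url("customer-1234", "ebook-5678"))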

Moz


The risks of accidentally becoming a customer for life

<Jim Youll <jim@media.mit.edu>>
Thu, 15 Mar 2001 08:50:24 -0500

In 1998 I helped a company computerize its shipping department. While
testing documentation and processes, I signed up as a "shipper" in the UPS
online system. I neglected to remove myself when the project was finished,
and in 2001, I was still receiving promotional UPS e-mail. The messages do
not offer an "unsubscribe" hint.

So I called UPS. "All you need to do," the rep said, "is to go to 'your
page' on the Web site, enter the user ID and password, and clear the correct
checkbox" to make the e-mail stop. Unfortunately, I didn't know my user ID
and password. UPS insisted it had no way to look up an account name if given
an e-mail address. They pointed out that it was really my fault for
"forgetting" the user ID I created in 1998 and insisted that "clearing the
checkbox" was the only way to make the mail stop. Without a user id, there
was "no way to get to the checkbox."

After more complaints, they finally contacted the people who run the e-mail
database. Turns out I could not possibly have forgotten "the user ID I
created in 1998" because in 1998, the system did not employ "user IDs" on
accounts — my records didn't even have one, making them totally
inaccessible except to their systems people! Sometime after 1998 the system
was changed. UPS says my records are corrected but I don't know if that
means I have a "user ID" now, or whether the account was deleted. I wonder
about the disposition of the thousands of other early adopters whose
accounts lack user IDs.

It's still expensive to use version 1.0, even on the Web.


NSF study: "Internet Voting is no 'Magic Ballot'"

<Terry Carroll <carroll@tjc.com>>
Fri, 16 Mar 2001 14:24:58 -0800 (PST)

RISKS has previously had discussions of the risks associated with going to
computerized voting (especially Internet-based voting) as an attempted
panacea for the types of problems we saw in the last US presidential
election.

The National Science Foundation recently released a study that it
commissioned from the Internet Policy Institute on problems associated
with Internet voting.  The NSF's press release on the study may be found
at <http://www.nsf.gov/od/lpa/news/press/01/pr0118.htm>.  The IPI has a
page devoted to the study (including a link to the report itself) at
<http://www.internetpolicy.org/research/results.html>.

The NSF highlights the following findings with respect to the feasibility
of Internet voting:

- Poll site Internet voting systems offer some benefits and could be
  responsibly deployed within the next several election cycles;

- The next step beyond poll-site voting would be to deploy kiosk voting
  terminals in non-traditional public voting sites;

- Remote Internet voting systems pose significant risk and should not be
  used in public elections until substantial technical and social
  science issues are addressed; and

- Internet-based voter registration poses significant risk to the integrity
  of the voting process, and should not be implemented for the foreseeable
  future.

Terry Carroll, Santa Clara, CA <carroll@tjc.com>

  [These results are rather similar to the findings of the California
  commission.  Interested readers should also dig up the recent Caltech/MIT
  report, which states that lever machines, hand-counted paper ballots, and
  optically scanned ballots are all significantly more accurate than
  direct-recording voting machines (DREs) and Internet voting schemes.  PGN]


On-line elections

<"Sarr Blumson" <sarr.blumson@alum.dartmouth.org>>
Fri, 16 Mar 2001 11:28:20 -0500

The college I attended is running the election for alumni-appointed
trustee with a Web voting option through election.com. So I went to cast
my vote, and got in response:

Microsoft OLE DB Provider for SQL Server error '80040e14'

  The log file for database 'electnet' is full. Back up the transaction
  log for the database to free up some log space.
  /dartmouth2001/confirmation.asp, line 92

It's happened twice. It let me vote successfully a few hours later; I'm
assuming/hoping it only recorded my vote once.

Now I'm imagining trying to explain to the poll watchers in a real
election that this message means they should let me vote again.
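
One standard defence is to make the vote-recording step idempotent, so that
a retry after a server-side failure cannot count twice.  A minimal sketch
using a uniqueness constraint, in Python with SQLite (the table and column
names are hypothetical, and a real secret ballot would of course not store
voter and choice side by side):

  import sqlite3

  db = sqlite3.connect(":memory:")
  db.execute("CREATE TABLE ballots (voter_id TEXT PRIMARY KEY, choice TEXT)")

  def record_vote(voter_id, choice):
      try:
          db.execute("INSERT INTO ballots VALUES (?, ?)", (voter_id, choice))
          db.commit()
          return "vote recorded"
      except sqlite3.IntegrityError:
          return "already voted; retry ignored"

  print(record_vote("alum-42", "A"))   # vote recorded
  print(record_vote("alum-42", "A"))   # already voted; retry ignored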

Sarr Blumson, JSTOR, University of Michigan, 301 E Liberty, Ann Arbor, MI
48109-2262  http://www-personal.umich.edu/~sarr/  +1 734 764 0253


Smart Bombs - Old Story

<"Bruce E. Wampler" <bruce@objectcentral.com>>
Wed, 07 Mar 2001 14:19:12 -0700

I've recently been reading "A War to Be Won: Fighting the Second World War"
by Murray and Millett (ISBN 0-674-00163-x), and have gotten a bit of
perspective on the RISKS of developing high-tech weapons that seems to apply
to the recent poor performance of the Navy's Joint Standoff Weapon (smart
bomb) in Iraq.

It seems (CNN: http://www.cnn.com/2001/US/02/26/us.iraq.ap/index.html) that
much of the failure was due to not accounting for high winds, and that a
software fix that would have the bombs level off longer might make them
work. This is just another example of not finding weapon flaws until they
are actually used in the field.

This is, in fact, a very old story. Just one example from "A War To Be Won":
it seems that the United States submarine fleet was very ineffective during
the first year or more of the Pacific war because of defective torpedoes.
The US subs had the latest high-tech torpedoes available: fancy magnetic
fuses that didn't work, and a faulty guidance system that didn't work
either. It turns out the Navy engineers had tested the torpedoes without a
full-weight warhead, so the sensors that measured the water depth were
improperly calibrated. Doesn't this sound familiar?

And there are more familiar lessons in the torpedo story. The submarine
crews knew the torpedoes didn't work, tried to get the Navy engineers (who
denied any problems for a long time) to fix them, and ended up figuring out
how to turn off the magnetic fuses and use contact fuses (which also had
design defects, but worked better anyway), and how to field-calibrate the
depth sensors.

I think these very similar stories, 60 years apart, bring up some
interesting points. First, using the latest technology is risky in itself,
whether it is new magnetic fuses and depth sensors or satellite-guided bombs
that fail to account for wind.  It will remain impossible to know how
weapons will actually work until they are actually used, and it is not an
option to start a conflict just to test them!  Second, the engineers will
always say their weapons are different, and will work. There are no doubt
many more lessons, but for now a final one to remember is that there are
really not that many new RISKS around; it so often comes down to the people
involved in using and developing the technology, in 1941 or 2001.

Bruce E. Wampler, Ph.D., bruce@objectcentral.com


Re: Smart bombs miss again (Davis, RISKS-21.27)

<Richard Schroeppel <rcs@CS.Arizona.EDU>>
Thu, 15 Mar 2001 14:23:50 -0700 (MST)

For the last century or so, soldiers have been instructed not to take
the time to aim their guns: you do more damage by shooting faster.

I don't know what the numbers look like for bombs, but simply knowing
"miss/hit" statistics isn't enough information to deprecate the weapon.

Rich Schroeppel   rcs@cs.arizona.edu


Re: Smart bombs miss again (Davis, RISKS-21.27)

<"Christophe Augier" <augier@altran.com>>
Fri, 16 Mar 2001 12:11:16 -0500

  "Around 50% of smart bomb didn't functioned in the last NATO bombing."

Well, we shouldn't put it this way: 50% of the objectives are still
standing. 50% of the enemy facilities, radars, airfields, or whatever are
still operational. In a "real" war that means retaliation. Usually, that
costs a lot more than an entire load of smart bombs and their F18s.

The risks are not in the bombs malfunctioning, but in the non-realization of
the military objectives. If the weighted (one target may be more important
than another) targets destroyed represent, let's say, 65% of the targets,
NATO could be satisfied with this objective. It is all relative.

There is another point I wanted to "laser-light": the cost of the bombing.
Technical risk analysis is fine, but you also have to deal with financial
risk analysis, e.g., in a long war, the risk of issuing money to support the
war effort (see Germany in WWI and WWII), or the risk of losing your next
election because of too much tax money spent. :)

Well, you may combine the expected percentage of destroyed targets (they
surely have this kind of statistic for all bombs; let's say 50% for smart
bombs? or it could be calculated from the "circular error probable" and the
precision of the aim) with the overall cost of the bombing.  You can then
compute the total number of bombs, and the total amount of money, needed to
reach your objectives, and choose your optimized solution: B52 carpet
bombing, smart bombs, artillery, a mix of them (Gulf War), etc.
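
To make the arithmetic concrete: assuming independent drops with a per-bomb
kill probability p (the figures below are invented for illustration, not
actual weapon data), the number of drops needed to destroy a target with a
given confidence, and hence the cost per destroyed target, follows directly.
A Python sketch:

  import math

  def drops_needed(p_kill, confidence=0.95):
      # Smallest n such that 1 - (1 - p_kill)**n >= confidence.
      return math.ceil(math.log(1 - confidence) / math.log(1 - p_kill))

  for name, p, unit_cost in (("smart bomb", 0.50, 200000),
                             ("dumb bomb",  0.05,  20000)):
      n = drops_needed(p)
      print("%s: %d drops, ~$%d per destroyed target"
            % (name, n, n * unit_cost))

With these invented numbers the two options end up within about 20% of each
other per destroyed target, which is exactly why the choice hinges on the
real statistics rather than on the raw miss rate alone.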

Of course, all this thinking does not take into consideration "side
effects" such as civilian casualties, soldier/pilot casualties, or
destroyed embassies. :)

To sum up: as we don't know the objectives of the last NATO bombing, nor
its cost, I somehow agree with Randy's answer: "In the absence of the
relevant numbers and relevant comparison points, the widely repeated 'more
than 50%' is simply meaningless, no matter how melodramatic it sounds."

Christophe


Re: Smart bombs miss again (Davis, RISKS-21.27)

<Pekka Pihlajasaari <Pekka@data.co.za>>
Fri, 16 Mar 2001 07:57:26 +0200

The original quotation "most of the bombs ... missed their targets" is
semantically quite different from "bombs hit fewer than 50% of the targeted
radars."

In the original release the military indicated that the majority of their
ordnance did not achieve hits. The paraphrasing by Randy changes the
semantics to say that a minority of the targets were not hit.

It is quite likely that multiple bombs were targeted at a single radar, and
no estimate of the actual number of destroyed targets can be inferred from
the original press release. This is assuming that the original release was
not equally distorted.

The RISK is that moving even simple sounding numbers out of context can
distort the intent of the statement so much as to make it useless. All the
more reason for looking at the source material before drawing a conclusion.
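
A toy calculation (with invented figures) shows how far apart the two
readings can be: if five bombs are aimed at each radar and each bomb hits
with probability 0.4, then 60% of the bombs miss, yet more than 90% of the
radars are destroyed.

  # Most bombs miss, yet most targets are destroyed (figures invented).
  p_hit = 0.4
  bombs_per_radar = 5
  p_destroyed = 1 - (1 - p_hit) ** bombs_per_radar
  print("fraction of radars destroyed: %.2f" % p_destroyed)   # 0.92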

Pekka Pihlajasaari <pekka@data.co.za>  Data Abstraction (Pty) Ltd


Re: Smart bombs miss again (Davis, RISKS-21.27)

<Nelson Michael A CNTR AMC/DOTR <MICHEAL.NELSON@SCOTT.AF.MIL>>
Fri, 16 Mar 2001 08:46:06 -0600

Randy Davis used the term "circular error probable" to describe the accuracy
of weapons delivery.  That phrase is a cryptic, almost opaque variant of the
more intuitive, original terminology "circle of equal probability," and can
lead to the casual reader asking two pointed questions:

* What exactly is "circular error"?
* If something called "circular error" does exist, what meaning does the
  word "probable" add to the concept?

This entire semantic discussion becomes moot with the use of the original
phrase.  It captures the underlying concept noted in Mr. Davis' message in a
much more meaningful way: for a given weapon system, the circle within
which, on average, half of the weapons will land, and outside of which the
other half will land.  Unfortunately, "circular error probable" is in
widespread use, in both technical and non-technical literature.
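
As a small numerical illustration of the definition, assuming (purely for
the example's sake) circular bivariate normal delivery errors: the CEP is
the median radial miss distance, which for this error model works out to
about 1.1774 times the per-axis standard deviation.  In Python:

  import math, random

  random.seed(1)
  sigma = 10.0        # assumed per-axis dispersion, in metres
  misses = sorted(math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))
                  for _ in range(100000))
  cep = misses[len(misses) // 2]   # half land inside this radius, half out
  print("empirical CEP: %.1f m   theory: %.1f m"
        % (cep, sigma * math.sqrt(2 * math.log(2))))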

Michael A. Nelson, Aircrew Force Management Analytical Support  ARINC, Inc.


Re: Smart bombs miss again (Davis, RISKS-21.27)

<Bill Stewart <bill.stewart@pobox.com>>
Sat, 17 Mar 2001 12:33:53 -0800

[...] "One target, one smart bomb" would be fun, but it's unlikely.  During
the First Gulf War, about 97% of the bombs used were still dumb iron bombs.

Bill Stewart


Re: Smart bombs miss again (Davis, RISKS-21.27)

<(Wm. Randolph Franklin) <rfranklin@altavista.net>>
19 Mar 2001 13:10:51 -0500

I don't think the bombs were even that accurate until the end of WWII.
There was a British study around 1942 or so that said that most bombs were
more than 5 miles off.  IIRC, when Peenemunde was first bombed, the "after"
recon photos looked like the "before" photos.

This illustrates that accuracy can improve.

I have a problem with the argument that something is impossible merely
because it's difficult.  This seems to be a proxy for the argument about
whether we should do it, not whether we can do it.  There's the joke that
the opponents may be more afraid that it will succeed than that it will
fail.

Here's an example of how hard problems sometimes get solved.  It's not easy
to propel a rocket in a straight line by pushing it from the rear.  There is
a great movie of NACA and NASA rocket mishaps.  It has a rocket making a
U-turn immediately after launch (apparently a polarity error in the
gyroscope wiring), a rocket lifting off a little, then settling back on the
pad, a rocket gently tipping over, etc.

Now, we've solved all that.  Launches are 98% reliable.

Perhaps the THAAD is as fundamentally flawed as using a ladder to
get to the moon.  However, that hasn't been established yet.

(Wm. Randolph Franklin)    <rfranklin@altavista.net>
