The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 4 Issue 7

Friday, 7 November 1986

Contents

o Risks of RISKS
PGN
o Details on the British Air Traffic Control computer outage
from Herb Hecht
o Re: UK computer security audit
Robert Stroud
o USS Liberty
Matthew P Wiener
o Grassroots sneak attack on NSA
Matthew P Wiener
o A variation of the Stanford breakin method
Arno Diehl
o Re: Subject: Computers and Medical Charts
Roy Smith
o DDN Net breakdown (?) on 6 Nov 86?
Will Martin
o Re: Linguistic decay
Matthew P Wiener
o Mechanical Aids to Writing
Earl Boebert
o Info on RISKS (comp.risks)

Risks of RISKS

Peter G. Neumann <Neumann@CSL.SRI.COM>
Fri 7 Nov 86 22:20:56-PST
"Nothing in the foregoing to the contrary notwithstanding," foresight is a
great thing.  I discovered a forgotten squirrelled safety copy of an
intermediate draft of RISKS-4.7 in another directory, and so am very happy
to be able to provide a recreation of RISKS-4.7 after all, despite the
previous message announcing what I thought was my first real panic in
running RISKS.  

    [BBOARD MAINTAINERS:  PLEASE REMOVE PREVIOUS JUNK MESSAGES.  PGN]


Details on the British Air Traffic Control computer outage [6 Oct 86]

Peter G. Neumann <Neumann@CSL.SRI.COM> [SnailMail from Herb Hecht]
Thu 6 Nov 86 21:23:32-PST
On Monday, 6 October 1986, the British air traffic control system was put
into manual backup mode by the crash of an IBM 9020D system that was
responsible for flight-plan data.  The computer in the London ATC Centre at
West Drayton (near Heathrow) crashed and was down for two hours -- with
traffic at Heathrow, Gatwick, and Manchester (among others) encountering
delays of up to six hours.  Because this system is also used by the military
air-traffic control system in West Drayton, British defenses were also
affected.

Overnight, ATC computer staff ``had loaded a new version of the main program
containing routine updates.  Software for running air-traffic control has to
be changed regularly to take account of new routes, aircraft types and
operating procedures for controllers.'' (Major updates are done once a year
at the London center.)  ``The changeover to a "new load" is normally a
tricky business.''  (One million lines of code run on a six-processor
system, networked with at least 10 other systems.)  

Unknown to the system programming staff, the software contained an
"unexpected flaw".  ``The centre was planning to connect an additional
computer to the existing 9020D complex.  Provision for the machine had been
made in the new software.  But that morning the computer was not connected
to the 9020D system.  Unaware of this, the program began collecting data
which should have been sent to the non-existent computer.  Data backed up
until alarms were sounded and supervisors decided to stop the system.  Staff
raced to adjust the 9020D and reload the old software.  Two hours later, the
machine was back in action.''  Meanwhile, operation reverted to the manual
flight-strip operation.

[Drawn from New Scientist, 9 October 1986, p. 13.  Thanks to Herb Hecht of
SoHaR]
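The failure mode described, output queued for a consumer that never connects, is easy to reproduce in miniature. The following is a toy sketch in Python, purely illustrative and with no relation to the 9020D's actual software:

```python
from collections import deque

class Link:
    """A one-way link to a peer computer; messages queue until the peer drains them."""
    def __init__(self, alarm_threshold):
        self.queue = deque()
        self.peer_connected = False   # the extra computer was never attached
        self.alarm_threshold = alarm_threshold

    def send(self, msg):
        self.queue.append(msg)
        if self.peer_connected:
            self.queue.clear()        # a connected peer drains the backlog
        return len(self.queue) >= self.alarm_threshold  # True => sound alarms

def run(cycles, threshold):
    """Feed updates to the link; report the cycle at which alarms sound, if any."""
    link = Link(threshold)
    for i in range(cycles):
        if link.send(f"flight-plan update {i}"):
            return i + 1
    return None

# With no peer attached, the backlog grows monotonically until the alarm fires.
```

The sketch shows why the flaw stayed hidden until that morning: with the extra computer attached, the queue drains and nothing is ever noticed.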


Re: UK computer security audit

Robert Stroud <robert%kelpie.newcastle.ac.uk@Cs.Ucl.AC.UK>
Fri, 7 Nov 86 13:05:21 gmt
There was an item in [the 6 Nov 1986] Guardian about the same report that my
[earlier] submission described, so I can give you a better reference. The
report is called "Computer Security in Practice" and is published by the
Risk Management Services division of Hogg Robinson Ltd. who are a firm of
insurance brokers and presumably have an office in London.

The Guardian article paints a bleak picture of just how ill-prepared the 50
or so companies visited are for disaster: 80% are not adequately protected
against fire; 96% are not protected against flood (the two exceptions had
installed detectors only after previously sustaining water damage); 70%
don't have a stand-by power supply; 97% don't have enough stand-by power to
keep the user areas going as well as the hardware; and so on.  Only 4% had
fully calculated the cost of a disaster, while 6% thought they had a plan
but either couldn't find it or admitted that it was hopelessly out of date.

The article concluded with the observation that if these findings were
typical, most companies were doing the equivalent of walking across the
North Circular* with their eyes shut. However, Hogg Robinson thought that
these results were probably not typical, since these firms had at least
asked for a security risk audit. What about all the others?

* For the benefit of American readers who have not driven in London,
I should explain that the North Circular is a notorious inner ring-road.

[Source: Guardian 6th November, p.13]

I would be interested to know of any similar studies of American companies.

Robert Stroud, Computing Laboratory, University of Newcastle upon Tyne.
UUCP ...!ukc!cheviot!robert

   [By the way, the Newcastle mailer apparently ran amok sending this
    message -- among others.  I received 20 COPIES.  I probably would have 
    received more had not John Rushby been having the same experience with
    a message from Tom Anderson at Newcastle.  He finally made a call to 
    Tom, who evidently initiated a rectification of the problem.  I wonder
    whether the presence of two simultaneous messages from Newcastle to
    CSL.SRI had anything to do with the infinite loop! PGN]


USS Liberty (RISKS-4.1)

Matthew P Wiener <weemba@brahms.berkeley.edu>
Fri, 7 Nov 86 01:17:57 PST
  REVISED SUMMARY ITEM FOR RISKS-4.1:
  $!! USS Liberty:  34 dead; injured; 3 warnings to withdraw lost? (SEN 11 5) 

There was a story, I believe in the Atlantic two years ago, giving some sort
of "official" Israeli explanation (as told by two highly respected Israeli
reporters) of how the Israelis came to "accidentally" attack the USS Liberty,
involving sad coincidence after coincidence on their side, with things like
the properly identified US ship on the war map having its flag set aside
temporarily by General X, who was then replaced at that post by General Y,
and so on.  While their version is almost certainly a complete crock, it is
intriguing that breakdowns in protocol are so freely invoked as cover
stories.

(Is this a new brand of computer/systems meta-risk?  That is, have we become
so inured to "computer error" that we will blindly accept it as an excuse?
Note that I am not referring to using the computer as a scapegoat to avoid
blaming humans merely because there happened to be a computer in the
pipeline.  I wonder whether making the computer the catchall, wholecloth
scapegoat, on the principle that no one would check for the real story, has
become SOP?)

In the long long run, by the way, the USS Liberty and the USS Pueblo
incident led to the scrapping of NSA's spy ship program, with unknowable
consequences.  Presumably the development of spy satellites and the like
filled the gap, but again, who really knows.  Trying to measure the risks
associated with intelligence can be well nigh impossible.

Actually, breakdowns in protocol are common in diplomacy.  There was a flap
some years ago about an anti-Israel vote by the US in the UN that was blamed
on such.  Cryptographic failure could have been responsible, but that would
never be admitted.

Speaking of which, successful cryptanalysis can lead to striking diplomatic
victories in sensitive treaties.  Of course, the military impact of
cryptanalysis is potentially unlimited.

These particular incidents do not really involve computers per se, although
the mentality is identical.  

ucbvax!brahms!weemba    Matthew P Wiener/UCB Math Dept/Berkeley CA 94720

                   [The above contribution was excerpted from two informal 
                    private communications (with permission).  It was not
                    originally intended as a RISKS message.   PGN]


Grassroots sneak attack on NSA

Matthew P Wiener <weemba@brahms.berkeley.edu>
Fri, 7 Nov 86 04:34:58 PST
This past week, a rather bizarre attempt to annoy NSA via computer has
begun on USENET.  Several people have started inserting cute words like
"crypt" or "terror" or "CIA" in their signatures in an attempt to overload
NSA's automatic grep for cute words in overseas traffic.  Considering the
minuteness of the added load, and the likelihood that NSA already filters
out obvious traffic like the net, the effort is nothing more than a good
old-fashioned American form of protest.  Even though it is using (a trivial
amount of) OPM to pay for a symbolic sabotage, I love it.

But obviously uglier scenarios can be imagined.  Is a grep-bomb possible?
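For the curious, the scanner being imagined is nothing more exotic than keyword matching over a message stream. A toy sketch in Python follows; the watch list is invented for illustration, since the real one, if it exists, is unknown:

```python
import re

# Hypothetical watch list -- purely illustrative.
WATCHWORDS = ["crypt", "terror", "cia"]
PATTERN = re.compile("|".join(map(re.escape, WATCHWORDS)), re.IGNORECASE)

def flagged(message):
    """Return True if the message would be pulled aside for closer inspection."""
    return PATTERN.search(message) is not None

# A signature stuffed with watchwords flags every message its author posts.
# The scanning cost is one pass per message either way, which is why the
# protest adds noise for the human readers downstream rather than real
# machine load.
```

This also suggests what a "grep-bomb" would have to attack: not the pattern matcher itself, but the limited human attention behind it.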

ucbvax!brahms!weemba    Matthew P Wiener/UCB Math Dept/Berkeley CA 94720


A variation of the Stanford breakin method

Arno Diehl <DIEHL%v750%germany.csnet@RELAY.CS.NET>

We just installed some SUN workstations (UNIX 4.2bsd) connected to 
an ethernet using TCP/IP protocols. 

We learned from the Stanford breakin to be extremely careful when using
".rhosts".  So we entered into ".rhosts" only those workstations located
in the offices of trusted users.

One night a student operating a SINIX workstation experimented with
TCP/IP.  He configured his machine to use the IP address of a trusted
host, and he entered the username of a trusted user into "/etc/passwd"
on his machine.  Then he rlogin'ed into a SUN workstation as that trusted
user.

==> Do not use ".rhosts" unless you have EVERY host and EVERY communication
    path totally under control!
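The trust check the student subverted can be modelled as follows. This is a simplified sketch in Python, not the actual rlogind code, and the address and usernames are examples; the point is that the server's only evidence of identity is the unauthenticated source address the client itself chose:

```python
# Trusted (host-address, username) pairs, as ".rhosts" entries would grant.
# 192.0.2.x is the documentation address range (RFC 5737) -- an example only.
TRUSTED = {("192.0.2.10", "alice")}

def rhosts_check(peer_address, claimed_user):
    """Model of address-based trust: grant a passwordless login if the
    (address, user) pair is listed.  Nothing binds the address to a
    particular machine, so any host that configures itself with a trusted
    host's IP address passes the check."""
    return (peer_address, claimed_user) in TRUSTED

# The student's attack in miniature: adopt the trusted address, claim the name.
```

The check succeeds for an impostor exactly as it does for the genuine host, which is why controlling every host is not enough; every communication path must be controlled too.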

Arno Diehl, University of Karlsruhe, West Germany


Re: Subject: Computers and Medical Charts (RISKS 4.5)

Roy Smith <allegra!phri!roy@seismo.CSS.GOV>
Thu, 6 Nov 86 11:54:44 est
> From: Christopher C. Stacy <CSTACY@JASPER.Palladian.COM>
> Subject: Computers and Medical Charts
> 
> So, the opinion of one medical records administrator seems to concur with
> that of Dr. Tessler; the people at that hospital probably were over-reacting
> inappropriately. [...] this situation presents the familiar risk of
> paranoid confusion.

    In my (limited) experience, the other problem is more common;
people under-reacting inappropriately to the security risks of storing data
in computers.  We are a biological research lab and use our computer
systems to store everything from mundane experimental results to patent
applications.  Somehow, people have gotten the impression that once it's in
the computer, it's safe.  It's hard enough to convince everybody to keep
their password secret, let alone read-protect their files or (God forbid!)
think about encryption or off-line storage when appropriate.  Even when we
had a rather sophisticated breakin a couple of months ago, and I sent
around what I intended to be a scare-the-blank-out-of-them memo, people
still trust the machine to safeguard their data more than is probably
prudent.
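For readers unsure what "read-protect" amounts to in practice: on a Unix system it is a one-line permissions change. A minimal sketch in Python (the filename is of course just an example):

```python
import os
import stat

def read_protect(path):
    """Restrict a file so only its owner can read or write it (mode 0600)."""
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return oct(mode)   # '0o600' once group and other access are gone

# e.g. read_protect("experimental-results.dat")
```

The equivalent at the shell is simply `chmod 600 file`; the hard part, as the rest of this note argues, is persuading anyone to type it.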

    It gets worse. There was recently an (apparently unrelated)
incident involving researchers at two nearby research institutes where one
researcher (call him thief) stole some important data from a competitor
(victim).  I got the original story from a mutual competitor of those two
who works here (fool).  When I spoke with victim to get the whole story, he
admitted it was purely his fault.  Victim was 1) using the same system as
thief to store his data and 2) didn't read-protect his files because he
wanted certain other people to be able to read them (not thief or fool,
however).  I then went back to fool and told him what had happened and
urged him to take at least some simple precautions -- change his password
for example.  He refused, saying that 1) he thought his data was safe
enough and 2) he couldn't imagine that anybody would/could break in.  Even
when I reminded fool that he had just had a big fight with one of his
post-docs and ended up firing the post-doc, he wouldn't believe that there
might be people out there with the motive and capability to steal or
destroy his data!

    So, what am I supposed to do?  Here we have a person who, in the
face of overwhelming evidence that his data might be in peril, insists on
clinging to his belief that if it's in the computer, it must be safe.  In
my opinion, this is a far more dangerous situation than what CSTACY@JASPER
reports.


DDN Net breakdown (?) on 6 Nov 86?

Will Martin -- AMXAL-RI <wmartin@ALMSA-1.ARPA>
Fri, 7 Nov 86 12:20:38 CST
Since the well-known ARPANET breakdown is one of the RISKS archive items,
I was wondering if anyone on the list could contribute information about
what seemed to have been a DDN (or maybe just MILNET?) breakdown that
happened yesterday, 6 Nov 86? All I know of it was that our data
communications people got a call from Army Communications to let them
know that the reason we were off the net was not just a local area or
in-house problem, but some sort of general malaise or trouble all over
the network. I know no more details as to the nature or true extent of
the problem(s) and would like to read details or at least a description
of the symptoms. It was cleared up within hours, so was not as severe as
the historic ARPANET collapse, but it would probably be worthy of
mention in RISKS.

Will Martin

    [I asked Ole Jorgen Jacobsen <OLE@SRI-NIC.ARPA> of the Network
     Information Center whether he had heard anything.  ``The only thing 
     that comes to mind is the TAC problems we had yesterday, where
     lots of TACs gave "bad login" and needed to be reloaded.''  PGN]


Re: Linguistic decay (RISKS-4.4)

Matthew P Wiener <weemba@brahms.berkeley.edu>
Fri, 7 Nov 86 01:26:38 PST
There was a discussion in mod.comp-soc, when it was still a mailing list
last spring, on word processors => linguistic decay.  As someone who loves
language for its own sake, I find it depressing to contemplate.

ucbvax!brahms!weemba    Matthew P Wiener/UCB Math Dept/Berkeley CA 94720


Mechanical Aids to Writing

<Boebert@HI-MULTICS.ARPA>
Fri, 7 Nov 86 19:25 CST
I couldn't resist, after reading M. Minow's quoting of the redoubtable Burgess.

  Headline:  "Reporters Should Cultivate the Use of the Fountain Pen"

  "In a recent address delivered at Columbia University, Mr. Edward W.
  Townsend, newspaper and magazine writer and Congressman, expressed the
  opinion that it was a misfortune that the typewriter had come to be so
  generally used in newspaper rooms, because it made the translation of
  thought into copy somewhat too easy.  The view point is that the somewhat
  slower and more careful handwriting of any article or news item is better,
  clearer thought and is always better constructed when written with a
  fountain pen than when rambled off on a typewriter..."

This from the "Pen Prophet", the house organ of the Waterman pen company,
Volume XII, No. 1, June 1914.  So there, Red Smith, Ernie Pyle, and E. B.
White.

            [A well-known exponent of pens is Edsger W. Dijkstra, much of 
             whose EWD series is still written very carefully in pen.  PGN]
