The RISKS Digest
Volume 4 Issue 35

Saturday, 3rd January 1987

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Computer Gets Stage Fright
Chuck Youman
Still More on PhoneCards
PGN
Miscarriages Up in Women Exposed In Computer-Chip Process
Martin Minow
Across the Atlantic with Cast Iron
Earl Boebert
Heisenbugs — Two more examples
Maj. Doug Hardie
Risks Involved in Campus Network-building
Rich Kulawiec
Update on Swedish Vulnerability Board Report
Martin Minow
DES cracked?
Dave Platt
Info on RISKS (comp.risks)

Computer Gets Stage Fright

Chuck Youman <m14817@mitre.ARPA>
Fri, 02 Jan 87 10:14:36 -0500
The Washington Post reported on December 29th and 30th that the Sunday matinee
and evening performances of Les Miserables at the Kennedy Center Opera House
were cancelled due to a malfunction of a massive rotating stage that is used
in the production.  An estimated 4,600 theatergoers who had paid between
$22.50 and $40 for their tickets would get a refund or have their tickets
exchanged for another show.  (The show is sold out through the remainder of
its run, however.)  Some patrons were reported to be angry because they
thought they would be unable to get a refund for their parking ($4) in the
center's lot.  It was reported the next day, however, that the parking fees
would
also be refunded.  It was estimated that each cancelled show could result in
losses of up to $60,000.

The failure was reported to be in a computer that controls the turntable.
The turntable covers most of a 40-foot-wide stage and revolves both clockwise
and counterclockwise at various speeds.  When components in its control
circuitry are not working properly, it can take off at full speed.  It is
used at one point to hold two huge scenery pieces each weighing more than
three tons, not counting the cast members standing on it.  Because they are
computer controlled and so hefty, technicians were unable to arrange a safe
method of manually moving them around the stage.  (I'm not sure I would call
the automated method safe, however.)  The reported problem was a faulty
electronic circuit card that interfaces the computer with the turntable
drive mechanism.  The nearest replacement card was in Chicago.  It arrived
Monday and Monday's performance went on as scheduled.
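
One classic defense against this failure mode is an independent sanity check
between commanded and measured motion, so that a single faulty interface card
cannot silently drive the platform at full speed.  A minimal and entirely
hypothetical sketch in Python (nothing here reflects the actual Kennedy
Center controller; all names and limits are invented):

  import time

  MAX_SPEED = 1.0   # maximum safe turntable speed, rev/min -- invented limit
  MAX_ERROR = 0.2   # tolerated gap between commanded and measured speed

  def control_loop(read_command, read_tachometer, drive, emergency_stop):
      # Refuse to forward implausible commands, and halt the drive if the
      # measured speed diverges from the command (as it might if a faulty
      # interface card were feeding garbage to the drive mechanism).
      while True:
          commanded = read_command()
          if abs(commanded) > MAX_SPEED:
              emergency_stop("commanded speed out of range")
              return
          drive(commanded)
          if abs(read_tachometer() - commanded) > MAX_ERROR:
              emergency_stop("drive not tracking command")
              return
          time.sleep(0.05)    # check at 20 Hz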

Charles Youman (youman@mitre)

    [It is apparently not true that To the Victor, Hugo, Go the spoils!  PGN]


Still More on PhoneCards

Peter G. Neumann <Neumann@CSL.SRI.COM>
Wed 24 Dec 86 09:36:03-PST
I had a call from Colin Sex at British Telecom at 5PM Christmas Eve GMT.  He
stated that "The card itself is completely secure."  They indeed do a
READ-AFTER-WRITE check (along with some other checking), so that part of it
looks OK.  However, there are problems with physical damage to the laser
reader/writer.  In the case at hand, nail polish had been caked onto the
card, and gummed up the works.  But in such cases the unit is supposed
either to reject the card, or else keep the card if it cannot eject it --
and then shut down.  I think they are still vulnerable to some active-card
attacks, but on the whole they think they protect themselves well against
the man on the street.
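
For readers unfamiliar with the term, a READ-AFTER-WRITE check simply re-reads
each unit count written to the card and compares it with what was intended,
taking the fail-safe path on a mismatch.  A minimal sketch of that logic
(every name here is invented; this is not BT's implementation):

  class CardUnit:
      # Hypothetical card reader/writer logic, for illustration only.
      def __init__(self, device):
          self.device = device

      def decrement_units(self, new_value):
          # READ-AFTER-WRITE: write the reduced unit count, then read it
          # back.  On a mismatch (a gummed-up writer, say), eject the card
          # if possible, keep it otherwise, and shut down either way.
          self.device.write_units(new_value)
          if self.device.read_units() == new_value:
              return True
          if not self.device.eject():
              self.device.retain()
          raise SystemExit("unit out of service: read-after-write mismatch")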


Miscarriages Up in Women Exposed In Computer-Chip Process

Martin Minow, MSD A/D, THUNDR::MINOW <minow%bolt.DEC@decwrl.DEC.COM>
27-Dec-1986 2323
(For the record, this item does not represent the opinions of my employer.
Martin Minow)

Associated Press Wed 24-DEC-1986

Digital Miscarriages Study: 
Miscarriages Up In Women Exposed In Computer Chip Process

   HUDSON, Mass. (AP) - Significantly more miscarriages have been found
among women production workers at a semiconductor plant than among those not
exposed to processes used in making computer chips, a study has found.
   In one principal area of production, the level of miscarriages was twice
that of non-production workers, according to the University of Massachusetts'
School of Public Health study commissioned by Digital Equipment Corp.
   The findings, believed to be the first of their kind in the computer
industry, have broad implications for the computer chip industry, which employs
more than 55,000 U.S. production workers, most of them believed to be women.
   The study, which found no evidence of a wide range of other major health
disorders such as birth defects and infertility, surveyed 744 of Digital's
nearly 2,000 workers at the Hudson semiconductor plant. Of those studied,
294 were production-line workers and the rest were non-production workers.
   The study, based on the history of the workers at the plant for five
years, was designed to measure a wide range of possible health problems
among women and men.  In all, 471 women and 273 men were studied.
   Among the non-production workers, the study found that 18 percent of the
pregnancies resulted in miscarriages, similar to the general population.
   The incidence of miscarriages among production workers involved in what
is known as photolithography, however, was 29 percent. A variety of solvents
are used in the process, which involves printing circuits on computer chips.
   Among workers in a phase of production that uses acids in an etching
process, researchers found a miscarriage rate of 39 percent, twice that of
the control group.
   Digital said it immediately passed along the findings to its workers.
   ``We've kept our employees informed all along,'' spokesman Jeffrey Gibson
said Tuesday. He said Digital adopted a policy during the study of
encouraging pregnant production workers to seek transfers.
   As a further precaution, Gibson said Digital also is offering to transfer
any female production workers of child-bearing age to non-production work if
they have concerns about future pregnancy.
   Gibson said Digital decided to do a study after employees began noticing
increased cases of miscarriages among their colleagues.
   Digital and the researchers stressed that the link between production-line 
work and increased miscarriages was only a statistical one and that no causal 
relationship between the health problems and specific chemicals had been
established.
   The Semiconductor Industry Association, headquartered south of San
Francisco, said Digital sent it a summary of the findings and that the
information was passed along to 60 of its computer chip manufacturer members.
   ``The reaction (of manufacturers) was that the firms all felt an
obligation to communicate the information about the study to their
employees,'' said Shelia Sandow, association spokeswoman.
   The full study, conducted by Harris Pastides, an associate professor of
public health at the University of Massachusetts in Amherst, and Edward
Calabrese, a professor of toxicology, is still going through review before
publication in a medical journal.
   But Digital officials said they received a copy of the study last month,
and felt, along with its authors, a responsibility to release at least a
summary of the findings because of the health concerns.
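
A quick check of the reported rates against the 18 percent baseline among
non-production workers (a trivial computation, using only percentages quoted
in the article above):

  baseline = 0.18                                 # non-production workers
  rates = {"photolithography": 0.29, "acid etching": 0.39}

  for process, rate in rates.items():
      print(f"{process}: {rate / baseline:.1f}x baseline")
  # photolithography: 1.6x baseline
  # acid etching: 2.2x baseline

The 2.2 figure squares with the article's "twice that of the control group"
for the etching process.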


Across the Atlantic with Cast Iron

<Boebert@HI-MULTICS.ARPA>
Wed, 31 Dec 86 09:53 CST
I am appealing to RISKS readers because this is clearly the polymath's forum
...  I am collecting instances of generic pathologies in engineering project
management, such as cutting the budget for tools (example:  Brunel's ship
the Great Eastern, stranded on the banks of the Thames for months because
the money men would finance the ship but not the launching equipment.  This
was the Victorian equivalent of funding the software but cutting out the
debugger.)  In this vein, I recall seeing a classic case of Victorian
Vaporware, to wit, a proposed cast iron bridge over (I believe) the North
Atlantic.  This was in a book titled "Great Dreams of Victorian Engineers,"
or some such.  Anybody else recall this?  When I get the instances together
I will submit them to this list as an aid to separating risks which are
computer-specific from those which have been around since the dawn of
engineering.


Heisenbugs — Two more examples

"Maj. Doug Hardie" <Hardie@DOCKMASTER.ARPA>
Wed, 24 Dec 86 11:12 EST
I am reminded by the chain of discussions on Heisenbugs of two interesting
occurrences that I have been involved with.  The first occurred while I was
in college, with an IBM 1620 (the last one IBM maintained).  One day, while
the system was running student jobs and the operator was helping me prepare
for a microbiology test (a flunkout class), the disk drive stopped
functioning.  The entire system locked up and we investigated.  There was
nothing detectably wrong; the system just wouldn't make the disk work.  Since
it was under full IBM maintenance, we called them.  However, the only person
they had who had ever worked on that type of machine was a senior manager who
was out of the area on vacation, so they sent the next best.  This tech
arrived some time later and
began to try and figure out how it was supposed to work and what was going
on.  Since I was an EE, I "helped" him.  I learned a lot, he learned that
there was nothing wrong - it just didn't work.  After several hours, he
finally gave up and came and sat on a bench by me where I had returned to
microbiology.  All of a sudden, the disk heads jumped, the process picked up
as if nothing had happened, and the system was back in operation.  We tried
everything imaginable to make it fail again.  It continued to work fine for
several hours.  At that point, the tech packed up his tools, tore up his
time card, and left with the statement that he had never been there.

The second occurred a few years later on a military program that used a
militarized processor.  I had a contractor developing software and as usual
they were quite late.  So they took the step of scheduling work around the
clock.  One Monday morning a programmer came in complaining that he had lost
his weekend time.  He was scheduled from 1200 - 1300 on Sat.  Just as he got
on at 1200, the machine started slowing down.  The lights on the front panel
blinked slower and slower until they stopped.  Nothing he did made it start
running again, until 1300 when it started back up as if nothing ever
happened.  Needless to say, his management was not convinced.  However, when
someone else came in the next Monday with the same story, they decided to
investigate.  The next week they reported it to me with the same lack of
appreciation.  However, since the machine was GFE (government-furnished
equipment), we were responsible for
its proper operation.  So I got some higher-ups to contact the vendor to fix
it.  The vendor stated that such was absolutely not possible.  It took
several weeks to force them to send a tech out.  Sure enough when they did,
it performed exactly as advertised.  After 1300 when it came back up, the
tech started to leave without saying anything.  We cornered him by the front
door.  All he would say was, "We've seen this before - it will go away."  He
was right, it went away after a few more weeks.


Risks Involved in Campus Network-building

"Wombat" <rsk@j.cc.purdue.edu>
Wed, 24 Dec 86 09:44:10 EST
This little scenario popped into my mind after reading Chris Koenigsberg's
comments on plug-compatible plugs in RISKS 4-28.

Imagine a university campus utilizing local area networking in academic
buildings, dormitories, and other locations.  Now picture someone with a
reasonable aptitude for understanding the principles of LANs, and with
motivation to subvert the campus LAN...and whose dorm room contains a wall
socket marked "Southwest Campus Ethernet".

What can this person do, assuming that other people are using this same
physical network, and perhaps that this group of people extends beyond
those whose nodes are actually on the network to those whose nodes are
sending or receiving packets that are being routed over this network
(without their knowledge, assuming that they don't monitor packet routing)?

It seems quite plausible to me that such a person could tap into the
Ethernet and grab interesting packets (the person down the hall's report on
its way to a central printer; private correspondence between two residents;
perhaps a test in preparation being sent from a TA to a prof), and send
interesting packets (same as above, with slight modifications). 
                               [Lots of passwords are flowing as well...  PGN]

Further, it doesn't seem too unlikely that this scenario could be
extended; what could two or more people do in cooperation?  What
goodies could be pulled off the wire if one used a semi-smart program
(say, a keyword searcher) to examine traffic for interesting items?
Could an entire campus network be crippled by a few malicious users
with access to the hardware?  (I think the answer to this is "yes".)
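
To make that concrete: the "semi-smart program" need not be sophisticated.
Here is a minimal sketch of such a keyword searcher, written with the Python
scapy library purely for illustration (the keyword is invented, and this
should only ever be run on a network one is authorized to monitor):

  # Passive keyword searcher on a shared Ethernet -- illustration only.
  # Requires scapy and the privilege to open the interface promiscuously.
  from scapy.all import Raw, sniff

  KEYWORD = b"password"          # hypothetical string of interest

  def inspect(pkt):
      # Log any packet whose TCP payload contains the keyword.
      if pkt.haslayer(Raw) and KEYWORD in pkt[Raw].load:
          print(pkt.summary())

  sniff(filter="tcp", prn=inspect, store=False)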

The human consequences could be widespread and difficult to cope with; what
recourse does the student whose term paper disappeared off the network have?
How does one show that a student cheated on a test by gaining a copy the
night before via the network?  What obligation does the university have to
ensure the privacy of electronic mail over a network it designs, builds,
maintains, and supports for student use?  [Side question: could the campus
police monitor electronic mail for suspicious actions without a warrant?
After all, the senders of mail put their letters on a public (within the
university) network...]

My opinion is that the kind of widespread network-building that's going on
at some colleges and universities is premature; it's a nice idea to build an
electronic village on campus, but peaceful villages have a habit of getting
overrun by barbarian hordes from time to time.  I'm waiting for the day when
the news comes that someone at CMU or Brown or wherever has done something
very antisocial with the campus network.  (Note that I distinguish between
those academic networks where access to the hardware is not provided, or is
at least made difficult to obtain, and those which purposefully provide
hardware access in many places.)

Rich Kulawiec, rsk@j.cc.purdue.edu, j.cc.purdue.edu!rsk
Purdue University Computing Center


Update on Swedish Vulnerability Board Report (RISKS-3.85)

Chuck Youman <m14817@mitre.ARPA>
Fri, 02 Jan 87 15:56:10 -0500
In RISKS-3.85 I referred to an article that appeared in Signal magazine
on "Computers, Vulnerability, and Security in Sweden."  I have since
written to the author of that article, Thomas Osvald, and he sent me an
English summary of a report by the Swedish Vulnerability Board titled
"The Vulnerability of the Computerized Society:  Considerations and
Proposals."  The report was published in December 1979.  The complete
report is only available in Swedish.  If anyone is interested in obtaining
the complete report, I now have a mailing address for publications of the
Vulnerability Board (which no longer exists).

The vulnerability factors considered by the Board included:
 -Criminal acts
 -Misuse for political purposes
 -Acts of war
 -Registers [i.e., databases] containing information of a confidential nature
 -Functionally sensitive systems
 -Concentration [geographic and functional]
 -Integration and interdependence
 -Processing possibilities in conjunction with the accumulation of large
  quantities of data
 -Deficient education
 -Defective quality of hardware and software
 -Documentation
 -Emergency planning

The original article in Signal magazine mentioned a project by the Board that
addressed the vulnerability problems associated with the complexity of EDP
systems.  This particular study is not mentioned in the summary.  However,
Mr. Osvald also sent me a copy of a position paper he authored on the
subject titled "Systems Design and Data Security Strategy."  Some excerpts
from the paper follow:

 Whether we like it or not our society is rapidly becoming more complicated,
 not the least as a consequence of the extremely rapid development of
 information processing and data communication.  Our times are also 
 characterized by increasingly large scale and long range decisions and 
 effects.  Unfortunately, this development does not correspond to a similar
 progress in our human ability to make wise decisions.  It is therefore
 important that we recognize the limits of the human mind and our ability
 to understand and process complicated, long-range decision problems.
 If complexity is not understood and kept within reasonable limits we will not 
 be able to control developments and we will become slaves rather than masters
 of our information systems.

 What are the characteristics of excessively super-complex systems?  One
 important symptom is that even experts find it hard or impossible to 
 understand or comprehend the totality of such a system.  The inability
 to comprehend is not an absolute criterion that does or does not exist
 but rather a vague feeling - mainly of uncertainty.  This basically goes
 back to the well-known fact that the human mind cannot deal with or keep
 track of more than about seven objects, entities or concepts at a time.
 Above that number, errors in the understanding and problem solving process
 increase disproportionately.

 Why are such systems designed?  I can think of three possible reasons.  The
 first is a strategy error of systems development that may be called
 "revolutionary change" or "giant step approach."  During the seventies some
 large, administrative government systems were re-designed in order to take
 advantage of new data processing and communication technology.  At the same
 time, as part of a huge "total" project, organization and administration
 were redesigned - all in one giant revolutionary change.  A better and more
 successful approach would have been - as it always is - to follow a 
 step-by-step master plan where each step is based on previous experience and 
 available resources.

 The second reason is the sometimes uncontrolled, almost cancer-like growth
 of large administrative systems, without a master plan and without clear
 lines of authority and responsibility, in efforts to integrate and to 
 exploit common data.

 The third reason is the inability of systems designers to identify the 
 problems of system complexity and our own inability to handle complex
 systems and to set a limit to growth and integration.

Charles Youman (youman@mitre.arpa)


DES cracked?

Dave Platt <dplatt@teknowledge-vaxc.ARPA>
Fri, 2 Jan 87 17:51:56 pst
There's an interesting article in the 1/87 issue of Radio-Electronics which
states that the Videocypher II television-scrambling system has been
cracked.  As Videocypher depends in some part on the DES cyphering
algorithm, this may have some major implications for computer-system
security (if it's true).

According to the article, "perhaps as many as several dozen persons or
groups have, independent of one another, cracked Videocypher II and we
have seen systems in operation.  Their problem now concerns what they
should do with their knowledge."

As I recall (and I may well be wrong), M/A-Com's Videocypher II system uses
two different scrambling methods: the video signal is passed through a
sync-inverter (or some similar analog-waveform-distorter), while the audio
is digitized and passed through a DES encryption.  Information needed to
decrypt the digital audio is passed to the subscriber's decoder box in one
of the "reserved" video lines.  The actual decryption key is not
transmitted; instead, an encyphered key (which itself uses the box's
"subscriber number" as a key) is transmitted, decrypted by the decoder box,
and used to decrypt the audio signal.
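
If that description is right, the key delivery amounts to simple key
wrapping.  A sketch of the data flow, written with a present-day DES library
(this is NOT M/A-Com's actual implementation; all key material is invented,
and DES keys are taken to be eight bytes):

  # Sketch of the key-delivery scheme described above, for illustration.
  from Crypto.Cipher import DES    # pycryptodome

  subscriber_key = b"SUB#0042"     # derived from the box's subscriber number
  session_key    = b"AUDIOKEY"     # the key that actually encrypts the audio

  # Head end: transmit the session key encyphered under the subscriber key.
  wrapped = DES.new(subscriber_key, DES.MODE_ECB).encrypt(session_key)

  # Decoder box: recover the session key, then decrypt the audio with it.
  recovered = DES.new(subscriber_key, DES.MODE_ECB).decrypt(wrapped)
  assert recovered == session_key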

I've heard that it's not too difficult (in theory and in practice) to
clean up the video signal, but that un-DES'ing the audio is supposed
to be one of those "unfeasibly-difficult" problems.

I can think of three ways in which the Videocypher II system might be
"cracked".  Two of these ways don't actually involve "breaking" DES,
and thus aren't all that interesting;  the third way does.

Way #1:  someone has found a way of assigning a different "subscriber
number" to an otherwise-legitimate M/A-Com decoder, and has identified
one or more subscriber numbers that are valid for many (most?)
broadcasts.  They might even have found a "reserved" number, or series
of numbers, that are always authorized to receive all broadcasts.

This is a rather minimal "crack"; the satellite companies could defeat it by
performing a recall of all subscriber boxes, and/or by terminating any
reserved subscriber numbers that have "view all" access.

Way #2:  someone has found a way of altering a decoder's subscriber
number, and has implemented a high-speed "search for valid numbers"
circuit.  This could be done (in theory) by stepping through the
complete set of subscriber numbers, and looking for one that would
begin successfully decoding audio within a few seconds.  It should be
pretty easy to distinguish decoded audio from undecoded...
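
In outline, the search might look like this (a sketch only; try_decode and
sounds_like_audio are stand-ins for the decoder hardware and for whatever
signal-quality heuristic one uses):

  def find_valid_number(max_number, try_decode, sounds_like_audio):
      # Step through candidate subscriber numbers until one yields
      # intelligible audio within a couple of seconds.
      for number in range(max_number):
          samples = try_decode(subscriber_number=number, seconds=2)
          if sounds_like_audio(samples):
              return number
      return None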

This way would be harder for the satellite companies to defeat;
they'd have to spread the set of assigned subscriber numbers out over
a larger range, so that the search for a usable number would take an
unacceptable amount of time.

Way #3: someone's actually found a way of identifying the key of a DES
transmission, with (or possibly without) the unscrambled "plaintext"
audio as a starting point.

This I find very difficult to believe... it would be difficult enough for
one person or group to do, let alone "perhaps as many as several dozen...
independent" groups.  Naturally, this possibility has the most severe
implications for computer-, organizational- and national security.
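
Some back-of-the-envelope arithmetic supports that skepticism: DES has a
56-bit keyspace, so even at a (for 1987, wildly optimistic) million trial
decryptions per second, an exhaustive search takes millennia:

  keys = 2 ** 56                        # size of the DES keyspace
  rate = 1_000_000                      # trial keys per second (optimistic)
  years = keys / rate / (365 * 24 * 3600)
  print(round(years))                   # about 2,285 years for a full sweep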

I suspect that the reported "cracking" of Videocypher II is a case (or
more) of Way #2, and thus doesn't have immediate implications for
the computer industry (I think).

Has anyone out there heard of any other evidence that DES itself has
been cracked?

Disclaimer: I don't own a TVRO (or even get cable TV), and have no financial
interest in anything related to the TVRO or cable industries.

Second disclaimer: as the Radio-Electronics article points out, it's
horrendously illegal to own or use any piece of equipment that "tampers with
DES or attempts to profit from decoding it" (the article suggests that such
action would be legally equivalent to treason, as DES is/may be under the
protection of the NSA until 4/22/87).  I don't know where such devices might
be purchased.
