The RISKS Digest
Volume 4 Issue 37

Wednesday, 7th January 1987

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Re: vulnerability of campus LANs
Ted Lee
David Fetrow
Re: DES cracked?
Henry Spencer
Cellular risks
from Geoff Goodfellow via PGN
"Letters From a Deadman"
Rodney Hoffman
Stock Market Volatility
Randall Davis
Engineering ethics
Dick Karpinski
Computerized Discrimination
Ken Laws
Info on RISKS (comp.risks)

Re: vulnerability of campus LANs

<TMPLee@DOCKMASTER.ARPA>
Wed, 7 Jan 87 00:03 EST
Unless they're encrypted, of course they'll be busted wide open.  I can
remember in the late 60's the very first thing science or engineering
students did at MIT and Harvard once they found out about the telephone
tie lines was to see how far they could get (legally).  (You see, from
Harvard you could get to MIT, from MIT to Mitre Bedford, from there to
Washington, ...)  (What got the freshmen all excited was strange numbers
that only answered "extension 55" or just "Yes?")  (And I'm not talking
about the blue-boxers either, who were big at the same time.)  The
mentality certainly hasn't changed ...


Risks Involved in Campus Network-building

David Fetrow <fetrow@entropy.ms.washington.edu>
Wed, 7 Jan 87 01:09:58 PST
  From: "Wombat" <rsk@j.cc.purdue.edu>
  > Imagine a university campus utilizing local area networking in academic
  > buildings, dormitories, and other locations.  Now picture someone with a
  > reasonable aptitude for understanding the principles of LANs, and with
  > motivation to subvert the campus LAN...and whose dorm room contains a wall
  > socket marked "Southwest Campus Ethernet".

 This particular scenario is partly avoidable by segmenting the network:
using bridges to isolate sections of the cable so that packets that don't
need to show up on the "dorm" cable, don't.  (The bridges must be secure,
of course.)  This at least removes the temptation of ultra-casual attacks.
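
To make the filtering idea concrete, here is a minimal sketch of the
decision a learning bridge makes (a toy model, not any particular vendor's
product; the segment names and host addresses are made up):

  # A toy "learning bridge": it watches source addresses to learn which
  # port each host lives on, and refuses to copy a frame onto a segment
  # that doesn't need it.  Hosts and segment names are made up.

  class LearningBridge:
      def __init__(self, ports):
          self.ports = ports        # e.g., ["backbone", "dorm"]
          self.table = {}           # host address -> port it was seen on

      def frame_arrives(self, in_port, src, dst):
          self.table[src] = in_port         # learn where the sender lives
          out_port = self.table.get(dst)
          if out_port == in_port:
              return []                     # same segment: filter the frame
          if out_port is None:
              # unknown destination: flood to every other port
              return [p for p in self.ports if p != in_port]
          return [out_port]                 # forward only where needed

  bridge = LearningBridge(["backbone", "dorm"])
  bridge.frame_arrives("backbone", src="vax-a", dst="vax-b")  # learns vax-a
  bridge.frame_arrives("backbone", src="vax-b", dst="vax-a")  # learns vax-b
  # Once both hosts are known to be on the backbone, their traffic never
  # reaches the dorm segment at all:
  print(bridge.frame_arrives("backbone", src="vax-a", dst="vax-b"))  # []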

 Networking the campus may be "premature", in the sense that we are courting
a certain amount of disaster and we know it.  We also know we need a lot more
bandwidth than RS-232 can provide.  In this case perhaps the right strategy
isn't so much trying to prevent disaster as preparing for it.  We've been
here before (the easily cracked operating systems of the mid-'70s).  The way
relatively secure systems came about was by learning how their non-secure
predecessors were attacked and fixing the holes just a little faster than
90% of the attackers found them.

-Dave "Very Worried" Fetrow-      


Re: DES cracked?

<hplabs!pyramid!utzoo!henry@ucbvax.Berkeley.EDU>
Tue, 6 Jan 87 16:51:35 pst
Rumor hath it that the Videocypher II cracking exploited defects in the
key-management scheme rather than a successful cryptanalysis of full DES.

> Second disclaimer: as the Radio-Electronics article points out, it's
> horrendously illegal to own or use any piece of equipment that "tampers with
> DES or attempts to profit from decoding it" (the article suggests that such
> action would be legally equivalent to treason, as DES is/may be under the
> protection of the NSA until 4/22/87)...

As has been discussed at some length in sci.crypt, this is utter nonsense.
There is nothing illegal about breaking DES in your back yard, although there
are various possible illegalities involved in *using* a DES-breaker for
purposes like watching encrypted TV.  DES is not under NSA's protection, and
never has been.  The R-E article notwithstanding, the US government does not
use DES for its own communications.  And the claim of treason is ludicrous:
treason requires open aid to the US's enemies, including at least one overt
act with multiple eyewitnesses.  Being convicted of treason for anything less
is literally unconstitutional -- the US Constitution itself defines treason
to require these things.  M/A-Com is just trying to scare people.

                Henry Spencer @ U of Toronto Zoology
                {allegra,ihnp4,decvax,pyramid}!utzoo!henry


Cellular risks

<Neumann@CSL.SRI.COM>
6 Jan 1987 13:37-PST
A long time ago Geoff Goodfellow reported on the ease with which one could
spoof the cellular billing.  Here is a more recent comment from him.
(GEOFF@CSL.SRI.COM)

  Fraud and spoofing seem to be on the rise in cellular, 
  with one carrier reportedly suffering at the rate of $180K/mo.


"Letters From a Deadman"

<Hoffman.es@Xerox.COM>
7 Jan 87 12:49:05 PST (Wednesday)
According to an article by Howard Rosenberg in today's 'Los Angeles
Times', "Letters From a Deadman" is a Soviet-made movie about a nuclear
holocaust triggered by a critical computer error.  Dubbed in English,
the 85-minute film is scheduled to air Feb. 12 on WTBS, Ted Turner's
Atlanta-based cable super-station.                   From the article:

  The movie's central character is a man named Larsen, who is initially
  seen writing to his dead son from an underground bunker.  Larsen is the
  scientist who developed the computers whose error triggered a
  devastating missile exchange that destroyed his family and country.
  Whatever country that is.

  "It's set in Western Europe, " said Martin Killeen, the WTBS producer on
  the movie project.  "It could just as easily be Eastern Europe....
  Having it set in a Western country, I think, allows the film makers more
  freedom.  Obviously, in the Soviet mind, this [making a mistake that
  causes nuclear holocaust] is not something they would do.  I just can't
  see them doing a story about a computer error if it were in the Soviet
  Union."

-- Rodney Hoffman


Stock Market Volatility

Randall Davis <DAVIS%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Wed 7 Jan 87 12:23-EST
Add to the risks of computers the danger of wider and faster dissemination of
misinformation (or at least incomplete information): several postings in the
last few months have considered whether computerized stock trading might be
causing the wild volatility seen in the market recently.  But no one seems to
have asked an important question: was there in fact any markedly higher
volatility?  The answer may in fact be no.

The December 86 issue of Money has an interesting 1-page article with a graph
of stock market volatility, measured as "annualized monthly standard deviation
of the S&P 500", and there's the key issue: how to measure it.  On their
standard, the highest period is a clear peak around 1937, with lesser peaks
around '62, '70, and '74.  Since programmed trading began (in 1982, despite
all the newspaper articles that make it appear to have been invented
yesterday), volatility has in fact DIMINISHED and has only recently begun to
head upward again toward the level of the (smaller) '62 and '70 peaks.
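
For concreteness, here is a minimal sketch of that metric as I read the
phrase -- Money doesn't spell out its method, so the details (and the index
values) below are my own assumptions, meant only to show the arithmetic:

  # Annualized monthly standard deviation of an index: one plausible
  # reading of Money's metric.  The index levels are invented numbers,
  # used only to show the arithmetic.
  import math

  def annualized_monthly_volatility(monthly_closes):
      returns = [(b - a) / a
                 for a, b in zip(monthly_closes, monthly_closes[1:])]
      mean = sum(returns) / len(returns)
      variance = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
      # scale the monthly standard deviation up to a yearly figure
      return math.sqrt(variance) * math.sqrt(12)

  closes = [242.2, 238.9, 245.1, 240.0, 251.8, 247.3, 249.0,
            236.1, 243.7, 249.2, 245.9, 252.4, 248.6]
  vol = annualized_monthly_volatility(closes)
  print("annualized volatility: %.1f%%" % (100 * vol))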

Their interesting claim is that with programmed trading
    "... there is a risk that an innocuous market downturn may be
    greatly magnified.  So far, however, programmed trading has
    proved to have few lingering effects on stocks.  It can
    compress a market movement that would otherwise take a day --
    or even a week -- into a period as short as 10 minutes.  But if
    a market move would not otherwise have occurred, it is likely
    to reverse itself within a few days.... while the market's
    volatility is a bit higher this year than it has been in the
    past three years, it remains quite normal by historical
    standards."

Note in particular the last seven words.

I am neither economist enough nor statistician enough to judge whether their
metric is appropriate, but there are several important overall issues here:

1) The issue requires non-trivial economic and statistical sophistication.
The half-assed analyses widely quoted are appallingly naive in part because
they never even question whether the issue may be deeper than watching the
daily averages and seeing meaningless records set.

2) The media in general want NEWS, something dramatic that has never happened
in the history of the universe and that may in the next 18 seconds lead to the
collapse of civilization.  The story is even better if it involves something
that a large number of people find inherently threatening, and technology --
particularly computer-related -- is a favorite candidate (nuclear energy, gene
splicing, and various diseases rank up there pretty high too).  All this, plus
the press of time to get to press, leads to two serious faults:

a) not asking the obvious questions: "Has this happened before?  Is it really
unusual?"  Often the answers are yes, and no, respectively.  But what a boring
story that would make.

b) not questioning the premises: the market drop of 86 points on September 11
was the LARGEST IN HISTORY, omigod!  Yes, but it was only the third largest in
terms of percentage.  And what's the right measure anyway?  Absolute points,
percentages?  And why 1 day?  What's sacred about the market's performance
over a 1-day trading cycle?  Why not a week or a month or a year or a business
cycle?  Why doesn't anyone worry about the biggest 1-hour drop on record or
the biggest 10 minute decline?  What is the relevant metric?  Is the alleged
phenomenon even real?
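
A small calculation makes the absolute-versus-percentage point concrete;
the index levels below are invented, since only the arithmetic matters:

  # Points lost versus percentage lost: the same point drop looks very
  # different depending on the level the index started from.  The index
  # levels are invented, chosen only to show the arithmetic.
  drops = [
      ("low-index day",   300.0, 38.0),   # (label, previous close, points)
      ("high-index day", 1900.0, 86.0),
  ]
  for label, prev_close, points in drops:
      pct = 100.0 * points / prev_close
      print("%-14s  %5.1f points  %4.1f%%" % (label, points, pct))
  # The second day loses more than twice as many points as the first,
  # yet is a far smaller move in percentage terms.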

3) Our agenda in RISKS should be to debunk, not contribute to misinformation.
Where our technical skills are relevant, we can do that particularly well.
Where they are not (as in the need here for economic and statistical savvy),
we should tread quite carefully.  We too need to remember to question the
assumptions.

4) There's risk in incorrect and incomplete information; there's
computer-related risk when that information is widely disseminated
electronically: 
    the British telephone billing scam that apparently wasn't; 
    the automated bibliographic retrieval system that required keywords 
    in the article title (only it didn't);
    more recently the illegal cracking of DES that wasn't illegal and
    didn't happen;
    and perhaps the stock market volatility that isn't.  
We should be particularly aware of this misinformation risk since it is 
entirely under our control.


Engineering ethics

Dick Karpinski <dick@cca.ucsf.edu>
Wed, 7 Jan 87 17:43:36 PST
Cicero's rule notwithstanding, there are many cases of opposition twixt
risks of doing versus risks of not doing.  I recall, for example, that our
H.J. Kaiser offered to build troop carriers rather quickly using rivets
instead of welded seams.  I'm too young to remember whether his offer was
accepted, but it seems clear that he was not denounced for being prepared to
make less seaworthy ships, which therefore increased the risks of loss of
life during troop transport.  The alternative was increased risks of loss of
life at the front lines of WWII.

I am prepared to accept a dollar value on human life in order to discuss
these decisions in reasonable ways.  Many, even most, people are not so
prepared and would consider me to be a barbarian beast on just those
grounds.  Perhaps it will be necessary to do some heavy duty education (of
which side?) before consensus can be reached.  Incidentally, my guess is
that currently, we should value one human life somewhere between $100k and
$1m.  The risks of failing to do so are in the nature of making the
necessary choices on arbitrary or irrational grounds, or in hiding the
decision entirely from view (and finding scapegoats as needed).
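
To show the kind of explicit comparison I have in mind, here is a small
sketch in the spirit of the shipbuilding example; every figure in it is
hypothetical:

  # A dollar value on life turns "is this safety measure worth it?"
  # into an explicit comparison.  Every figure here is hypothetical.
  VALUE_OF_LIFE = 500_000.0   # somewhere in the $100K-$1M range above

  def worth_doing(cost_of_measure, expected_lives_saved):
      # True if the cost falls below the value of the lives expected
      # to be saved, under the (debatable) single-number valuation.
      return cost_of_measure < expected_lives_saved * VALUE_OF_LIFE

  # Hypothetical: welded seams add $2M to a convoy's cost and are
  # expected to save 3 lives by reducing losses at sea.
  print(worth_doing(cost_of_measure=2_000_000, expected_lives_saved=3))
  # False at $500K per life; at $1M per life it becomes True.  The
  # valuation, not the arithmetic, is where the decision is really
  # being made -- which is the point of making it explicit.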

Dick Karpinski    Manager of Unix Services, UCSF Computer Center
UUCP: ...!ucbvax!ucsfcgl!cca.ucsf!dick   (415) 476-4529 (11-7)
BITNET: dick@ucsfcca   Compuserve: 70215,1277  Telemail: RKarpinski
USPS: U-76 UCSF, San Francisco, CA 94143-0704


Computerized Discrimination

Ken Laws <LAWS@SRI-IU.ARPA>
Wed 7 Jan 87 15:54:13-PST
I just caught up with the Risks discussion and noticed two messages on
computerized discrimination against women and blacks applying to a medical
school.  Randall Davis made the implicit assumption that the discrimination
consisted of a rule subtracting some number of points for sex and race,
and questioned whether the programmer shouldn't have blown the whistle.

I think it much more likely that the decision function was a regression
equation that happened to include coefficients combining sex and
race with other predictor variables.  The programmer -- or statistician,
probably -- would have done this out of carelessness or simply to obtain
the best possible fit to the admissions decisions in the database.  The
school administration would have accepted the formula as valid, probably
without even examining it, if it correctly classified the past applicants
and performed reasonably on the new ones.  I'm not too surprised that
no one paid attention to the sign or magnitude of the coefficients.

So much for the mechanism of this computer (or statistical) risk.  Now
I'd like to put in a few words in defense of the statistical approach.

Suppose you had to screen equal numbers of male and female applicants
and you wanted to admit them equally.  Suppose further that women tended
to have higher verbal scores.  If you used only these scores, too many
women would be admitted.  It would be necessary for you to balance the
high scores, either by subtracting something for being female or by
boosting the coefficient for some male-dominated variable (e.g., math
scores).  This type of twiddling is exactly what a regression program
does.  It selects whichever adjustment (or combination of adjustments)
gives the best fit.  The program could produce exactly the same
results, or discrimination, even if you forced it to use >>positive<<
coefficients for female and black codes.
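
To make the mechanism concrete, here is a small synthetic sketch (the data,
variable names, and coefficients are all invented; I make no claim about the
actual school's formula).  A least-squares fit to biased past decisions
picks up the bias, and if the sex variable is dropped, much of the same
effect simply migrates into the weights on whatever correlated scores
remain:

  # Synthetic illustration: a regression fit to biased past decisions
  # encodes the bias, whether or not a sex variable appears explicitly.
  # All data are invented.
  import numpy as np

  rng = np.random.default_rng(0)
  n = 2000
  female = rng.integers(0, 2, n)                   # 0 = male, 1 = female
  verbal = 60 + 8 * female + rng.normal(0, 10, n)  # women higher on verbal
  quant  = 65 - 5 * female + rng.normal(0, 10, n)  # math-type score; men higher

  # Past (biased) committee decisions: a flat penalty for being female.
  score = 0.5 * verbal + 0.5 * quant - 6 * female + rng.normal(0, 5, n)
  admitted = (score > 62).astype(float)

  def fit(columns):
      X = np.column_stack(columns + [np.ones(n)])  # add an intercept
      coef, *_ = np.linalg.lstsq(X, admitted, rcond=None)
      return np.round(coef, 3)

  print("with sex variable:   ", fit([verbal, quant, female]))
  print("without sex variable:", fit([verbal, quant]))
  # In the first fit the 'female' coefficient comes out negative; in the
  # second, the verbal and quantitative weights shift so that the formula
  # still tends to disfavor the applicants the old decisions disfavored.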

I'm not suggesting that the school's formula was a good one.  They
should have ignored sex and race unless they intended to set quotas.
By matching a database of past decisions they were undoubtedly
freezing any biases that had existed in the past; perhaps the formula
recorded these biases accurately.

I am suggesting that the individual coefficients in a regression
formula have little meaning unless you consider all of the
intercorrelations and do a proper sensitivity analysis.
The article said that this school had a good admissions record, so
people shouldn't be hasty in putting them down.  Let he who fully
understands his own database cast the first stone.

Also: statistical tools are powerful in the right hands, dangerous
in the wrong ones.  Don't assume that you can do a regression just
because your micro can do one.  If your data is worth being analyzed,
it is probably worth being analyzed by a professional.  And if you
really want good results, work with the professional from the start
instead of collecting the data and mailing it in for an analysis.

                    — Ken Laws
