The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 6 Issue 69

Monday 25 April 1988

Contents

o Social INsecurity
Kenneth R. Jongsma
o Risks in momentum
Robert Adams
o BIX Ad (Risks of US Mail)
Henry Mensch
o At the tone, leave your message at your own risk
Mark Mandel
o A shortie on color blindness
Eugene Miya
o Suicidal bandwagon
Geraint Jones
o YAVR (Yet Another Virus Report) -- "Scores"
Fred Baube
o Requests for advice to the U.S. Congress on viruses
Herb Lin
o National Policy on Controlled Access Protection
Chris McDonald
o Re: Accountability
Henry Spencer
Jon Jacky
o Searching for interesting benchmark stories
Eugene Miya
o Info on RISKS (comp.risks)

Social INsecurity

<portal!cup.portal!Kenneth_R_Jongsma@Sun.COM>
Thu Apr 21 17:29:16 1988
The following is excerpted from an article in the April 11 issue of Business
Week entitled "Social Security's Big Surplus Was Just a Mirage".

Only a few weeks ago, Social Security experts were afraid Congress might use
the mounting surplus of the retirement system's trust fund to cut payroll taxes
or raise benefits...  [Some discussion on how new projections say there won't
be a surplus.]  In addition, Social Security Actuaries have found that a flawed
computer program overstated projected receipts.  [Followed by discussion on
what needs to be done.]

No additional detail was provided on the nature of the computer flaw.


Risks in momentum

<demo%somewhere%littlei.UUCP%reed.UUCP%reed%tektronix.tek.com@RELAY.CS.NET>
Fri Apr 22 11:52:52 1988
In RISKS 6.65, Rob Horn <BBN!ulowell!infinet!rhorn@husc6.harvard.edu> writes:
> Much more important, but much harder, is understanding the human decision
> and organizational structures that lead to this momentum.  How do you
> destroy this overwhelming force to completion without destroying the will to
> succeed?

    I have several times considered writing one of those single-topic books
entitled "Momentum Management".  Within any business organization, one
must both manage the momentum of the group (reactive) and direct the
group by creating and directing its momentum (proactive).

    "Momentum management" would not only be useful in business.  We didn't
get Michael Jackson T-shirts, coffee mugs, and TV trays just because
he was a good performer.  Everyone jumped on the bandwagon and the
momentum increased.  This happens in everything: art, music, UNIX (tm),
the X Window System (tm), the space shuttle, ...

BIX Ad (Risks of US Mail)

Henry Mensch <henry@GARP.MIT.EDU>
Mon, 25 Apr 88 02:45:23 EDT
If I'm correct, there is no risk here, since any unsolicited
merchandise you receive via the US Mail can be considered a
gift.  Of course, this won't stop them from trying to collect :(

Henry Mensch  E40-379 MIT, Cambridge, MA
                      {ames,cca,decvax,rochester,harvard,mit-eddie}!garp!henry


At the tone, leave your message at your own risk

Mark Mandel <Mandel@BCO-MULTICS.ARPA>
Mon, 25 Apr 88 09:02 EDT
Last week I called someone with an important message, to call a third
person.  He wasn't at his desk and a secretary took it, along with my name
and number.  A few moments later she called me back and said, "I'm sorry,
but I was typing your message in, and when I hit ENTER it erased the name
and number of the man he was supposed to call.  Would you give them to me
again, please?"  Obviously she was using a computer-based message system, or
at least a word processor.  What would she have done if she'd lost MY name
and phone number as well?
                             [Have you ever not had a call returned?  PGN]


A shortie on color blindness

Eugene Miya <eugene@ames-nas.arpa>
Mon, 18 Apr 88 13:38:17 PDT
On color blindness: first, I am not color blind, but an interesting prank
was pulled years ago at Caltech.  There is an infamous signal at a
pedestrian-only crossing on California.  Why wait?  The undergrads reversed
the red and green filters.  It held traffic up for a long time, while
pedestrians crossed freely.  Note: this would not have worked on color-blind
drivers (mostly male), who go by light position.

%T The Legends of Caltech
%A Available on request, my copy is at home.
%I
%D

--eugene miya


Suicidal bandwagon

Geraint Jones <geraint%prg.oxford.ac.uk@NSS.Cs.Ucl.AC.UK>
Mon, 25 Apr 88 21:20:16 BST
PGN (RISKS 6.67)  has picked up on another couple of deaths in Britain.  So now
you know that we are mortal.
    Does anyone happen to know how many people in Britain do (slightly
defence-related) work with computers, and how likely someone between twenty
and fifty in that sort of job is to die a violent death?  I do not know the
figures, but I cannot help feeling that the only thing that is obviously
significant about these deaths is that there has been a spate of press
reports about them.
    There is a `programmed trading' effect in newspaper stories too, or
hasn't anyone else noticed that that which is `news' tends to be that which
is like what was news yesterday.
                                                                             gj

[ btw, for the benefit of the San Francisco Chronicle, the only thing that is
   `ultrasecret' about AERE Harwell is which buses one must catch to find it. ]

      [Someone else commented to the effect that the number 10 was probably 
      about average...  What made the first 8 strange was that almost all 
      involved people related to one set of projects and one company, within
      a short period of time, and were described in the press as potentially
      simulated suicides.  Given the supposed secrecy of the projects, it
      could be difficult to get much in the way of real details.  Sure,
      someone is indeed trying to sell newspapers, and this story is certainly
      grist for the would-be conspiracy theorists.  I thought it might be 
      worth noting here as a follow-up.  PGN]


Requests for advice to the U.S. Congress on viruses

<LIN@XX.LCS.MIT.EDU>
Mon, 25 Apr 1988 20:16 EDT
A part of the Defense Authorization Bill for FY 1989 is likely to direct the
Defense Department to report to the Congress on what it has done and plans to
do in order to cope with viruses in computer systems belonging to or used by
the DoD.

I am the Congressional staff person assigned to work this issue for the House
Armed Services Committee.  What should I insist that the report cover?

Herb Lin        e-mail LIN@XX.LCS.MIT.EDU      phone (202) 225-7740

House Armed Services Committee, 2120 Rayburn House Office Building,
Washington DC  20515

All replies will be kept in confidence.

     [Herb, I hope the identities of the replies will be kept in confidence,
     but not the replies themselves!  And I hope that it will cover Trojan
     horses and flawed operating systems, not just viruses.  Actually, the
     National Computer Security Center's Orange Book does provide some help.

     RISKS readers, please respond to Herb.  I would expect that he might 
     wish to anonymize the replies and get some feedback from you all.  PGN]


YAVR (Yet Another Virus Report) -- "Scores"

Fred Baube <fbaube@note.nsf.gov>
Mon, 18 Apr 88 16:26:40 -0500
"New `Virus' Infects NASA Macintoshes"
Washington Post, Mon 18 Apr 88, excerpted without permission

This reports a new virus at NASA offices in DC and other locations around
the country.  Apple Computer and the federales are trying "to track down the
virus' creator".

This one is called "Scores" and has not erased any data, but can cause
"malfunctions in printing and accessing files", "difficulty in running
Macintoshes' drawing program", and frequent crashes.

"The Scores virus can be detected by the altered symbols [in] Scrapbook and
Note Pad. Instead of the Macintosh logo, the user would see a symbol that
looks like a dog-eared piece of paper.  Two days after the virus is
transmitted, it is activated and begins to randomly infect applications .."

EDS saw the same virus a few weeks ago but isolated and eradicated it.
"Like most major corporations, EDS is reticent about discussing its ways of
fighting these viruses for fear that the creators will only modify the
program to avoid detection."
                                 [Sorry this is a week old.  
                                 It slipped through the crack.  PGN]


National Policy on Controlled Access Protection

Chris McDonald STEWS-SD 678-2814 <cmcdonal@wsmr10.ARPA>
Mon, 18 Apr 88 8:41:20 MST
I just received a copy of NTISSP No. 200, issued 15 July 1987--our pony express
takes a long time to get to New Mexico.  The policy applies to executive branch
agencies and departments of the Federal Government and their contractors who
process classified or sensitive unclassified information in automated
information systems.

Essentially the policy states:  "All automated information systems which are
accessed by more than one user, when those users do not have the same
authorization to use all of the classified or sensitive unclassified
information processed or maintained by the automated information system, shall
provide automated Controlled Access Protection for all classified and sensitive
unclassified information.  This minimum level of protection shall be provided
within five years of the promulgation of this policy."  The policy then defines
"Controlled Access Protection" as equivalent to the C2 level of protection
defined in the "Trusted Computer System Evaluation Criteria" or Orange Book.

Since I received the NTISSP after the passage of the Computer Security Act of
1987 (HR-145), I was wondering if the application of the NTISSP to
"unclassified systems" has been deferred or whether we in DoD are to implement
the policy as stated.

Thanks, Chris                                       White Sands Missile Range


Re: Accountability

<mnetor!utzoo!henry@uunet.UU.NET>
Wed, 20 Apr 88 10:45:56 EDT
> ... more indicative of a social failure than a true RISK ... because it's
> about the failure of a chain of command to control the situation.

I would diagnose it differently, unless you mean this in the broadest possible
sense.  The problem is not that the people on top are not properly in charge;
the problem is that the people on top do not *WANT* to be held responsible
for results (or lack thereof).  The more complex the organization, the
easier it is to point fingers at someone (anyone) else, until responsibility
is so diffused that nobody is ever really to blame when something goes wrong.

Particularly in that sort of setup, it is important to supply incentives
for doing it right that affect the whole organization rather than specific
individuals.  (Note that I am addressing pragmatic tactics here, not right
versus wrong.  I believe very strongly in individual responsibility, but
when dealing with, say, Morton Thiokol, it's not an easy notion to enforce.)
Major reductions in cash flow tend to get everyone's attention.

> -That cash is the only effective incentive for producing results is the 
> ultimate disaster of our times...

While I agree that it's an undesirable situation, I feel compelled to point
out that it's not a problem of "our times"; historically, life has always
been cheap.  Society has, on the whole, become considerably *more* humane
in recent times.

Henry Spencer @ U of Toronto Zoology    {ihnp4,decvax,uunet!mnetor}!utzoo!henry


re: Accountability

Jon Jacky <jon@june.cs.washington.edu>
Wed, 20 Apr 88 13:29:51 PDT
Several observers have suggested that something about computers - maybe
the way they are employed in organizations, maybe something intrinsic in
the way they interact with people's thoughts and feelings - tends to
diffuse accountability and makes people feel less responsible for the
consequences of their actions.  This view is expressed most eloquently
by Joseph Weizenbaum in his book COMPUTER POWER AND HUMAN REASON, WH Freeman,
1976.  In an interview with Marion Long in the LA TIMES' WEST magazine
supplement, (Jan 19, 1986, p. 4) Weizenbaum said,

"The dependence on computers is merely the most recent - and most extreme -
example of how man relies on technology in order to escape the burden of
acting as an independent agent; it helps him avoid the task of giving
meaning to his life, of deciding and pursuing what is truly valuable."

- Jon Jacky, University of Washington


Searching for interesting benchmark stories (RISKS of benchmarking)

Eugene Miya <eugene@ames-nas.arpa>
Fri, 22 Apr 88 11:54:58 PDT
I just saw Tom Lane's posting on benchmarking [RISKS-6.66], and it caught me
by surprise.

When hardware is delivered, we (users) expect it to run, and we also expect it
to run well.  The problem is that when something runs badly there is a lot of
finger-pointing and a tendency to "kill the bearer of bad news."  I present
two examples.

We had a supercomputer here for a while (now at another site) that is one
of those "vector architectures": supposed to run fast on vectors.  I was
running some simple tests, and I swore that it was running in the slower scalar
mode.  I approached the system folks, and sure enough for some reason, the
system libraries had been compiled into significantly slower scalar code.
They quickly recompiled the stuff and "we were back in business."
The machine is now at another site, but running a different OS.

In another case several years ago, I was running on one of the new generation
mini-supercomputers.  I noticed a strange behavior of a program. Pass 1
took time X, pass 2 took time 2X (twice as long), pass 3 took 3X.  Apparently
others had noticed this problem; I thought it was a compiler problem, but it
turned out to be a cache (hardware) problem.  (Since rectified.)

Benchmarking at this "level of the stratosphere" can literally make or break
companies.  The NBS (and a few others) collect benchmarks, but they don't
collect benchmark results for fear of liability.  Linpack, the LLNL loops, and
the Dhrystone are exceptions.  The problem (unlike Boisjoly's) is that these
are not all-or-nothing situations.  Sure, the program runs, and it produces
correct (and sometimes incorrect) results.

Oh, a third example came to mind.  Years ago, I was working to understand what
made network protocols run.  As a young 'un, I had oldsters tell me: it's the
bandwidth of the wire for high-speed (Mb/s) networks.  I believed them.  They
didn't know what they were talking about: it turns out to be memory (the
operating system, specifically).

We tend to assume a lot about our machines without rigorous testing.  I also
notice that functional testers usually don't include performance measurements.

On other forms of performance evaluation:
I have to admit that I am not a fan of queueing theory when it comes to
measuring the predicted performance of computer systems.  I also realize
I'm not alone.  My approach is empirical, similar to how cardiologists
look at cardiograms.  (Show me [something useful].)

I am willing to collect interesting benchmark stories like Tom Lane's --
"it's not enough that it runs; it has to run in the right ways" stories.
I'm uncertain of the best way to do this.  A single posting won't be enough.
So the audience is welcome to send me interesting stories, and I will
collect them.  In cases where I can, I will try to verify them.
I wish to avoid "popular" stories like the DO-loop.

--eugene miya, NASA Ames Research Center, eugene@ames-aurora.ARPA
                soon to be aurora.arc.nasa.gov
  {uunet,hplabs,hao,ihnp4,decwrl,allegra,tektronix}!ames!aurora!eugene
