The RISKS Digest
Volume 6 Issue 65

Wednesday, 20th April 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Creating Alternatives to Whistleblowing
Vin McLellan
Safety nets under falling bridges
Rob Horn
Datamation, 15 April 1988, on "Risk"
Martin Minow
Poorly designed error messages
Bob Larson
RISKy Airline Meals
Mark Jackson
Response-time variability — prior art
Martin Minow
Re: Security of OS: who is responsible?
Klaus Brunnstein
Israeli Viruses
Fred Cohen
Time-zone problem
Peter Webb
Info on RISKS (comp.risks)

Creating Alternatives to Whistleblowing

"Vin McLellan" <SIDNEY.G.VIN%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU>
Tue 19 Apr 88 05:56:14-EDT
    On April 14, an MIT graduate student organization sponsored a 
forum on Ethics in Engineering and Science which turned into
a discussion of whistle-blowing: what can lead an engineer to
consider it, how it can be done, and how badly one can expect
to be punished for being the messenger bearing troublesome news.

    Sylvia Robins, the software engineer from the space 
vehicle program who protested fraudulent and fudged testing, 
lack of required security, and contract featherbedding up the 
line within her company, Unisys, and then jumped into the 
hierarchy of the prime contractor, Rockwell, was a brisk, 
impressive, and inspiring example of the breed as she spelled 
out the difference between a salary hack and a self-respecting 
professional. 

    With lives and important missions at stake, she said, 
she couldn't and wouldn't participate in massive and normative
fraud. As a result, Robins said, she has been bugged, tapped, 
followed, and slandered; she has had her property vandalized and has been 
subjected to repeated threats, even assaults. Robins' story has been reported 
in detail elsewhere (and recent federal charges seem to substantiate
many of her specific complaints) but she gave the MIT kids
a few thought-provoking bulletins from the real world.

    According to Robins, at Rockwell the Corporate Ombudsman and
the Corporate Ethics Office are both managed by the corporate
security staff — the very thugs who apparently saw their duty
in orchestrating the campaign against her within and around the
company. (When she returned to her office after she finally went
public with her charges at a press conference, the walls at 
Rockwell closed in around her — literally. The partitions that
shaped her workspace had been moved to crowd her desk: she
could reach out, seated at her desk, and touch both sides of the
room.) 

    Lone messengers really do get the shaft, she said, no
matter how real or accurate their complaints — although sometimes,
even so, a woman has to do what is right. Robins said she now 
realizes that many engineers unexpectedly find themselves confronted 
with major ethical issues on the job; in the past six months, she 
said, some 160 engineers have contacted her for advice on how to deal 
with such situations in their work. Among her bitter lessons, 
she said, was that anyone caught in her position should try to 
build a consensus among peer engineers and, if at all possible, 
to present major complaints in a group petition. A whole department
is harder to ignore, slander, or ostracize. For a lady with a rep
for a carbon steel spine, her suggestions and attitudes were 
politically savvy and not confrontational.

    Beside the pert matronly Robins, a slouched yet looming 
presence on the MIT stage was fellow panelist Ralph Nader. 
(Astonishingly, this was only the third time in 15 years that 
Nader — still probably the leading critic of poor and unsafe
engineering knowingly foisted upon the public — had been
invited to speak at MIT.)  While the planning of the forum left
much to be desired, in that Nader was given only 20 minutes to
address a crowd largely drawn by his name, his sardonic and 
bitter humor brought an edge to what had been a sometimes 
blithering panel. After paying warm homage to the courage and
honor of Ms. Robins — and worrying aloud how many of the students 
before him could have survived the brutish campaign Robins
endured — Nader left the podium with an interesting observation,
almost a challenge, to both the students and career engineers. 

  In the mid-1970s, he noted, rising concern over social issues 
among law students was directly reflected in the sort of questions
the students asked of the corporations which sought to recruit them
from the campuses. And those questions, he said, quickly and quite
directly shaped the image of themselves the major law firms
learned to project — and were soon reflected in the work practice 
of the best law firms themselves, those most successful in recruiting
top students. Specific questions about the amount of time a law 
firm committed to pro bono legal work, for example, *introduced*
the practice of pro bono work in many large law firms. 

    If engineering is truly a profession, with minimal standards 
of technical prowess and personal integrity to be upheld, said Nader,
engineering students could similarly have a major impact on 
corporate behavior by asking about specific policies and practices
which could protect a dissident or worried professional within a 
corporate setting, and perhaps guarantee him or her a hearing (before 
engineering peers, in matters of technical or professional integrity)
when immediate corporate superiors were hostile or unsympathetic.

   A lawyer, albeit one skilled in corporate infighting, Nader 
couldn't go into the details of his suggestion. RISKS, however, is 
an unusual forum that reaches deeply into academic, corporate, 
and government engineering. Could we hear some suggestions for those
students? What questions could be asked?  What corporate structures 
and/or procedures could guarantee an honorable engineer who confronts
an issue of ethics on the job a viable alternative to the 
self-sacrifice of public whistle-blowing? 

Vin McLellan
The Privacy Guild       (617) 426 2487
Boston, Ma.             


Safety nets under falling bridges

Rob Horn <BBN!ulowell!infinet!rhorn@husc6.harvard.edu>
Mon, 18 Apr 88 21:30:21 est
Brian Urquhart wrote

  ``I believed then, as most conceited young people do, that a strong
  rational argument will carry the day if sufficiently well supported by
  substantiated facts.  This, of course, is nonsense.  Once a group of
  people have made up their minds on something, it develops a life and
  momentum of its own which is almost impervious to reason or argument.''

This belief was based on his experience as an intelligence officer prior to the
Arnhem attack and in the UN, where he rose to Under-Secretary-General.  It is
relevant to risks because engineers seem to fall into the perennially young
category in their faith that evidence can change decisions.

Most of the discussion of whistle-blower protection, etc., makes as much sense
as putting a safety net under a poorly engineered bridge.  It may help
reduce injuries but it ignores the more fundamental problem.  The problem is
that momentum is built up in the system beyond the point where a decision
can be reversed.  This is inherent in Feynman's and others' complaints about
the Challenger disaster.  The problem was not the O-rings; it was that the
momentum to launch was allowed to get so strong.  This was clear for months
prior to launch.  Aviation Week was full of stories about rushed schedules,
botched work, confusion, fatal and near fatal accidents.  Yet no one could
stop the launch.

When a system has reached this point, disaster is inevitable.  All you can do
is try to soften the blow.  Yet the focus of debate here and elsewhere is on
issues that arise too late.  When the system has reached the point that a
whistle-blower needs protection, you are already past the point of no return.

Much more important, but much harder, is understanding the human decision-making
and organizational structures that lead to this momentum.  How do you
dissipate this overwhelming drive toward completion without destroying the will
to succeed?
        Rob Horn, Infinet, 40 High St., North Andover, MA
...harvard!adelie!infinet!rhorn ...ulowell!infinet!rhorn ..decvax!infinet!rhorn


Datamation, 15 April 1988, on "Risk"

Martin Minow <minow%thundr.DEC@decwrl.dec.com>
19 Apr 88 16:00
      (Martin Minow THUNDR::MINOW ML3-5/U26 223-9922)

The cover article is on "Risk."  "When you become dependent on any resource,
you become more vulnerable."   Martin.


Poorly designed error messages

Bob Larson <blarson%skat.usc.edu@oberon.USC.EDU>
Tue, 19 Apr 88 00:33:02 PDT
The message in a recent RISKS about starting from false assumptions when 
someone gives you their conclusions rather than the symptoms (which I
assume many of us have discovered the hard way) got me thinking about
how the bad conclusions are reached.

Primos has a "standard" error message, "Access Violation".  On a number of
occasions, people have come to me (as the local Primos "guru") asking me to
help them find the file they can't access when they get this message.  The
error message is used exclusively for MEMORY access violations.  (It is one of
several messages that usually indicate a bad pointer.)  Files that can't be
opened get their own messages, such as "insufficient access rights" and
"not found".

While not a huge error, this poorly designed error message has probably
caused many man-months of time wasted looking for the wrong problem.
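
An illustrative sketch (not Primos code; the exception names are Python's,
used only as stand-ins) of the underlying point: an error message should name
the failing subsystem and, for file errors, the offending object, so that
nobody goes hunting for a file when the real problem is a bad pointer.

    def open_data_file(path):
        # File problems: say it is a file problem and name the file.
        try:
            return open(path)
        except PermissionError:
            raise SystemExit(f"Insufficient access rights: {path}")
        except FileNotFoundError:
            raise SystemExit(f"Not found: {path}")

    def report_bad_pointer(address):
        # Memory problems: say "memory", so the user does not search for a file.
        raise SystemExit(f"Memory access violation at address {address:#x} "
                         f"(probable bad pointer)")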

Bob Larson  blarson@skat.usc.edu  {sdcrdcf,cit-vax}!oberon!skat!blarson


RISKy Airline Meals

<MJackson.Wbst@Xerox.COM>
19 Apr 88 07:50:52 EDT (Tuesday)
The following is from the letters column of the "Travel" section of the
April 17 /New York Times/.   Mark

  To the editor:

  The perils of ordering special meals on airline flights cannot be
  overlooked.  A while back we traveled from Fort Lauderdale to Detroit with
  Delta Airlines and ordered a cold seafood plate instead of the regular meal.
  Delta responded with a hot seafood plate.  We wrote them a letter to
  complain and they apologized.

  However, since then we have been on three morning Delta flights where
  breakfast was served.  Each time we were brought a cold seafood plate.  We
  did not want it.  We did not order it.  Somehow, our name has gotten into
  the computer, and every time we fly we get the cold seafood plate.

  The last time it happened, the flight attendant referred to us as "Mr. and
  Mrs. Seafood" instead of Mr. and Mrs. Stanton.

                    Roger Stanton, Grosse Pointe, Mich.

/A spokeswoman for Delta Airlines replies:/

  There are two codes that can be used to tell the computer that a request for
  a special meal has been made, one for a specific flight, the other for all
  flights a passenger might take.  In Mr. Stanton's case, the agent apparently
  used the wrong code.  That has now been corrected.  We encourage passengers
  to request special meals at the time the flight reservation is made, but it
  can be done up to three hours before flight time for most meals, eight hours
  for kosher meals.  Passengers should specify whether they want a standing
  order or a one-time-only order.

                [At 35,000 feet, on a clear day you can seafood forever,
                especially if you are standing — or Stanton.  PGN]
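
A toy sketch of how such a scope code could produce exactly the behavior
described above; the field names and values are invented for illustration
and are not Delta's reservation system.

    # Hypothetical passenger meal-request record; "scope" is the code the
    # spokeswoman describes (one flight vs. all flights).
    request = {"name": "Stanton", "meal": "cold seafood plate",
               "scope": "ALL_FLIGHTS"}      # the agent meant "ONE_FLIGHT"

    def meal_for(flight, record):
        if record["scope"] == "ALL_FLIGHTS":
            return record["meal"]           # standing order follows the passenger
        if record["scope"] == "ONE_FLIGHT" and flight == record.get("flight"):
            return record["meal"]
        return "regular meal"

    print(meal_for("morning flight, Feb.", request))   # cold seafood plate, again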


Response-time variability — prior art

Martin Minow <minow%thundr.DEC@decwrl.dec.com>
19 Apr 88 09:55
      (Martin Minow THUNDR::MINOW ML3-5/U26 223-9922)

The recent re-invention of response-time variability reduction techniques
forced me to dig out an article I published in an obscure journal in 1977.
In the Decus Proceedings vol. 4, no. 2, I wrote a long article on system
performance and usability.  To quote:

    One example of a simple method to improve the way a system seems
    to perform is illustrated by the program segment [as follows]:
    100 PRINT "Prompt";
    200 INPUT request
    300 start = [get time of day]
    400 ... Calculate result ...
    500 elapsed = [time of day] - start
    600 IF (elapsed < 5) THEN sleep(5 - elapsed)
    700 PRINT result
    ... This has several implications:
    — Response times are less dependent on system load.
    — The operator learns when to expect a response from the system
    and thus is able to build a rhythm by knowing when to look
    back at the terminal.
    — The system response degrades more slowly.  If the actual response
    time varies from one second to six seconds, the operator will
    see only a one-second variation, instead of an almost five-
    second variation.
    ...
    In general, your programs should be written so that the operator
    feels that "they're always there;" that they will always do something
    reasonable in a reasonable time.  Early computers often had loudspeakers
    attached to some part of the CPU.  The operator heard what was happening:
    how far the production run had progressed, when it was about time to
    change tape reels.  ...

    In all cases, try to keep the feeling that the system "listens" to
    the operator at all times, and — especially — "tells" the operator
    what is happening.

I don't claim originality for these ideas: I was taught them by the
customers I supported in Sweden in the early 1970s.  I guess my mistake was
not wrapping them inside some theoretical framework ("System usability and
implications for the eight queens problem") and publishing them in CACM.  Of
course, if I did so, the people who needed the information might not have
seen it.
                                    Martin.
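
The same floor-the-response-time idea, as a minimal sketch in a modern
language; the five-second floor is carried over from the 1977 example, and
compute_result() is an illustrative stand-in rather than anything from the
original article.

    import time

    RESPONSE_FLOOR = 5.0          # seconds, mirroring the 1977 example

    def compute_result(request):
        # Stand-in for the real calculation (illustrative only).
        return request.upper()

    def answer(request):
        """Return a result, but never in less than RESPONSE_FLOOR seconds."""
        start = time.monotonic()
        result = compute_result(request)
        elapsed = time.monotonic() - start
        if elapsed < RESPONSE_FLOOR:
            time.sleep(RESPONSE_FLOOR - elapsed)   # pad fast replies up to the floor
        return result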


Re: Security of OS: who is responsible?  (RISKS-6.64)

Klaus Brunnstein <brunnstein%rz.informatik.uni-hamburg.dbp.de@RELAY.CS.NET>
19-Apr-88 07:45:48-PDT

In his note on how to cope with an error in DEC's Kit Install software
for its recent security update for VAX Workstation Software, Darren compares
the security of an operating system to a house.  Inhabitants are, according
to his example, themselves responsible for denying thieves easy access
simply by using their keys and locks.

Unfortunately, the example is misleading: while every `user' of a house
knows about access and how to control it, complex operating systems have so
many doors that nobody can understand the diverse control techniques. While
houses are designed, as socio-technical systems, according to a
user-understandable model, an operating system is designed as a technical
system with virtually no reference to users' concepts.

In this situation, the designer's responsibility to guarantee a `safely
usable operating system' (especially when a design or programming error is
probable) cannot so simply be transferred to the users (even in the case of
non-benevolent users). I therefore welcome DEC's efforts to provide better
professional standards in dealing with security updates.

Klaus Brunnstein, University of Hamburg, Fed.Rep.Germany


Israeli Viruses

Fred Cohen <fc@ucqais.uc.edu>
19 Apr 88 02:18:53 EDT (Tue)
I should point out that I wrote the research papers that detailed the class
of methods the Israeli team proposed for detecting viruses - they were
published in Computers and Security in 1987 - the checksums I have seen are
fairly easy to forge, but even the very strong ones like the one I published
can be broken given enough time. They are a "Complexity Based Integrity
Maintenance Mechanism" (the name of one of those papers).  Indeed, I suspect
that I could still write a virus that they could not detect, as I have done
considerable research into the topic and understand the underlying
mechanisms. I should note that the source code for such a high quality
checksum is to be published in the April issue of C+S, so you'd better take
all the cash you can get right away, before the public finds out they can
get the same or better protection for free. - FC
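
A toy illustration of why weak checksums are easy to forge (not a
reconstruction of any product's algorithm, nor of the scheme published in
C+S): with a simple byte-sum check, an attacker can append one correcting
byte so the infected file still matches the recorded value.

    def byte_sum(data: bytes) -> int:
        # Toy integrity check: sum of all bytes, modulo 256.
        return sum(data) % 256

    original = b"PROGRAM CODE"
    recorded = byte_sum(original)        # value the checker stored earlier

    infected = original + b"VIRUS"
    fix = (recorded - byte_sum(infected)) % 256
    forged = infected + bytes([fix])     # one padding byte restores the sum

    assert byte_sum(forged) == recorded  # the checker is satisfied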


Time-zone problem

Peter Webb <webb@applicon.COM>
Fri, 15 Apr 88 10:30:17 EDT
    I have learned that Telenet announced, on April 7, that everyone who
uses its Electronic Bulletin Board service should ignore any and all bills
for daytime usage from September 1987 to February 1988. Apparently, calls to
Telenet are often automatically re-routed by the local phone company.  In some
cases the calls are forwarded to an exchange in a different time zone from that
of the originating user.  Under the right circumstances, i.e., a user dialing
in less than an hour after night/evening rates go into effect and having his or
her call forwarded to a node in a time zone at least one hour earlier, this can
lead the Telenet system to believe the call was placed during daytime hours,
and consequently to bill the user at daytime rates.  The problem is
exacerbated by Telenet's policy of billing an entire session at the daytime
rate if any part of it occurs during daytime hours.
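
A minimal sketch of the failure mode, assuming a 6 p.m. rate cutoff and
invented times; it only contrasts classifying the call by the caller's clock
with classifying it by the forwarding node's clock.

    from datetime import datetime, timedelta

    EVENING_START_HOUR = 18        # assumed rate cutoff, for illustration only

    def rate_class(local_time):
        return "evening" if local_time.hour >= EVENING_START_HOUR else "daytime"

    caller_local = datetime(1988, 2, 1, 18, 30)      # 6:30 p.m. for the caller
    node_local = caller_local - timedelta(hours=1)   # forwarded one zone west

    print(rate_class(caller_local))   # "evening" - what the user expects to pay
    print(rate_class(node_local))     # "daytime" - what the billing system charged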

                Peter Webb.

{allegra|decvax|harvard|yale|mirror}!ima!applicon!webb, 
{mit-eddie|raybed2|spar|ulowell|sun}!applicon!webb, webb@applicon.com

     [Again!  This has happened to the competition as well.  If it wasn't so
     late and I wasn't commuting to the Security and Privacy meeting, I'd
     dig up three or four previous cases in RISKS.  PGN]
