The RISKS Digest
Volume 11 Issue 61

Friday, 3rd May 1991

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

The means justify the ends? (piracy probe)
Jim Cheetham
Re: Almost Humorous Fly-by-Wire Glitch
Mary Shafer
Re: Old O/S and Network Security
Rick Smith
Mike Muuss
PRODIGY: STAGE.DAT
A. Padgett Peterson
Do unauthorised users deserve the protection of the law?
Hugh Cartwright
Rude behavior and the net.police
Edward Vielmetti
Software Warranty
Geoffrey H. Cooper
Risk Analysis Seminars
Cecilia Spears
WORMSC Call for Abstracts
Andrew Lacher
Info on RISKS (comp.risks)

The means justify the ends? (piracy probe)

Jim Cheetham <jim@oasis.icl.co.uk>
Thu, 2 May 91 15:35:59 BST
From the front page of "Computing", 2 May 1991, comes the following report
(quoted with permission).  Comments/indications of unquoted text are in []s.

       DEC raids consultant in piracy probe (by Joanne Evans)

  DEC has searched the office and home of a training consultant in Reading
 [England -jim], confiscating hardware, software and paper files to find
  evidence of alleged software piracy.

  A number of people from DEC and its solicitors, Linklaters & Paines,
  searched the home of Greg White [...] on 5 April while he was out.
  They had obtained a civil search warrant.

  Earlier in the day, they searched the office of Syntellect, an arm of
  a US training company which Greg White runs in the UK. [...]

  DEC was granted the Anton Piller order, a warrant granted without the
  subject's knowledge, by the High Court of Justice Chancery Division
  in London on evidence alleging that Syntellect was using unlicensed
  software to provide training courses. [...]

  The evidence was obtained by a consultant employed by DEC to attend
  a Syntellect training course in February. He copied its system
  software, which was later examined by DEC. [...]

Besides the problem of someone holding unlicensed proprietary software,
what strikes me here is the subterfuge used by DEC to gain the evidence
they needed in the first place. Surely the consultant attending
Syntellect's training course didn't ask for permission to copy their
system? In which case, is the evidence not inadmissible by virtue of
being gained by illegal means?

This seems to be a case where "the ends justify the means", which is
most definitely *not* acceptable in the legal system normally. Another
example of the law not being in touch with computers, perhaps?

  Jim Cheetham, jim@oasis.icl.co.uk, +44 344 424842 x3121 (ICL ITD 7263 3121)


Re: Almost Humorous Fly-by-Wire Glitch

Mary Shafer <shafer@skipper.dfrf.nasa.gov>
Thu, 2 May 91 13:55:59 PDT
Joseph Nathan Hall (jnh@eceugs.ece.ncsu.edu) writes:

      From the pages of Popular Science, April 1991:

  "...Spectators at the first flight of Northrop's [YF-23] prototype noticed
  its huge all-moving tails--each larger than a small fighter's wing--quivering
  like butterfly wings as the airplane taxied out to the runway.  Test pilot
  [Paul] Metz says this occurred because the early generation flight-control
  software didn't include instructions to ignore motions in the airframe caused
  by pavement bumps.  The answer, he adds, is inserting lines of computer code
  that tell the system not to try to correct for conditions sensed when the
  fighter's full weight is on its nose gear."

Talk about a pack of slow learners.  I remember sitting in the control room
watching the AFTI/F-16 waving its canards and tail at every expansion joint in
the taxiway.  They finally stopped it with a squat switch.  But I shouldn't
criticize the YF-23 team too much, because the X-29 didn't have one originally,
either.
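
The squat-switch fix amounts to gating the control laws on a weight-on-wheels
signal.  A minimal sketch of the idea in Python, with invented names and gains
and no connection to any real FCS:

    def surface_command(sensed_rate, gain, weight_on_wheels):
        # On the ground, ignore airframe motion from pavement bumps
        # rather than commanding the surfaces to chase it.
        if weight_on_wheels:
            return 0.0
        # Airborne: ordinary rate damping.
        return -gain * sensed_rate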

  I'll grant that in the 1990s we can analyze wind-tunnel tests in a few hours
  (or less) and can even simulate untested airframes with some success.  In the
  1950s pilots frequently flew prototypes before the final results of early
  wind-tunnel tests were completely analyzed--a process that sometimes took
  weeks or months.  But am I alone in thinking that in some respects it takes
  more chutzpah to test-fly one of these modern fly-by-wire wonders?  <shudder>

What do you think they do, drop in a computer and tell the pilot to take it
around the pattern a couple of times?  No wonder everyone's so goosy about
fly-by-wire.  We're talking about V&V here--verification and validation.

The very least they will do is hot-bench testing, with all the hardware in
place and the best aerodynamic model in a computer.  They may well have had an
iron bird, although I doubt it.  The B-2 maybe, but not the YF-23, in my
opinion.  The iron bird for the F-8 Digital Fly-by-Wire (DFBW) was an F-8C
airframe, with the wing tips removed and the engine gone.  The hydraulic
system, the actuators, the surfaces, the FCS computers--all the "real"
hardware, with the aerodynamics in a computer.  I don't think there were iron
birds for the F-16 or F-18, since our experience with the F-8 DFBW indicated
that it was at best an act of supererogation and at worst a red herring.  We
spent months trying to take care of a problem that turned out to be unique to
the iron bird itself.

The X-29, 30-per-cent statically unstable little beast that it is, didn't have
an iron bird and it was a great deal more experimental than the YF-23.  Nor did
the HiMAT or the Shuttle.

A good FCS is sufficiently robust that it can deal with variations in the
stability and control derivatives.  In general, the initial FCS will be tuned
when the derivatives are determined during the envelope expansion that is the
first part of the flight program.  We have a pretty good idea just how good or
bad a particular derivative from the tunnel tests is and we can make worst-case
assumptions based on the historical error margins.  This is called parametric
variation.  I was involved in just such an assessment for the Space Shuttle
back in the mid-70s.
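
For the curious, a parametric-variation study in miniature might look like the
following Python sketch; the derivatives, error margins, and stability test are
all invented for illustration:

    import itertools

    nominal = {"Cn_beta": 0.12, "Cl_beta": -0.09}  # invented derivatives
    margin  = {"Cn_beta": 0.30, "Cl_beta": 0.40}   # invented fractional errors

    def stable(d):
        # Stand-in for a real closed-loop stability analysis.
        return d["Cn_beta"] > 0 and d["Cl_beta"] < 0

    # Evaluate every worst-case corner of the error box.
    for signs in itertools.product((-1, 1), repeat=len(nominal)):
        case = {k: v * (1 + s * margin[k])
                for (k, v), s in zip(nominal.items(), signs)}
        print(case, "stable" if stable(case) else "UNSTABLE")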

There is an interesting example of FCS robustness from the Shuttle.  Rolling
moment due to yaw jet is not only twice the predicted magnitude, it has the
opposite sign.  The FCS was sufficiently robust to deal with this, although
hand-flown roll reversals replaced the preprogrammed ones until the FCS was
refined after STS-5.

I've asked our pilots about this and they don't think that it takes anything
more from the pilots to fly modern FBW aircraft.  The F-8 DFBW, the first
digital FBW airplane, was, they say, a little more exciting, but that was some
time around 1970 and it had no reversion mode.  But modern FBW is no big deal.
One of them does admit to being a little nervous about the forward-swept wing
on the X-29, though.

Mary Shafer  shafer@skipper.dfrf.nasa.gov  ames!skipper.dfrf.nasa.gov!shafer
      NASA Ames Dryden Flight Research Facility, Edwards, CA


Re: Old O/S and Network Security

Rick Smith <smith@SCTC.COM>
Thu, 2 May 91 15:51:24 CDT
The article about securing old OSes presented some interesting ideas, but it
leaves the impression that you can easily solve security problems simply by
installing systems with high NCSC security ratings.  This is not true.

>If you REALLY want security, install a Honeywell SCOMP between the
>internet, and your local host, LAN, or whatever; at less cost, and more
>risk, install any C2 or better O/S host at that connection site.
> {followed by more about using B2 hosts}

Neither the SCOMP nor a C2 rating provides a magic shroud to protect you from
network-borne crackers if you depend on the usual userid and password for
access to your LAN. A password-based authentication system meets the
requirements even for an A1 rating. You still need security-conscious users who
handle their passwords carefully; without them, even an A1 system can be
penetrated.

The distinguishing security feature of systems rated B or higher is that they
keep a computer's data and activities strictly separated by "security levels."
In a military environment you would set up the system to prevent "secret"
information from intermingling with "unclassified" information. The system then
keeps data at higher levels from leaking into lower levels, whether by accident
or on purpose. Higher security ratings (B2, B3, A1) indicate stronger assurance
that objects at different levels remain properly segregated.

Even MAIL between users at different security levels is generally prevented.
Mail from a higher level user to a lower level user is completely forbidden
since it would provide a path for "secret" data to leak out. Strictly speaking,
a lower level user may send mail to a higher level user, though the system
would not tell the sender if the mail was actually received.
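
The rule being enforced is the classic mandatory-access-control pair: no read
up, no write down.  A toy sketch in Python (the levels and function names are
mine, not drawn from SCOMP or any evaluated system):

    LEVELS = {"unclassified": 0, "secret": 1}

    def may_read(subject, obj):
        return LEVELS[subject] >= LEVELS[obj]   # no read up

    def may_write(subject, obj):
        return LEVELS[subject] <= LEVELS[obj]   # no write down

    # The mail rules follow directly: a secret-level sender writing to
    # an unclassified mailbox is a write-down, hence forbidden.
    assert not may_write("secret", "unclassified")
    assert may_write("unclassified", "secret")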

Despite such restrictions, there are real benefits for commercial users. For
example, a company could use level separation by defining a "public" level
similar to the military's "unclassified" level and a "proprietary" level
similar to "secret." Users operating at the proprietary level would be
prevented from leaking proprietary data into public areas, and users at the
public level could access the public internet. This would protect proprietary
data from internet access. This would not necessarily protect the multilevel
host from crackers entering it at the public level, but it would keep them away
from company valuables.

However, this assumes that only A and B rated computers can access both the
sensitive data and the public network. If your unsecure computer can access
both your sensitive data and your public network, then you have subverted any
other protection you might have provided.

>       [The Honeywell SCOMP — still the only A1 evaluated system --
>       is now the Bull SCOMP.     ... PGN]

The current version of SCOMP, the XTS-200, is under evaluation at the B3
level, not A1. I do not know whether an A1 SCOMP is still available, though
perhaps another netter might know.

Rick.            smith@sctc.com       Arden Hills, Minnesota


Re: Old O/S and Network Security

Mike Muuss <mike@BRL.MIL>
Wed, 1 May 91 21:56:53 EDT
Bob, in your recent message, you wrote:

<> For years now, there has been a reasonable solution to the problem of
<> "an old O/S" and network security: Install a newer, much improved O/S
<> (WRT network security) _between_ the open network, and the "closed"
<> community that uses the old O/S to process valuable information.

In general, I strongly *disagree* with this policy.  The consequence of
erecting just one fence around your internal network is that, if any one
node of your internal network is compromised, then the entire internal
network is compromised.  Not a pretty thought.

This was clearly spelled out in the old Army Regulation 380-380, which
indicated that every host *must* fully defend itself at the host/network
interface.  Any additional protection at the local_network/WAN interface
was "gravy", but did not count towards successful accreditation of
the host's defenses.

I find this strategy rather comforting, knowing that every host on the
network is prepared to defend itself, rather than being entirely
dependent on the services of the "guard force".

[ This discussion generalizes to private life, too.  Are you personally
prepared to deal with natural disasters, civil disobedience, etc.
yourself, at least to a limited degree, or do you place 100% faith in
the services of the police and other government agencies?  There is a
term for people who are prepared: "survivors".  Same for computers.
But, I digress. ]

I've had the privilege of providing the Army's most effective "Tiger
Team" to help check internal security at other installations.
Installations such as you describe are always the easiest to compromise,
because when (not if) you find the first "chink in the armor", the
entire site is yours to command.

However, there is an even more severe penalty from the type of policy
that you propose.  I'm a big proponent of network distributed computing.
I used a lot of X windows, BRL-CAD LIBFB remote framebuffers, and
distributed high-performance computing applications.  On one occasion in
'85, I used every Gould PN9000 computer on the MILNET on the east coast
of the US to run my distributed ray-tracer, to get images ready for a
publication deadline.  (Made it, too!).

I also travel a lot to various Universities and National Labs. I'm quite
accustomed to being able to simply "reach out" and invoke processing
power on my SGI compute server or one of our Crays, and interact with
the processing, regardless of where I am.  I've delivered lectures at
NASA facilities, where the images were interactively computed by BRL's
XMP and provided in real time over the Internet.  I've run calculations
in the USA from the back of a British personnel carrier at RSRE (RSRE
packet radio to SATNET to ...).

This type of scientific computing environment is completely compromised when
you interpose Janus hosts between the internal and external networks.  This
spoils the fruits of the past 10 years of accomplishments of DARPA and the
network research community.

Furthermore, it usually isn't even very secure.  Usually, when I'm trapped into
working at such a site for more than a few hours, I manage (as an unprivileged
user) to create a bypass in the Janus host, so that I can get some work done.
Thus defeating your "security" again.

Now, I understand that at some sites, it is not possible to implement
good security on some selection of hosts.  (Macintosh and PC's are prime
examples of this).  In those cases, it may well make sense to isolate those
fundamentally insecure machines in a "leper colony" sub-network.
But all "real" computers (i.e. anything >= a Sun-2/50) with "real"
software (i.e. UNIX, TOPS-20, VMS, etc.) should be provided with a full
host defense, and given full access to the network.

So, if you need to implement a "leper colony", at least you will be more aware
of the risks associated with doing so.  It has a potentially large negative
impact on legitimate use of your computers, and it has security vulnerabilities
of its own.  However, in the final analysis, it may be cheaper and easier to
do than actually implementing real security on all your hosts.  I'm certain you
can see which strategy I prefer.

I'd like to close by quoting my "golden rule" for computer security:

   "Computer security should be strong enough to repel virtually any attack
   ***from the outside***, yet unobtrusive enough that the average user is
   unaware that he is being guarded by a strong defense."  -Mike Muuss

Mike Muuss, Advanced Computer Systems, Vulnerability/Lethality Division
U. S. Army Ballistic Research Laboratory


PRODIGY: STAGE.DAT

A. Padgett Peterson, 407-356-6384 <padgett%tccslr.dnet@uvs1.orl.mmc.com>
Thu, 2 May 91 15:36:06 -0400
>From: Bill Seurer <seurer@rchland.vnet.ibm.com>
>Subject: A Prodigy experiment

>The results are:
>  Prodigy does not appear to be uploading any data.
>  The STAGE.DAT file contains bits and pieces of ERASED files.

I first saw mention of this shortly after the "new" PRODIGY software came out
and have participated in a number of discussions on the PC Board before
deciding that just the same questions were being asked over and over and....

Consequently, a few months ago I did some testing of my own since my PC has
some unusual system integrity management tools resident.

Findings:

1. PRODIGY does not seem to have modified any executables on my system
   following installation that use the normal DOS EXEC call to load.
   (A warning screen would appear if this happened) Though PRODIGY has
   stated that it has the power to do so. Of course PRODIGY could load a
   file as data and transfer execution to it without using EXEC.

2. PRODIGY does seem to have captured the contents of memory at time
   of installation inside STAGE.DAT. (Just before installation I did a
   SCAN of the disks for viruses using the McAfee utility. The data used
   by McAfee is encoded in the file to prevent alarming on his own utilities
   and to make it more difficult for virus writers to discover the strings
   in use. The decrypted strings that would have been in memory were found
   within STAGE.DAT.)

   This probably accounts for the report of the ENVIRONMENT strings and
   the root directory entry in STAGE.DAT since they are stored in memory.
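
A test of this sort is easy to reproduce: plant a known byte string in memory
before running the software, then search the staging file for it afterwards.
A Python sketch (the file name aside, the marker and function are invented):

    def contains_marker(path, marker):
        # Scan a staging file for a byte pattern known to have been
        # in memory when the software ran.
        with open(path, "rb") as f:
            return marker in f.read()

    if contains_marker("STAGE.DAT", b"CANARY-STRING-1991"):
        print("in-memory marker found in STAGE.DAT")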

    The bottom line is that I do not know what the limits of PRODIGY's control
over my PC are while connected, and PRODIGY is not willing to disclose that
information. The RISK is not in what is being done today, but what COULD be
done by a malicious person or entity with sufficient access (legal or
otherwise).
                          Padgett


Do unauthorised users deserve the protection of the law?

<HCART@vax.oxford.ac.uk>
Thu, 2 MAY 91 17:00:38 BST
Herman Woltring has presented an interesting defense of his compatriots in
Holland who have obtained unauthorised access to computers in the US.  He
argues that "[owners who] are often too lazy to accept their civil
responsibilities...try to fall back on criminal law ... to protect their
private interests."

Let's be clear about this: the law has to take a stance.  It can protect the
interests of legitimate users, by making unauthorised access illegal.  Or it
can protect unauthorised users by making it legal.

If the law can work for just one of these two groups, which should it be?
Legitimate users, who pay for and maintain their systems, deal with the
frustration and expense of break-ins and often need the computer systems as a
vital part of work or research?

Or unauthorised users, whose use occupies system resources and may
intentionally or otherwise cause file system damage?

Mr. Woltring evidently believes unauthorised users need the protection of the
law more than legitimate users.  Could he now tell us the moral arguments
behind this conclusion?

                  Hugh Cartwright.  Physical Chemistry, Oxford University, UK.


rude behavior and the net.police (Knowles, RISKS-11.58)

Edward Vielmetti <emv@ox.com>
Fri, 03 May 91 01:14:19 EDT
Brad Knowles (blknowle@frodo.jdssc.dca.mil) says some reasonable things, and then
goes off the edge a little bit.

>     .. if someone from the University of Utrecht keeps breaking into
> computers at Lawrence Livermore Labs, or NASA, or where ever, and we
> can't get the local police to prosecute them (assuming that we have
> collected enough information to identify the perpetrators), then our
> only course of action is to cut the University of Utrecht off from the
> Internet in the US.

It's well within the rights of Livermore or NASA (or their contractors) to put
in filters that would deny access from Utrecht to the NASA networks; that might
cause some inconveniences if NASA folks actually want to talk to Utrecht folks.
Most modern routers have packet filters and network routing control which would
make it reasonably straightforward to cut off intruders if you knew their point
of access.  If you want to press home the point, then have NASA (or wherever)
selectively cut off access to the resources which they provide to the Internet,
e.g. make everyone from Utrecht get a message when they go to ftp the latest
Voyager images at ames.arc.nasa.gov that they are not welcome until they clean
up things locally.
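
The filtering itself is nothing exotic; the logic a border router applies is
essentially the following Python sketch (the blocked prefix is an invented
example):

    import ipaddress

    BLOCKED = [ipaddress.ip_network("192.0.2.0/24")]  # example prefix only

    def permit(src):
        # Drop packets whose source address falls in a blocked network.
        addr = ipaddress.ip_address(src)
        return not any(addr in net for net in BLOCKED)

    print(permit("192.0.2.17"))    # False: dropped at the border
    print(permit("198.51.100.5"))  # True: passed through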

> If most every University in the Netherlands has
> people trying this, then we must cut them all off.  Since the Defense
> Communications Agency (DCA, soon to be changing our name to Defense
> Information Systems Agency (DISA)) has the right, nay the
> RESPONSIBILITY to keep up (and presumably to police) the Internet,
> then I think I can safely say that this might actually get implemented
> if the folks over at the University of Utrecht keep it up.

Who hired you to be the net.cops?  I'm going to be rather annoyed if DCA comes
trooping into Alternet headquarters, demanding that the Alternet-EUNET link
from Virginia to the Netherlands be severed because of lousy network security
inside of Livermore.  Consider that the vast majority of the network is
well-served by the link to the Netherlands, and that storming in and cutting
off the link because of your putative RESPONSIBILITY is going to have negative
consequences for quite a few people.

 >     We must *not* punish the victims for the crimes of others!  As
 > was stated by another author, some folks simply don't have a choice
 > — they *must* stay with an old version of their OS because some
 > mission-critical piece of software runs *only* with that version of
 > the OS.

Corporate America (e.g., Ford, AT&T, Sun) has learned to live on the Internet
with less-than-secure systems internally by putting in appropriate application
level and network security instruments around their systems.  If that
particular fragile old insecure system has mission-critical software on it,
what the *hell* is it doing sitting there out exposed on the Internet?  The
very least responsibility you have is to pay for your own costs of security and
not foist the burden upon the entire rest of the Internet population.  Don't
drag the rest of us down forcing us to pay for your mistakes.

 Msen   Edward Vielmetti    moderator, comp.archives    emv@msen.com

"(6) The Plan shall identify how agencies and departments can
collaborate to ... expand efforts to improve, document, and evaluate
unclassified public-domain software developed by federally-funded
researchers and other software, including federally-funded educational
and training software; "
            High-Performance Computing Act of 1991, S. 218


Software Warranty

Geoffrey H. Cooper <geof@aurora.com>
Thu, 2 May 91 18:05:01 PDT
I recently bought three licenses of BBN/Slate for $297 on a special offer.
The software came with a little booklet describing the licensing agreement.  I
was impressed that an honest and sincere effort had been made to make it
understandable to an educated person who was not a lawyer.  Gold star, BBN.

The document included a software warranty.  This was covered some time ago by
RISKS in the context of the risks of usual software shrink-wrap agreements;
some said it was possible and some said it couldn't work.  But I don't recall
anyone ever bringing up an actual software warranty that is in use.

So I typed in the juicy bits.  I haven't had cause to test this warranty, so I
still don't know if it works in practice.  It did give me a warm, fuzzy feeling
to read it after paying my money.  By the way, it also appeared that BBN didn't
debit my charge card until the 30-day warranty was up.

[There is no copyright on the agreement, although the term BBN/Slate
is listed as a registered trademark.]

>From "Licensing Agreement for BBN/Slate(TM) Software":

    7. Limited Warranty and Disclaimer of Liability

    BBN HAS NO CONTROL OVER LICENSEE'S USE OF THE SOFTWARE, THEREFORE, BBN
    DOES NOT AND CANNOT WARRANT THE PERFORMANCE OR RESULTS THAT MAY BE
    OBTAINED BY ITS USE.  HOWEVER, BBN PROVIDES THE FOLLOWING LIMITED
    WARRANTY:

            LIMITED WARRANTY COVERING BBN SOFTWARE PRODUCTS
                     BBN/SLATE SOFTWARE PRODUCTS

    What is Covered:

    BBN Warrants that the magnetic tape cartridge(s) on which the enclosed
    computer SOFTWARE is recorded and the DOCUMENTATION provided with it
    are free from defects in materials and workmanship under normal use.
    BBN warrants that the SOFTWARE itself will perform substantially in
    accordance with the specifications set forth in the DOCUMENTATION
    provided with the SOFTWARE when used on an appropriate computer.  It
    is <

Risk Analysis Seminars

"Cecilia Spears" <NG.CSW@Forsythe.Stanford.EDU>
Thu, 2 May 91 16:38:09 PDT
The next talks in the "Risk Analysis Seminar Series", coordinated by Prof. M.
Elisabeth Pate-Cornell, are listed below.  Location: Terman Building, Room 101;
time: 4:15 to 5:30 PM.  The series is offered for credit to Stanford students
(one unit per quarter), is open to the public at no charge, and is offered by
the Department of Industrial Engineering and Engineering Management.

The schedule is as follows:

  May 9: Dr. Max Henrion, Rockwell International, Palo Alto: "COMPARING
  DIAGNOSTIC RULES AND PROBABILISTIC INFERENCE FOR KNOWLEDGE-BASED SYSTEMS"

  May 23:  Prof. Daniel Kahneman, University of California, Berkeley:
  "TIMID DECISIONS AND BOLD FORECASTS"


WORMSC Call for Abstracts

Andrew Lacher <m18709@mwvm.mitre.org>
Friday, 3 May 1991 13:53:50 EDT
                           ** Call for Abstracts **

The Washington Operations Research & Management Science Council (WORMSC) is
looking for presentation abstracts for their 28th annual symposium in
Washington, DC on October 28, 1991.  Subject matter is flexible but should
relate to operations research or management science.  If you have questions or
would like to submit an abstract, please contact:

Andrew Lacher, The MITRE Corporation, 7525 Colshire Drive, McLean, Virginia
22102     703-883-7182      m18709@mwvm.mitre.org

               [The WORMSC Call in, the WORMSC Call out, ...  Do WORM-SCrawl
               him a line, and you can bite, hook, line, and synch-er.  PGN]
