The RISKS Digest
Volume 7 Issue 95

Friday, 16th December 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Armed with a keyboard and considered dangerous
Rodney Hoffman
Re: Computer Virus Eradication Act of 1988
Les Earnest
Value for money? (Part 2)
Jerry Harper
USAF software contractors score poorly
Henry Spencer
Reasoning about software
Nancy Leveson
Hacking the etymology
Nigel Roberts
[Shattering revelations]
Shatter
Info on RISKS (comp.risks)

Armed with a keyboard and considered dangerous

Rodney Hoffman <Hoffman.es@Xerox.com>
16 Dec 88 08:13:25 PST (Friday)
The 16 Dec 88 'Los Angeles Times' contains this story (excerpts only):

         EX-COMPUTER WHIZ KID HELD ON NEW FRAUD COUNTS
                         By Kim Murphy

  Kevin Mitnick was 17 when he first cracked Pacific Bell's computer 
  system, secretly channeling his computer through a pay phone to 
  alter telephone bills, penetrate other computers and steal $200,000 
  worth of data from a San Francisco corporation.  A Juvenile Court 
  judge at the time sentenced Mitnick to six months in a youth facility....

  [After his release,] his probation officer found that her phone had 
  been disconnected and the phone company had no record of it.  A 
  judge's credit record at TRW Inc. was inexplicably altered.  Police 
  computer files on the case were accessed from outside.... Mitnick 
  fled to Israel.  Upon his return, there were new charges filed in 
  Santa Cruz, accusing Mitnick of stealing software under development 
  by Microport Systems, and federal prosecutors have a judgment showing 
  Mitnick was convicted on the charge.  There is, however, no record 
  of the conviction in Santa Cruz's computer files.

  On Thursday, Mitnick, now 25, was charged in two new criminal complaints
  accusing him of causing $4 million damage to a DEC computer, stealing 
  a highly secret computer security system and gaining access to 
  unauthorized MCI long-distance codes through university computers 
  in L.A. and England.  

  U.S. Magistrate ...took the unusual step of ordering [Mitnick] held 
  without bail, ruling that when armed with a keyboard he posed a danger 
  to the community.  "This thing is so massive, we're just running around 
  trying to figure out what he did," said the prosecutor, an Asst. U.S. 
  Atty.  "This person, we believe, is very, very dangerous, and he needs 
  to be detained and kept away from a computer."  LA and FBI investigators 
  say they are only now beginning to put together a picture of Mitnick 
  and his alleged high-tech escapades.  "He's several levels above what 
  you would characterize as a computer hacker," said Detective James K. 
  Black, head of the LA Police Dept's computer crime unit.  "He started 
  out with a real driving curiosity for computers that went beyond personal
  computers.... He grew with the technology."

  Mitnick is to be arraigned on two counts of computer fraud.  The case 
  is believed to be the first in the nation under a federal law that makes 
  it a crime to gain access to an interstate computer network for criminal
  purposes.... Federal prosecutors also obtained a court order restricting
  Mitnick's telephone calls from jail, fearing he might gain access to a 
  computer over the phone lines....


Re: Computer Virus Eradication Act of 1988

Les Earnest <LES@SAIL.Stanford.EDU>
16 Dec 88 0103 PST
The note from Don Alvarez <boomer@space.mit.edu> in RISKS-7.91 gives the
text of proposed legislation that is intended to inhibit certain kinds of
computer crime.  If you look at it only as a protection against skulduggery
then it looks reasonable, but it also seems to prohibit certain plausible
defensive tactics against software piracy.

Suppose that a software developer wishes to protect his program against
theft and happens to know with certainty that the computing environments
of all customers will have a certain property and that those of thieves
may not have that property.  It would be reasonable to have the program
check for the property and, if it is missing, either self-destruct or
malfunction in subtle ways.  (Admittedly there is some risk in doing this,
given all the crazy things that customers do, but with suitable admonitions
this could be a reasonable defensive tactic.  In fact it has been used
in the past.)
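
For concreteness, here is a minimal sketch in C of such a check.  (It is
purely illustrative, not taken from any real product: the per-site serial
file "property" and the subtle off-by-one misbehavior are invented.)

    /* Hypothetical defensive check: every legitimate customer
     * installation is known to have a serial file at a fixed path;
     * a stolen copy of the binary presumably would not.            */
    #include <stdio.h>

    static int environment_is_legitimate(void)
    {
        FILE *f = fopen("/usr/local/lib/acme/serial", "r");
        if (f == NULL)
            return 0;
        fclose(f);
        return 1;
    }

    /* The program's real work, with the defensive twist applied.  */
    static int compute(int x)
    {
        int result = x * x;
        if (!environment_is_legitimate())
            result += 1;   /* subtle malfunction: answers drift off */
        return result;
    }

    int main(void)
    {
        printf("%d\n", compute(12));
        return 0;
    }

Distributing such a program to paying customers, who do not know of the
insertion, appears to fall squarely under the wording quoted below.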

The proposed legislation reportedly says:
"(a) Whoever knowingly-
  "(1) inserts into a program for a computer information or commands,
  knowing or having reason to believe that such information or commands
  will cause loss to users of a computer on which such program is run
  or to those who rely on information processed on such computer; and
  "(2) provides such a program to others in circumstances in which those
  others do not know of the insertion or its effects; or attempts to do so,
  shall if any such conduct affects interstate or foreign commerce, be fined
  under this title or imprisoned not more than 10 years, or both."

This wording, as it stands, would appear to make defensive programming of the
type described above illegal.  The problem is that it fails to distinguish
between the interests of legitimate users of programs and those who steal them.

    -Les Earnest


Value for money? (Part 2)

Jerry Harper <jharper@euroies.UUCP>
Tue, 13 Dec 88 20:43:11 GMT
A week ago a note of mine appeared here citing an Irish Times report of how
our Department of Health spent approximately $67 million on a medical
informatics system that was substandard in many respects.  A lamentable fact
of the debacle is the Department's dogged refusal to accept the advice of a
range of academics concerning inadequacies in the system.  This little
anecdote will, I hope, impress RISKS readers.

Shortly after the contract had been agreed, one of the management 
consultants favouring the system because of its advanced features
had the temerity to ring one of the opposed academics and ask if they
could recommend a good introduction to medical information systems!


USAF software contractors score poorly

<attcan!utzoo!henry@uunet.UU.NET>
Wed, 14 Dec 88 01:39:45 EST
From the Nov 14 Aviation Week & Space Technology (page 103):

    The [USAF] Electronic Systems Div. has developed a new system
    for Air Force source selection boards to use to evaluate
    contractors' software capabilities.  Using a questionnaire,
    companies are ranked from one to five.  Some 84% of the 178
    contractors checked so far rank at the lowest level, with
    chaotic or unpredictable, poorly controlled processes.  Only
    14% ranked at the second level, meaning they could repeat
    previously mastered tasks.  Two percent met the third level
    with well-understood processes.  The processes for the fourth
    level are defined as well-measured and controlled, and for
    the fifth as optimized.  So far no contractor has ranked
    above the third level.


Reasoning about software

Nancy Leveson <nancy@sablon.ics.uci.edu>
Fri, 16 Dec 88 11:55:44 -0800
I have a somewhat different interpretation of the draft ICAO standard
than Steve.

I originally quoted from a draft standard that included the following:
  > "... [Software] must be developed systematically in such a way that its
  > behavior, under all possible conditions, can be established by logical
  > reasoning (to a level of formality appropriate to the application)."

Steve responded with:
 <> It's my opinion that strict enforcement of the above requirement simply
 <> makes the developer liable for errors, but doesn't do much for actually
 <> improving software reliability.  It is unlikely that "all possible 
 <> conditions" can be for[e]seen, let alone provided for.  The problem becomes 
 <> bigger as the complexity of the system increases, to the point where 
 <> exhaustive analysis of a system could take centuries to perform.

One of the most effective ways to increase reliability is to decrease
complexity.  I have seen safety-critical systems where the developers purposely
simplified their systems to make the above reasoning possible.  The results
were highly reliable.  I believe (and have heard those in the field of formal
verification confirm) that one of the advantages of formally verifying software
is that it encourages simplicity in the software design in order to perform the
necessary logical reasoning.

Reasoning about all conditions is currently required for hardware.  System
safety engineers use techniques such as FMECA (Failure Modes, Effects, and
Criticality Analysis, as mentioned in the standard) to accomplish this.  Should
regulatory agencies relax their standards for the software used to replace this
hardware?  Such hardware analyses currently do find many problems that are
fixed before they can cause an accident.
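
To make that concrete: the heart of an FMECA is arithmetic of roughly the
following form (the figures here are invented for illustration; the form of
the calculation follows MIL-STD-1629A).  For each failure mode of each part,
a criticality number is computed:

    Cm = beta * alpha * lambda_p * t
       = 0.5 * 0.3 * (10 x 10^-6 failures/hr) * (1000 hr)
       = 1.5 x 10^-3

where lambda_p is the part failure rate, alpha is the fraction of the part's
failures occurring in this mode, beta is the conditional probability that the
mode produces the critical end effect, and t is the operating time.  Every
mode of every part receives such a number and a disposition; nothing is left
unexamined.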

Microwave landing systems are used when visibility does not allow the pilot to
land the plane unaided.  Current systems allow landing only when visibility is
at least 200 feet, so the pilot has a chance to abort and go around.  However,
there is now talk of allowing landings where the visibility is zero.
Perhaps we should not be putting trust in these systems if we cannot build them
in such a way that we CAN reason logically about their behavior under all
conditions.

 <> The requirement is essentially that systems be perfect.  That goal has
 <> proven elusive (unattainable?) in all areas of human endeavor.  Extensive
 <> formalism and verification should be required of critical systems, but
 <> requirements for perfect function are inane.  

I don't read the requirement as requiring perfection.  It says that we must
build the software in such a way that we can reason about it under all
conditions, including presumably what happens when there are software errors.
The standards certainly should not imply that failures in such systems are
acceptable.  Would you want a standard involving the safety of commercial
aircraft to require less than perfection?  Extremely high reliability
requirements (e.g., 10^-9 probability of failure over a fixed period of time)
are merely attempts to provide virtual perfection in hardware systems where
failures are random. In fact, it has been written that the FAA 10^-9 figure is
meant to be equivalent to: "is not expected to occur within the total life span
of the whole fleet of the model." [Waterman, "FAA's certification position on
advanced avionics," AIAA Astro. and Aero., May 1978, pp. 49-51]
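
A back-of-the-envelope check shows how the two formulations line up (the
fleet figures here are assumed purely for illustration):

    10^-9 failures/hr x (2000 aircraft x 60,000 flight hours each)
      = 10^-9 x 1.2 x 10^8 hr  ~  0.12 expected occurrences

i.e., less than one expected failure over the life of the entire fleet.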

 <> A better approach would be to require independent performance monitoring 
 <> and evaluation as part of the complete system.  

I agree, but I don't think the standard precludes this; in fact, I read
it as implying the necessity for it.  However, independent performance 
monitoring and evaluation can be flawed and implemented imperfectly also;  
error detection can be quite difficult in many applications.  I would 
feel most comfortable if companies do everything they can to make  
such safety-critical software as good as possible and then provide
safeguards in case they have not been completely successful;  both of 
these things need to be done in order for us to have the maximum confidence 
in our software at our current level of technology. 
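
As a toy illustration of what such a safeguard might look like (it is not
drawn from any real avionics system; the envelope and fallback values are
invented), a deliberately simple, independently written monitor can vet each
output of the complex primary computation.  Because the monitor is simple,
its own behavior under all conditions CAN be reasoned about:

    #include <stdio.h>

    #define CMD_MIN  -30.0   /* assumed safe envelope (invented values) */
    #define CMD_MAX   30.0
    #define CMD_SAFE   0.0   /* assumed safe fallback value             */

    /* Stand-in for the complex primary computation.                    */
    static double primary_command(double input)
    {
        return input * 1.5;  /* placeholder for the real control law    */
    }

    /* Independent monitor: kept trivially simple so that its behavior
     * under all conditions can be established by logical reasoning.    */
    static double monitored_command(double input)
    {
        double cmd = primary_command(input);
        if (cmd < CMD_MIN || cmd > CMD_MAX) {
            fprintf(stderr, "monitor: rejected command %g\n", cmd);
            return CMD_SAFE;  /* fail to a value known to be safe       */
        }
        return cmd;
    }

    int main(void)
    {
        printf("%g\n", monitored_command(10.0));   /* in envelope: 15   */
        printf("%g\n", monitored_command(100.0));  /* rejected: 0       */
        return 0;
    }

Of course, such a monitor detects only out-of-envelope errors; a plausible
but wrong command passes, which is why monitoring complements rather than
replaces careful construction.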


Hacking the etymology

Nigel Roberts, D-8043 Unterfoehring <roberts%untadh.DEC@decwrl.dec.com>
Tue, 13 Dec 88 07:00:26 PST
The recent discussions of the etymology of the terms "hacker", "cracker", 
_et al_, and the recent spirited defence of the activity by one or two 
contributors (at least one of them a self-confessed "hacker") have set 
me to thinking.

In RISKS & elsewhere, I see a "generation gap" between what, for want of a 
better term, I would describe as the "old-time hackers", who were experimenters,
and the current cyberpunks, the "hackers" of popular mediaspeak, the 
eponymous "shatterers". 

I think this apparent generation gap is fundamental to the discussion. 

The "old-style hackers" (of whom I am vain enough to claim I belong) learned
their computing in the 60s and 70s, often in a university or similar multi-
user environment, where, as often as not, hacking involved programming.

Today's stainless steel rats are much more likely to have discovered 
computers in the home, courtesy of Apple, Commodore or IBM, and started their
"network tourist" activities by purchasing a modem.

The old school (& I include myself here) resents the way the term "hacker" 
has been hi-jacked and is today used to connote anti-social activity, 
despite the ambiguous roots of the term (described by Weizenbaum
in _Computer Power & Human Reason_).

Today's cyberpunks are computer burglars, who are attempting to justify their 
activities by claiming a common motivation with their arguably less anti-social 
predecessors.

As in any story of generational conflict, there are elements of truth in the
claims of both sides.

It is going to be impossible to prevent the media from using the word "hacker"
in a way the "old school" dislikes.  It would almost be easier to insist that
the word "gay" still meant "happy, carefree".

But maybe the media and the collective unconscious understand the evolution 
of hackerism better than we do.

For just as there is at least a tiny thread of commonality with the hackers
of old in the network rats of the 80s, so there is some small element of
today's network rats in the hackers of old.

But of course, there IS a distinction between hacking around a system whose 
sole reason for being is to teach people about computers, and hacking into
systems which are used for serious business purposes and where outsiders
have no right to be. 

That difference is ethical, and it has been well expounded here in RISKS
already.

Seeing as we can't get rid of "hackers" in the popular media, I would like 
to coin the term "punk hackers" (an abbreviation of 'cyberpunk hackers')
to describe their anti-social activities.

It seems to fit only too well, just as "punk rock" is rock music with 
swearing & spitting at the audience. 

And using it would let us "old hackers" keep our self-respect!

    Nigel Roberts,      Munich, W. Germany.


[Shattering revelations]

Shatter <unido!altger!Shatter@uunet.UU.NET>
15 Dec 88 04:58:42 MEZ (Thu)
First of all I would like to thank all the people who gave me feedback on my
previous contribution to RISKS; it has on the whole been quite positive :-)
[You will by now have gathered that I have gone legit, as I am too well known
to continue with active hacking and will have to make do with the odd foray
into the net on high days and holidays.]  But there has been at least one
recent contributor who does not seem to get the point I was trying to make,
and as my last effort was knocked up in 10 minutes, I have decided to put a
bit more effort into this one.

My previous article [if you can call it that] was not trying to justify
anything; it was written to point out a major flaw in the IT community, one
that had better show some signs of being rectified in the near future, or
more serious attacks on networks such as the Internet will no doubt occur.

The contributor who compared modern-day hackers to the punk rock musicians
of the 70's obviously has not spent time within the hacker community in the
last 10 to 20 years; if he had, he would realise that its sense of ethics
and morality is as strong as in his day, if not stronger.  His assumption is
like saying that all black male teenagers are muggers, rapists and murderers.
[But I wander yet again.]  And I would like to ask him: am I any less of a
caring, moral and intelligent human being because I learned my craft on a
home micro, a network of Tandy Model 80s, and a modem I made myself?  What I
think we have witnessed in recent issues of RISKS is a kind of computer
snobbery that does little to promote the spirit of goodwill and intellectual
exchange that should exist within our community [for all our sakes].

Comments have been made that hackers of today do not inform the owners of
systems of the holes that exist, and in some instances that is true.  But I
ask you: when those of you who claim to be 'old-time hackers' found a
possible security breach on a machine, did you immediately go running,
printout in hand, to the owner of the system?  I think not; the temptation
to explore just that little bit further is too great.  And in some cases the
administrator is rude and often downright abusive when a security hole is
brought to his attention [sorry, I am not being sexist; the masculine gender
is used here to mean mankind in general, not just the male sex], which is
often the case on commercial sites [an experience I have had myself].

To finish this "article" off I will just make the following points:
1. Can we please have less of this snobbery.
2. Work with the hacking community as much as possible; we will both gain
   from the experience [offer an incentive if necessary: an account that is
   open to all but only usable at night, with say a MUD on it, or even
   MONEY :-) ].
3. Work with each other.

And finally, if anyone needs help with anything you think I can help with,
mail me at ...!unido!altger!Shatter and I will see what I can do.
                                        Shatter
