The RISKS Digest
Volume 12 Issue 67

Monday, 2nd December 1991

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Computer Delays cost Hospital over £300,000
Paul Leyland
A RISK of dishonestly using a visible password
Paul Leyland
Sprint Voice Calling Card uses SS#
Lauren Weinstein
Bright AT&T billing sys?
Thomson Kuhn
`Contractor queries data security'
Matthew Farwell
Proposed traffic congestion charging system, Cambridge UK
Hugo Tyson
Mailing lists - a right royal mistake
Dave Horsfall
Re: Leaves, trains, and computers
Peter Mellor
Re: Proposed Antivirus Certification
David A. Honig
Re: Employee Termination
anonymous
Bill Murray
Re: Pentagon computers vulnerable
Brinton Cooper
Re: Risks of hardcoded hex instead of symbolic constants?
Bob Frankston
Bennet Yee
Graham Toal
Brandon S. Allbery
Paul S. Miner
Info on RISKS (comp.risks)

Computer Delays cost Hospital over £300,000

Paul Leyland <pcl@oxford.ac.uk>
Tue, 26 Nov 91 17:57:26 GMT
_The Health Service Journal_, 12 September, 1991.

     Nottingham splashes out £300,000 to bridge HISS gap

Nottingham City Hospital has been forced to spend more than £300,000 on
a stopgap computer system because of delays to its wide-ranging hospital
information support system (HISS).

The hospital is one of the pilot sites selected by the Department of Health to
test the HISS concept, which involves computerising almost every aspect of
hospital operation at a cost of millions of pounds.  But clinical directors at
the hospital have said that they cannot wait until the HISS is fully installed,
according to HISS project manager Andy Norman.

The hospital is spending the cash on a case-mix system from ACT Medisys, which
will collect and sift data from existing systems for costing, audit and other
purposes.  Installation of the case-mix software and hardware has already
started.  In contrast the HISS, which is being part-funded by the DoH, is
unlikely to be fully installed for two or three years.

Even by NHS [National Health Service — pcl] standards the purchase of the HISS
for Nottingham has been protracted.  Nottingham's HISS will require a
substantial amount of programming work, unlike previous HISS projects which
were largely based around existing packages, often already in use in the US.
The project will be based around a detailed abstract description of how the NHS
operates known as the common basic specification.

The contract was supposed to have been awarded at the end of last year.  Mr
Norman said last week that the contract would be awarded by the end of October.
IBM has recently quit chasing the contract, saying that the two-year bidding
process had wasted too many resources.


A RISK of dishonestly using a visible password

Paul Leyland <pcl@oxford.ac.uk>
Tue, 26 Nov 91 17:44:27 GMT
_The Health Service Journal_, 12 September 1991.

A 21-year-old supplies clerk with Berkshire County Council has been jailed for
two years after stealing £120,000, using the council's computers.  A
senior manager had left the password to the payments system by the computer
screen.
          [Yet another example of password insecurity.  There is no record of
          what sanctions, if any, were taken against the manager — pcl]


Sprint Voice Calling Card uses SS#

Lauren Weinstein <lauren@vortex.com>
Thu, 28 Nov 91 12:22:19 PST
Just a quick note to mention that Sprint is apparently using customers' SS#s as
the main portion of their experimental voice-activated calling card system.
While Sprint claims this isn't a problem, since the system is only supposed to
respond to the caller's own voice (I suppose time will tell how well this
system really works!), the problems of people overhearing your SS#, and then
using it for other non-calling-card purposes, are obvious.

I don't know at this time if Sprint plans to continue using SS#s after their
system passes beyond the experimental stage, but it wouldn't surprise me, given
their lack of concern over customer privacy in the past.  By the way, I'm still
arguing with them about their system that allows anyone to interrogate account
balances using nothing but the 10-digit telephone number--no passcodes, no
controls, and no way for customers to "opt-out" of the system.  I'll report
back if anything changes in this area...
                                                   --Lauren--


Bright AT&T billing sys?

Thomson Kuhn <70007.5444@compuserve.com>
01 Dec 91 14:49:06 EST
Recently I opened my phone bill and found it to be five times its normal size
(in both dollars and pages!).  Looking over the 50+ pages of charges and
remembering a recent _60_Minutes_ program, it became clear to me that someone
had gotten hold of my AT&T calling card number and passed it to friends and
relatives all over the American Hemisphere.  The best part of the experience
was a note from the AT&T billing system which followed nine pages of charges to
(and from) places I have never been or called:

"*After analyzing your AT&T long distance calls on this bill, we find you could
have saved money with the AT&T Reach Out America Plan with the AT&T calling
card discount for your direct-dialed out-of-state calls..."

Thomson Kuhn, American College of Physicians      70007.5444@compuserve.com


`Contractor queries data security'

Matthew Farwell <dylan@ibmpcug.co.uk>
29 Nov 91 02:36:14 GMT (Fri)
Computer Weekly, 28 Nov 91

A complaint to the data protection registrar has raised the issue of whether
address lists compiled by contract staff agencies which then go bust can be
sold to other companies.

Computer contractor Ian Dallison has complained after the Employment
Department told him that a regulation stating that agencies can only pass on
information when finding a person a job does not apply if an agency goes bust.

Dallison first wrote to the data protection registrar's office and to the
Employment Department in the summer after being contacted by two agencies and a
timeshare company which had bought the address list of a bankrupt agency from
the liquidator.

The Employment Department's Employment Agency Licensing Office has only just
come back with its negative reply - and Dallison is now pursuing the matter
with assistant data protection registrar John Lamidey.  Lamidey says this
issue arises in the insurance business when a small broker goes out of business
and another firm takes up its clients.

But in Dallison's case the relationship between the individuals and the new
owners is different.

"The Data Protection Act says you have to tell people what you intend to do
with the personal information when you collect it - but you can't predict that
you'll go out of business and the list will be sold," Lamidey says. "This
circumstance probably wasn't thought of when the Act was drawn up."

One point raised by Lamidey is that a liquidator takes control of a company
and in effect becomes the owner of the data and therefore legally responsible
for it. He is considering where this leaves the liquidator in cases like
Dallison's.

Dylan.                   dylan@ibmpcug.co.uk || ...!uunet!uknet!ibmpcug!dylan


Proposed traffic congestion charging system, Cambridge UK

Hugo Tyson <hugo@harlqn.co.uk>
Fri, 29 Nov 91 16:43:49 GMT
This is from memory of what I've read in the local papers and an article
on the BBC2 motoring programme "Top Gear".  I live in Cambridge so I do
have an interest in preventing this lunacy.  Objective from now on....

    CAMBRIDGE, England:
    This ancient university city suffers from bad traffic congestion during
most of the day, and a risky solution has been proposed.  All cars (vehicles?)
registered within 15 or 20 miles of the city will be fitted with a box,
connected to the speedometer (presumably) and the ignition system, which has a
slot in it for a phonecard-like card.  The box is enabled and disabled by
microwave transmitters on the 7 roads in and out of the city.  While the box is
enabled, so the proposal goes, if you travel less than some small distance in a
certain time (I think it was the order of 300 metres in 30 seconds) you are
deemed to be, and to be causing, congestion, and your special card, which is
in the slot in the box on the dashboard, will have its credits debited.  If the
card runs out, you are allowed a short way into debt on the card, and then the
engine cuts out (whether this is until you are no longer "congested" or not is
unclear).  You can get your card "recharged", or buy a new card (?) at machines
on street corners, post offices and the like, by handing over money.  Visitors
will be directed by signs to one of a number of ticket machines where a
"day-pass" can be bought for a fixed fee.

The idea is that this charging will cause people not to travel at times of
congestion to avoid paying the charges needed to keep their vehicles going
at these times, thus reducing the congestion.

This is how it is different from other "road pricing" schemes - it only
charges if you travel in, and thus cause, congestion.
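
To make the proposal concrete, here is a conjectural sketch, in C, of the
charging rule as I understand it.  Only the 300-metre/30-second figure and
the small permitted overdraft come from the reports; every name, unit and
limit below is my own invention:

    #include <stdbool.h>
    #include <stdio.h>

    #define WINDOW_METRES_MIN 300  /* less than this per 30 s => congestion */
    #define DEBT_LIMIT          5  /* conjectured overdraft before cut-out  */

    struct box {
        bool enabled;      /* set/cleared by the roadside microwave beacons */
        int  card_credit;  /* may go slightly negative                      */
    };

    /* Called once per 30-second window with the distance covered in it.
       Returns false when the engine should be cut. */
    static bool window_tick(struct box *b, int metres)
    {
        if (b->enabled && metres < WINDOW_METRES_MIN)
            b->card_credit--;            /* deemed to be causing congestion */
        return b->card_credit > -DEBT_LIMIT;
    }

    int main(void)
    {
        struct box b = { true, 2 };      /* nearly empty card, in the city */
        for (int w = 0; w < 10; w++)     /* ten windows stuck in a jam     */
            if (!window_tick(&b, 50)) { puts("engine cuts out"); break; }
        return 0;
    }

Even this toy version shows how much policy - the debt limit, and what
happens after a cut-out - ends up baked into the box.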

There are many risks here - I present some in no particular order:

 * if the system is expensive enough to be a deterrent to travelling
during congested periods people will disconnect the box - it can't be
hard.  If it is cheap enough that they won't do this, it won't be a
deterrent, and will thus only be a small income source.
 * companies with offices in the city may have to pay the charges to
attract employees - thus the deterrent value disappears.
 * people in traffic jams will stop the engine for 30 seconds until there
is a large gap in front then speed down it and stop the engine again to
avoid the charge, unless the system detects this, leading to more
congestion behind these people.
 * visitors to the city pay a fixed fee - there is no deterrent for them,
and unless there are _many_ spot checks no reason to buy the pass at all.
 * immobilised vehicles will cause more congestion, unless a rapid removal
service exists - and how does that get through?
 * what if the box breaks?  And what if I break it?  This is very
difficult to police.  The implications for my car for example are complex
too, as it is owned by a company 200 miles away and leased to my employer.
Maybe I will count as a visitor, but as I live in the city I'd not enjoy
having to pay a daily visitors' fee.
 * microwave transmitters on routes in and out of the city.  Most of these
are two-lane roads, one in either direction.  Can transmitters be made
directional enough to only get cars in the one lane - or travelling in one
direction?  Or will the box simply toggle its state on exposure to the
signal?  This is very unsafe: suppose it doesn't turn off and your engine
then cuts out in London, where you can't recharge your card?  Will Cambridge
City pay your parking/other fines and costs?
 * car repairs - often require the car to sit in my drive or in a garage
with the engine running, not moving.  The system would charge me for this.
 * speedometer cable failure is not uncommon on older cars.  It is illegal
to drive a car like this, because the total mileage clock isn't
incrementing (and you can't tell your speed).  But the box would think
you're always stationary and charge you on top of any other trouble you
get on the way directly to the car spares shop for a new cable. ;-)
 * all the other risks associated with cards that contain money, and
adding a system capable of cutting the engine to a car.

Only some of these are computer or sensor failure risks - the others are system
design risks.  But the more special cases you put in to handle these other
risks the more complex and failure prone the computer in the box becomes.  For
example (conjecture): the box stays "on" for at most one hour regardless of
whether it sees a turn-off signal, and turn-on repeaters around the city
interior fix the non-turn-off problem.  Maybe.  And so on.

More conjecture:
  The only way I can see to make this safe (safer) is to supply a pass-card or
key to everyone as well, which allows you to progress for free, but if you are
caught using it _on the city streets_ you get fined.  This would require spot
checks to police it, and it must not be trivial to change the card in the slot,
but the slot and the display on the box must be visible through the window.

Reality:
  Politically active friends do not believe that this will be implemented, for
various reasons, one being that it would annoy too many voters.  I believe the
same.  However it is worrying that such a dangerous system is being seriously
studied, when straightforward tollbooths with time dependent charges would do
the same job IMHO.

Hugo Tyson, Harlequin Limited, Barrington Hall, Barrington, Cambridge, CB2 5RG
England;    Tel.  (UK) 0223 872522  (International) +44 223 872522


Mailing lists - a right royal mistake

Dave Horsfall <dave@ips.oz.au>
Tue, 3 Dec 1991 10:23:53 +1100
Taken from "Column 8" in the "Sydney Morning Herald", 2nd Dec 91:

``Queen Elizabeth II Research Institute for Mothers and Infants is a
  section of the University of Sydney's Faculty of Medicine.  In the
  best traditions of a computer mailing list gone berserk, it received
  an invitation the other day to join the New York Academy of Sciences.
  It began: "Dear Queen Elizabeth, It is my pleasure, indeed, to extend
  to you this invitation to membership..."''


Re: Leaves, trains, and computers

p mellor <pm@cs.city.ac.uk>
Thu, 14 Nov 91 11:33:05 GMT
Further to the item by Graeme Tozer in RISKS-12.62, the official explanation of
why leaves delay trains, contained in the leaflet recently distributed by
Network Southeast to its commuters, is that the effect is mechanical. Wheels
slip on the rails, or lock during braking, causing overheating due to friction
resulting in cracking, or wearing flat spots on the circumference. Damaged
wheels need to be replaced or repaired, hence available rolling stock is
depleted, hence delays.

No mention of computers. This is odd, because I cannot remember such disruption
being caused by leaves in any previous year.

Snow is a different matter, particularly the "wrong kind" of snow - the fine
powdery stuff that gets into brake units. Perhaps we have the "wrong kind" of
leaves this year! :-)

Peter Mellor, Centre for Software Reliability, City University, Northampton
Sq., London EC1V 0HB +44(0)71-253-4399 Ext. 4162/3/1 JANET p.mellor@uk.ac.city


Re: Proposed Antivirus Certification (Brunnstein, RISKS-12.66)

"David A. Honig" <honig@broadway.ICS.UCI.EDU>
Fri, 29 Nov 91 22:53:25 -0800
A few comments on Dr. Brunnstein et al.'s proposal to create a bureaucracy to
manage antiviral certification:

Some aspects of the proposal are beneficial, e.g., creation of an
organization that evaluates antiviral products.  "Consumer Reports"-style
journals are useful as long as they are accurate.  They make the marketplace
more efficient by reducing the cost of obtaining information.

But much of the proposal is stifling. For instance, the creation of a
"certification" that one is a "trusted party" creates what the military calls a
"security clearance".  A result will be conferences with closed-doors.  Should
we licence owners of tech manuals too?

The concern about "(2) ensuring that decisions restricting the flow of
knowledge of details of malware do not result in undesirable side-effects" is
mentioned but not discussed at length.  Indeed, some people believe that
"security through secrecy" is fundamentally flawed.  Yet many aspects of the
proposal have precisely that problem.

In sum, the creation of a software testing house specializing in anti-malware
is a good research topic and a useful idea; the creation of an
academic/industrial "trustworthy" clearance is a dangerous one.  Instead of
secrecy, we should have dissemination of both caveats and solutions to security
problems.
                                      David Honig


Re: Employee Termination (RISKS-12.65,66)

<[anonymous]>
Sat, 30 Nov 1991 13:19:22 -0500
    [Previous poster describes firing practices implemented to prevent computer
    sabotage by people who were just fired.]

This also shows that the management is not very confident in their backups.
Then again, does the fact that my site has very reliable backups make it easier
to fire me?


Re: Computer-related Risk of Employee Termination (RISKS-12.66)

<WHMurray@DOCKMASTER.NCSC.MIL>
Thu, 28 Nov 91 22:49 EST
As I advise my clients, terminations should be timely and complete.  What
constitutes timely and complete is a function of the nature of the termination,
the role of the employee, the residual relationship, and the culture of the
institution.

If the termination is hostile, timely means immediate and complete means
that all privileges and tokens of privilege are collected or revoked
immediately.  This includes keys, identification, signature cards, and
logon IDs.  This often means that separation pay is given in lieu of
notice.

In the case of mass layoffs, a presumption of some hostility [must exist.  ???]

Sometimes, even in the case of voluntary termination, for example when the
employee gives notice of her intent to leave, the sensitivity of the role may
be such that timely means immediate and pay in lieu of notice is indicated.
For example, some organizations do not want people who have given notice to
continue in management roles.  Personally, I would not want those who have
given notice to continue to function as operators, system or security
administrators, or system or application programmers.

On the other hand, senior employees with significant reputations to protect may
be considered safe.  It is not uncommon to provide such employees with office
privileges to facilitate finding a new job.

Likewise, those employees to whom large sums of money are payable over time are
usually safe.  Retirees are not likely to put their retirements at risk by
taking a parting shot.  Many organizations give permanent credentials to their
retirees.  Most will provide offices to retired long-tenure founders or even
CEOs.

Finally the culture of the institution may influence what constitutes timely.
Some institutions or industries, as a matter of practice, do not offer
long-tenure employment; employees there do not expect it.  The only question
about termination is when, not if.  These organizations enjoy a reputation of
"friendly" terminations and often maintain mutually beneficial relations with
their "alumni" for decades.  Here again, timely means less than immediate.

All but the most amicable separations involve some risk.  Computers may
aggravate this risk to the extent that they empower individuals, blur the lines
between what belongs to the institution and that which belongs to the
individual, mask the consequences of the user's actions from him, are so
attractive that the individual is reluctant to be separated from them, or make
us dependent upon the special knowledge of one or two individuals.

The first risk is the one that most concerns management.  With a few
key-strokes, the terminated employee might be able to wipe out or erase a great
deal of information very quickly.  Likewise he might be able to create a trap
door that would make it impossible to exclude him.  Management lacks confidence
in the effectiveness of the controls that it has over the behavior of the
system.

The risk of the exercise of power by the separated individual may be aggravated
by the tendency of the computer to distance the user from the consequences of
his acts.  For example, an employee whose personal controls might not permit
him to set fire to the files might easily be able to erase them.

I still have a diskette marked "VM Files" that contains data that I down-loaded
from "my" VM system on the occasion of my retirement from IBM.  This diskette
contains a copy of my personal telephone directory, as well as copies of
several papers that I wrote while a user of that system.  I am satisfied that I
have sufficient rights in that data, and that after I left, they were simply
erased by the system managers.  Of course I honored my employment agreement
that required that I not disclose any IBM Confidential data for one year after
my retirement.  Nonetheless, my own separation illustrates many of the
conflicts that might arise between the rights of the institution and those of
the individual.

I also remember that one of the most difficult things for me to part with upon
my retirement was access to that system and the network that I accessed through
it.  It has taken me years to replace it.  I continued to use it for almost a
month after my termination until my account was finally revoked.  I can easily
sympathize with the anxiety of a suddenly terminated employee who can no longer
access "his" system and "his" data.  I can also sympathize with the concern of
management that a terminated employee might steal their data.

Finally, many institutions are dependent upon the special knowledge of a few
individuals, mostly programmers, whose untimely separation might deprive the
organization of knowledge that they require to properly manage their systems.
Many managers would feel prevented from immediately separating such people who
gave notice of their intent to leave.

Conversely, the risk of termination can be reduced by computer controls that
involve multiple people in sensitive duties, clarify the division of rights
between the institution and the individual, make the effects of computer
operations explicit, or which reduce the dependence of the institution on the
special knowledge of individuals by encapsulating that special knowledge within
the system.

It should be noted that when management errs on the safe side in terminations
they tend to embarrass both the separated employees and themselves; they may
look both paranoid and insensitive.  On the other hand, if they err in the
direction of risk and something goes wrong, they will appear to be imprudent.
Few managers will always, or even ever, walk this difficult line to the
satisfaction of everyone.

When few employees used computers in the course of their jobs, those employees
could be treated differently on separation than others.  When all employees use
computers, the capability for orderly separation will require that we control
computers in a more appropriate manner in the normal course of events.

William Hugh Murray, Executive Consultant, Information System Security
21 Locust Avenue, Suite 2D, New Canaan, Connecticut 06840         203 966 4769


Re: Pentagon computers vulnerable (RISKS-12.66)

Brinton Cooper <abc@BRL.MIL>
Mon, 2 Dec 91 10:21:40 EST
... The report cited recent "penetrations" of "U.S. military computers" at "the
Pentagon" during the Gulf War.  I heard this report, originally, on NPR and
continue to have questions:

    1. Were the computers really at "the Pentagon?"
    2. If not, where were they?
    3. Was classified information compromised?
    4. If not, what sort of information was compromised?

    The press might consider the computer from which this note is posted as
a "Pentagon" computer because it is owned and operated by the US Army.  My data
files might be reported by a naive reporter as containing "military"
information.  In fact, they contain information on information theory,
algebraic coding theory, decoding, and associated bibliographies.

    Apart from the slightly sensational aspects of reporting "breaking into
Pentagon computers," the article talks about how hackers can cover their
tracks, appearing to have been anywhere in the world other than where they
actually were at the time of hacking.  Such discussions could be cited as
evidence that tracing of the access path to Internet computers should be
performed.  This, in turn, could easily lead to exactly the same arguments seen
here and in other forums (fora?) about telephone privacy vis a vis Calling
Number ID.  Is history about to repeat itself (again)?
                                                            _Brint


Re: Risks of hardcoded hexadecimal instead of symbolic constants?

<frankston!Bob_Frankston@world.std.com>
30 Nov 1991 09:52 -0400
Ultimately there is data somewhere deep in the bowels of a system. A 6 vs D
could easily have been a data error in a table.  Or it could have been in the
definition of a symbolic constant.  Giving a value a name doesn't make it
correct and might even obscure errors.  Even worse, errors in error paths are
very difficult to check when they only show up in system-wide interactions in a
very big system.  It is amazing how well systems work despite serious errors
until a particular set of conditions arise.

I'm sympathetic to approaches to minimize errors, such as using closed-loop
systems, redundancy, etc., but I'm afraid of people making the assumption that
perfection is achievable.  The challenge is to make the systems resilient
though not perfect. In something like the SS7 collapse the question is not
whether we can discover the bugs beforehand, but that the system is so
complicated that there were no firewalls to limit the collapse.

The two issues are related.  If we expect failure then we should design
firewalls independent of the complex failure recovery modes of the system.  Of
course, this too is ideal since both the system design and the firewall design
might suffer from the same systemic assumptions.

One product design I did involved dialup communications with two levels of
protocols.  I made the assumption that the recovery approach for any nontrivial
error was to hang up the phone.  Partly this was because I didn't want to
spend limited RAM and programming resources, but also because I didn't see the
point of using complicated algorithms when a simpler approach would work.
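
Purely as an illustration - this is not the actual product code, and the
error names and retry policy are invented - the shape of that decision is
roughly:

    #include <stdio.h>

    enum err { ERR_NONE, ERR_CRC, ERR_TIMEOUT, ERR_INTERNAL };

    static void resend_last_frame(void) { puts("resend last frame"); }
    static void hang_up(void)   { puts("hang up; reset both layers"); }

    /* Any nontrivial error takes the single, well-tested exit. */
    static void recover(enum err e)
    {
        switch (e) {
        case ERR_NONE: return;
        case ERR_CRC:  resend_last_frame(); return; /* the one cheap retry */
        default:       hang_up();           return; /* the firewall        */
        }
    }

    int main(void) { recover(ERR_TIMEOUT); return 0; }

The point is that the firewall path is simple enough to test exhaustively,
which per-error repair logic at each layer never would be.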

Since I don't know anything more about the SS7 collapse than the "6" vs "d"
(more likely than "D"; a good example of how newspapers can mislead with the
most innocuous of changes), none of this might apply.


Re: Risks of hardcoded hexadecimal...? (RISKS-12.66)

<Bennet_Yee@PLAY.MACH.CS.CMU.EDU>
Thu, 28 Nov 91 03:01:10 EST
I fail to see how such symbolic constants can be defined other than in terms of
a hexadecimal (or binary or ...) constant or other symbolic constant(s).  You
still have to have constants somewhere, even if it's only zero and the
successor function. :-)

In any case, the typographic error could just as well have been in the
definition of the symbolic constant.  Symbolic names may well help, but are no
panacea.
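
For instance (a hypothetical fragment, not anyone's actual code), the typo
survives the symbolic name perfectly well:

    #include <stdio.h>

    /* Hypothetical: suppose the intended value was 0xD6.  The name is
       symbolic, but the slip is in its definition and compiles cleanly. */
    #define LINK_RESET_CODE 0x6D

    int main(void)
    {
        printf("reset code = 0x%X\n", LINK_RESET_CODE);   /* prints 0x6D */
        return 0;
    }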

It's not really fair to be jumping to conclusions about the style of DSC
software.


Re: Risks of hardcoded hexadecimal...? (RISKS-12.66)

Graham Toal <gtoal@gem.stack.urc.tue.nl>
1 Dec 91 02:25:56 GMT
I would *love* to see the actual line of code.  Is there any chance of getting
it out of them?  I don't see how someone could accidentally type D for 6 or
vice-versa - too far apart.  I wonder if somehow or other this code was scanned
in - or (HHOS) typed in by a 'coder' like in the old days from a 'coding
sheet'? :-)

Just using symbolic constants to hide your typing mistakes in another file
isn't much of an improvement by the way.  NASA-style red/black tiger teams
might help a little, but I'm not sure what else would.  From what I've heard of
the state of the formal methods art, things haven't improved much since when I
was a student in the seventies...


Re: Risks of hardcoded hexadecimal ... (RISKS-12.66)

Brandon S. Allbery KF8NH <allbery@ncoast.org>
Sun, 1 Dec 91 10:33:31 -0500
I've had at least one bug creep into a program despite such care:  I was
careful to use symbolic constants even if I only used the constant once...
then proceeded to insert a typo into the declaration of the constant.

Don't make unwarranted assumptions.  That's a RISK in itself.

Brandon S. Allbery, KF8NH [44.70.4.88]        allbery@NCoast.ORG


Re: Risks of hardcoded hexadecimal ... (RISKS-12.66)

Paul S. Miner <psm@air16.larc.nasa.gov>
Mon, 2 Dec 91 09:50:42 -0500
Actually the conclusion that the data was HEX is not inevitable; the difference
between the binary representations of ``d'' and ``6'' in ASCII is three bits
(just as the difference between a ``6'' and ``d'' in HEX is three bits).  Thus,
the comments about the use of ``cryptic hexadecimal constants'' are not
necessarily relevant to this problem.
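
The arithmetic is easy to check with a throwaway program (nothing below is
from the switch software itself):

    #include <stdio.h>

    /* Count the one-bits in x (the Hamming weight). */
    static int bits_set(unsigned x)
    {
        int n = 0;
        for (; x; x >>= 1)
            n += x & 1;
        return n;
    }

    int main(void)
    {
        printf("hex    6 ^ d  : %d bits\n", bits_set(0x6 ^ 0xD));  /* 3 */
        printf("ASCII '6'^'d' : %d bits\n", bits_set('6' ^ 'd'));  /* 3 */
        return 0;
    }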

Paul S. Miner, 1 Gregg Road / Mail Stop 130, NASA Langley Research Center
Hampton, Virginia 23665-5225
