The RISKS Digest
Volume 11 Issue 04

Thursday, 7th February 1991

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Subway door accidents
Mark Brader
"Virus" destroys part of Mass. state budget plan
Adam M Gaffin
Reinterpretation of term "Computer Security"
Frank Dixon
SSN and Schwab Telebroker RISKS
Curtis Jackson
Re: Inquiry into cash machine fraud
John Sloan
Re: the new California licenses
Mark Jackson
Mark Gabriele
No more quick n' easy account info from Fidelity
Carol Springs
B.J. Herbison
Carl M. Kadie
Re: Electronic cash completely replacing cash
Lee S. Ridgway
Re: Predicting System Reliability...
Brad L. Knowles
Jeff Johnson
Electronic telephone directory
Jan Talmon
Info on RISKS (comp.risks)

Subway door accidents (Pete Mellor, RISKS-11.03)

Mark Brader <msb@sq.com>
Thu, 7 Feb 1991 14:07:00 -0500
Here in Toronto, the subway staff can open one door on each side of each car,
but they need a key to do so.  There is an interlock so that any door open more
than 1-2 cm prevents acceleration from being applied, and opening the door with
the staff key apparently does not override the interlock.  (At least, the usual
indicator lights do come on.)  All of this is exactly the way I would expect
sensibly run systems everywhere to do it.  So let's hear about the
counterexamples...

It *is* possible for a train to run with a door open, in case the door fails;
to do so requires the operation of an override control concealed under the
passenger seats.  In that case the door-open warnings do not light.  On using
the override, the crew place large "Danger - Do Not Use" banners across the
doorway.  This too seems entirely sensible.
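
A minimal sketch of the interlock logic as described (Python, with all names
and the exact threshold assumed for illustration, not taken from any real
transit system):

    OPEN_THRESHOLD_CM = 2  # interlock trips somewhere around 1-2 cm

    def traction_permitted(doors):
        # doors: one (gap_cm, overridden) pair per door.  Acceleration is
        # allowed only if every door is closed, or has been explicitly
        # overridden by the crew via the concealed control (failed-door
        # case, with warning banners posted across the doorway).
        return all(overridden or gap_cm < OPEN_THRESHOLD_CM
                   for gap_cm, overridden in doors)

Note that on this model, opening a door with the staff key does not bypass
the check, which is exactly the behavior reported above.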

The open-door interlock is not sufficient to prevent accidents of the type
described above.  The closing doors can trap things like a coat sleeve or a
trailing purse, and people don't always react properly.  There was a fatal
accident here last year which may have been of this type (the exact cause was
not proved); it resulted in a long inquest and a lot of unfavorable publicity
for the transit system; and as a result of this, we now have what seem to me to
be annoyingly paternalistic advertisements, basically telling us that it's
always better to wait for the next train than to try to catch one that's just
leaving.

The operating practice here is to start the train the instant all doors are
closed — often the driver sets the control to accelerate as they start
closing, and lets the interlock start the train when they finish.  On a past
visit to New York, on the other hand, I noticed that there they wait a couple
of seconds after closing the doors and before starting the train.  And as I
recall London does the same.  I then read about an accident in New York where
someone had tried to force the doors open during those seconds, and gotten
caught and killed.  My conclusions were that New York does not have a
door/power interlock (though I suppose it might be that the fool merely managed
to not trigger it) and that it is actually safer to start away at once because
it reduces the temptation to those who would try this sort of thing.

Mark Brader      SoftQuad Inc., Toronto     utzoo!sq!msb, msb@sq.com


"Virus" destroys part of Mass. state budget plan

Adam M Gaffin <adamg@world.std.com>
Thu, 7 Feb 1991 17:36:00 GMT
Reason 4,012 to back up documents: A case study

Middlesex News, Framingham, Mass, 2/7/91

By Adam Gaffin
NEWS STAFF WRITER
     BOSTON - State officials say a computer virus destroyed 50 pages
of Gov. Weld's budget proposal earlier this week, but a computer
consultant with experience in fighting the bugs says it sounds more
like a case of inadequate maintenance than anything sinister.
     Michael Sentance of Maynard, a legislative aide to Weld, had typed
in 50 pages of the governor's proposed budget on a Macintosh computer
when he tried saving the document to the machine's hard drive around 3
a.m. on Monday - only a few hours before it was due to be submitted to
the Legislature.
     But instead of being saved, the document disappeared, according to
Liz Lattimore, a Weld spokeswoman. Sentance was eventually able to
retrieve an earlier draft, filed under a different name, minus the 50
pages, she said. Because of the snafu, Weld was forced to delay
submitting the plan by a day.
     When Sentance ran a program to check for the presence of viruses
on the machine, it responded with a message indicating a ``type 003
TOPS network'' virus, Lattimore said. TOPS is the name of the network
used by the Executive Office of Administration and Finance to connect
its Macintoshes.
     Sentance had borrowed one of that office's computers because he
was more familiar with Macs than with the older Wang system in the
governor's suite, Lattimore said.
     Viruses are small programs that can take control of a computer's
operating system and destroy other programs and data, and can be spread
through people unwittingly sharing ``infected'' programs or disks.
     Lattimore said officials managed to transfer data from the ailing
computer to another machine, adding that they are now checking all of
Administration and Finance's Macintosh computers for possible
infection.
     But Eileen Hoffman of Needham, a Macintosh consultant, says what
happened to Sentance sounds more like a hard-drive ``crash'' than a
virus - and a crash, she said, is potentially far more destructive.
     A document that disappears when the user tries to save it onto the
hard drive usually means there is something physically wrong with the
computer's hard drive, not that it is under viral attack, Hoffman said.
     Hoffman, who keeps three or four infected disks in a safe so that
she can test new anti-viral software, said the software that runs TOPS
networks is written in such a way that it can show up as a ``virus'' in
programs that check for viruses. She said a ``Type 003'' virus is one
of these phantom ``sneak'' viruses.
     Hoffman said Macintosh users are often more lax about maintaining
their computer's hard drives than users of IBM compatible machines,
because Macintoshes are aimed at people who do not want to have
anything to do with the hardware of their machines. The Macintoshes
were installed during the Dukakis administration.
     But even Mac hard drives require regular maintenance, she said.
She said she often gets calls from clients who blame disappearing data
or strange things on their screens on viruses, but that almost always
the problem is caused by a mechanical hard-drive problem.
     She added that the particular version of anti-viral software Sentance used
is two years out of date. Since new viruses are created all the time, this
means the software might not be able to detect one even if the machine were
infected, she said.


Reinterpretation of term "Computer Security"

Frank Dixon, fdixon@hq.dla.mil <fhi0011@hq.dla.mil>
Wed Feb 6 09:01:17 1991
One of the moguls in our business has publicly taken the position that
computer security, as a concern, should be interpreted to include _all_ data,
not just data perceived to be "at risk."  He suggests--at his most extreme
pole--that the term "sensitive data" should be stricken from the lexicon of
security practitioners.

Given that those charged with protecting the information judged to be sensitive
are more or less plowing an uphill furrow, I am concerned that this broadening
of the concern's coverage will have either of two deleterious effects: (1) it
may weaken the already feeble attempts to protect the truly sensitive, or (2)
it may be used pragmatically as a justification for "protecting" all
communications.

I don't like the implications.

Frank Dixon, Alexandria, VA


SSN and Schwab Telebroker RISKS

Curtis Jackson <!jackson@adobe.UUCP>
7 Feb 91 20:46:02 GMT
This morning my girlfriend decided to sell some stock in her Charles Schwab (a
popular U.S. discount stock brokerage firm) account.  Schwab has a 24-hour
automated telephone quote and order service called Telebroker, and since I know
that stock orders placed through Telebroker qualify for a 10% discount on
commission fees, I suggested she place her order over the telephone with the
automated system.

All of her records (including her account number) were at work, but she blithely
called up the local Schwab office and was able to obtain her account number
simply by giving her social security number and her address.  She then called
up Telebroker for the first time ever, entered her account number, and it asked
for *the last four digits of her social security number as her default
password*.  "This is easy," she said.  And indeed it is!  It then had her pick
a new password, and dropped her into the standard interface.  I do not know if
it would have balked if the "new" password she entered was the old one — the
last four digits of her SSN.

Telebroker is a relatively new and somewhat unknown feature offered by Charles
Schwab.  As easy as it is to obtain social security numbers of people in the
U.S., it would be very easy to find someone who has a Schwab account but has
never used Telebroker.  If you couldn't get their Schwab account number on your
own, you could simply call up Schwab and ask for it, then call
Telebroker and have a good deal of fun at their expense.  Certainly you
couldn't get your hands on the proceeds from a stock sale, but if you wanted to
wreak havoc on their life you could sell valuable stocks they are holding, or
find a climbing stock and margin them into it to the hilt.  Schwab allows you
to purchase 1.5 times your equity on margin.  Schwab further has a [somewhat
lax] policy of not leaving completed transaction confirmation messages on
answering machines for privacy reasons, so if you do it while Mr. X is out of
town he won't find out what you've done until the margin call.

Even without the human-induced risk of giving out account numbers on the
telephone, the risk of using the last four digits of the SSN as the default
password is a great one.
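
To put a number on the default-password risk: even an attacker who knew
nothing about the victim would face only a four-digit key space.  A minimal
back-of-the-envelope sketch (Python; all figures assumed, including the time
per automated call):

    key_space = 10 ** 4          # possible four-digit SSN suffixes
    seconds_per_attempt = 30     # assumed duration of one automated call
    worst_case_hours = key_space * seconds_per_attempt / 3600.0
    print("%d candidates, at most %.0f hours to try them all"
          % (key_space, worst_case_hours))

And an attacker who simply knows the victim's SSN, which is the easy part,
needs exactly one attempt.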

Curtis Jackson @ Adobe Systems in Mountain View, CA  (415-962-4905)
Internet: jackson@adobe.com uucp: ...!{apple|decwrl|sun}!adobe!jackson


Re: Inquiry into cash machine fraud (RISKS-11.03)

John Sloan <jsloan@niwot.scd.ucar.EDU>
Thu, 7 Feb 91 12:21:35 MST
>  A bank engineer is being interviewed by police investigating unauthorized
>withdrawals from cash machines.  It is alleged that money was withdrawn from
>customers' accounts through information gained during the servicing of machines
>operated by Clydesdale Bank.

This reminds me of a personal anecdote that illustrates how computers can play
passive partners in "low-tech" cash machine ripoffs. In 1975 I worked for a
national bank which was among the first to introduce cash machines to the
region. Normally the cash machines were online and continuously monitored by a
DEC PDP-11 front-ending an IBM 370/135 (I should mention that the cash machines
were not manufactured by either of these two firms). The machines had offline
capabilities for cases in which the host systems went down and an account
could not be verified.

One weekend we were converting from a 370/135 to a 370/145 during which time
there was a short period in which the machines were left in offline mode. While
the machines were acting independently a large theft occurred from the cash
cassettes of several machines. There was no record of transactions on the
internal audit paper tape on any of the victimized machines. It was clearly an
inside job, since the thief knew precisely when the host system would be
unavailable.

Since I was on duty that weekend during the conversion I was among the
personnel interviewed by the FBI. Nearly a year later, after changing jobs, I
read where the FBI had arrested one of two of the cash machine vendor's service
people, who had confessed. His crime involved some special knowledge — if a
cash machine were to be opened while online that would be time stamped on the
host system console — but required no more technology than the key to open the
vault in the rear of the machine.

John Sloan, NCAR/SCD, POB 3000, Boulder CO 80307             +1 303 497 1243
jsloan@ncar.ucar.edu         ...!ncar!jsloan       jsloan%ncar@ncario.BITNET


Re: the new California licenses

<mjackson.wbst147@xerox.com>
Wed, 6 Feb 1991 14:33:00 PST
That's "coercivity" rather than "corrosivity"; it's a measure of how "stiff"
the system is in the sense of resisting changes in magnetization.  (Differences
in coercivity between DD and HD diskettes in both 3 1/2" and 5 1/4" formats have
a lot to do with the inadvisability / impossibility of using DD disks at the
higher densities.)

The oersted is the unit of magnetic field strength in the CGS system.

Mark <MJackson.Wbst147@Xerox.COM>

     [Also noted by Fred Gilham <gilham@csl.sri.com>, Steve Bellovin
     <smb@ulysses.att.com>, Ron Fox <fox@rudolf.nscl.msu.edu>,
     and Mark Gabriele (gabriele@hub.toronto.edu).  Even PGN noticed
     it, but did not get around to fixing it.  I try, but I do not always
     fix everything that needs it.  If I did, I would probably be
     Chorusively Orst(ed) on my own Peterd.  PGN]


Re: the new California licenses

Mark Gabriele <gabriele@riverdale.toronto.edu>
Thu, 7 Feb 91 11:13:43 EST
>...  The other two tracks will be in a format that is incompatible
>with current commercial readers, and will contain the rest of the
>information that is printed on the front: birth date, eye color, hair
>color, height, weight etc.

The author then goes on to point out the issues of loss of privacy from this
new system, and how all shopkeepers will soon inevitably upgrade their card
readers to capture this important and private information.

I don't see what the privacy problem is here, provided that the only data
encoded magnetically on the back of the card is what is already available in
human-readable format on the front.  If you'll give your driver's
license to a clerk, you should be prepared to have the clerk copy down all of
the information on that license (I've had clerks meticulously copy my height,
weight, and eye color off of the driver's license while I and the customers
behind me waited).

=Mark Gabriele (gabriele@hub.toronto.edu)


No more quick n' easy account info from Fidelity

Carol Springs <carols@drilex.dri.mgh.com>
Wed, 6 Feb 91 13:18:42 EDT
In today's Boston Herald (February 6), Robert Powell has a followup to
his article on Fidelity that appeared yesterday.  Basically, Fidelity
has "slammed the door shut" on Fidelity Telephone Connection.  Tracey
Gordon at Fidelity says, "We changed it in response to concerns by some
of our shareholders who called because of reports in the press."

People who called the toll-free number on February 5 got a real human being
asking for SSN and Fidelity account number.

According to Gordon, a system is being set up wherein callers will enter
their account number along with their SSN.  And a few weeks later, a PIN
system will be put into place.

Carol Springs                      carols@drilex.dri.mgh.com


Discontinued: Quick n' easy access to Fidelity account info

"B.J. 07-Feb-1991 1021" <herbison@ultra.enet.dec.com>
Thu, 7 Feb 91 07:46:47 PST
In a message in RISKS-11.03, Carol Springs described a system that allowed
access to Fidelity Investments account information using an 800 number and a
social security number.  The message said it was possible to call Fidelity and
request blocking for your accounts.  When I called Fidelity and asked about the
service, the response was:

    `No, we discontinued that service.'

I thanked the representative, told her I was pleased that the service was dead,
and hung up.  A small victory for privacy.

On a related note, when you call 1-800-DISCOVER you are given the opportunity
to check your Discover credit balance automatically.  When this service first
started, only the card number was needed.  Because of complaints, the system
now requires your zip code as well.  Another reason to refuse to write your
address on your credit card receipt.


Quick n' easy access to Fidelity account info

"Carl M. Kadie" <kadie@cs.uiuc.edu>
Thu, 7 Feb 91 11:27:52 -0600
I just called Fidelity. The representative says that the Telepeople system has
been taken down until they can add PIN protection.

                                                       [O PIN, O SESAME?  PGN]


Re: Electronic cash completely replacing cash

"Lee S. Ridgway" <RIDGWAY@mitvma.mit.edu>
Thu, 07 Feb 91 10:46:55 EST
No one seems to have proposed the most obvious, simple solution to the risks of
americards and mastercards: cash! I find that for my normal purchasing habits,
I can pay cash. I can go to my bank machine two or three times a week [I know
some people who visit theirs daily!], get cash to cover known purchases for
several days, and not have to bother waiting for clerks to fill out charge
slips, get verifications, and other time-consuming procedures. Thus, I pay for
groceries, gas, meals, recordings, books, concerts, etc., etc.

I do have credit cards, but I use them only for large or very specific
purchases. That means few databases have records of my purchasing habits. I
also don't carry balances, or incur interest, and don't have to write several
big checks to credit card companies each month.

For those who are now going to say that cards are safer: I live in a city where
street robbery is not unknown, but I don't carry very much cash at one time -
and I don't carry my cards unless I know I will use them. Yes, I've been robbed
a few times (in about 20 years), but never lost much cash; the stolen credit
and ID cards cost me far more in time and aggravation than the cash did.

For those who say cards are more convenient: Carry more than two, and they are
as bulky as bills. They require more time to complete a transaction (compare
the time it takes to pay a restaurant bill in cash vs. credit card!). Compare
the amount of time needed for a verification, especially if the phone or
computer connection is down or slow, or the manager is not around.  Compare the
amount of time needed once a month to sit down and check store receipts against
bills, write checks, etc.

One other possible blessing of cash and not cards: I find that I am on the
mailing list of only one or two mail-order houses, from whom I receive catalogs
maybe once a month, while my housemate, who uses credit cards for just about
everything, receives six or more catalogs per day! Cause and effect?

   [NO MORE RESPONSES on this topic for a while.  We are badly backlogged. PGN]


Re: Predicting System Reliability... (RISKS-11.03)

Brad L. Knowles <blknowle@frodo.jdssc.dca.mil>
Wed, 6 Feb 91 19:16:41 EST
    In reference to Richard P. Taylor's point in 11.03, namely that to show
"sufficient" reliability for a system requires that the entire system in
question run in a production environment for longer than the required time, I
must agree.  In fact, with the things about quality that we are learning from
the Japanese (who learned it from Dr. Deming), I would claim that the entire
system in question should run in a production environment for a six-sigma
period of time.

    For those of you who might not be familiar with the term six-sigma, I will
attempt my best explanation (poor 'tho it will be):

    In statistics, we can call sigma the Standard Deviation of the
    Probability of Failure of the system in question.  The term x-bar (a lower-
    case x with a bar above it) is the Mean Probability of Failure of the
    system in question.  X-bar plus six-sigma is the value we want to prove is
    lower than some given criterion, to a certain level of confidence.

    We then compose what is called a "Null Hypothesis" that is the exact
    opposite of what we are trying to prove (namely, that our system will last
    at least x amount of time).  We then try to prove our Null Hypothesis
    wrong, to a probability of 95% or better.  If we prove to 95%, then we have
    proven to a two-sigma level of confidence that our system will last at
    least as long as desired.  If we prove to 99%, then we have proven to a
    three-sigma level of confidence.  If we prove to 99.997%, then we say we
    have a six-sigma level of confidence.  The test used to show the level of
    confidence is usually "Student's T Test", for historical reasons (I know,
    there are oodles of other tests, this just happens to be the first taught
    in most University Senior-level Stat courses, and probably the best
    known).

    One thing to keep in mind: as we get to higher levels of confidence, we
    must test our system for exponentially longer periods of time, assuming that
    all else is equal (which, we all know, never happens).  That is, unless you
    test hundreds or even thousands of units, all in parallel.  This is how a
    disk drive manufacturer can say something like "50,000 Hours Mean Time
    Between Failures" — you didn't think they actually had a single drive that
    they tested for 50K+ hours, did you?  I would go so far as to say that no
    well known drive manufacturer today has a single drive with 50K hours on it
    — they'll get junked and replaced long before that happens.  Still, the
    drives fail.

    The exact numbers of units that you would have to test to prove a
    certain level of confidence can also be calculated by well known
    statistical methods, so that you might have to test only 53 units to get
    the level of confidence desired.
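
    As a concrete illustration of the above, here is a minimal sketch (Python
with NumPy and SciPy, every number invented for the example): 53 units are
tested in parallel, and we ask how confident we can be that the mean lifetime
exceeds a required 50,000 hours, using the one-sided Student's t test named
above.

    import numpy as np
    from scipy import stats

    # Hypothetical lifetimes (hours) for 53 units tested in parallel.
    rng = np.random.default_rng(seed=1)
    lifetimes = rng.exponential(scale=60000.0, size=53)

    required_mtbf = 50000.0  # required mean time between failures, hours

    # Null hypothesis: true mean lifetime <= required_mtbf.
    # The one-sided t test tries to reject it.
    t_stat, p_value = stats.ttest_1samp(lifetimes, required_mtbf,
                                        alternative='greater')
    print("t = %.2f, confidence = %.2f%%" % (t_stat, 100.0 * (1.0 - p_value)))

(The t test assumes roughly normal data; with 53 exponentially distributed
lifetimes that is only an approximation, but the mechanics are the same.)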

    Basically, this is what the Japanese are hitting us over the head with.
Most Japanese companies have been using six-sigma Statistical Quality Control
for years now, and many of the front runners are now going to nine-sigma (and
even twelve-sigma)!

    Now, the problem we've got is when we try to test a system like a Nuclear
Power plant.  We certainly can't afford to build a single plant to try to prove
our system will last at least x number of years without failure, much less
build *MULTIPLE* identical plants to do so.  Thus, all we can really do is
unit-test as many parts as we can, and forgo the system testing.  I can point
to one very well known system where this has led to many heartaches, if not
outright system failure before it was ever put on line — the Hubble Space
Telescope.  Everything was completely unit-tested, but system testing was
deemed too expensive and time-consuming.  Still, in some cases, system testing
just is not possible.

    Oh well, this is an interesting discussion of an ages-old problem — it is
non-trivial to prove even trivial systems correct (or when they will fail), not
to mention how hard it is to prove non-trivial systems correct!

Brad Knowles, Sun System Administrator, DCA/JDSSC/JNSL, The Pentagon BE685,
Washington, D.C.  20301-7010    (703) 693-5849      |     Autovon: 223-5849

Disclaimer: Nothing I have done or said in the above should be construed as an
official position or policy of the Defense Communications Agency or the United
States federal government.  The author is the sole person responsible for the
content of this message.


Re: Predicting system reliability (RISKS 11.02)

Jeff Johnson <jjohnson@hpljaj.hpl.hp.com>
Thu, 07 Feb 91 10:32:49 PST
This discussion is muddied by a failure to distinguish between reliability and
functional sufficiency.  I believe that there is a tradeoff in systems
engineering between reliability and sufficiency.  If a problem exists that is
to be solved by a system, the system designers are often faced with the choice
of designing a system that solves the problem or one that is reliable and
maintainable.  The reliable design usually solves some sub-problem.

Many system designers don't realize that they are faced with this tradeoff:
they design a system to solve the full problem and only after it is built do
they discover that it is so complex and bug-ridden that it cannot be relied
upon or maintained.  Or, perhaps seeing the tradeoff, perhaps not, they aim low
and deliver a reliable, maintainable system that doesn't do what was desired.
Either way, the resulting system doesn't solve the customer's problem.

Often, as a system designer, I've had programmers respond to design
specifications by saying: "Providing functional capability X would require me
to write a non-modular program.  I won't make it non-modular, so you can't have
feature X."  This argument is of course false — any thing that can be done
"non-modularly" can also be done "modularly" — but it illustrates the
programmer's tendency to sacrifice functional sufficiency for reliability and
maintainability.  Customers, salespeople, and those who write functional
specifications typically exhibit the opposite tendency: requesting or promising
functionality that would exceed developers' ability to produce a reliable
system.

An example from SDI: Initially, the plan was to design a system in which there
was a great deal of inter-component communication in order to coordinate the
defense.  Critics correctly shot down this plan as hopelessly unreliable.  The
SDIO responded by advocating a decentralized design involving significantly
less inter-component communication.  Under further criticism, control was
further decentralized until we ended up with "Brilliant Pebbles", in which at
least the "business-end" components are supposedly autonomous, and which
managed to win over a few SDI critics.

My reaction to Brilliant Pebbles and some of the other decentralized plans that
preceded it was that even if they are more reliable than centralized designs
(and this is debatable), they wouldn't solve the problem from a functional
point of view: massive inter-component communication may well be *necessary*
for the system to accomplish its task.  Unfortunately, massive inter-component
communication also implies a level of unreliability that, for SDI, is
unacceptable.

If handed a functional specification for SDI, I could write a ten-line C/unix
program that would be (I assert) highly reliable.  But it wouldn't meet the
functional specification.  Some SDI proposals (and some proposals for other
ambitious projects) are simply schemes to develop slightly-larger "ten-line"
programs that won't do the job.
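
To make the "ten-line program" concrete, here is one possible reading of it:
a sketch (Python rather than C, purely for illustration) of a program that is
trivially reliable but functionally insufficient.

    # A 'highly reliable' defense system: it never crashes and never
    # misfires, because it never intercepts anything at all.
    def defend(threats):
        return []          # intercept nothing, with perfect repeatability

    assert defend(["ICBM-1", "ICBM-2"]) == []   # reliably insufficient

It meets almost any reliability target precisely because it ignores the
functional specification.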

Computer Science has partially addressed the issue of reliability and
maintainability by developing: 1) formalisms for proving correctness of
programs, 2) programming languages and tools providing more support for program
correctness (e.g., modularity, strong-typing), 3) improved methodologies for
software engineering.  These don't ensure reliable systems, but they help.  Has
Computer Science developed anything analogous for analyzing functional
sufficiency of programs (beyond a few proofs that certain extremely ambitious
functional specifications can't be met)?  Might such an analysis be useful here?

JJ, HP Labs


Electronic telephone directory

Jan Talmon <MFMISTAL@HMARL5.BITNET>
Wed, 6 Feb 91 22:19 N
In the Netherlands, printed telephone directories provide telephone numbers
by using the name as an index. Currently, there is also an electronic
version of those directories available by means of a VIDITEL service.
Here it is also possible to ask for a telephone number by providing the
street name, the house number and the city. This involves an inherent risk.
When one observes that a house is apparently empty, one can ask for the
phone number, dial that number, and when no one answers.... it may be safe
for burglars to go in.  It also seems to be an invasion of one's privacy,
since one need not know a name in order to place harassing/obscene phone
calls.

The only thing one needs is a PC and a modem.  The cost: 35 Dutch cents (about
20 US cents) a minute.

Jan Talmon, Dept of Medical Informatics, University of Limburg, Maastricht,
The Netherlands
