The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 6 Issue 46

Friday 18 March 1988


o Incorrect computer data entries hide bridge dangers
Jon Mauney
o Re: Held at Mouse Point
Bruce N. Baker
o Federal Archive Integrity
Fred Baube
o Credit-limit handling found overly restrictive
Wayne H. Badger
o First-hand problems with Social security numbers
o RISKS in Bell lawsuit
Scott E. Preece
o Teller Machines
Jon Mauney
o Program prejudice; ATMs; self-test; unknowns; viruses
Larry Nathanson
o Viruses go commercial
Norman S. Soley
o The trouble with "Experts"
Ewan Tempero
o Thoughts on viruses and trusted bulletin boards
Richard Wiggins
o Info on RISKS (comp.risks)

Incorrect computer data entries hide bridge dangers

Jon Mauney <>
Thu, 17 Mar 88 12:30:30 est
The Sunday March 13, 1988 edition of the Raleigh, NC, News and Observer
contains a story on accidents on the Northeast Cape Fear River Bridge in
Wilmington NC.  It seems that the steel grid deck of the drawbridge is slippery
when wet, causing cars to skid into oncoming traffic.  The highway department
investigated at the request of the Attorney General's office, which was paying
settlements to accident victims.  When the highway department pulled records on
the Cape Fear Memorial Bridge for comparison, it found that most of the
accidents attributed to Memorial had in fact occurred on Northeast.  Quoting
the newspaper, which was quoting the assistant state traffic engineer:

   "When we got into actually pulling the accident reports for Cape Fear
   Memorial Bridge — the actual hard copies — we saw that some of those
   did not belong on Cape Fear Memorial Bridge,"  Mallard said.  "In fact,
   they belonged on Northeast Cape Fear.  That's when we realized we had
   the coding problem."

   The locations of most accidents had been coded wrong, sometimes by the
   investigating officers and sometimes by employees of the Division of
   Motor Vehicles.  Accidents on the bridge were recorded as happening on
   U.S. 17, U.S. 74, U.S. 421, or some other highway, instead of the
   proper route, U.S. 117.  [All four highways pass through Wilmington]

On checking the data, they found that the accident rate was not 11 in 3 years
but 28 in 3 years.  The article goes on to say that the state made skid tests
on three steel grid deck bridges, including the two Cape Fear bridges
mentioned, in 1982.  The Northeast Cape Fear bridge performed the *worst* in
the test, but nothing was done, because of the low accident record.  State
officials were not sure that the skid test was applicable to steel decks.  The
bridge was only one year old in 1982.  Most of the miscoded accidents occurred
since then, and have increased as the steel has worn smoother.

The article does not make clear what kind of code was improperly entered
in the accident reports, nor what kind of technology was used to store
and retrieve the data.  The reference to "actual hard copies" gives a strong
hint.  The dangers of "coding" data, and of ignoring test results,
will be familiar to RISKS readers.

Re: Held at Mouse Point (RISKS 6.31)

Bruce N. Baker <BNBaker@KL.SRI.COM>
Thu 17 Mar 88 10:45:17-PST
The individual referred to in RISKS 6.31 under the heading, "If he had
another brain it would be lonely" department, may have the last laugh after
all.  As you may recall, the training instructor told the students "to point
and click with the mouse."  One individual complained that nothing was 
happening.  The instructor discovered that the student was pointing with his
forefinger at the correct spot on the screen while clicking the mouse.

Well, support has arrived just in time via the Contaq PointScreen.  Unlike
traditional touchscreens, the PointScreen uses ultrasonic sensors mounted on
the monitor frame to respond to a pointed finger that does not touch the 
screen.  The $695 PointScreen adapts to monitors with screens 9 to 26 inches
across.  The system connects to the computer through a serial port and includes
an interface card and software.  (*High Technology Business*, Feb 1988, p. 10)

The point for RISKS is that acts that sound dumb represent both risks and new
product opportunities.  For example, talking to the mailbox (a la the famous 
Candid Camera item) may be the next last laugh.  It would sure beat talking 
to my clerk.  I recently went to the local post office window and asked if an
urgent letter could be processed directly there for a local address with a post
office box about 10' to 12' away from the clerk.  I was told there was no way
that local mail could be handled locally.  All mail must go through the 
regional processing center in San Francisco.  He suggested I drive to the 
company location, about 4 miles distant from the post office box sitting
there tantalizingly close behind him.

Here's one risk of automation that I was dumb about.  I used an extra blank
window envelope supplied by a credit card company in its previous billing to
me to post a check to a *different* creditor, not noticing the little bars
running along the bottom edge of the envelope.  Of course, the check first
went to the address indicated by the little bar code, a clerk there drew an
arrow, pointing to the window address and re-posted it, then it came back to
me, and I finally taped over the little bars to enable it to be processed to
the address appearing in the window.  Elapsed time: 10 days, resulting in a
finance charge.

Bruce N. Baker, SRI International

        [If the bill in the second case had been from the Electric Company,
        and the address had been a local P.O. Box, as in the first case, 
        you could have tied a brick to the bill and tossed it in through 
        the P.O. Box window.  Then you could write a book about such
        experiences, entitled The Finance Charge of the Light Brick Aid.  
        But you might have to do it BEHIND little bars, with NO windows.  PGN]

Federal Archive Integrity

Fred Baube <>
Thu, 17 Mar 88 15:24:37 -0500
sco!sethk@ucscc.UCSC.EDU writes:
> Archive's Black Hole
> [..] Don Wilson, the Federal Archivist [said] before a House
> subcommittee last month .. that "data held on computers is
> frequently altered or updated" - shades of the deeds done by
> Oliver North and Fawn Hall - and that much material never
> reaches the National Archives ..

If this doesn't sound like setting the stage for *1984*, I don't know what does.

How about supplying the Archives with lots of write-once ultra-bulk-storage
devices, and secure communications links to federal agencies for (say) daily
downloading.  Could this minimize excuses for non-compliance with mandatory
and timely (i.e. before unauthorized editing) archiving?  Maybe also set up
a fast review system within the judiciary for timely resolution of disputes
about just what information *does* fall under this scheme?  (There would be
disputes about working papers, drafts, notes, etc.)
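The write-once property the proposal relies on can be sketched in a few lines.  This is a deliberately simplified model (not any real WORM device or driver): records can be appended and read back, but never altered.

```python
# Simplified model of write-once ("WORM") archival storage: appends and
# reads succeed, but any attempt to alter an existing record is refused.
# An illustrative sketch only, under the assumptions stated above.
class WriteOnceArchive:
    def __init__(self):
        self._records = []

    def append(self, record):
        self._records.append(record)
        return len(self._records) - 1   # index of the archived record

    def read(self, index):
        return self._records[index]

    def alter(self, index, record):
        raise PermissionError("write-once media: records cannot be changed")

archive = WriteOnceArchive()
i = archive.append("memo: original wording")
try:
    archive.alter(i, "memo: edited wording")
except PermissionError:
    print("alteration refused")        # prints "alteration refused"
```

With the media itself refusing rewrites, "data held on computers is frequently altered or updated" stops being an available excuse.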

Regarding a role for the judiciary, "national security" shouldn't be a
stumbling block.  The US already has a secret federal court here in DC [or
is it NYC?] *now*, for electronic surveillance cases.  (See
_The_Puzzle_Palace_, Bamford)

One could debug proposed schemes with Gedanken Experiments involving Ollie's
PROFS notes ..

Credit-limit handling found overly restrictive

Wayne H. Badger <badger%fang@xenurus.Gould.COM>
Thu, 17 Mar 88 10:37:06 CST
I just had an unsettling and embarrassing experience with Mastercard/Visa.
I had a Mastercard charge denied, when I supposedly had more than sufficient
credit.  After some querying, I found out what the problem was.

I had just made a large (for me) purchase with Mastercard that was more
than half of my credit limit.  The company immediately sent a computerized
authorization request to Mastercard, which was accepted.  This purchase
was done over the phone.  However, some of the articles I wanted to
purchase were not in stock, so the company did not actually bill for the
entire amount.  As a result, I now had an authorization *and* a bill
credited against my limit, which pushed me over the limit.  Any further
attempts to charge anything were denied, even though I was well under my
limit for actual bills.

The problem is that companies send authorizations for different amounts
than they actually bill.  For example, a restaurant will send an
authorization for the amount of the bill, plus "a couple of dollars" to
cover the tip.  The tip that you write on the Mastercard slip will hardly
ever match the authorization.  You have just doubled the amount credited
against your credit limit.

I called my Mastercard bank and they informed me that authorizations
remain in effect for 10 days if not removed.  Authorizations can be
removed in two ways:

    1.  If a bill comes in for the exact amount of the authorization
        on the same day, the authorization will be replaced with
        the bill.
    2.  A company can remove the authorization by arrangements through
        their bank in what is apparently a difficult procedure.

Apparently, Mastercard does not cross check the company when comparing
authorizations and bills.  This seems rather silly.  The Mastercard
operator could not tell what company had made any of the authorizations
in my account.  The Mastercard operator also refused to remove any of
the authorizations.

It seems to me that whoever designed Mastercard's computerized
authorization didn't think that anyone would ever send a bill for a
different amount than the related authorization.  Unfortunately, this
appears to be the rule, rather than the exception.
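The failure mode described above can be modeled in a few lines.  This is a hypothetical sketch of the bookkeeping (an assumption, not Mastercard's actual system): the hold placed at authorization time and the smaller bill are tracked as independent entries, so one purchase consumes credit twice.

```python
# Hypothetical sketch of the double-counting: the authorization hold and
# the bill for the in-stock items are recorded separately, so both are
# charged against the limit.
def available_credit(limit, holds, bills):
    return limit - sum(holds) - sum(bills)

limit = 1000
holds = [600]    # authorization for the full order
bills = [450]    # merchant bills only what was in stock

# The amounts differ, so the hold is not replaced by the bill:
print(available_credit(limit, holds, bills))   # -50: further charges denied
```

Had the bill matched the authorization exactly and arrived the same day, the hold would have been replaced and 400 of credit would remain available.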

What this all means is that, in the worst case, a credit limit for a bank
card is less than half of the stated limit, so I asked Mastercard to double
my credit limit.  They declined.  Maybe it's time to just go get the Amex
card.  Sigh.

BTW, this is the second Mastercard that I have tried.  Both had the same
problem.  Has anyone seen this problem before?  Is it just me?

Wayne H. Badger,  ...!ihnp4!uiucuxc!ccvaxa!badger

First-hand problems with Social security numbers

16 Mar 88 14:41:56 EST
   [The following message is from a contributor who has requested anonymity.]

I came to this country in Fall 79 on an F-1 visa.  I was a full-time student
from then to mid '85.  In the beginning of '85 I received a job offer and tried
to get a 6 month practical-training permit so that I could start on my job.

I did not hear from the INS for a few months.  In the meantime I was really
scared, because this is a routine procedure that should not take more than a
few weeks.
Finally I called the INS after 4 months.  I was informed that the INS was going
to start deportation procedures against me.  They claimed that I had been
working illegally for the last five years.  (It is illegal to work on an F-1
visa.)

I was stunned.  I had clear proof that I had never been anything but a
full-time student all the time and I told them so.  They said they would check
into it.

Next day I called them back and I told them the following.

1. They claimed that I had entered the US from Miami.  This was
   wrong.  I had entered from New York.  The date of entry was
   also wrong by 2 weeks.

2. They claimed I was a Colombian national who had obtained a visa
   in Venezuela.  This was wrong.

3. They claimed that I had worked in Florida and Texas.  I had
   letters from my advisors that I had been at school the whole time.

They called me in to their office and checked the above from my Passport.  Then
they said they would get back to me.  I never heard from them again about the
deportation proceedings.  In a month I received my work permit and I joined
work.  I only lost a few months wages.

Last year I requested my Social Security statement.  Sure enough there are
payments into my account from '81 - '83. I have not heard from the IRS about
this and I hope I do not.  I don't know whether to worry about this or not.
The only thing I have going for me is that my company attorneys are excellent.

It is scary to know that somebody out there is using my name and social
security number and there is nothing I can do about it.  Why me?

RISKS in Bell lawsuit

Scott E. Preece <preece%fang@xenurus.Gould.COM>
Thu, 17 Mar 88 08:56:26 CST
  From: Alan Wexelblat <wex%SW.MCC.COM@MCC.COM>
  >  "[The settlement] stems from Bell's computerized accounting
  >  system which government investigators claim shifted costs
  >  among the contracts..."
  > [note how the computer is blamed, not the programmer, nor the people who
  > used it nor the people who ordered it programmed/used in that way!]

Funny, my reading skills are pretty adequate and I read that sentence as
blaming the accounting system, not the computer.  An accounting system
includes a lot of components, some of them human.  I think it's fair to
assume that even a newspaper reporter knows that pointing at a program
is really pointing at the author.

scott preece, gould/csd - urbana, uucp: ihnp4!uiucdcs!ccvaxa!preece

Teller Machines

Jon Mauney <>
Thu, 17 Mar 88 12:31:34 est
RE: teller machine errors.

When I was starting graduate school in 1977-78, Wisconsin banks
were installing the TYME teller machine network.  State banking
laws effectively required all teller machines to be part of a
single statewide network.  The system (or at least my bank) had
a lot of teething problems.  It was not uncommon for a withdrawal
request to be rejected because of timeout on the acknowledgement/
authorization from the host computer.  A retry would usually succeed,
resulting in a double-posting of the debit.  Usually double postings
would be caught and corrected when the books were balanced, and I
got to be quite accustomed to having lots of extraneous debits
and credits on my statement. I also learned how to find the back room
of the bank where the harried man with the printouts of all TYME
transactions could correct any problems that the bank had overlooked.

One month, however, they got carried away, and manually re-applied an
incorrect debit that had been manually corrected the previous month,
causing me to bounce several checks.  Apparently electronic networks
are not the only systems that suffer from echo and delayed packets.
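The timeout-then-retry failure can be sketched as follows (an assumed design, not TYME's actual protocol): without some request identifier, the host cannot tell a customer's retry from a fresh withdrawal.

```python
# Toy model of the double-posting: the host applies the first debit but
# its acknowledgement is lost in transit; the retry looks like a new
# request and is posted again.  A request_id lets the host deduplicate.
ledger = []

def post_debit(account, amount, request_id=None):
    # With a request_id, the host can recognize and suppress a retry.
    if request_id is not None and any(r[2] == request_id for r in ledger):
        return
    ledger.append((account, amount, request_id))

post_debit("acct", 50)                   # first attempt; ack times out
post_debit("acct", 50)                   # blind retry
print(len(ledger))                       # 2: one withdrawal, two debits

ledger.clear()
post_debit("acct", 50, request_id="w1")
post_debit("acct", 50, request_id="w1")  # retry carries the same id
print(len(ledger))                       # 1: duplicate suppressed
```

Modern payment networks use exactly this kind of idempotent request identifier; the 1977-era network apparently did not.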

It may be silly of me, but while I love to use teller machines for
withdrawals,  I *never* entrust my deposits to them.

Program prejudice; ATMs; self-test; unknowns; viruses

Larry Nathanson <bucsb!>
17 Mar 88 04:36:41 GMT
  On the writing of a program that simulated the admissions selections, to a
probability of better than 90 percent: This was done, with the prejudice
intended to mimic human decisions.  What if one wrote a program to devise an
algorithm that would match an acceptance pattern, and then examined the
algorithm for prejudice?  For example, you would give this program the
application of each student and it would work out an algorithm whose output
of acceptances and rejections would come out better than 90 percent.  The
algorithm could then be put through extreme scrutiny (much more than just
the raw data alone would be subject to), and the school/person/company who
was being simulated might then be held accountable.  This is extremely scary
considering someone might simulate you (given your reactions to several
situations) and find out a lot about your inner psyche.  Your answers to a
few meaningless questions on a job interview could be interpreted for drug
use, integrity of character, and watching Saturday Morning Cartoons.  This
had already been attempted (to an extent) in a program called "Mind Prober"
(available for small PC's.)  One answers 70-100 yes/no questions about a
person, and it spits out a psychoanalytic report, from a psych101 textbook.
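As a toy version of that idea (entirely hypothetical data and rule forms, not any real admissions system): fit simple candidate rules to observed accept/reject decisions, then read the rules that reproduce them.

```python
# Hypothetical applicants: (score, in_favored_group, accepted).
applicants = [
    (90, True, True), (90, False, False),
    (70, True, True), (70, False, False),
]

def rule_matches(threshold, use_group):
    """Does 'accept iff score >= threshold (and, optionally, in the
    favored group)' reproduce every observed decision?"""
    for score, group, accepted in applicants:
        pred = score >= threshold and (group or not use_group)
        if pred != accepted:
            return False
    return True

# Only the rule that consults group membership fits the decisions,
# which is exactly the kind of scrutiny proposed above:
print(rule_matches(70, use_group=True))   # True
print(rule_matches(70, use_group=False))  # False
```

When no score-only rule can reproduce the pattern but a group-sensitive one can, the fitted model itself becomes evidence worth examining.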

  On video-cameras in ATM's, I don't think that the camera does any pattern
recognition.  I think it just stores a few seconds of each transaction, with
a time stamp, in case a dispute comes up later.  A third hand anecdote: A
college sophomore, who thought he could beat the system, placed a check (for
his credit limit) in an envelope, and deposited it, with cash back (it
immediately gives back an amount of cash, up to the person's credit limit),
to a nearly empty account and walked away.  The trick: there was nothing in
the envelope, and he had the cash in his hand.  The next time he went to the
machine it told him to see the manager.  The manager told him they were wise
to his game, and that they were removing the balance of his account, and he
still owed them the rest.  When the cheat told the manager that he had no
knowledge of the deposit and had nothing to do with it, the manager showed
him the cameras in the machines, and told him that if he made them go
through the film to find his picture, they would involve the authorities.
(Though it might have been a bluff: back to the risk of threats of using
technology...)  He surrendered his ATM card, and eventually paid back the
money.  Ways around this are left up to your imagination.

  On self-tests:  Note that the purpose of a self-test is to determine whether
or not the device running the test is operating correctly. A situation similar
to this:  There are two men before you.  One is a truth-teller and one is a
liar.  You ask both, 'are you a truth-teller' and both reply yes.  This is not
surprising.  Then why should one expect a meaningful warning from a
malfunctioning machine?  If the machine is working, it will return that it is
working.  If the machine is not working, it may well return that it is working:
it is a broken machine (as in a liar).  If you get an error message, it means
that the liar decided to tell the truth.  Lucky break... not one I'd like to
rely on.  So... just because your calculator (or anything else) says that it is
working, remember that the output 'I am working' may well be a part of the
malfunction.  What one needs is not a self-test but an 'other-test'.  Let's
hope that it is working.
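The truth-teller/liar argument above reduces to a few lines of code (a deliberately simplified model, not any real device's diagnostics):

```python
# Why "self-test passed" carries so little information: the same fault
# that breaks the machine can break the test logic itself, so both the
# honest path and the broken path produce the same answer.
def self_test(machine_ok):
    if machine_ok:
        return "OK"
    # The broken machine's test runs on the broken machine -- and lies:
    return "OK"

print(self_test(True), self_test(False))   # OK OK -- indistinguishable
```

Only an independent "other-test," run on hardware outside the suspect machine, breaks the symmetry.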

  On the UNKNOWN front, a story goes about the new police clerk who was given a
few reports, and told to check each one in the computer for warrants.  All
turned up negative, except for one, LNU, FNU (apparently a rather evil oriental
man) turned up with the most outstanding report imaginable.  When she brought
it back, her supervisor cracked up, laughing hysterically, as did anyone she
showed it to.  As it turns out, FNU LNU was the ``acceptable input form'' for
First Name Unknown, Last Name Unknown.
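The underlying problem is a classic in-band sentinel: "FNU"/"LNU" are ordinary strings, so a query cannot distinguish "name unknown" from a real record that accumulated under that name.  A sketch with hypothetical field names:

```python
# Every report filed with unknown names lands on the same "FNU LNU"
# record, which therefore looks like one spectacularly busy suspect.
records = [
    {"first": "FNU", "last": "LNU", "warrants": 42},   # all anonymous cases
    {"first": "John", "last": "Smith", "warrants": 0},
]

def lookup(first, last):
    return [r for r in records if r["first"] == first and r["last"] == last]

# A clerk querying a report whose name fields were never filled in gets
# the merged record of every anonymous offender:
print(lookup("FNU", "LNU")[0]["warrants"])   # 42
```

The fix is to keep "unknown" out of the value space entirely, e.g. a separate flag or a null, so sentinels can never collide with data.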

  Finally, on viruses: Who says that someone has to sneak a virus onto your
system?  You can do it yourself.  Many people type in programs from magazines.
The changing of one byte, in an object code listing, could change a read to a
write, and screw up a lot of people before the magazine could get a bulletin
out to its subscribers.  Talk about the ultimate virus: It convinces you to
nuke your own disk drive.
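One defense type-in magazines of the era adopted was printing a checksum beside each line of object code, so a one-byte typo (or a one-byte tampering) is caught before the program ever runs.  A sketch, where the checksum formula is an assumption rather than any particular magazine's scheme:

```python
# Per-line checksum over the printed bytes; the reader's entry program
# recomputes it and flags any line that does not match.
def line_checksum(hex_bytes):
    return sum(hex_bytes) & 0xFF

published = [0xA9, 0x01, 0x8D, 0x00, 0x02]   # bytes as printed
typed     = [0xA9, 0x01, 0x8D, 0x00, 0x03]   # one byte mistyped

print(line_checksum(published) == line_checksum(typed))  # False: typo caught
```

A simple sum catches any single-byte error, though a deliberate attacker could of course craft changes that preserve it; stronger checks cost more column inches.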

Larry Nathanson, Boston University.

Viruses go commercial

"Norman S. Soley" <soley%ontenv.uucp@RELAY.CS.NET>
17 Mar 88 17:17:04 GMT
It continues to get curiouser and curiouser:

From the "Toronto Star", March 16, 1988:

  First Virus found in commercial software

  A computer virus has infected a commercially available personal computer
  product for what is believed to be the first time, calling into question the
  safety and reliability of software sold in retail stores.

  [This] has led one software company to change the way it manufactures
  software and will likely force other companies to do the same.

[... the concept of a virus is explained, we know this all too well...]

  Although the virus discovered last week in FreeHand, a Macintosh design
  program from Aldus Corp. of Seattle, was a harmless "message of peace," a
  more destructive virus could have wiped out expensive computer data or years
  of work.

  Until this incident, personal computer viruses were thought to
  be hidden only on non-commercial software.  [...shareware and
  BBS's are explained, more stuff we know...]

  Computer experts had said viruses could be avoided if users didn't use
  freely distributed software and instead used only off-the-shelf programs.

  But the infection of the Aldus software shows that isn't the case.

  The virus was inadvertently passed to Aldus by Marc Canter, president of
  MacroMind Inc. of Chicago, which makes training disks for Aldus.

[Canter's personal machine caught the virus from a copy of Mr.  Potato Head
and was later used to work on the training software for Aldus]

  Without either Canter or Aldus realizing it, the computer virus was copied
  onto disks that were sold to consumers. When the consumers used the disks
  their computers became infected.

  The virus is thought to be harmless now. It was designed to pop up on
  Macintosh screens on March 2, the anniversary of the introduction of the
  Apple Macintosh SE and Macintosh II.

  "The time bomb already went off" said Donn Parker, a computer security
  specialist as SRI in Menlo Park, Calif.

  All Aldus programs will be developed on "isolated computers" in the future
  to prevent the incident from recurring, an Aldus spokesman said.

  Canter fears that more of his customers may have been infected with the
  virus. MacroMind's clients include Microsoft, Lotus, Apple, and Ashton-Tate.
  [Microsoft says they know their software is safe; all others declined to
  comment.]

Well, I guess the virus program as a concept is here to stay; as software
becomes more complicated (gooey interfaces and the like), there are more and
more places to hide them.  I wonder how long it will be before we see our
first OS/2 virus?

A potentially more important risk is the economic one to our industry.  What
will happen to the commercial software marketplace if more such incidents
occur?  This article appeared prominently in the business section of the paper,
not buried in the weekly high-tech feature article where previous virus stories
have run. Will such publicity sour investor and consumer confidence in specific
companies or the industry as a whole?

If a company spreads a damaging virus in commercial software are they liable
for the damages caused?  Will they have to take out "software malpractice"
insurance?

Norman Soley, Data Communications Analyst, Ontario Ministry of the Environment
UUCP:   utzoo!lsuc!ncrcan!---\          VOICE:  +1 416 323 2623
    {utzoo,utgpu}!sickkids!ontenv!norm  ENVOY:  N.SOLEY

The trouble with "Experts"

Ewan Tempero <>
Thu, 17 Mar 88 10:57:04 PST
The Seattle Times has a column called "Troubleshooter", which investigates
problems of various kinds that people might have. In yesterday's column
(Wednesday, March 16) there was a story about erroneous US Sprint telephone
bills. What caught my eye was the following paragraph:

    Well, according to U.S. Sprint Communications Co., "toll fraud,"
    or a computer virus caused by hackers, was responsible for errors 
    on the phone bill for 

Thoughts on viruses and trusted bulletin boards

Thu, 17 Mar 88 01:34:30 EST
Before the recent spate of viruses, the commonly accepted advice seemed to
be that if one is concerned about reliability of public domain software, one
should load from trusted sources and should only load items that the braver
have tested.

If the practice of spreading viruses continues to be a problem, it seems to
me that a few measures on the part of bulletin board operators would greatly
reduce the risk.

To wit:

-- All providers of software must provide source code for each
   submission to the bulletin board operator.

-- The bulletin board operator will compile / assemble the
   provided source, and distribute only the resulting binary.

-- The bulletin board operator will insist on a verifiable
   identification of the author of all submissions.  At a
   minimum, the operator will phone the author and speak
   to him or her over the supplied telephone number.

This scheme doesn't prevent viruses.  It makes it a lot easier
to identify what programs have viruses built in, and to track
down the author when a time bomb should go off.

Authors who don't want source distributed to the public could
so specify, but the operator would still insist on receiving
the source, compiling it, and archiving source while making
object available to users.
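The operator-side bookkeeping might look like the following sketch (hypothetical function and field names; the compile step is elided, since it depends on the language of each submission):

```python
import hashlib

def accept_submission(source_text, author, verified_phone):
    """Archive the source and record a fingerprint plus the verified
    author contact, so a later time bomb can be traced both to a person
    and to the exact source the operator compiled."""
    return {
        "author": author,
        "phone": verified_phone,
        "source_sha256": hashlib.sha256(source_text.encode()).hexdigest(),
        "source": source_text,          # archived even if not published
    }

rec = accept_submission('print("hello")', "J. Author", "555-0100")
print(len(rec["source_sha256"]))        # 64 hex digits
```

The fingerprint ties the distributed binary to the archived source, so an author cannot later claim the operator compiled something else.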

Naturally, this notion implies all sorts of costs for the bulletin
board operators.  Probably it would only be viable for larger
operations, perhaps commercial ones.  For instance, a small
bulletin board wouldn't be able to afford all the popular
compilers and assemblers required.

If we cannot devise a means whereby public domain software can
be trusted, it will disappear out of consumer fear.  One simply
cannot trust an executable file without knowing what the source
code does, or at least knowing one can go back and find out what
the source code did.

Richard Wiggins, Lead Systems Programmer, Michigan State Univ.  517-353-4955
