The RISKS Digest
Volume 6 Issue 53

Friday, 1st April 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Virus attacks RISKS
Martin Minow
First International Conference on Secure Information Systems
Wednesday's time trouble at SRC (and fault-tolerant systems)
Tim Mann via Jim Horning
Two old viruses
Bill Kennedy
Credit card limits
Richard Wiggins
Bankcard authorizations
John Pershing
Things that go POOF!
Vander-Vlis
Diving tables
Joel Kirsh
Keith Anderson
Re: Terminals and checking the facts
A.E. Mossberg
Info on RISKS (comp.risks)

 

Virus attacks RISKS

Martin Minow <minow%thundr.DEC@decwrl.dec.com>
1 Apr 88 00:00
(Martin Minow THUNDR::MINOW ML3-5/U26 223-9922)

Today, I'm afraid I must confess that one of my recent postings to Risks
contained a Virus that Peter (no doubt inadvertently) distributed to the
RISKS audience.  The virus doesn't infect your programs or data files
directly, but in a manner analogous to the "Christmas card" virus discussed
here a few months ago, it causes increased network traffic.

As the virus establishes itself, you will note its effect by the increased
amount of electronic mail you receive every day.  For some of you, the
increase is linear; but for others, I'm afraid you're on the early part of an
exponential curve.

Although the virus was easy to create, I'm afraid that I don't know how to cure
it.  In fact, I believe I'm beginning to note its effects on my own system.

Humbly, Martin Minow

   [I've been wondering where the dramatic increase was coming from.  PGN]


First International Conference on Secure Information Systems

Dennis Ritchie <drm@reserch.uucp>
31 Mar 88 16:00:00 PST
FOR IMMEDIATE RELEASE  %  FOR IMMEDIATE RELEASE   [O.K. You are released.  PGN]

The System Security Society of Southern Saskatchewan and the University
of North Saskatchewan, Hoople campus, announce the First International
Conference on Secure Information Systems.  This conference will feature
a star-studded panel of security and system experts from across the
computing spectrum giving boring papers and comparing notes on
security problems and possible solutions for existing and future operating
systems and networking environments.

Papers that will be given at the conference include:

    Richard Brandow, MacMag magazine: Computer Viruses as a form
        of social terrorism

    Dennis Ritchie, AT&T: Trojan Horses: Security Hole or Debugging Aid?

    Richard M. Stallman, Free Software Foundation: Passwords are a 
        Communist Plot, or Give Me Access to Your Computer, Dammit!

    Chuq Von Rospach, Fictional Reality: A Secure USENET, an Exercise
        in Futility.

    Greg Woods, NOAO: Benign Dictatorships in Anarchic Environments: A
        Case Study

    Peter Honeyman, University of Michigan: Security Features in
        Honey-DanBer UUCP, or Why a Flat Name Space is Good.

    John Mashey, MIPS Computers: RISC security risks on Usenet

    Peter G. Neumann, SRI: The RISKS Of Risk Discussion, or
        Why This Conference Should be Classified.

    William Joy, Sun Microsystems: Unix is Your Friend.

    Donn Parker, SRI: Breaking Security for Fun and Profit: A Survey

    Lauren Weinstein, The Stargate Project: Stargate Encryption;
        Turning Free Data into Revenue.

    Mark Horton & Rick Adams, The UUNET project: Security Aspects
        of Pay for Play on USENET.

    C. Edward Brown, National Security Agency: How to get USENET
        feeds when you don't exist, A Case Study.

    Gordon Moffett, Amdahl Corp.: The USENET anarchist's cookbook;
        An alternative to the backbone cabal

    John Quarterman, University of Texas: The USENIX social agenda
        and national security; A summary of Usenet discussions
        from Star Wars to Tar Wars.

    Landon C. Noll & Ron Karro, Amdahl Corp.: Public Key Encryption
        in Smail3.1; How to send E-mail that the NSA can't read

    A. I. Gavrilov, KGB, North American Information Bureau: Exporting
        American Military Information via Encoded USENET Signatures,
        Theory and Practice.

The Conference will be held March 2 through 4, 1989 on the campus of the
University of North Saskatchewan in Hoople, Saskatchewan, Canada. Registration
is $195 until December 1, 1989, $295 afterward. For more information please
contact Professor Peter Schickele, Department of Computer Science, University
of North Saskatchewan, Hoople, Saskatchewan, Canada 1Q5 UI9. 

Note: This conference is a rescheduling of the conference originally
scheduled for October, 1988 but cancelled after the United States Department
of Commerce decided that the material was too sensitive to allow
non-American citizens to read (including the material written by the
Canadians on the committee).  Because of this, the conference has been moved
to Canada, which doesn't have a complete Freedom of Speech written into its
constitution, but has better things to do than worry about ways of
circumventing civil rights.  Americans having trouble getting their papers
cleared for distribution at the conference should contact Professor Schickele
about setting up a direct uucp link for the troff source.

   [I received FIVE copies of this important announcement, so I must assume
   that some of you may have received multiple copies.  However, for those
   of you who missed it, it seemed worth including here.  I fixed the 
   mispeling of Prof. Schickele's name.  I'm sure he wouldn't mind.  
   I also fixed the spelling out of Sask., for esthetic reasons.  Otherwise
   this is as the message was received.  PGN]


Wednesday's time trouble at SRC (and fault-tolerant systems)

Jim Horning <horning@src.dec.com>
1 Apr 1988 1440-PST (Friday)
     Forwarded Message:

Date: Fri, 1 Apr 88 14:12:02 PST                             [Not a joke.]
From: mann (Tim Mann)

I've learned a bit more about what went wrong with our time service on
Wednesday; here are the details for those who are interested.  

Background:  SRC's time service is based on three master clocks.  Two of
the clocks get their time signals from radio station WWV in Colorado,
while the other gets its time signal from the GOES earth satellite.  The
master clocks are plugged into Fireflies, which periodically read them and
broadcast the time on the net.  Every Firefly on the net receives these
broadcasts, and takes a fault-tolerant average to get the time to which it
adjusts its local clock.  This amounts to taking the median if all three
time providers are heard from, the mean if two are heard, or the reported
value if only one is heard.  So we tolerate any single fault:  if one time
provider gives out bogus times, but the other two still work correctly,
clients are not fooled.  If two providers fail, clients can be fooled.
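
To make the averaging rule concrete, here is a minimal sketch in Python (an
illustration only, not the actual Firefly code; the function name and the
representation of times as plain day-of-year numbers are assumptions):

    def fault_tolerant_average(readings):
        """readings: times (e.g., day-of-year) heard from the time providers."""
        r = sorted(readings)
        if len(r) == 3:
            return r[1]                   # median: one bogus clock is outvoted
        if len(r) == 2:
            return (r[0] + r[1]) / 2.0    # mean of the two heard
        if len(r) == 1:
            return r[0]                   # no choice but to believe it
        return None                       # nothing heard; leave the clock alone

    # With all three providers up, one bad clock does no harm:
    #   fault_tolerant_average([190, 90, 90])  ->  90
    # With one provider silent and one faulty, clients are fooled:
    #   fault_tolerant_average([190, 90])      ->  140.0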

Around March 23, Mike Schroeder had trouble with his Firefly, which hosts
one of the WWV clocks.  Our hardware guys came up and fixed it, but left
the console baud rate switch in the wrong position, so the time server
couldn't read the clock.  Now there were only two time servers, so clients
took the mean and still got the right time.  Unfortunately, the current
time service implementation doesn't send a message to a human when this
happens; it just logs the event in a place that's seldom looked at.  So
Mike's clock stayed down until yesterday afternoon, March 31.

Then on Wednesday afternoon (March 30), something really unusual happened.
The WWV clock connected to my Firefly suddenly decided that it was July 8
(the 190th day of the year) instead of March 30 (the 90th day).  About two
hours later it switched back to March 30.  But the incorrect readings had
some bad consequences.  

First, because there were now two faulty clocks, the client hosts could no
longer cope.  They took the mean of the two time providers that were reporting
and started trying to advance their clocks to the 140th day of the year by
running fast.  The speedup was limited to 10% by a sanity check I put into the
implementation, so it took quite a while before anyone noticed the incorrect
time on his Firefly.  (Again, when the 10% limit is hit, the current
implementation just logs the event in an obscure place.)
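
For illustration, a sketch of what such a rate-limited adjustment might look
like (the names and structure here are assumptions, not the actual
implementation):

    def adjust_clock(local_time, target_time, interval):
        """Advance the local clock across one adjustment interval, limiting
        the correction to 10% of the interval; when the limit is hit, that
        fact ought to be reported somewhere a human will actually see."""
        max_correction = 0.10 * interval
        error = target_time - (local_time + interval)
        correction = max(-max_correction, min(max_correction, error))
        if correction != error:
            print("warning: clock error exceeds the 10% slew limit")
        return local_time + interval + correction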

The second bad consequence came from the way the current implementation
initializes the time on bootup.  Instead of averaging all the time
servers, it just believes the first one it hears.  So two people rebooted
their machines on Wednesday afternoon, noticed that the time read "July
8", and phoned me.  At that point I got to work picking up the pieces,
and phoned the WWV receiver's manufacturer.

The next day one of the chief technical people from the time receiver
company came out to try to figure out what had happened.  In the end he
ascribed it to a mysterious bug in the firmware release we were running,
and gave me a new set of PROMs with an improved algorithm for rejecting
erroneous data that shows up due to noise in the radio signal.

This incident teaches two lessons about engineering a fault-tolerant system,
neither of which should come as a surprise.  First, a fault-tolerant system
must report the faults it tolerates so they can be fixed, rather than masking
them entirely.  Second, a fault-tolerant system must tolerate faults in all
phases of its operation---it is not okay if faults during normal operation are
tolerated, but faults during initialization cause undetected errors.
                                                                       --Tim

Two old viruses

Bill Kennedy <bill@ssbn.wlk.com>
29 Mar 88 19:41:16 CST (Tue)
Someone asked for a virus dated prior to 1984.  Back in 1974 I was working at
a large firm with no fewer than three 360's lashed together, and a bright
young fellow wrote a program named "rabbit".  When rabbit was submitted, it
took a copy of itself and tossed it back, twice, into the ASP input
jobstream.  One of ASP's famous qualities was how it got stingier and
stingier about talking to its console when it began to get constipated.
Needless to say, rabbit constipated it, so the longer it ran the harder it
was to kill.  The bright young fellow was (justifiably) discharged.

Also, in response to the computer theft story: I know a fellow who founded
the first retail computer store in Texas.  One day Dallas police came into
his store (not in Dallas) and asked if he was familiar with a particular
brand of Southwest Technical Products (*that* dates it!) video terminal.  He
said he was.  They asked him if he knew how to operate the computer that came
with the SWTP terminal; he did.  Would he come down to headquarters and look
at something?  Sure...  When he got the system to boot up, he was unsure what
the police wanted.  They explained that they had just arrested a burglar and
this computer was in his apartment.  Neither the computer nor the terminal
was "hot"; the police had found sales receipts for each, and the way they
found the store was from a receipt for repair work done to the terminal.
When the disk directory was played out for the detectives, they nearly
jumped for joy!  The
burglar had carefully and faithfully recorded each job, goods stolen, where
fenced if fenced, and where stored if not fenced.  Dallas and the surrounding
cities cleared about eighty offenses just on a simple printout of the burglar's
data files.  The thief had also programmed it himself!

Bill Kennedy ...{rutgers,cbosgd,ihnp4!petro}!ssbn!bill  or bill@ssbn.WLK.COM


Credit card limits

<Richard_Wiggins@um.cc.umich.edu>
Wed, 30 Mar 88 00:11:56 EST
A standard problem with credit card limits is that a firm can run your card to
the limit with a hold, and you are then out of credit until the hold is
resolved. (I, for one, would like definitive word on when holds are removed.)

Two cases in which holds for estimated amounts are used:

When you check into a hotel, they guess how much you are likely to spend based
on the number of days and the room rate, plus a fudge factor for food or phone
charges you might ring up. If you stay beyond your original plans, they
continue to call in for additional authorizations, usually at the same
estimated rate, regardless of how much you may have spent.

If you have an accident in a rental car, and you don't have the damage
insurance from the rental agency, they may tie up your credit — up to the
limit, of course — until you make a settlement. When a car I'd rented from
National in Salt Lake City was struck by a deer a couple of years ago, they
were quite sanguine when I called to report the problem. When I physically
returned a few days later, they looked at the police estimate of $1100 and
wanted to charge it to my credit card. I persuaded them that I was adequately
insured, but they insisted on running through a blank charge slip and making me
sign it. Since it was a long walk to the terminal I very reluctantly agreed.
(My insurance paid, not my plastic.)

Now, one could imagine cases where a negligent or hostile clerk makes a typo
in the authorization process and, say, sends through a request for 10X the
proper amount.  You may have enough credit for that, but not for the next
charge!

In light of all this, it seems prudent to carry more than one credit card,
even if both are of the same "brand".


Bankcard authorizations

John Pershing <PERSHNG@ibm.com>
1 Apr 88 09:31:31 EST
The credit authorization process is essentially one big calculated risk.
What typically happens is:  the authorization request is submitted to the
merchant's bank, which forwards it to the appropriate clearinghouse (one
for each of the major cards).  If the clearinghouse does not respond
promptly (e.g., within 10 seconds), it counts as a tacit approval.  We
mustn't keep the customer waiting!

The clearinghouse's computer looks up the card number in its "negative file" of
cards that are lost, stolen, or in arrears, and rejects the transaction if it
finds an entry.  Otherwise, it forwards the transaction to the bank that owns
the account.  If the owning bank does not respond promptly (e.g., within 5
seconds), it counts as a tacit approval.  The clearinghouse then sends the
answer ("yea" or "nay") back to the merchant's bank, and thence to the
merchant.
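
A minimal sketch of that flow (an illustration, not any bank's real code;
ask_issuer and ask_clearinghouse are hypothetical helpers that return "yea"
or "nay", or None if no reply arrives in time):

    CLEARINGHOUSE_TIMEOUT = 10.0  # seconds before the merchant's bank gives up
    ISSUER_TIMEOUT = 5.0          # seconds before the clearinghouse gives up

    def clearinghouse_decision(card, amount, negative_file, ask_issuer):
        if card in negative_file:              # lost, stolen, or in arrears
            return "nay"
        answer = ask_issuer(card, amount, timeout=ISSUER_TIMEOUT)
        return answer if answer is not None else "yea"   # silence = approval

    def merchant_bank_decision(card, amount, ask_clearinghouse):
        answer = ask_clearinghouse(card, amount, timeout=CLEARINGHOUSE_TIMEOUT)
        return answer if answer is not None else "yea"   # silence = approval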

Assuming that the computers at the banks and clearinghouses are all up 100% of
the time, along with the communication networks, and that they are not so
bogged down that they cannot respond in time, then the system always works.
It's that simple! (...chuckle...)

Anybody want to venture a guess at the transaction load seen, e.g., by the
MasterCard clearinghouse during the week before Christmas?  Does anybody still
wonder how a few bad authorizations manage to slip through the cracks?  Can
anybody think of a better way?

      John A. Pershing Jr.,       IBM Research, Yorktown Heights


Things that go POOF!

<Vander-Vlis@DOCKMASTER.ARPA>
Thu, 31 Mar 88 08:35 EST
Having worked in a NYC bank for five years I must disagree with an earlier
statement that a decomposed check is untraceable.  The only way that the check
could disappear without a trace is if it decomposed before the teller could
process it.  This is usually done at the end of the day.  Each check must be
marked with the bank's cancellation stamp.  This stamping is performed by a
machine which, at the same time, takes a photograph of the check for bank
records.  When that check finally decomposes, there will be an accounting
discrepancy between two financial institutions which (believe it or not) will
be traced back to that photograph.  This knowledge comes from the painful
personal experience of sitting with a microfilm reader looking through all
checks processed on a certain day in search of one bleepin' check.

This plot would more than likely be uncovered even if the check decomposed in
the teller's drawer.  If you have ever watched your teller when making a
deposit, you know that he/she writes down the amount of cash as well as the
amount of each check.  If, while proving the till for the day, the teller
can't come up with matching debits and credits, he/she will cross-check the
deposit slips against the checks and find the slip that doesn't have a check
to go with it.  Although the bank cannot fault the depositor for the loss of
the check, if this were to happen frequently, the bank would eventually
become aware of it.
Incidentally, this is an old scam.  I'm surprised that it actually made the
news at all.


diving tables

Joel Kirsh <KIRSH@NUACC.ACNS.NWU.Edu>
Wed, 30 Mar 88 10:12 CST
The user interface on the new diving computers is certainly critical, but most
divers are still using the standard US Navy tables (which are orders of
magnitude cheaper).  These tables contain still another RISK, that of making
unreasonable assumptions about the relevant characteristics of the user.

The allowable depths, times, and recommended decompression stops in the USN
tables were determined from a population of physically fit, well-trained, and
highly motivated subjects (i.e., USN divers).  Even so, when followed
exactly, the tables are expected to result in a nonzero percentage (on the
order of 5%) of
decompression injuries.


Diving Computers

Keith 'Dain Bramaged' Anderson <KANDERSON%HAMPVMS.BITNET@MITVMA.MIT.EDU>
Wed, 30 Mar 88 11:08 EST
I recently read a letter in this digest questioning the safety of the new
diving computers ("The Edge" and "The Skinnydipper", by a company I forget the
name of) and decided to add my 2 bits.

I have to explain a little about diving to explain these computers.

Air is made up of approximately 80% nitrogen.  At sea level, our bodies are
saturated with nitrogen.  When a diver descends, the pressure exerted on his
or her body increases by one atmosphere for every 33 feet of descent (one
atmosphere at sea level, two at 33 ft, three at 66 ft, etc.).  This increase
forces more nitrogen into solution in the diver's body.  If the diver absorbs
too much nitrogen, it will bubble out of solution as he or she ascends to
lower pressure.
Nitrogen bubbles in the bloodstream are bad (ahem).  The Navy compiled
tables, using _men_ in the prime of health, of limits on how long a diver
could stay at any given depth (down to 150 ft) and then surface normally
without getting the bends.  These are the maximum no-decompression limit
tables, and they have a 5% failure rate.  Another way to avoid the bends is
to dive to a certain depth for a certain time and then, on ascending, stop at
10 feet for a prescribed time to allow nitrogen to be outgassed before
surfacing.  These are called decompression dives, and they also have their
own set of tables.  What none of these tables allows for is the fact that if
a diver goes to 90 feet for a while and then ascends a little and spends the
rest of the dive at 60 feet, the nitrogen absorbed at 90 feet will be
outgassed at 60 feet until the body reaches the 60-foot saturation point.
What the new computers do is credit the diver for time spent at a lesser
depth and debit him or her for deeper depths.  These computers also follow
tables that are more conservative than the standard Navy tables, thus making
for a safer dive.
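
A toy sketch of that credit/debit bookkeeping, for illustration only (this is
not a real decompression model, the numbers below are placeholders rather
than an actual table, and it is certainly not something to dive with):

    NO_DECO_LIMIT = {60: 60.0, 90: 30.0, 120: 15.0}  # depth (ft) -> minutes, made up

    def remaining_minutes(profile):
        """profile: list of (depth_ft, minutes) segments, in the order dived.
        Each segment uses up a fraction of the no-decompression allowance;
        shallower segments use it up more slowly (the diver's 'credit')."""
        used = 0.0
        for depth, minutes in profile:
            used += minutes / NO_DECO_LIMIT[depth]
        last_depth = profile[-1][0]
        return max(0.0, (1.0 - used) * NO_DECO_LIMIT[last_depth])

    # A flat table would charge 15 min at 90 ft plus 20 min at 60 ft entirely
    # at 90 ft (35 min, past the 30-min placeholder limit), while the
    # segment-by-segment model still leaves roughly 10 minutes at 60 ft:
    #   remaining_minutes([(90, 15), (60, 20)])  ->  ~10.0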

The message that appears telling a diver to ascend more slowly is to prevent
a different problem.  The air coming out of a SCUBA tank arrives in the lungs
at the same pressure as the surrounding water.  At 33 feet that pressure is
about two atmospheres; if a diver fills his or her lungs there and then
ascends to the surface, where the pressure is one atmosphere, the air in the
diver's lungs tries to double its volume.  The lungs have no nerves to tell
the brain that they are being stretched too far, so they tear.  This also is
bad.  The simple way to avoid this problem is to ascend slowly enough that
the air has a chance to be expelled, and a new, lower-pressure breath can be
taken in.  Divers have a bad habit of swimming to the surface too quickly
(the new optimum rate of ascent is 20 feet per minute (!), which means it
should take 5 minutes to ascend from 100 feet down), and so the computers
constantly warn divers to ascend more slowly.

As for the question of bad human interfacing, a diver checks his or her air
pressure frequently (wouldn't you?), and the computers are designed to clamp
onto the same hose that leads to the pressure gauge, thus making them rather
hard to miss.

Keith Anderson, Hampshire College, Kanderson@hampvms

P.S. You have just received the majority of the classwork in a SCUBA course.


Re: Terminals and checking the facts (RISKS-6.52)

a.e. mossberg <aem@miavax.miami.edu>
Fri, 1 Apr 88 13:49:20 EDT
    I'm afraid Jerry's flames are well deserved; before sending the letter I
merely checked the TVI9220 and WYSE 85 manuals (the first a 'vt220'
compatible, the second a 'vt200' compatible).  They list a VT command for
entering block mode (DECEDM), but only Wyse- and Televideo-specific commands
for sending the contents of the screen.  In the case of the Televideo, the
command is available only in 9220 mode.

a.e.mossberg Bitnet: aem@miavax.miami.edu@cunyvm
             uucp: ...!uunet!miavax!aem     SPAN: aem@mthvax.span (3.91)
