The RISKS Digest
Volume 16 Issue 41

Thursday, 22nd September 1994

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Computer disk crash causes misprinted ballots
Lani Teshima-Miller
Internet Gets First False Ad Charge
PGN
Uninterruptable thought patterns
Phil Agre
Re: Digital Logos
Peter J. Denning
Reason 55: National Security and the FBI Wiretap Bill
Marc Rotenberg
Yet more daring tales of address disasters!
Paul T. Keener
High Security Digital Payment Systems
Michael Waidner
The Fuzzy Systems Handbook
Rob Slade
Neural redlining
Andrew W Kowalczyk
Peter J. Denning
Fernando Pereira
Thomas E. Janzen
Jan Vorbrueggen
Bob Frankston
John Turnbull
Info on RISKS (comp.risks)

Computer disk crash causes misprinted ballots

Lani Teshima-Miller <teshima@uhunix.uhcc.Hawaii.Edu>
Tue, 13 Sep 1994 18:14:35 -1000
(As reported on KHON-TV2).

The Hawaii Republican Party was up in arms today when it was discovered that a
"hard disk crash" caused a number of absentee ballots on the island of Maui
(state of Hawaii) to inadvertently omit two candidates running for state
legislature.

The crash, which reportedly occurred at a California company which had been
contracted to print the ballots, means that over 140 Maui residents are now
being asked to go in and re-cast their ballots.  The printing error was
discovered when friends of one of the omitted candidates could not find the
candidate's name on the ballot.

The security risks are obvious--if the printing of ballots is so dependent on
a single hard drive, it would seem fairly straightforward that people could
sabotage elections this way.

Lani Teshima-Miller (teshima@uhunix.uhcc.Hawaii.edu) "Sea Hare"
UH School of Library & Info Studies.


Internet Gets First False Ad Charge

"Peter G. Neumann" <neumann@csl.sri.com>
Thu, 22 Sep 94 9:25:34 PDT
An AP item from 15 Sep 1994 (e.g., San Francisco Chronicle, D3) noted that the
Federal Trade Commission filed a false-advertising complaint against Brian
Corzine of Sacramento, California, for making false claims in promoting a
credit-repair program.  A federal court ordered the promotion stopped.
Corzine maintained he was merely a reseller.


Uninterruptable thought patterns

Phil Agre <pagre@weber.ucsd.edu>
Fri, 16 Sep 1994 15:41:19 -0700
In his famous paper "The relation of habitual thought and behavior to
language" (published in his collected papers, "Language, Thought, and Reality"
by MIT Press), the American linguist Benjamin Lee Whorf began with some tales
from his day job as an insurance risk expert, in which he described the lapses
of reasoning that led to accidents that his company had to pay for.  He says,
for example, that people often assume that an "empty barrel" is wholly
innocuous — even if it recently contained gasoline and can thus be reasonably
expected to be exhaling flammable vapors.  He suggests that part of the
problem is with the word "empty", which derives from a particular folk model
of objects and materials that unfortunately does not correspond accurately
enough to reality.

I am sure that Whorf is cringing in his grave at the power outage that hit
O'Hare Airport this past Wednesday morning, which is described in a brief
article in the Wall Street Journal:

  Daniel Pearl, A power outage snarls air traffic in Chicago region,
  Wall Street Journal, 15 September 1994, page A4.

It would seem that power was lost for two hours when someone shorted out the
power system for the Aurora air traffic control center while testing the
uninterruptable power supply.

  "There are certainly some additional precautionary measures that
  have to be taken to make sure this doesn't happen again", said Stan
  Rivers, the FAA's deputy associate administrator for airway facilities.

  One precaution, he said, is not to perform the work when air traffic
  is at its peak.  The power outage occurred at 8:45am CDT at the
  Aurora, Ill., facility, which serves traffic to and from the world's
  busiest airport, Chicago O'Hare International Airport.

Now, I have no first-hand knowledge of this event, but I do think it is
legitimate to ask what could possibly have possessed someone to mess with the
power supply for O'Hare Airport's traffic control at 8:45 in the morning.  I
would like to suggest that, just maybe, it has to do with the phrase
"uninterruptable power system".  Says the Journal,

  It was the second time this year that the installation of an "uninterruptable
  power system" interrupted power at an air-traffic center.

Granted, the first occurrence was more freakish than the second (a falling
ladder hit a "break glass and hit button" button).  But do you suppose that
these things would happen during the morning rush hour, as opposed to 4AM, if
uninterruptable power supplies were called "back-up kludges for highly
sensitive and fragile power supplies"?

Where do hubristic terms like "uninterruptable" come from?  They come from a
very narrow understanding of "the system" — if the power supply can overcome
those mishaps that can happen within the narrow technical conception of the
system that is found in box-and-arrow type diagrams, then it's
"uninterruptable" so long as the world outside remains totally pure and safe.

And if you don't believe me, I've got an "inherently safe nuclear
reactor" to sell you — I kid you not!

Phil Agre, UCSD


Re: Digital Logos (Lawrence, RISKS-16.40)

Peter J. Denning <pjd@cne.gmu.edu>
Mon, 12 Sep 94 12:33:12 EDT
Dennis Lawrence reported on an ad from TigerDirect, Florida, offering a set of
650 high-quality logos of major corporations.  Logos are copyrighted by their
owners and in many cases trademarked.  This means that the copyright owner
has sole rights to control who gets copies and who distributes them.  If
TigerDirect has the explicit permission of the owners of the logos, all is
well.  If not, then not only TigerDirect but anyone else using the logos
without authorization is breaking the law.  Anyone who would use a logo,
authorization or no, to commit a fraud is also breaking the law.

Peter Denning


Reason 55: National Security and the FBI Wiretap Bill

Marc Rotenberg <rotenberg@washofc.epic.org>
Tue, 13 Sep 1994 22:48:15 EST
100 Reasons to Oppose the FBI Wiretap Bill

Reason 55:   The largest purchaser of telecommunications equipment
             in the federal government said the FBI wiretap plan
             would have an *adverse impact* on national security.

     In 1992 the General Services Administration wrote that the FBI
wiretap plan would make it "easier for criminals, terrorists, foreign
intelligence (spies) and computer hackers to electronically penetrate
the phone network and pry into areas previously not open to snooping."
The confidential memo was obtained as a result of a Freedom of
Information Act request.

What To Do: Fax Rep. Jack Brooks (202-225-1584).
Express your concerns about the FBI Wiretap proposal.

100 Reasons is a project of the Electronic Privacy Information Center
(EPIC) in Washington, DC.  For more information: 100.Reasons@epic.org.


Yet more daring tales of address disasters!

Paul T. Keener <keener@upenn5.hep.upenn.edu>
Sat, 17 Sep 94 14:11:32 EDT
Reading Peter Ladkin's account of his colleague's address woes brought to mind
an incident that occurred to my friend recently.  He moved and sent an address
correction to a company in which he holds some stock.  The company
acknowledged his change of address, but sent the acknowledgment to his *old*
address.  One wonders whether the database update occurred after the letter
was generated or whether it never happened at all.  He has not received any
mail from the company since.

Paul T. Keener      keener@upenn5.hep.upenn.edu


High Security Digital Payment Systems

Michael Waidner <waidner@ira.uka.de>
Fri, 16 Sep 94 17:24:04 EDT
The following is the abstract of a paper that will be presented at
ESORICS 94. The full text is available via anonymous ftp from
ftp.uni-hildesheim.de in /pub/publications/Sirene/publications, file
BBCM1_94CafeEsorics.ps.

  The ESPRIT Project CAFE:
  High Security Digital Payment Systems

  Jean-Paul Boly, Antoon Bosselaers, Ronald Cramer, Rolf Michelsen,
  Stig Mjolsnes, Frank Muller, Torben Pedersen, Birgit Pfitzmann,
  Peter de Rooij, Berry Schoenmakers, Matthias Schunter, Luc Vallee,
  Michael Waidner

  CAFE (Conditional Access for Europe) is an ongoing project in the
  European Community's ESPRIT program. The goal of CAFE is to develop
  innovative systems for conditional access, and in particular,
  digital payment systems. An important aspect of CAFE is high
  security of all parties concerned, with the least possible
  requirements that they are forced to trust other parties (so-called
  multi-party security). This should give legal certainty to everybody
  at all times.  Moreover, both the electronic money issuer and the
  individual users are less dependent on the tamper-resistance of
  devices than in usual digital payment systems. Since CAFE aims at
  the market of small everyday payments that is currently dominated by
  cash, payments are offline, and privacy is an important issue.
     The basic devices used in CAFE are so-called electronic wallets,
  whose outlook is quite similar to pocket calculators or PDAs
  (Personal Digital Assistant). Particular advantages of the
  electronic wallets are that PINs can be entered directly, so that
  fake-terminal attacks are prevented. Other features are:
     * Loss tolerance: If a user loses an electronic wallet, or the
       wallet breaks or is stolen, the user can be given the money
       back, although it is a prepaid payment system.
     * Different currencies.
     * Open architecture and system.
  The aim is to demonstrate a set of the systems developed in one or
  more field trials at the end of the project. Note that these will be
  real hardware systems, suitable for mass production. This paper
  concentrates on the basic techniques used in the CAFE protocols.


Michael Waidner, Universität Karlsruhe
(currently on leave to IBM Zurich Research, email: wmi@zurich.ibm.com)


The Fuzzy Systems Handbook

"Rob Slade, Ed. DECrypt & ComNet" <roberts@mukluk.decus.ca>
Sat, 17 Sep 1994 20:12:47 EST
BKFUZHBK.RVW  940616

Academic Press, Inc.
955 Massachusetts Avenue
Cambridge, MA 02139
Josh Mills, Marketing, jmills@acad.com
publisher@igc.org
"The Fuzzy Systems Handbook", Cox, 1994, 0-12-194270-8

We dinosaurs of the procedural programming language orientation tend to have
problems with logic programming languages such as Prolog.  It is difficult
to reorganize your thinking into the existential model of the expert systems
programmer.  We have similar problems with object-oriented programming.  The
difficulties we have with fuzzy logic probably arise for the same reason.
Take heart, fellow dinosaurs.  At the very least, this book explains *why* we
find fuzzy systems so troublesome.  They are simply expert systems with a
better conceptual grasp of probabilities.

The trade media have hyped fuzzy systems as the new and coming thing.
Information systems professionals, who have lived through a great number of
"coming things", still know little more, basically, than that control
systems are supposed to be better with fuzzy logic, and that close now
counts in both horseshoes and fuzzy systems.

The reason for the confidence in this science of imprecision can, in part, be
demonstrated in the control realm.  Suppose you are building an automatic
collision avoidance system for cars.  It is fairly straightforward to program
in the sequence of actions to be taken to slow the car as it approaches another
object.  If, however, the system fails, then what happens if you do hit the
object?  Will the "distance" become a negative number?  If so, will the brakes
bind or release?  Will the drive train stop, accelerate, or go into reverse?
This situation is simplistic, but the outcome, in a procedural language, must
be accounted for, prepared for, and tested.  Fuzzy systems deal with ranges,
and it is much easier to see and understand that the concept of "close"
should also include "hit" -- even before you start to build the actions to be
taken.

The potential disasters associated with systems that would flip planes upside
down when they flew over the equator are not confined to control systems, as IS
professionals are all too well aware.  Financial disasters can be precipitated
by "decision support" software which can generate market crashes.  Similar
damage can be done on a smaller scale by specialized programs which may have
undiscovered, and unintended, assumptions.  As with control systems, working
with ranges may make the pitfalls more obvious than working with static and
sterile values.

Even so, it is difficult for the programmer to translate the concepts of fuzzy
logic into code to play with.  Cox has, therefore, given you code to play with.
A high-density MS-DOS format floppy contains C++ source code to mess around
with.  (C programmers should be able to work with most of it, and, since it
is source code, Mac devotees should be able to use it as well.)

For those wishing to explore this new field "hands-on", a slightly high-toned,
but very useful, introduction.

copyright Robert M. Slade, 1994   BKFUZHBK.RVW  940616


Re: Neural nets and redlining (Baube, RISKS-16.40)

Andrew W Kowalczyk <AKOWALCZ+aLIFDR1%Allstate_Corp+p@mcimail.com>
Mon, 12 Sep 94 15:26 EST
I have attended several presentations from neural net software vendors.  If
these sales pitches are any indication, then neural nets are being used for
just the opposite of the dark purposes Fred Baube fears.

Successful lenders (and insurance companies) are driven by two basic business
rules: avoid "bad" risks and sell more product (increase market size).  Very
broadly applied underwriting rules (don't write mortgages in ZIP codes with
high default rates, don't insure Corvettes driven by young males) support the
first rule but run counter to the second.

There is only so much market share that can be taken from the competition.
What used to be considered a marginal market needs to be reconsidered in order
to fuel the business growth that Wall Street demands.  Many companies are
essentially saying: "at first blush you sound like a bad risk -- find me an
excuse to sell to you" -- and neural nets help find that excuse (maybe
23-year-old male Corvette drivers who are Ph.D. candidates residing in rural
areas are excellent risks).

The initial "training set" for the neural net may be current underwriting
practices.  This would be the same starting point as a rules-based,
inference-engine-driven expert system.  Neural nets promise to gain an
advantage over this because more factors can be considered and the "model" is
self-correcting based on actual sales results and loss experience.

Andy Kowalczyk


Re: Neural nets and redlining (Baube, RISKS-16.40)

Peter J. Denning <pjd@cne.gmu.edu>
Mon, 12 Sep 94 12:41:10 EDT
F. Baube speculates that a neural net might hide an unlawful decision-process.
Two comments on this:

1.  A neural net -- indeed, any other machine, whether in a black box or not
-- cannot "be responsible" for actions that break the law.  In Baube's
example, it is the loan officer who makes the declaration that the loan is
granted (or not), not the neural net.  It is on the loan officer's authority
that the loan is granted; the machine has no authority at all.

2.  It is easy enough to test whether a black-box machine (e.g., a neural
net) is breaking the law.  One subjects it to a battery of test cases and
sees what it is advising.  In Baube's example, it would be statistically easy
to determine whether there is probable cause to believe the loan officer is
engaging in redlining by relying on the advice of the machine.  Such tests
can be performed even if the "training set" has been lost.
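
By way of illustration only (the sketch below is mine, not Denning's, and
approve_loan is a hypothetical stand-in for the machine under audit), such a
battery might feed the black box pairs of applications identical in
everything but neighborhood:

  import random

  def approve_loan(income: float, debt: float, zip_code: str) -> bool:
      """Hypothetical stand-in; a real audit queries the deployed model."""
      redlined = {"60601"}                   # hidden, unlawful criterion
      return income - debt > 20_000 and zip_code not in redlined

  def audit(n_cases: int = 10_000) -> None:
      """Compare approval rates for otherwise-identical applicants."""
      random.seed(0)
      approvals = {"60601": 0, "60614": 0}
      for _ in range(n_cases):
          income = random.uniform(20_000, 120_000)
          debt = random.uniform(0, 60_000)
          # Same applicant, two neighborhoods: any difference in outcome
          # is attributable to the ZIP code alone.
          for zc in approvals:
              approvals[zc] += approve_loan(income, debt, zc)
      for zc, count in approvals.items():
          print(f"ZIP {zc}: {count / n_cases:.1%} approved")

  audit()

A large, consistent gap between the two approval rates is exactly the kind
of statistical probable cause Denning describes, and finding it requires no
access to the training set.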

Peter Denning


Re: Neural nets and redlining (Baube, RISKS-16.40)

Fernando Pereira <pereira@research.att.com>
Mon, 12 Sep 1994 19:14:34 -0400
Fred's comments hold not only for neural nets but for any decision model
trained from data (e.g., Bayesian models, decision trees).  It's just an
instance of the old "garbage in, garbage out" phenomenon in statistical
modeling.  It is true that neural nets typically have so many parameters that
it is difficult to "blame" a particular parameter setting for particular
decisions.  However, proper evaluation of complex statistical models cannot
depend on examining particular parameter values anyway.  Instead, evaluation
requires the behavior of the model on held-out test data to be documented,
and the test data to be made available for verification and certification
purposes.
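
As a sketch of that discipline (my construction, with toy stand-ins; a real
certification would use the archived test set and the deployed model),
documenting held-out behavior per group might look like this:

  def evaluate(model, held_out):
      """Report approval and error rates per group on held-out cases."""
      stats = {}
      for features, group, outcome in held_out:
          s = stats.setdefault(group, {"n": 0, "approved": 0, "errors": 0})
          pred = model(features)
          s["n"] += 1
          s["approved"] += pred
          s["errors"] += (pred != outcome)
      for group, s in sorted(stats.items()):
          print(f"{group}: approval {s['approved'] / s['n']:.1%}, "
                f"error {s['errors'] / s['n']:.1%} (n={s['n']})")

  def model(features):               # toy stand-in for the trained model
      return features["income"] > 50_000

  held_out = [                       # (features, group, correct outcome)
      ({"income": 60_000}, "urban", True),
      ({"income": 40_000}, "urban", False),
      ({"income": 60_000}, "rural", True),
      ({"income": 45_000}, "rural", True),
  ]
  evaluate(model, held_out)

Disparities in the per-group numbers are what an acquired bias looks like
from the outside, whether or not anyone can point to the parameter that
encodes it.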

If the model has acquired biases from its training data, those should be
inferable from its performance on its test data or new test sets, and the
original developer should be held responsible for those biases.  Just as
ignorance of the law is no excuse for breaking it, a bad choice of training
data is no excuse either.  For example, if the data comes from the decisions
of human beings (e.g., loan officers) on particular cases, a lot of care
must be exercised to ensure that those decisions were based on examination
of outcomes rather than on prejudice or confusion.  On the other hand, if
the data is derived from outcomes (e.g., for loans, cases that ended in
default vs. cases that didn't, for loans with similar terms) and there is
reasonable statistical coverage of all relevant case types, the model
developer may be able to reasonably certify that any hidden correlations that
the system discovers (e.g., between location and default rates) are not
demonstrations of intent to discriminate.

Overall, the whole issue of evaluation, let alone certification and legal
standing, of complex statistical models is still very much open. The
traditional machinery of linear or log-linear statistics does not apply
directly to nonlinear models such as neural nets or decision trees, so for
example it is difficult in general to compute model error estimates, and thus
the only source of such estimates is performance on test data. Unfortunately,
these difficulties are often swept under the carpet in the hype that surrounds
neural nets and allied methods.

(This reminds me of a possibly apocryphal story of problems with biased data
in neural net training.  Some US defense contractor had supposedly trained a
neural net to find tanks in scenes.  The reported performance was excellent,
with even camouflaged tanks mostly hidden in vegetation being spotted.
However, when the net was tested on yet another set of images supplied by
the client, it did no better than chance.  After an embarrassing
investigation, it turned out that all the tank images in the original
training and test sets had a very different average intensity from the
non-tank images, and thus the net had just learned to discriminate between
two image intensity levels.  Does anyone know if this actually happened, or
is it just neural net "urban folklore"?)

Fernando Pereira  2D-447, AT&T Bell Laboratories  600 Mountain Ave, PO Box 636
Murray Hill, NJ 07974-0636  pereira@research.att.com


RE: Neural Redlining == Plausible Deniability ?

"Thomas E. Janzen" <tej@world.std.com>
Mon, 12 Sep 1994 19:40:52 +0059 (EDT)
F. Baube was concerned that a disingenuous bank might use a neural
network trained to redline loans, and then destroy the training
materials to hide the intent.

Why would artificial neural networks have this advantage over the biological
ones that banks have used to redline mortgages for years?  A human loan
officer could be trained with redline-based materials that were later
destroyed.  Human neural nets are also opaque, although they often seem
transparent.

Tom

Tom Janzen - tej@world.std.com  Real-time C, unix, VMS, AmigaDOS, Musical
software, writer.  See my video Dilettante at the DeCordova near Boston.


Re: Neural Redlining == Plausible Deniability ? (RISKS-16.40)

Jan Vorbrueggen <jan@neuroinformatik.ruhr-uni-bochum.de>
Tue, 13 Sep 1994 12:25:34 +0200
The decision-making process isn't really opaque.  There are techniques for
running a standard feed-forward net "backwards", which will tell you what
types of input are required in order to get the desired output.  If you do
this systematically, you can easily evaluate which areas of the input space
result in a certain decision.
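
One such technique is gradient-based inversion: hold the trained weights
fixed and ascend the gradient with respect to the *input*.  The sketch below
is mine, and the weights are random stand-ins for a trained net, but it
shows the idea for a tiny two-layer network:

  import numpy as np

  rng = np.random.default_rng(0)
  W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # hidden layer
  W2, b2 = rng.normal(size=4), rng.normal()              # output neuron

  def forward(x):
      h = np.tanh(W1 @ x + b1)
      y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))           # sigmoid output
      return y, h

  x = rng.normal(size=3)          # start from an arbitrary input
  for _ in range(500):
      y, h = forward(x)
      # Chain rule back to the input:
      #   dy/dx = W1^T (y (1 - y) * W2 * (1 - h^2))
      grad_x = W1.T @ (y * (1 - y) * W2 * (1 - h ** 2))
      x += 0.1 * grad_x           # push the output toward 1 ("approve")

  print("input that maximizes the output:", np.round(x, 2))
  print("network output:", round(float(forward(x)[0]), 4))

Repeating this from many starting points maps out the regions of input
space that produce a given decision, which is what makes an audit of a
supposedly opaque net feasible.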

Sure, if the training set contained redlining, the net would learn it -- but
that's part of the job, isn't it?  Any other statistical technique (and in
this case, a net trained by, e.g., backpropagation is just a non-parametric
estimator) would suffer from exactly the same problem.  In both cases, the
original data on which the rules or weights are based could be hidden or
destroyed.

Jan Vorbrüggen, Institut f. Neuroinformatik, Ruhr-Universität Bochum
jan@neuroinformatik.ruhr-uni-bochum.de


Re: Neural Redlining == Plausible Deniability ?

<Bob_Frankston@frankston.com>
Fri, 16 Sep 1994 11:12 -0400
But we already use a neural net for redlining -- the human brain, which can
hide all sorts of reasoning and, more likely, alternatives to reasoning.  One
of the big issues in redlining and other areas where social policy meets
business policy is that the argument is often over results and premises.
More to the point, the same issue occurs when business policy meets business
policy.

The opacity of neural nets' reasoning is still an issue, though perhaps more
of an issue when trying to build systems with reproducible results.  In
general, however, opacity can be a property of any complex system.


Re: Neural Redlining == Plausible Deniability ?

John Turnbull <turnbull@turnbull.wariat.org>
Sun, 18 Sep 94 20:27 EDT
I remember a similar case coming to light in England about 7 years ago.  It
involved a medical school that was using some form of artificial intelligence
program to screen applicants.  One item of data that was not available to the
program was the race of the applicant, which should have helped ensure a
relatively unbiased decision.  However, after complaints and some
investigation, it was determined that the length of an applicant's last name
did have an effect on the outcome, apparently because many students of,
among others, Indian descent have significantly longer last names than the
Anglo-Saxon applicants.  So it seems even this method can be caught,
although catching it obviously requires careful study.

John Turnbull
