The RISKS Digest
Volume 4 Issue 42

Friday, 23rd January 1987

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

A scary tale--Sperry avionics module testing bites the dust?
Nancy Leveson
Computer gotcha
Dave Emery
Re: Another Bank Card Horror Story
Robert Frankston
Stock Market behavior
Howard Israel
Gary Kremen
Engineering models applied to systems
Alan Wexelblat
Re: British EFT note
Alan Wexelblat
Train Wreck Inquiry (Risks 2.9)
Matthew Kruk
Cost-benefit analyses and automobile recalls
John Chambers
Info on RISKS (comp.risks)

A scary tale--Sperry avionics module testing bites the dust?

Nancy Leveson <nancy@ICSD.UCI.EDU>
21 Jan 87 12:53:51 PST (Wed)
I just spoke to a man at the FAA who is involved with aircraft certification.  
He told me that Sperry Avionics, which is building the computerized autopilot
(among other things) for a future new aircraft, is trying to convince them to
eliminate module testing for the software.  According to this man, Sperry
argues that programmers find module testing too boring and won't stay around
to do it.  Instead of module testing, Sperry wants to use n-version
programming and perform only functional system testing.  As long as the
results from the two channels match, they will assume they are correct.
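
A minimal sketch of the match-and-accept logic described above; the channel
computations and the tolerance are hypothetical placeholders, not Sperry's
actual code:

    def channel_a(airspeed, altitude):
        # First independently written implementation of the (made-up) spec.
        return 0.1 * airspeed - 0.001 * altitude

    def channel_b(airspeed, altitude):
        # Second, separately developed implementation of the same spec.
        return airspeed / 10.0 - altitude / 1000.0

    def autopilot_command(airspeed, altitude, tolerance=1e-6):
        a = channel_a(airspeed, altitude)
        b = channel_b(airspeed, altitude)
        if abs(a - b) <= tolerance:
            return a   # channels agree, so the output is *assumed* correct
        raise RuntimeError("channel mismatch -- disengage autopilot")

Note that the comparison can only catch faults on which the two versions
disagree; a misunderstanding of the specification shared by both teams sails
right through it, which is one reason dropping module testing is worrying.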

I am not concerned that they are using n-version programming, but that 
they are arguing that the use of it justifies eliminating something that
is considered reasonable software engineering practice.  The FAA has agreed 
to allow them to try this.  According to my FAA source, the FAA is not 
thoroughly comfortable with this, but the autopilot is only flight-crucial 
on this aircraft during about 45 seconds of the landing.  Also, their tests 
have found that pilots can successfully recover from an autopilot failure
during this period (by performing a go-around) about 80% of the time.  

I am going to talk further about this with some people at the FAA 
who are involved with certification.  If anyone else shares my concern
(or would like to allay my fears), I would appreciate hearing your
opinions and arguments.  I will convey them to the FAA unless you state
that they should remain confidential.
                                        Nancy Leveson (nancy@ics.uci.edu)

(P.S.  Anybody want to join me in writing Congress about saving Amtrak?)


Computer gotcha

Dave Emery <emery@mitre-bedford.ARPA>
Tue, 20 Jan 87 15:12:04 est
Here's a computer gotcha for you...

Like many other people, I was trying to close on a new house before the end of
the year, for tax reasons.  We had our down payment wired from our bank in New
Jersey to our bank in New Hampshire, supposedly a fail-safe transaction.
Unfortunately, the Bank of New England, which was (one of) the middlemen in
the wire transfer, failed.  Apparently, its system was overloaded and crashed.

My mother is a teller in a bank in Pittsburgh.  She says that, at least at her
bank, system crashes are a way of life.  Fortunately, she says, they rarely
lose any money.  
                Dave Emery, MITRE Corp, Bedford MA

P.S.  Bank of New England recovered later that day, and we got our money after 
we signed the papers.  The legal transaction was recorded the following day.


Re: Another Bank Card Horror Story

<Frankston@MIT-MULTICS.ARPA>
Tue, 20 Jan 87 23:42 EST
The issue of ATM accountability reminds me of a problem I am having
untangling my Mastercard transactions here.  In general, the reports
generated on the statements fail to provide the minimal information
necessary for untangling messes.  Information like which card was actually
used is entirely missing.  Only American Express seems to understand that
each instance of a card should be tagged.  This is especially annoying when
the bank doesn't seem to mind that a stolen card is used for 8 months to buy
tickets on the Eastern Shuttle.  Credit transactions don't give a hint as to
what transactions they are being counted against.

While some of this just reflects ineptness and neglect in the bank's DP
department, it also is indicative of what is going to become a real
issue as we attempt to connect our personal computers with existing
services (or even as banks connect to each other).  Electronic banking
services in general do an inadequate job of exporting and importing data.
Such concepts as unique IDs for tagging and tracking transactions don't
really exist.  In the previous Mastercard example, the transaction ID is
just some characters associated with the transaction and is likely not to
be unique.  It seems as if my home processing of this information is more
sophisticated than the bank's!
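
A minimal sketch, in Python, of the kind of tagging he is asking for; the ID
scheme (issuer code, card instance, sequence number) is an illustration, not
any bank's actual format:

    import itertools

    class TransactionLog:
        """Toy ledger in which every transaction gets a unique, traceable ID."""
        _seq = itertools.count(1)

        def __init__(self, issuer_code):
            self.issuer_code = issuer_code
            self.entries = {}

        def record(self, card_instance, amount, description):
            # Issuer + specific physical card + monotonic sequence number:
            # IDs can never collide, and a credit can name exactly which
            # debit it reverses.
            txn_id = "%s-%s-%08d" % (self.issuer_code, card_instance,
                                     next(self._seq))
            self.entries[txn_id] = (card_instance, amount, description)
            return txn_id

    log = TransactionLog("MC01")
    debit = log.record(card_instance="A", amount=-99.00,
                       description="Eastern Shuttle ticket")
    log.record(card_instance="A", amount=99.00,
               description="credit against " + debit)  # traceable reversal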

It reminds me of my attempt to set up an equipment tagging system.  I
decided to order two sets of tags -- red for permanent stickers and
black for removables, so that we could tag loaner equipment.  The office
manager followed through on this, but both sets were numbered from 1 to
1000.  It was difficult to explain why this was a problem, since it was
obvious which was red and which was black.

The problems are manifestations of the issue of fundamental information
processing literacy.  While some of us working with computers have
learned techniques to deal with aspects of this, the knowledge is not
well distributed through society, nor even the DP profession.  But the
use of computers is becoming pervasive.

This conflict is at the heart of a large class of computer-based risks.  In
the short term, the best we can do is point out the issues.  Pointing out
solutions is harder — especially when they are obvious to us.  The real
question is how we can convey this understanding to the society at large.

Are there any references that try to explain the concepts of dealing
with complicated systems and their interactions?  Maybe someone could
even gather a list of such obvious things as checksums (and the
limitations of simple checksums), unique IDs (and the low cost of using
a lot of integers), redundancy (and its low cost relative to its
benefit), etc.
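
On checksums and their limitations, a small illustration (the digit strings
are arbitrary): a plain sum is blind to transpositions, while a
position-weighted sum, the idea behind schemes like ISBN check digits, is not:

    def simple_checksum(digits):
        # Plain sum of digits: cheap, but blind to transpositions.
        return sum(digits) % 10

    def weighted_checksum(digits):
        # Position-weighted sum: swapping adjacent digits changes the result.
        return sum((i + 1) * d for i, d in enumerate(digits)) % 11

    print(simple_checksum([1, 2, 3, 4]), simple_checksum([2, 1, 3, 4]))      # 0 0
    print(weighted_checksum([1, 2, 3, 4]), weighted_checksum([2, 1, 3, 4]))  # 8 7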

On the other hand, maybe these difficulties are really blessings: an
efficient EFT system, for example, might be a serious threat to privacy,
so these annoyances and even risks may be worth tolerating until we
understand how to deal with such a system once it works smoothly.


Stock Market behavior (RISKS 4.41)

Howard Israel <HIsrael@DOCKMASTER.ARPA>
Tue, 20 Jan 87 16:17 EST
  >For the previous two witching hours, and for the foreseeable future, market
  >makers are now required to publish their required major stock trades several
  >hours in advance on these Fridays.  

Minor correction.  According to the Washington Post business section that
described this new SEC strategy of disclosure, it is not "required", but
recommended.  All major brokerages complied except one (I believe it was
Drexel Burnham Lambert Inc.), which caused a minor furor as traders acted on
"incomplete information", thus giving Drexel a slight advantage.  Drexel was
criticized for not disclosing its intended trades but countered that it had
not violated any SEC "rules" and thus had acted properly.

The intended effect of the disclosure is to give traders advance notice, in
effect reducing the "shock" factor as well as allowing the "market makers" to
adjust their inventories of stocks to prepare for the expected orders.

Note that a trade can be put in by a trader to be executed "at the market
closing price" for a given stock.  Regardless of what the price is, the trade
will be executed.  The deluge of orders on the "triple witching hour" at the
"market closing price" often caused the ticker to be delayed up to a half hour
at the closing.


Stock Market behavior

<kremen@aerospace.ARPA>
Wed, 21 Jan 87 13:48:33 -0800
  >The problem of the ``triple witching hour'' is that during a few hours on
  >the third Friday of each third month (typically from 3-4PM) there is an
  >immense burst of market activity as major participants rearrange their
  >computer selected portfolios.  (This particular time is triggered by the 
  >expiration time of a key financial component in these ``computer based'' 
  >trades.)

Not really true: most of the problem occurs in the last 10 minutes of
trading, when the "unwinding" of stock index futures, options on those
futures, and the underlying equities occurs.  Usually the brunt of the
unwinding occurs in Chicago, where the futures are traded.  One sees only
a portion of this in the volume on the New York Stock Exchange.  Also, not
all unwinding occurs on "expiration day".  If conditions are favorable,
stock positions can be unwound earlier.

  >Before these trading programs, hearing that someone needs to sell 100,000
  >shares of IBM quick, like in the next 5 minutes, meant that there was a
  >major problem at IBM.  Many people still react in panic when they hear 
  >such news.

NO ONE panics.  Since 1982, when stock market indexes (such as the Major
Market Index or the S&P 100) started to be traded, the "triple witching
hours" have occurred.  Only within the past two years have the underlying
markets been liquid enough to make it really worthwhile.  Anyway,
institutions frequently sell (or buy) 100,000 shares of IBM for normal
trading purposes.  [...]

For more information see the December 29, 1986 issue of Insight magazine.


Engineering models applied to systems

Alan Wexelblat <wex@mcc.com>
Tue, 20 Jan 87 10:49:53 CST
In Burnham's _The Rise of the Computer State_, MIT Professor Jeffrey A. Meldman
is quoted as follows:
    "In engineering, there is a principle which holds that it is
    frequently best to have a loosely-coupled system.  The problem
    with tightly coupled systems is that should a bad vibration
    start at one end of the machine, it will radiate and may cause
    difficulties in all parts of the system.  Loose coupling is 
    frequently essential to keep a large structure from falling down.
    I think this principle of mechanical engineering may be applicable
    to the way we use computers in the United States."

The context of the quote is a chapter on the aggregations of power that can
accrue in large, centralized computer systems and the risks (and temptations)
of abuse of this power.
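
As a concrete (if toy) software analogue, assuming nothing about Burnham's
or Meldman's examples: in the tightly coupled version below, a fault in one
step propagates straight back into its caller; in the loosely coupled
version, the components share only a queue, and the fault is absorbed
locally:

    import queue

    work = queue.Queue(maxsize=100)
    dead_letter = []

    def process(record):
        # Hypothetical downstream step that can fail independently.
        if record.get("bad"):
            raise ValueError("malformed record")

    def produce_tight(record):
        process(record)   # tight coupling: a fault here crashes the producer

    def produce_loose(record):
        work.put(record)  # loose coupling: hand off and keep going

    def consume_loose():
        while not work.empty():
            record = work.get()
            try:
                process(record)
            except ValueError:
                dead_letter.append(record)  # contained; producer never notices

    produce_loose({"id": 1})
    produce_loose({"id": 2, "bad": True})
    consume_loose()
    print(len(dead_letter))   # 1 -- the "vibration" stopped at the consumer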

RISKS readers have previously dismissed other engineering models as
inapplicable to software systems.  Comments on this one?

Alan Wexelblat
ARPA: WEX@MCC.ARPA or WEX@MCC.COM
UUCP: {seismo, harvard, gatech, pyramid, &c.}!ut-sally!im4u!milano!wex


Re: British EFT note

Alan Wexelblat <wex@mcc.com>
Tue, 20 Jan 87 10:54:42 CST
It is worth reminding RISKS readers that a British "billion" is a million
millions (1,000,000,000,000) rather than the American thousand millions
(1,000,000,000).     --Alan Wexelblat

                  [This was also noted by Howard Israel.  By the way, I
                  observe that the BBC radio broadcasts on PBS now routinely 
                  use "thousand million" and "million million"...  PGN]


Train Wreck Inquiry (Risks 2.9)

<Matthew_Kruk%UBC.MAILNET@MIT-MULTICS.ARPA>
Fri, 23 Jan 87 08:46:12 PST
Just caught bits and pieces on the morning radio news about this item mentioned
in Risks 2.9:

An inquiry into the collision between a VIA passenger train and a Canadian
National freight train near Hinton, Alberta, last year has put the blame on
human error.  The freight crew were said to have ignored various safety
procedures.  Also, Canadian National was accused of overlooking too many
minor safety infractions and of letting crews work without sufficient rest
periods between shifts.

Computer error was not mentioned as a contributing factor.


Cost-benefit analyses and automobile recalls (RISKS DIGEST 4.39)

John Chambers <jc@cdx39.UUCP>
23 Jan 87 19:06:07 GMT
>     Moreover, even if the [cost-benefit] analyses were performed
>     correctly, the results could be socially unacceptable. [...]  In the
>     case of automobile recalls, where the sample size is much larger, the
>     manufacturers may already be trading off the cost of a recall against
>     the expected cost of resulting lawsuits, although I hope not.

Sure they are.  Have you ever heard of "liability insurance"?  

There is also the general observation that, the way most forms
of cost-benefit analysis work, ignoring (i.e., failing to assign
an explicit value to) some factor is mathematically equivalent
to assigning it a cost of zero.  In other words, a cost-benefit
analysis can't generally distinguish an unknown cost from a zero
cost.  Similarly for benefits.
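
A one-line illustration, with made-up dollar figures: in the tally below,
the factor that was never estimated contributes exactly 0, indistinguishable
from a factor known to cost nothing:

    factors = ["recall_campaign", "expected_lawsuits", "reputation_damage"]
    estimates = {"recall_campaign": 40e6, "expected_lawsuits": 25e6}
    # "reputation_damage" was never estimated -- it silently defaults to 0.
    total = sum(estimates.get(f, 0.0) for f in factors)
    print(total)  # 65000000.0, as if reputation damage were known to be zero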

> The "value of a human life" is not a constant.  The life of a volunteer or
> professional, expended in the line of duty, has always been considered less
> costly than the life of innocents.  

Huh?  Most military organizations that I've heard of consider the cost
of training a soldier to be significant; the value of innocents (i.e.,
civilians) is generally ignored, and thus considered to be zero.  This
was painfully obvious during the Vietnam war, for instance.

> As for the dangers of incorrectly estimating risks, I think that the
> real danger is in not estimating risks.

If you listen to public discussions of risky situations, it soon becomes
quite clear that few people are able to distinguish "We know of no risks"
from "We know there are no risks".  

...

>    One can only imagine the reaction of the program authors when they 
>    discovered what one last small change to the program's scoring function 
>    was necessary to make it match the panel's results.  It raises  
>    interesting questions of whistle-blowing.

Even if the programmers looked at the regression function, it's 
not clear that they would have even seen a racial component.  For 
a nice example of how things can go wrong, consider that you can 
do a rather good job of sex discrimination while totally ignoring 
sex; you just use the person's height instead.  There have been 
published studies that imply that this is widespread practice;  
it's been known for some years that [in the USA] a person's height 
is a better predictor of their income than their sex, and if height 
is included, knowing their sex adds no further information.  It's 
likely that in the UK there are several attributes that are strongly 
correlated with race, so you need not necessarily use race at all.  
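
A quick synthetic demonstration of the proxy effect (the population and the
pay rule are entirely made up): a rule that never looks at sex, only at
height, still produces systematically different outcomes by sex:

    import random, statistics

    random.seed(0)
    population = []
    for _ in range(10000):
        sex = random.choice("FM")
        height = random.gauss(64 if sex == "F" else 70, 3)  # inches
        population.append((sex, height))

    # A "sex-blind" pay rule based purely on height.
    pay = {"F": [], "M": []}
    for sex, height in population:
        pay[sex].append(1000 * height)

    for sex in "FM":
        print(sex, round(statistics.mean(pay[sex])))  # F ~64000, M ~70000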

    John M Chambers         Phone: 617/364-2000x7304
Email: ...{adelie,bu-cs,harvax,inmet,mcsbos,mit-eddie,mot[bos]}!cdx39!{jc,news,root,usenet,uucp}
Codex Corporation; Mailstop C1-30; 20 Cabot Blvd; Mansfield MA 02048-1193
