The RISKS Digest
Volume 8 Issue 64

Wednesday, 26th April 1989

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

DARPA studying high-tech surveillance for drug wars
Jon Jacky
Re: SKYDOME
Michael Wagner
Cursing the Darkness?
Ronald J Bottomly
Data Checking at Osco's
Scott Turner
Re: Common thread in recent postings: People
Hugh Miller
John Karabaic
Re: Use of "Standard" ...
Pete Schilling
Steve Bellovin
Info on RISKS (comp.risks)

DARPA studying high-tech surveillance for drug wars

<JON.JACKY@GAFFER.RAD.WASHINGTON.EDU>
26 Apr 1989 09:05:20 EST
The following excerpts are from FEDERAL COMPUTER WEEK, vol 3 no 17, April 24
1989, pages 1, 53:

DARPA PROGRAM TO BATTLE WAR ON DRUGS, TERRORISM by Gary H. Anthes

The Defense Advanced Research Projects Agency is quietly putting together a
multimillion-dollar program to develop advanced computer technology for the
wars on drugs and terrorism.

The technology is likely to be built on a foundation of artificial intelligence
and parallel processing, and it will be applied in situations that the Defense
Department refers to as special operations/low intensity conflicts, or SO/LIC.

The new program is headed by William Marquitz, deputy director of DARPA's
Information Science and Technology Office and a veteran of the Central
Intelligence Agency and the Pentagon's command, control, communications and
intelligence unit.

According to Marquitz, much data that could be useful in counternarcotics and
counterterrorism --- for tracking currency, cargo shipments and phone call
patterns, for instance --- is readily available.  But the government generally
has not brought to bear fast computers that can examine trillions of
bits of information per day and smart software able to distill out the tiny
amounts of useful information. ...

One agency with a small research budget is the Drug Enforcement Administration.
According to Marquitz, DEA manually reviews printouts of international
telephone calls, looking for suspicious patterns.

For example, repeated calls to South America from a private residence in
Miami might trigger some sort of investigation.

Obviously such a procedure is tedious and error-prone.  Marquitz envisions a
fast parallel processor running an expert system that can examine millions of
telephone calls a day and discern subtle and complex patterns for follow-up by
law enforcement officials.  Marquitz says it isn't a problem of data collection
but of data fusion and reduction, a process he calls ``digging the signal out
of the noise''.
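
Purely as an illustration of the kind of rule Marquitz describes, and not any
actual DEA or DARPA system, the "repeated calls to South America" pattern
reduces to a counting query.  A minimal sketch in Python; the record format,
countries, and threshold are all invented:

    from collections import Counter

    # Invented record format: (caller_number, destination_country);
    # a real system would work from billing records at the switches.
    calls = [
        ("305-555-0101", "Colombia"),
        ("305-555-0101", "Colombia"),
        ("305-555-0101", "Colombia"),
        ("206-555-0199", "Canada"),
    ]

    THRESHOLD = 3   # invented cutoff for "repeated" calls

    counts = Counter(caller for caller, country in calls
                     if country in {"Colombia", "Peru", "Bolivia"})
    flagged = [caller for caller, n in counts.items() if n >= THRESHOLD]
    print(flagged)   # ['305-555-0101'] -- a candidate for follow-up

The hard part, as Marquitz suggests, is not this logic but the scale: running
such queries over trillions of bits of data per day.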

Opportunities to marry AI, parallel processing and pattern recognition
techniques exist in several other areas, Marquitz said.  A great deal of
cocaine enters the country in cargo containers that mysteriously disappear for
days at a time and then magically reappear.  ``The data to track these
containers is available in manifest records and can be readily supplied, but it
is not automated,'' Marquitz said.  A computer system could track the movement
of these boxes on a near real-time basis, looking for anomalous conditions, he
said.
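
Likewise purely illustrative (the sighting log and the definition of
"anomalous" below are invented), the disappearing-container check reduces to
scanning for long gaps between successive sightings:

    from datetime import datetime, timedelta

    # Invented sighting log for one container, in time order; real data
    # would come from the manifest records Marquitz mentions.
    sightings = [
        datetime(1989, 4, 1, 8, 0),
        datetime(1989, 4, 1, 20, 0),
        datetime(1989, 4, 6, 9, 0),    # "magically reappears" days later
    ]

    MAX_GAP = timedelta(days=2)    # invented threshold

    for earlier, later in zip(sightings, sightings[1:]):
        if later - earlier > MAX_GAP:
            print("unaccounted for between", earlier, "and", later)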

In another example, Marquitz said currency-tracking schemes could be devised,
not for checks and credit card transactions, which drug distributors never use,
but for greenbacks based on their serial numbers.

Marquitz said DARPA's current focus on counternarcotics has roots in the past.
``During the [presidential] campaign, there was a lot of debate about drugs;
the campaign highlighted the issue.  We were already looking at the more
general problem of SO/LIC, so we were up to speed about thinking about these
problems.  Now we are way out ahead,'' he said.

Marquitz also said DARPA officials are working on a five-year plan for research
and prototype development in SO/LIC.

- Jonathan Jacky, University of Washington


Re: SKYDOME (Risks 8.62)

Michael Wagner <MICHAEL@vm.utcs.utoronto.ca>
Wed, 26 Apr 89 16:05:54 EDT
>  [This of course contradicts the myth that smaller programs run faster.  PGN]

It also contradicts the story that I heard. One of my clients is an architect
who lives 2 blocks away from the SKYDOME. Living next door, he is very
interested in the SKYDOME (including the undercapacity transit plans, but
that's better left to another RISKS submission), and being an architect, he
hears many interesting things. According to him, the stress on the dome from
opening and closing it was badly underestimated, and current estimates are
that only 4 openings a year will be safe (down from an original projection in
the 30-40 range), i.e., any more will dangerously stress the machinery and lead to
early failure. I tried, without success, to determine what sort of assumptions
this revised estimate was based on, but it's not his area of expertise, so he
couldn't help me much.

From the little he told me, I can't determine whether a software rewrite would
be capable of "solving" this problem or not.
                                                        Michael


Cursing the Darkness?

Ronald J Bottomly <Bottomly@DOCKMASTER.NCSC.MIL>
Wed, 26 Apr 89 15:20 EDT
I know this forum is usually reserved for computer risks, but I've discovered a
heretofore-unknown (at least to me) advantage.

Last night my block experienced a power outage.  And since my place is
all-electric, I was left totally in the dark (without candles, flashlights,
etc).  The only self-contained light source that I could find was the eerie
blue glow emitted from my lap-top computer.

As I wandered about like a computer-age Diogenes, I thought it ironic that the
only thing to operate during a blackout was a computer.


Data Checking at Osco's

<srt@aerospace.aero.org>
Wed, 26 Apr 89 13:09:10 -0700
As an example of "anti-risk," I was interested to observe during a recent
shopping trip that the computerized registers at Osco's (a local drug store
chain) query the cashier when a questionable price is entered, apparently
according to the category of the item (which the cashier enters separately).
In this case, the cashier had entered a price of $79 for a skin care product,
and the register politely inquired whether he had made a mistake (as, indeed,
he had).
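
The check described amounts to a per-category sanity range.  A minimal
sketch, with invented category names and limits (not Osco's actual tables):

    # Invented per-category price ranges; a real register would load
    # them from the chain's item database.
    PRICE_RANGE = {
        "skin care": (0.50, 30.00),
        "vitamins":  (1.00, 50.00),
    }

    def questionable(category, price):
        low, high = PRICE_RANGE.get(category, (0.01, 500.00))
        return not (low <= price <= high)

    print(questionable("skin care", 79.00))   # True: prompt the cashier
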
                    — Scott Turner


Re: Common thread in recent postings: People (RISKS-8.63)

Hugh Miller <MILLER@vm.epas.utoronto.ca>
Wed, 26 Apr 89 09:49:47 EDT
        The requirement that systems be kept simple is itself too simple to
"manage" technology — if, indeed, WE manage IT, and not the other way around.
Nor is simplicity necessarily a virtue if a simplified tool magnifies its
potential for control, control of events AND persons.  Making a tool less
"option-rich," even at the cost of decreased flexibility, is not the answer to
our problem, because the problem it IS an answer to (how to smoothly adapt a
control function to an irregular domain) is not the central one.  The central
problem is whether all this technology serves a good, "The Good" if you will.
Crucial to that problem is the question of whether nature — our own included
— can and should be adequately characterised as an "irregular domain" for the
possible employment of technology.

        That it does we take more or less for granted, at least at the
policy-planning level.  (Our consciences, we hold, are our "private" affair.)
A recent posting in the alt.fusion newsgroup castigated the opinions of people
like Paul Ehrlich and Jeremy Rifkin, who have questioned whether such a thing
as a "Utah tokamak" cold-fusion reactor might not be bad for us after all.
"What pieces of disgusting slime they are," wrote the author. "Fortunately,
with something this important they will be ignored, and, if they interfere,
steamrollered."  I fear his attitude, more temperately phrased, is shared by
the great majority of us.

        Steven Kull, a psychoanalyst, interviewed a number of DoD, military,
and defense-related industry types for his recent book _Minds at War_ (New
York: Basic Books, 1988).  His aim in the book was to examine the
psychological attitudes of nuclear strategists.  Especially interesting (and
chilling) was an observation he made about interviews on the subject of the
big push for hard-target kill capability (GPS, Navstar, earth-penetrating
warheads, etc.).  Those of us who work in similarly computer-intensive milieux
would do well to adapt it to our own work:

              A rather curious widespread attitude was that the
        United States "might as well" improve its hard-target
        capability given that it had the technological ability, as
        if the effort to improve such capabilities was virtually
        costless. Even respondents who understood and were actually
        sympathetic to concerns about the instabilities engendered
        by hard-target kill capability often shook their heads as if
        to say that only an overwhelming logical argument could stop
        such technological developments.  There was a pervasive
        feeling that despite multibillion-dollar costs, building new
        weapons with greater accuracy was virtually effortless,
        while refraining from doing so was a gargantuan effort. Some
        simply asserted that the weapon in question was a good
        weapon in a technical sense and therefore should be built.
        In a few cases, respondents even seemed surprised when
        pushed for a stronger rationale based on strategic
        considerations.

        Hugh Miller, University of Toronto


Re: Common thread in recent postings: People (RISKS-8.63)

John Karabaic <fuzzy@aruba.arpa>
Wed, 26 Apr 89 12:14:43 EDT
[...] this brings up another RISK: the effect our organizations have on the
use/misuse of technology.  The problem I'll describe was not a technological
one but a bureaucratic one, yet if it had not been solved, a technological
RISK would have resulted.

I used to work in an Air Force Systems Command System Program Office as an
Avionics Project Manager.  One of the boxes I had responsibility for was the
Intercom Set Control Panel (ICSCP), which controlled the radios, the
pilot/weapon systems officer intercom, and voice warnings on one particular
fighter.  Voice warnings occurred for events like low fuel; a digitized female
voice would say, "Bingo Fuel.  Bingo Fuel."  The warning would occur every time
that interrupt occurred, so if you were jigging the plane around to avoid
getting killed, fuel slosh might generate many warnings.  [Footnote: The voice,
on early versions of the aircraft, belonged to a woman from Florida known as
"Bitchin' Betty".  When we decided to redigitize for our aircraft, her voice
quality had changed and "Caustic Kristen" took her place.  Female voices are
preferred because pilots (overwhelmingly male) react more quickly to them.]

Here's where the fun begins: the voice warnings attenuated all radio and
intercom messages by 23dB.  So let's say you're doing the aforementioned
jigging: you could miss a critical radio transmission ("Number two, you're
going to hit the ground", "Number three, you have a MIG on your tail") and die.
This problem had been corrected on earlier versions of the aircraft by just
removing the attenuation; the pilots could correctly distinguish and interpret
two or three simultaneous messages.  The requirements fellows had told my
office to correct it for the new version.  This got lost in the organizational
cracks; I dug it up when I was going through old message traffic after I was
given management of the program.  The attenuation was embedded in an approved
high-level baseline specification and had trickled down into at least two
lower-level specs.  An engineering change proposal would have to be made if we
followed the rule book.

We were well into full-scale development, and two or three preproduction
ICSCP's had been made.  The firmware was really firm; we would have to rip out
the old chips (and fatigue the boards) and install new ones.  The way the USAF
does business, it takes two years before you can even have the contractor start
work on an engineering change (it takes that long to jump through all the
hoops). By that time, about forty systems would be in the field.  We would have
to retrofit each one at a cost of millions of dollars for the entire program.
The change was not safety-critical; that had been determined when they changed
the older versions of the aircraft, so we couldn't put out an "urgent" change.
A safety-critical mod would not have looked good when our budget went before
Congress, either.

So, what to do?  I made a visit to the ICSCP subcontractor with my main
contractor and we determined that we could either cut a single wire or make a
two-line software change.  Wire-cutting was a kludge I wanted to avoid, but it
started looking awfully attractive: the USAF was not buying any software
documentation, so we hadn't a clue about what to change if we wanted to modify
the software organically, and wire-cutting is something the intermediate shops
could do.  Remember that the contractors have little interest in actually
making the change *now*: the baseline has been approved and they stand to make
lots of money off the retrofit program if we go the software way (and maybe
even if we cut the wire).  But with some gentle convincing and appeals to
professional pride (and the promise to tell the Advanced Tactical Fighter SPO
what a great job they had done in averting a costly retrofit), I got them to
change the spec and modify the software, with my assurance that the SPO would
approve it.  It was done, and the retrofit was avoided.  I called some
friends at ATF and told them the whole story.
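
The actual ICSCP firmware was never published, so the following is only a
sketch of what a two-line fix of this shape looks like; every name and value
except the 23dB figure is invented:

    ATTENUATION_DB = 23   # the figure reported above

    def radio_level(radio_db, voice_warning_active):
        # Before the fix, radio and intercom audio was knocked down
        # 23 dB whenever a voice warning played:
        #     if voice_warning_active:
        #         radio_db -= ATTENUATION_DB
        # After the fix (as on the earlier aircraft), the audio passes
        # through unattenuated; pilots can sort out two or three
        # simultaneous messages themselves.
        return radio_db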

Lessons learned: The machinery we set up to manage our large projects carries
RISKS of its own.  If we had gone by the book, forty+ aircraft would have been
in the field with a RISKy condition.  Because we had a very professional
contractor with a liberal dose of enlightened self-interest, a problem with our
organizational machinery was circumvented.  I'm sure other organizations have
had similar problems, and solved them in similar ways.

Lt John S. Karabaic, WRDC/TXI, WPAFB, OH 45433-6543             513 255 5800
These opinions are mine.

BTW, the operators (pilots and WSO's) hated the voice warnings with a vengeance.
They much preferred simple tones and warning lights.  But high-tech sells...


Re: Use of "Standard" on sensitive applications

"ALBTSB::SCHILLING1" <schilling1%albtsb.decnet@aldncf.alcoa.com>
26 Apr 89 15:12:00 EST
In fields that are mature enough to have liability associated with wrong
actions, users of standard products have the protection of the law.  Real
standards exist for things like steel beams, so a designer or builder can order
beams and expect that they will perform as specified in the standard.  The
standard specification for steel beams requires tests using standard methods
which are understood and accepted by competent engineers, who use standard
terminology in communicating with one another.  If a beam fails in service,
then the builder's lawyers call the beam maker's lawyers to discuss things like
compensatory and punitive damages.  The threat of liability for wrong action
keeps most people honest enough to avoid lawsuits.

Real engineers would laugh at the idea that lex and yacc are "standard"
products.  What standard specification do they satisfy?  What standard
test methods verify that the particular version of lex or yacc used to
develop a system conforms to the specification?  What standard rules of
design, developed by recognized groups of experienced professionals,
guide other competent professionals as to how lex and yacc should be
used?  What standard defines the terms in which competent professionals 
communicate about these tools?  Who do a system builder's lawyers talk 
with if the tools fail in service?

Pete Schilling, Applied Mathematics and Computer Technology Divn., Aluminum Co.
of America, Alcoa Center, PA 15069 Alcoa Laboratories 412/337-2724


Re: Use of "Standard" on sensitive applications

<smb@arpa.att.com>
Tue, 25 Apr 89 23:53:28 EDT
I don't think there's an absolute rule here; a lot depends on the application
and its history.  For a task that's very well understood theoretically — e.g.,
parsing or lexical analysis — a good tool is likely to be far more reliable
than a hand-coded equivalent, and far more consistent besides.  A similar rule
can be applied to very complex tasks, such as protocol design; if your primary
goal isn't (for example) to design a new transport protocol, you're much better
off using a standard one.  The bugs are often subtle, and today's protocols are
the product of years of experience.

It's the middle range where I'm more skeptical; one needs reason to trust
something.  If an application is complex, an existing tool often doesn't quite
fit; adding just a few little hacks is a sure road to disaster.  We often see
this in newspaper horror stories about municipal accounting systems that are
years late and millions over budget — even though the general concept is
straightforward enough, all the little special cases can kill the project.

            --Steve Bellovin
