The RISKS Digest
Volume 8 Issue 24

Monday, 13th February 1989

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Massive counterfeit ATM card scheme foiled
Rodney Hoffman
PGN
Computer blamed for 911 system crash
Rodney Hoffman
Risks of Selective Service
Rob Elkins
Re: Engines and probabilities
Barry Redmond
Robert Frederking
Re: Structured programming
Jim Frost
Re: Engineering vs. Programming
John Dykstra
Henry Spencer
Robert English
Shawn Stanley
Info on RISKS (comp.risks)

Massive counterfeit ATM card scheme foiled

Rodney Hoffman <Hoffman.ElSegundo@Xerox.com>
12 Feb 89 14:25:22 PST (Sunday)
Summarized from a 'Los Angeles Times' (11 Feb 89) story by Douglas Frantz:

The U.S. Secret Service foiled a scheme to use more than 7,700 counterfeit
ATM cards to obtain cash from Bank of America automated tellers.  After a
month-long investigation with an informant, five people were arrested and
charged with violating federal fraud statutes.  

"Seized in the raid were 1,884 completed counterfeit cards, 4,900 partially
completed cards, and a machine to encode the cards with BofA account
information, including highly secret personal identification numbers for
customers."

The alleged mastermind, Mark Koenig, is a computer programmer for Applied
Communications, Inc. of Omaha, a subsidiary of USWest.  He was temporarily
working under contract for a subsidiary of GTE Corp., which handles the
company's 286 ATMs at stores in California.  Koenig had access to account
information for cards used at the GTE ATMs.  According to a taped
conversation, Koenig said he had transferred the BofA account information
to his home computer.  He took only BofA information "to make it look like
an inside job" at the bank.  The encoding machine was from his office.

Koenig and confederates planned to spread out across the country over six
days around the President's Day weekend, and withdraw cash.  They were to
wear disguises because some ATMs have hidden cameras.  Three "test" cards
had been used successfully, but only a small amount was taken in the tests,
according to the Secret Service.

The prosecuting U.S. attorney estimated that losses to the bank would have
been between $7 million and $14 million.  BofA has sent letters to 7,000 customers
explaining that they will receive new cards.


Massive counterfeit ATM card scheme foiled

Peter G. Neumann <neumann@csl.sri.com>
Mon, 13 Feb 1989 13:33:12 PST
Note that, whether authorized or surreptitious, any access to the card
information database — IDs and PINs — makes this kind of fraud rather easy.
Unfortunately, the remedies are not easy.  Even if the PINs were stored
encrypted, a preencryption attack (offline) enumerating the 10**6 possible PINs
would compromise ALL of the PINs.  Thus I conclude that the vulnerabilities
here are considerable — and grossly underestimated.  (We have noted before in
RISKS that the extent of credit card fraud is actually enormous, although the
banks merely pass the losses on to the customers.)  PGN
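
To see how cheap the enumeration is, consider a rough C sketch (the
encrypt() function below is a placeholder assumption, not any bank's
actual algorithm): one offline pass over the 10**6 PINs recovers any
stolen, deterministically encrypted PIN, and the same pass can be run
against every record in the database.

    #include <stdio.h>
    #include <string.h>

    /* Placeholder for the bank's deterministic, unsalted PIN
       encryption; any such function is equally vulnerable. */
    static void encrypt(const char *pin, char *out)
    {
        unsigned long h = 5381;
        for (; *pin; pin++)
            h = h * 33 + (unsigned char)*pin;
        sprintf(out, "%lx", h);
    }

    int main(void)
    {
        char stolen[32], guess[32], pin[8];
        long n;

        encrypt("123456", stolen);      /* ciphertext from the database */

        for (n = 0; n < 1000000; n++) { /* enumerate all 10**6 PINs */
            sprintf(pin, "%06ld", n);
            encrypt(pin, guess);
            if (strcmp(guess, stolen) == 0) {
                printf("PIN recovered: %s\n", pin);
                break;
            }
        }
        return 0;
    }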


Computer blamed for 911 system crash

Rodney Hoffman <Hoffman.ElSegundo@Xerox.com>
12 Feb 89 14:39:12 PST (Sunday)
Summarized from a 'Los Angeles Times' (12 Feb 89) story by Hector Tobar:

The Los Angeles city emergency 911 telephone system crashed twice Saturday
afternoon.  Pacific Bell said the shutdown was caused by "a power failure
in the computer's signalling mechanism."  For four hours, the system was
only partly functioning as Pacific Bell engineers worked to repair
computers at the dispatch center.

Operators first discovered that the phone lines were out at about 1 pm.
Pacific Bell engineers restored parts of the system 5 minutes later, but at 3 pm the
system crashed again.  A backup system also failed, stopping all emergency
calls for 45 minutes.  Engineers once again restored the system's phone
lines at 3:45.  But the system's computers were still not working by late
afternoon and the 25 operators at the dispatch center were forced to
process calls manually.  Computers normally display the address and phone
number of the person making the call.

Operators at the center receive about 200 calls per hour.  Callers who were
unable to get through received a recorded message.  Many then called police
and fire stations directly.  All calls were being answered by late
Saturday.  A police officer said it was fortunate that the breakdown
occurred Saturday afternoon, during a quiet part of the weekend.

The computerized 911 system was installed in Los Angeles in 1983.  Located
four floors below ground level, it is designed to withstand a major
earthquake.  "It's a very good system," said a Pacific Bell spokeswoman.


Risks of Selective Service

Rob Elkins <relkins@vax1.acs.udel.edu>
10 Feb 89 17:20:42 GMT
Recently, a fellow student was sent a letter (computer generated, of course)
from the U.S. Selective Service System.  The letter was a final warning to
register with the Selective Service before her name was submitted to the
Justice Department for felony prosecution.  What is ironic about all this is
that this person is female and therefore not required to register.
Apparently, the computer system maintains a list of female first names and
assumes that any individual whose name is not on the list must be male.  Her
full name is Brantley Elizabeth Riley, and when she called to straighten the
situation out, they told her that the system doesn't check middle names
either.
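
A minimal C sketch of the heuristic as described (the name list and the
matching rule are reconstructions for illustration, not the actual
Selective Service code):

    #include <stdio.h>
    #include <string.h>

    static const char *female_names[] = { "MARY", "ELIZABETH", "SUSAN" };
    #define NNAMES (sizeof female_names / sizeof *female_names)

    /* Returns 1 ("male") unless the FIRST name is on the female list;
       middle names are never consulted, hence the erroneous letter. */
    static int presumed_male(const char *full_name)
    {
        char first[64];
        unsigned i;

        sscanf(full_name, "%63s", first);   /* first token only */
        for (i = 0; i < NNAMES; i++)
            if (strcmp(first, female_names[i]) == 0)
                return 0;
        return 1;
    }

    int main(void)
    {
        /* prints 1: flagged as an unregistered male */
        printf("%d\n", presumed_male("BRANTLEY ELIZABETH RILEY"));
        return 0;
    }
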
                                 Rob Elkins, University of Delaware


Re: Engines and probabilities — independence

"Barry Redmond, Lecturer, Telecoms Dept, KevinSt" <BREDMOND@dit.ie>
Fri, 3 Feb 89 09:59 GMT
In RISKS-8.17 Craig Smilovitz writes:

> In the discussion about multi-engine aircraft failures, we've seen a lot of
> mathematical probability exercises that forget about analyzing the basic
> assumption of probability theory.  That assumption is the *independence*
> of the events in question.

Exactly.

There are many reasons why the probabilities are not independent.  Just think
about all the factors in common between the engines (2 or 3) on any single
plane:

     -The engines are the same design
     -They were manufactured by the same company
     -They were fitted in the same factory
     -They were fueled from the same tanker
     -They were serviced by the same team
     ...and I'm sure you can all think of others.

If someone makes a mistake on one engine at any of these stages, there is a
high probability that they will make the same mistake on the other engine(s).
The failures are therefore not independent: given that one engine has failed,
the probability that another will fail is immediately higher.

In other words it's another manifestation of the old software debugging rule:
"The more bugs you find, the more likely it is that there are more."

Barry Redmond, Dept of Electronics & Communications, 
Dublin Institute of Technology, Kevin St, Dublin 8 Ireland  


Re: Engines and Probabilities — Good math, bad case analysis

"Robert Frederking" <ref@ztivax.siemens.com>
10 Feb 1989 08:33-MET
There was, of course, a mistake in my submission on the 2/3-engine controversy.
While the P(one or both engines out) I gave was correct for two engines, it
isn't the P(crashing), since the plane can fly on one engine.  What I should
have used was simply P(both engines out) = p**2, which is indeed smaller than
the P(2 or 3 out) for the three-engine case.
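
For concreteness, a quick check of the corrected figures (p is an
illustrative per-engine failure probability, and independence is assumed
here, as it was in the original submission):

    #include <stdio.h>

    int main(void)
    {
        double p = 1e-3;
        double two   = p * p;                           /* both of 2 out   */
        double three = 3 * p * p * (1 - p) + p * p * p; /* 2 or 3 of 3 out */

        printf("P(crash), 2 engines: %g\n", two);    /* 1e-06  */
        printf("P(crash), 3 engines: %g\n", three);  /* ~3e-06 */
        return 0;
    }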

Robert Frederking, Siemens AG/ZFE F2 INF 23, 
Otto-Hahn-Ring 6, D-8000 Munich 83  West Germany       Phone: (-89) 636 47129


Re: Structured programming

Jim Frost <madd@bu-cs.BU.EDU>
12 Feb 89 16:56:02 GMT
It's possible that this topic has been overkilled, but I'd like to add
several recent experiences to the "structured programming" argument.

My company markets a graphical database used for designing and testing event
networks, which work much like PERT charts.  Since the program is very
graphical, it was written for the Silicon Graphics series of machines.  Until
recently, the SGI machines went for $50,000 or more, somewhat limiting our
market.  I was hired to port the system to less expensive hardware.

On paper, the design of the system was very good.  There were several major
portions of code (graphics system, a thing like an interpreter, and a
database).  Unfortunately, the graphics system was written in a very
unstructured format, using many global variables and splitting functions up in
odd ways.  Instead of making small modifications to support other graphics
systems, we had to undertake a complete rewrite.  After this rewrite, porting
to new architectures has taken less than a week per architecture.  The newer
graphical systems are very, very structured.
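
A minimal C sketch of the difference (hypothetical names, not the actual
product code): the old style reached a global display from everywhere, so
every port touched every call site, while the rewrite passes one narrow
device interface that a new backend can implement in isolation.

    #include <stdio.h>

    /* Old, unstructured style: hidden global state. */
    struct display { int id; };
    struct display *current_display;    /* set from many places */

    static void draw_node_old(int x, int y)
    {
        /* silently depends on whatever current_display happens to be */
        printf("display %d: plot %d,%d\n", current_display->id, x, y);
    }

    /* Structured style: an explicit, swappable interface. */
    struct gfx_device {
        void (*plot)(int x, int y);
    };

    static void x11_plot(int x, int y) { printf("X11 plot %d,%d\n", x, y); }

    static void draw_node(const struct gfx_device *dev, int x, int y)
    {
        dev->plot(x, y);    /* porting = supplying one new gfx_device */
    }

    int main(void)
    {
        struct display d = { 1 };
        struct gfx_device x11 = { x11_plot };

        current_display = &d;
        draw_node_old(10, 20);      /* works only while the global is right */
        draw_node(&x11, 10, 20);    /* dependency is explicit and portable  */
        return 0;
    }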

Worse than the graphics, the other portions of the system were written by a
programmer who must have had job security in mind.  They are deliberately
obscure.  Portions are very modular while others are so delocalized as to cause
me to pound my head against the wall while trying to decipher them (why should
a database manager make checks to see if the memory manager is still
consistent?).  We found that almost all of the modular portions ported with
fair ease, while all of the delocalized portions had to be thrown out and
rewritten.

What's the net result of giving proper structure to the product?  The older
system was so unreliable as to force our clients to run it under a debugger
(gag).  The newer is faster, easy to port, and much MUCH more reliable.

I'd say there's a lesson there.

jim frost, associative design technology                   madd@bu-it.bu.edu


re: Engineering vs. Programming

John Dykstra - CDC Workstation Software <JohnD@CDCCENTR.BITNET>
Thu, 9 Feb 89 13:43:16 CST
I've had the opportunity to be a logic designer on a high-end mainframe
development project and an operating systems developer.  There's one difference
that leaps out at me:  Hardware designers get to do the same thing more than
once, while software designers, at least in the operating system area, always
seem to be cutting new trails through the underbrush.

In these days of software-compatible product lines, it's not uncommon for the
same hardware development team to implement an external architecture (the
assembly language programmer's view of the machine) several times over the
course of a decade.  My company has two teams that are each on their third
implementation, and you can see progressive improvements in the internal
architecture (what the microprogrammer or customer engineer sees).  Things get
simpler, perform better and are more reliable.  Of course, the constraints the
team works under change (new LSI gate array technologies, requirements to
support more multiprocessing, etc.), but hardware engineers get to learn from
their mistakes.

The "building blocks" of hardware design also have not changed very much over
time.  We're still using registers, adders, control logic, microprogramming
stores, etc.  Techniques get extended over time, but I expect that you could
show a 1950's digital engineer the logic prints for (say) an IBM 3090, and
s/he'd be able to follow them easily.  I don't think that someone who worked in
AutoCoder back in 1960 would be able to read an Ada or LISP program, or even
understand some of the basic concepts of those languages.

Operating systems seem to take 5 to 10 years from beginning to maturity, so
most of us only do one or two in a career.  Over a decade, requirements change
enough that you can't just re-implement the previous system.  For example, in
the 60's batch processing was most important, while in the 70's you optimized
for timesharing, and in the 80's distributed processing and networking are
king.  Hardware architectures also change, as support for virtual memory and
hardware-supported protection schemes is added, and of course we're using very
different languages.

Basic design principles don't change, and sometimes they get codified into
rulebooks such as "structured programming," and used by people who don't
understand the "why" behind them.  But the problems I'm trying to solve with my
software design work are very different from the ones faced by the guy who
occupied this office in 1979.  Sometimes I'm thrilled by the challenge of
finding my way through this wide-open universe of possible solutions, and
sometimes I wish for the safety of designing yet another pipelined arithmetic
unit.

I'll bet that the hardware designers on this list believe that they're in a
tough situation, and that operating system design is easy! Does anyone want to
make a case for that?

John Dykstra - NOS/VE Design - Control Data Corporation
  (612) 482-2874               JohnD@CDCCentr.BITNET
All opinions expressed, etc.


Re: Engineering vs. Programming

<attcan!utzoo!henry@uunet.UU.NET>
Fri, 10 Feb 89 00:16:58 -0500
>When you design a program, the design and the program can be one and the
>same, so a lower level of design documentation is possible.

On the one hand, this is certainly true.  I heard the same thing from an
EE friend of mine over a decade ago:  he preferred software over hardware
because changing the software was so much less hassle than updating drawings
and such for hardware.

On the other hand, it is not obvious that this automatically means poorer
quality.  What we have here, actually, is a new version of the classical
debate over whether word processing (or the typewriter!) leads to poorer
writing.  The more powerful tool definitely makes it easier to be
sloppy, because less effort and thought is needed to get something out.
But it also makes it easier to be perfectionist, because doing multiple
iterations to get something absolutely *right* is much less hassle.

I think a fairer statement would be that the shift from hardware to software
magnifies differences in how systematic and conscientious people are, and
makes it harder for traditional hardware-oriented procedures (and older
hardware-oriented managers) to catch the sloppy ones.

                                     Henry Spencer at U of Toronto Zoology
                                 uunet!attcan!utzoo!henry henry@zoo.toronto.edu


Engineering vs. Programming

Robert English <renglish%hpda@hp-sde.sde.hp.com>
Thu, 9 Feb 89 10:37:57 pst
The discussions about software system design have given many reasons why those
systems fail.  It seems to me that those reasons can be broken into two
main categories--technical and economic--and that any approach that attacks the
first without somehow attacking the second as well is doomed to failure.

The immense complexity of software systems makes them difficult to build
correctly.  A large program can contain well over a hundred thousand lines of
code, each of which would represent a moving part in a physical system.  No one
would expect a physical system that large and complicated to work reliably, at
least not without extensive testing and redundancy.

But the technical problems go deeper than that.  Physical systems are
inherently modular.  Each part has well-defined boundaries and performs
well-defined tasks.  While interactions between parts must be accounted for,
these interactions are the exception, rather than the rule.  Unless
considerable care is taken, the opposite applies to software systems (even with
modular software, the objects themselves tend to be more complex and less
reliable than physical components).

Computer science has made great progress in addressing these technical issues.
The need for documentation is well known, the benefits of proper coding
practices have been well publicized, and the risks of buying or selling
untested code have been demonstrated time after time.  The economic problem
remains unaddressed.

Physical systems take time to build.  Every new part has to be designed and
manufactured.  Prototypes have to be built before they can be evaluated.  It
might take over a year to build a prototype for an automobile, and two or three
years to set up an assembly line for a totally new model.  Compared to such
long product lead times, three or four months of testing are inconsequential.
Developers are not, in general, the people doing the testing, so continued
development is not seriously affected.

Software systems, on the other hand, take very little time to build.  Each
change makes its way into the system in the time it takes to recompile and
reload.  Thus, a software system that takes a year to build will have many
times the complexity of a physical system taking the same amount of time, and
the corresponding testing period should also be many times as long.  In
addition, effective software testing usually requires the same level of skill
as software production, so that investment in testing adversely affects
investment in future development.  By greatly reducing the development time for
a given function, software has greatly increased the relative cost of making
that function reliable.

In a marketplace where time to market is the controlling factor in business
success, there is very little an individual programmer, system designer, or
company can do to oppose these forces.  A company that invests heavily in
building reliable systems will lag far behind the market in other measures of
system quality, such as functionality and performance, and will find itself
limited to those niche markets where reliability is the overriding concern.
Only if the marketplace changes so that those niche markets dominate will
software reliability improve.
                                                 --bob--


Re: Engineering vs. Programming

Shawn Stanley <shawn@pnet51.cts.com>
Thu, 9 Feb 89 14:37:44 CST
There will probably always be a difference in opinions between engineers and
programmers.  Although they interact, they are not closely related fields, and
thus have totally different problems.

For example, you can't take a test meter and check a program for continuity.
And you generally don't heat-test a program for industrial use.

There are vast differences in debugging techniques, as well as design
techniques.

UUCP: {uunet!rosevax, amdahl!bungia, chinet, killer}!orbit!pnet51!shawn
INET: shawn@pnet51.cts.com
