The RISKS Digest
Volume 19 Issue 33

Friday, 22nd August 1997

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Public loo guilty of making nuisance calls
Nick Rothwell
Risks, Reliability, Regulation, and Infrastructures
Willis H. Ware
Communications lines, redundancy and diversity
Marion F. Moon
The risks of no long-term planning
David Mortman
Re: SET risks
Jacob Sterling
Re: Unprovoked threatening spam from Samsung's Lawyers
Sean Eric Fagan
Phillip M. Hallam-Baker
SPAM-L — the SPAM Fighters' List
Pete Weiss
Mir problem corrections
Dennis Newkirk
Re: Risks of dummy addresses
Elizabeth Zwicky
Stephen Sprunk
Re: No Surfing on the Senate Floor
William B. Henry
Info on RISKS (comp.risks)

Public loo guilty of making nuisance calls

Nick Rothwell <nick@cassiel.com>
21 Aug 1997 15:39:14 -0000
From *Computer Weekly* (UK), 21st August 1997:

A woman who was phoned repeatedly by a public lavatory asking her to
fill it with cleaning fluid had to ask BT to put a stop to the calls.

The case is one of a growing number of nuisance calls generated by
programming errors.

About 15% of all nuisance calls are caused by errors, most of which are
traceable to faulty programming, according to a BT spokesperson.

The most common type of computer-controlled nuisance call is from soft
drink vending machines which need refilling. Wrongly programmed fax
machines and modems are another cause of complaints.

In a recent case, a North Sea oil rig called the wrong number at regular
intervals to ask for a service. Potentially serious cases involve traffic
lights, boilers and hospital refrigerators.

"The calls are mainly silent, because they are intended for modems to pick
up, but some give a recorded message," said a BT spokesman.

Nick Rothwell, CASSIEL  http://www.cassiel.com
  contemporary dance projects, music synthesis and control

  [Not a new story in RISKS, but it seems to be happening more often.  PGN]


Risks, Reliability, Regulation, and Infrastructures

"Willis H. Ware" <willis@rand.org>
Thu, 21 Aug 97 13:45:44 PDT
In another discussion group, a thread started on the topic of using
electromagnetic spectrum vs. cable systems to deliver TV.  That led to an
observation (Willis H. Ware <willis@rand.org>) that cable systems were more
likely to be extensively damaged, and therefore, be unavailable, during a
major emergency such as hurricane or earthquake.  Hence, they would be less
dependable as a means of emergency communications to the public.  This
remark was interpreted by Charles Brownstein (cbrownst@cnri.reston.va.us) to
mean "sustain a broad mix of (information) transport capabilities."

He then related his own experience in which a power surge in his
neighborhood took out not only the usual POTS service but also his ISDN
service — not to mention electrically frying various other things, such as
his motion-detecting intruder lights.  To top it off, the batteries in his
cell phone were flat.

All of that led me to observe as follows: Common points of failure at work
again!  And in fact, it's a small example of just the issue that the
President's Commission on Critical Infrastructure Protection is concerned
about.  In this instance, the electrical power grid was the common
vulnerability and its misbehavior — a surge followed by a several hour
outage — had repercussions on other utilities and devices.

The inverse situation is also well understood; namely, the PSTN — or PSN or
POTS, as it is also known — is a central and single point of vulnerability to
all manner of information systems nation-wide.

But have we inadvertently RISKed the country and put it in a less robust
posture as a result of trying to increasingly decentralize and deregulate
all manner of things?  As various utilities become more and more
deregulated, it will get increasingly harder to worry about continuity of
service, responsibility for emergency problems, FEMA-type emergency
obligations, contingency planning, etc.  To put it simply, there is no one
in charge of the top-level system considerations; there is no focal point to
which some level of government could turn for action.

Consider the situation in California; it is deregulating its 60 Hz power
offerings.  Some company will own and run the wire distribution systems;
suppliers of power will "rent" capacity over the wire grid to deliver power
to consumers.  The media reports that there is currently a small army of
carpetbagger salespeople with glitzy brochures trying to persuade big users
of power to desert their traditional regulated utility source and go with an
alternate supplier.

Since it's a bit difficult to tell one 60Hz cycle from another — no concept
of taggants in the power business — who knows where one's power really will
come from?  For Californians, it could be the traditional sources; it could
be the Columbia River power complex; it could be the TVA if it has surplus
electricity to peddle; it could even be the huge power facility in Quebec.
While some of these inter-vendor transactions are exchanges of
power-generating capacity here for some there (i.e., paper transactions in
an accounting system), much of it will be real with honest-to-goodness power
flowing all over via extensive inter-tie lines.

And who, I wonder, is pondering the overall system behavior of such a
configuration, and who is wondering about continuity of service to critical
consumers, and who is addressing the legal obligations and fiscal
responsibilities (to the end-user) of all the players?  Certainly not the
politicians; they settle for simply making a policy that says "we will have
electrical power deregulation."

It remains to be seen whether such broad-reaching issues can be adequately
handled by an industry on its own, or whether there will emerge an
unavoidable requirement for the Federal government to intervene and play
some regulatory role.  The 50 states cannot do it; the electrical power
industry has become an interstate, even international, business through
deregulation.

Willis H. Ware, RAND, Santa Monica, CA


Communications lines, redundancy and diversity

"Moon, Marion F" <mmoon@msmail2.hac.com>
21 Aug 1997 09:04:34 -0800
Bob Ratner's note on loss of communications lines points out the all-too-common
problem with the vulnerability of such lines.  While barges and tugs may be a
bit more exotic, the ordinary backhoe represents the overwhelmingly
common cause of loss of communications.  Simply providing a microwave link
as backup may not reduce the vulnerability as much as many commonly think.
Simple redundancy always appears to solve the problem for many people.  But,
redundancy with physical DIVERSITY is the only approach that begins to solve
the problem.

The failure to understand the diversity concept is apparent in any number of
US systems but I'll use the new Chek Lap Kok airport in Hong Kong as a good
(bad?) example to avoid embarrassing those closer to home.  Two redundant
shared computer centers maintain data for all airport operations.
Unfortunately, these two centers are located in the terminal building
back-to-back with only a brick wall separating them. The vulnerability is
obvious.  Architects and engineers like symmetry.  Redundant lines from
these two centers connect to outlying facilities.  The security (police and
fire) facility is in an outlying building and connected to the two computer
centers by redundant lines --- laid in the same trench. Again, the
vulnerability is obvious.

I keep saying obvious.  A surprising number of otherwise knowledgeable
network designers fail this test, so it is not as obvious as I think it
should be.  I recently learned that only about 5 percent of the population
is capable of seeing in three dimensions, so this inability is likely to be
one source of the problem.

Marion Moon


The risks of no long-term planning

"David Mortman" <mort@juggling.org>
Thu, 21 Aug 1997 15:57:46 -0500
I work for a large electric power company in northern Illinois.  We are in
the process of moving to a new (supposedly bigger) building, and as part of
the move we are selling our current building, which we built approximately
50 years ago.  At that time some brilliant designer said, "We don't need to
install power meters because we're the power company."

Well, when we sold the building, we had to install power meters.  This
involved having 8- to 10-hour power-outages two weekends running.  All
unnecessary machines were to be turned off and all critical machines would
have special power running to them.

Of course, we don't have any UPS systems, so the systems had to be shut down
so the new power could be routed from the other side of the building; then,
after the power outage, they had to be shut down again so the power could be
re-routed for the following weekend's power shutdown.

Needless to say, lots of hard drives were quite unhappy with being off for
that many hours.  Stiction was one of the words of the day.

-Mort

  > PGN responded, "What is STICTION?  I need a better Stictionary."
  > David replied thusly:

When a drive runs for a long time, the long-chain carbon molecules that make
up the lubricant tend to align.  This means that when the drive is turned
off, the lubricant can crosslink and polymerize, turning it into a hard
plastic.  Needless to say this is a problem when turning the drive on again.

The cure for stiction (sticktion?) is fun, however.  You either pick up the
drive, hold it about 4-6 inches above a table, and drop it, or you smack it
fairly hard just as you are turning on the power to the drive.  This cracks
the plastic and lets the drive spin up.


Re: SET risks (Svigals, RISKS-19.31)

<Jacob_Sterling@mastercard.com>
Thu, 21 Aug 1997 08:36:11 -0500
Jerome Svigals wrote in RISKS-19.31 about the Secure Electronic Transaction
protocol, and risks associated with the use of this protocol.  I'd like to
clarify a few items, having worked in the credit-card industry for a while
and having written a paper on SET.  While I wasn't on the team here at
MasterCard that co-developed SET, there are a few things I can speak to.  (I
must also add that the views expressed herein are not necessarily shared,
endorsed or otherwise sanctioned by my employer.)

Briefly, SET calls for the encryption of financial data across an open
network in the following fashion.  The transaction information is encrypted
with a symmetrical, secret-key encryption algorithm.  A new key is generated
for each transmission.  The information and secret key are then encrypted
using an asymmetrical public-key algorithm.  The public key is provided by
the receiver of the transmission.  The double encryption is referred to as a
"digital envelope."  The envelope can only be decrypted by the private
component of the public-key pair, which is held at the merchant site.
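
A rough sketch of the digital-envelope construction (illustrative only:
modern AES-GCM and RSA-OAEP from the Python "cryptography" package stand in
here for SET's actual algorithms and message formats, and all names are
hypothetical):

  # Digital-envelope sketch: a fresh symmetric key encrypts the transaction
  # data, and the receiver's RSA public key wraps ("envelopes") that key.
  import os
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM
  from cryptography.hazmat.primitives.asymmetric import rsa, padding
  from cryptography.hazmat.primitives import hashes

  receiver_priv = rsa.generate_private_key(public_exponent=65537,
                                           key_size=2048)
  receiver_pub = receiver_priv.public_key()
  oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                      algorithm=hashes.SHA256(), label=None)

  def make_envelope(transaction: bytes):
      secret_key = AESGCM.generate_key(bit_length=128)  # new key per message
      nonce = os.urandom(12)
      ciphertext = AESGCM(secret_key).encrypt(nonce, transaction, None)
      wrapped_key = receiver_pub.encrypt(secret_key, oaep)  # the "envelope"
      return nonce, ciphertext, wrapped_key

  def open_envelope(nonce, ciphertext, wrapped_key):
      secret_key = receiver_priv.decrypt(wrapped_key, oaep)  # needs private key
      return AESGCM(secret_key).decrypt(nonce, ciphertext, None)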

There is also the option of attaching a checksum, called a "message digest,"
at the sender side, to authenticate that the message hasn't changed en route
to the receiver.  The digest is specified as a 160-bit result of an
algorithm run using the sender's private key.  The receiver could then
decrypt the digest using the public component, and compare the value to that
generated by running the received message through the same algorithm.  The
odds of a message digest being the same for two separate messages have been
computed at one in 10^51.
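
A similarly rough sketch of the digest step (SHA-1 yields a 160-bit digest,
matching the size mentioned above; the RSA signing calls are again
illustrative rather than SET's exact formats):

  # Message-digest sketch: the sender hashes and signs the message with its
  # private key; the receiver verifies with the sender's public component.
  from cryptography.hazmat.primitives.asymmetric import rsa, padding
  from cryptography.hazmat.primitives import hashes
  from cryptography.exceptions import InvalidSignature

  sender_priv = rsa.generate_private_key(public_exponent=65537,
                                         key_size=2048)
  sender_pub = sender_priv.public_key()

  def sign(message: bytes) -> bytes:
      # SHA-1 produces the 160-bit digest; signing binds it to the sender.
      return sender_priv.sign(message, padding.PKCS1v15(), hashes.SHA1())

  def verify(message: bytes, signature: bytes) -> bool:
      try:
          sender_pub.verify(signature, message, padding.PKCS1v15(),
                            hashes.SHA1())
          return True
      except InvalidSignature:
          return False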

In order to receive a public-key pair, the merchant (or any other party)
must provide proof of identity to a designated Certificate Authority, which
will issue a digital certificate.  Yes, SET does not know who is presenting
the certificate, but both the sending and receiving party have ample
information about whom each "claims to be" to produce an audit trail vastly
superior to most telephone and mail-order transactions today (card
transactions with the highest incidence of fraud).  With these as the other
primary methods of non-point-of-sale financial transactions, I would bet on
the security of a SET message every time.

It is true in a sense that SET "relies on vendor software to provide
security," but the risk associated here is not with SET itself.  "Vendor
software" in Svigals' example is already in place, and has been for quite
some time, in the "brick-and-mortar" world of reality.  The points of risk
at the merchant site, the acquiring bank (the one representing the merchant)
and the issuing bank already exist, and have existed for the entire history
of credit card transactions.  This is not new.  For any "card security
product" in existence, there will continue to be the same software packages
present.  This is simply inherent in the transaction medium.

SET is a protocol specification, not a process.  Naturally this is simply a
semantic issue, but to me it's like calling Ethernet a "process."  Sounds a
little funny, doesn't it?  Products can follow the SET specifications, just
as off-the-shelf networking packages are designed for certain environments;
but the environments themselves are not the product in this example.
Svigals is comparing two disparate concepts when he claims that there are
over 50 other products suitable for evaluation.

Svigals is right in noting that the "prime source of fraud in credit card
transaction systems" is the people with access to the system.  Well, of
course people are the source of fraud.  Would machines steal numbers for
themselves?  Remember, also, that this is true of ANY type of card
transaction — this risk is NOT limited to the Internet.  Therefore, this
risk should NOT be viewed as having a special relationship with SET.

In my opinion, the real risk with SET is the same risk posed by any card
security solution.  In the final analysis, the public is not going to have
much of an option as to what solution each merchant site uses.  This
decision will be up to the merchants and the banks, and I doubt they will
solicit public input.

Jake Sterling  jacob_sterling@mastercard.com


Re: Unprovoked threatening spam from Samsung's Lawyers (RISKS-19.32)

Sean Eric Fagan <sef@Kithrup.COM>
Thu, 21 Aug 1997 10:34:00 -0700 (PDT)
Here's a risk indeed: someone decides to sully Samsung's good name by
sending out tens (or even hundreds) of thousands of threatening messages...
and people believe it, without asking Samsung or checking Samsung's Webpage.

In other words, this is a hoax.  It is a revenge spam — Samsung apparently
has an idea who did this, and their lawyers are *not* happy.

Doing a few basic checks would have verified this.

I think the risks are huge.  Others agree with me — since this sort of
thing seems to be on the rise.  (Don't like someone?  Send out a million
messages in that person's name!  Most people won't bother doing any
verification, and will gladly spread the word that the target is a weasel!)

In case it's not obvious, I am not only disgusted with the people engaging
in this kind of attack, but with the people who don't try even the simplest
of verifications before spreading the word.

  [Sidney Markowitz <sidney@communities.com> points to a news.com item
    <http://www.news.com/News/Item/0,4,13307,00.html>.  That it was a
  hoax was also noted by Bruce R Koball <bkoball@well.com>,
  David Damerell <damerell@chiark.greenend.org.uk>, and
  "Diane Wilson" <thwilson@nortel.ca>.  PGN]


Re: Unprovoked threatening spam from Samsung's Lawyers (RISKS-19.32)

"Phillip M. Hallam-Baker" <hallam@ai.mit.edu>
Thu, 21 Aug 1997 17:21:44 -0400
Ooops! Internet Meme alert...

The "Samsung" spam has since been demonstrated to be a hoax created
to defame Samsung by an irate Spamer upset about being booted from the
Samsung-owned 'sailahead' ISP.

In response to the original question, I doubt that any jury or for that
matter judge would have sympathy for a person who generated a large quantity
of unwanted mail and then attempted to sue when he received a large quantity
in return.  The practical difficulty at this point is that most SPAMs use
forged headers, causing the backlash to hit someone else.  So far, in cases
where the victim of this backlash has chosen to sue, the target has been the
original SPAM instigator.

Phill


SPAM-L — the SPAM Fighters' List

Pete Weiss <Pete-Weiss@psu.edu>
Thu, 21 Aug 1997 14:08:55 -0400
Many of the problems associated with SPAM are discussed on the (high
volume) SPAM-L list.  The FAQ reference is:
  http://oasis.ot.com/~dmuth/spam-l/

Instructions on subscribing:
  http://oasis.ot.com/~dmuth/spam-l/#spam-l-subscribe

Pete Weiss at Penn State


Mir problem corrections (Re: PGN and Baube, RISKS-19.32)

Dennis Newkirk <rusaerog@mcs.net>
Wed, 20 Aug 1997 22:57:42 -0500 (CDT)
I have no doubt these posters believe they are reporting the facts in the
posts below, but the sources listed are not reporting the facts or are
highly misleading.

The proper order of events is: the computer failed; automatic attitude
control was disabled; while station-keeping, the Progress detected an
unexpected Mir attitude change (free drift), so it aborted its automatic
docking and remained at station-keeping distance.  Solovyov continued the
docking with the manual TORU remote-control system.  The crew then put Mir
into a slow spin to stabilize Mir's solar arrays roughly on the sun and
prevent a complete power-generation outage.

>Without the computer system, Mir is spinning ...

Mir does not "spin", it justs retains its attitude as it flies around the
world.  Orientation to the sun is typically the most important factor in
attitude, although solar heating and earth observation attitudes are often
desirable, free drift is desirable in most material processing experiments.
Attitude control rocket firings are periodic and very breif, not continual.
To save fuel, sometimes Mir is put into a solar oriented spin to help
stabilize it.

> ... the discovery of erroneous information sent to Mir.

Erroneous information was sent to Progress, not Mir.  Progress is the active
element in docking; Mir just maintains its attitude during docking.

>The June 1997 crash ... is being attributed to the failure of the
>cosmonauts to adjust the automated approach controls to compensate for an
>extra ton of weight that had been added to the cargo vessel.

That's only a rumor; the accident investigation is not complete.  New
software was also being tested in the TORU remote-control docking system.

>Other recent problems were not computer related — an oxygen fire in
>February, failure of both oxygen generators in March, an antifreeze leak in
>April, Vasily Tsibliyev's heart irregularities in July

None of the above events is new to Mir or to Soviet/Russian space stations.
US astronaut Norm Thagard also had heart irregularities while on Mir; they
are common in long-term missions.  Fires have broken out before but have
been well hidden from the public.  February's seems to be the largest fire,
as far as is commonly known.  The press has pushed this story so hard
largely because of NASA's near-paranoia about fire in spacecraft after the
old Apollo 1 fire, but that's another story.

The Electron oxygen generators mentioned have 1-2 backup systems at all
times.  There were no Electrons on Mir at all for over a year after its
launch.  Cooling leaks have been happening for years; in my opinion, the
press only picked up on the leaks to further sensationalize their stories.
Leaks also happened during Shannon Lucid's mission to Mir, but the press
never mentioned them.

>the accidental disconnecting of a power cable that effected Mir's
>orientation system for a day also in July.

Out of all the issues listed here this is the only one which has yet to be
explained away.  Present reports state that the crew was not supposed to
disconnect anything, so this may be a simple failure in communications.

>... the relief crew had to make a sudden manual docking with Mir.

Nothing at all unusual happened during this docking.  On final approach to
Mir, Solovyov could not see the periscope cross hairs against the docking
target, given the lighting conditions, so he aborted the automatic docking
and took manual control.  Once he let the Soyuz drift a bit, he could see
the cross hairs, which had been so perfectly aligned that he had not been
able to see them easily.  Manual dockings are very typical; cosmonauts like
to fly their ships, prove their capabilities and training, gain experience,
and earn extra pay for completing the task.  Often in the 20 years of
automatic dockings, cosmonauts have taken manual control to dock.  It is
noteworthy that automatic docking systems have never failed to dock a
spacecraft when they are used; there have been some retried attempts, but
ultimately they have never failed.

>Subsequently, when Tsibliyev and his flight engineer returned to earth, a
>booster rocket failed that was intended to ease their landing.

Landing-rocket irregularities are also not completely uncommon; the rockets
are used for comfort and are not necessary for survival.

>The difficulties relating to the space station serve as another poignant
>reminder ...  [PGN]

I would not attempt to draw any conclusions about system design from the
above non-factual or misleading quotes!

Russian spacecraft designs are exceedingly well developed, robust, and easy
to use and repair, though they have lacked powerful computers and associated
systems.  When Mir was launched, it was designed as the most highly
automated space station ever.  As it grew and aged, its systems have become
less flexible and automated, but it was designed for a 3-5 year life, not 11
years.  It is a tribute to the foresight and skills of Russian aerospace
engineers that it can still be inhabited at all. It can be said that it has
been operating for half its extended life on design margins. NASA is
learning a lot from this experience so that ISS operators will not be so
surprised when they have these problems in the years ahead.

>Something to help one understand just how grave (or overblown) the
>current situation is?  ... Fred Baube

The news reports often exaggerate the situation, mostly out of ignorance.
The really serious stories are not always made public by Russia or NASA on a
timely basis.  "Thumps & bangs" (probably ruptures in the Spektr module)
were not made public by Russia or NASA until an independent eavesdropper on
spacecraft communications alerted the media.

The Soyuz can be used for escape in almost any conceivable accident.  The
station would have to be spinning fast (sort of like the Gemini 8 incident)
to cause the docking latches to bind.  Once the latches are retracted,
springs force the Soyuz away from Mir.

Dennis Newkirk, Cosmonautics Editor - Quest Magazine
Editor - Russian Aerospace Guide  http://www.mcs.net/~rusaerog/


Re: Risks of dummy addresses (RISKS-19.32)

zwicky <zwicky@pterodactyl.neu.sgi.com>
Thu, 21 Aug 1997 14:09:37 +0200
Dummy addresses were a hot issue for us in writing "Building Internet
Firewalls"; my co-author in particular had become sensitized to it when he
unthinkingly used a dummy address in a firewalls paper and it turned out to
belong to AT&T. The relevant people *were* amused, but they've been teasing
him about it ever since.

In early drafts, we were careful to take dummy numeric addresses out of the
reserved address ranges, but we were pasting in genuine example text from
real, Internet-connected machines, and we were using as examples hostnames
we were familiar with.  Then at the end we fixed things up. (Most of them; I
literally just now found a particularly bad example where we left in not
only a genuine hostname, but a genuine telephone number as well. At least
it's a business number...)
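
As an aside, the reserved numeric ranges mentioned above (10.0.0.0/8,
172.16.0.0/12, and 192.168.0.0/16) can be checked mechanically; a small
illustrative sketch, with made-up addresses:

  # Addresses drawn from the reserved/private ranges can never be someone's
  # routable Internet host, so they are safe to use as dummy examples.
  import ipaddress

  for addr in ["10.1.2.3", "172.16.0.7", "192.168.1.1"]:
      assert ipaddress.ip_address(addr).is_private
      print(addr, "is reserved; safe as a dummy example")

  # An arbitrary made-up "public-looking" address, by contrast, may well be
  # allocated to a real network -- exactly the risk described above.
  print(ipaddress.ip_address("1.2.3.4").is_private)   # False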

That led to a number of problems. To start with, if you have an early
printing of the book you may wonder why two otherwise intelligent people
claim that looking up "ftp.somewhere.net" in DNS involves a query for the
nameservers for the "com" domain.  Obviously we search-and-replaced the
original .com hostname, without also searching for the string "com" by
itself.

Because there's no equivalent in host names to the reserved address ranges,
we didn't have a lot of options to keep from colliding with real
names. (There is one test domain, but that doesn't go very far in
examples...) We avoided names that were registered at the time, but at least
one of our dummy names ("longitude.com") has now been registered. In some
cases, we used top-level domains that we felt were probably not going to
fill up soon, leading me to locate dummy machines in Cape Verde (.cv),
Cameroon (.cm), the British Indian Ocean Territory (.io), and the British
Virgin Islands (.vg). This isn't terribly satisfactory, as the resulting
machine names look unsettlingly odd.

When we edited the main text, we replaced names with completely different
ones, but when I edited the log entries, I amused myself by creating what is
effectively a roman a clef; the first component of the machine name has been
changed, and either the top-level domain or the second-level domain has been
changed, but the intermediate components have been left the same. I am glad
to say that the people at the machine very casually disguised as
pansy.csv.warwick.ac.cv have not complained. (Although, just in case either
they or the inhabitants of Cameroon are upset, I take full
responsibility.)

As long as you're willing to distract people with odd host names, the
British Indian Ocean Territory appears to be a safe bet; unlike .cv, .cm,
and .vg, there is still no name service for .io, making it ripe for
fictional hosts. According to my almanac, it contains the Chagos
Archipelago, with a surface area of 23 square miles and no civilian
population whatsoever, although both the UK and the US "maintain a military
presence". Presumably any computers it contains will be in .mil or .uk.

Elizabeth Zwicky  zwicky@greatcircle.com


Re: Risks of dummy addresses (RISKS-19.32)

Stephen Sprunk <sprunk@csi.net>
Wed, 20 Aug 1997 20:42:30 -0500
The Internet Assigned Numbers Authority holds EXAMPLE.COM in the public
trust so that authors will have a domain to use in instructional and other
texts.  There are no legal risks to using example.com; however, there are
risks if the author uses their own domain in a text: misconfigurations by a
reader attempting to duplicate the author's work may render the author's own
domain unusable!  The risks of using any other domain in a text (i.e., one
registered to any other party) are obvious.
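
A trivial illustration of the point (the host names below are deliberately
fictitious, parked under the IANA-held example.com):

  # Instructional snippets can use hosts under example.com freely, since
  # IANA holds that domain precisely so such examples collide with no one.
  MAIL_RELAY = "smtp.example.com"                      # hypothetical relay
  FTP_URL = "ftp://ftp.example.com/pub/demo.tar.gz"    # hypothetical URL

  print("Point your mail client at", MAIL_RELAY)
  print("Fetch the sample archive from", FTP_URL)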

Stephen

  [Although we almost always remove irrelevant trailers from RISKS
  contributions, PGN thought this one is quite interesting:]

  Unsolicited commercial/propaganda e-mail subject to legal action.  Under US
  Code Title 47, Sec.227(a)(2)(B), Sec.227(b)(1)(C), and Sec.227(b)(3)(C), a
  State may impose a fine of not less than $500 per message.  Read the full
  text of Title 47 Sec 227 at http://www.law.cornell.edu/uscode/47/227.html


Re: No Surfing on the Senate Floor (Spainhower, RISKS-19.29)

"William B. Henry" <mustang@erols.com>
21 Aug 97 14:59:50 +0000
If cellular phones, pagers, and notebook computers are
prohibited, shouldn't teleprompters also be prohibited?

Bill Henry

  [But Senators can read, you know.  PGN]
