The RISKS Digest
Volume 21 Issue 43

Tuesday, 29th May 2001

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Xcel Energy wants to close Denver call center
William Kucharski
Topeka KS water treatment outage
Jerry James
WA public schools switching to risky new system?
Phil Kos
The World Bank meets on the Internet
Andres Silva
Eurocops want seven-year retention of all phone, Net traffic
Hawkins Dale
McDonald's testing cashless payments
NewsScan
Re: The Faith-Based Missile Defense
Brian Clapper
Re: Parnas's book on software
John Graley
Bugless = utopia
Andrew Fleisher
Another fear of Risks
Bob Frankston
Re: Word file turns into two disjoint texts
Jeanne Sheldon
REVIEW: "Demystifying the IPsec Puzzle", Sheila Frankel
Rob Slade
Info on RISKS (comp.risks)

Xcel Energy wants to close Denver call center

<William Kucharski <kucharsk@mac.com>>
Mon, 14 May 2001 03:53:11 -0600

According to the Rocky Mountain News, Xcel Energy, once Public Service of
Colorado, wants to close their customer call center in Denver, meaning calls
regarding service outages in Colorado would instead be routed to call centers
in Minneapolis, MN, Eau Claire, WI and Amarillo, TX, with Xcel eventually
wanting to consolidate all its call centers into one location.

The full story can be found at:
    http://www.rockymountainnews.com/drmn/business/article/
    0,1299,DRMN_4_453567,00.html   [URL broken for readability]

Aside from the other obvious risks, given the problems Washington D.C. has
with their 911 database (comp.risks v.21.40), I am not looking forward to
seeing how call centers several states away will react to a line being down
or a natural gas leak in a rural or newly developed area that will likely
not even exist on their maps.

Note that Xcel already serves twelve states with just the four call centers
listed above...

William Kucharski <kucharsk@mac.com>


Topeka KS water treatment outage

<james@eecs.ku.edu (Jerry James)>
24 May 2001 12:45:03 -0500

The *Lawrence Journal-World* (www.ljworld.com) reported on 22 May 2001
that Topeka (pop. ~122,000), the capital city of Kansas, had suffered a
water-treatment plant outage due to a power failure on Sunday.  A storm
passed through the area and knocked out power to some parts of the city,
including the part containing the water-treatment plant.  As a result,
Topeka residents had to boil their water or buy bottled water, and
drinking fountains across the city were turned off.

The article quotes a Lawrence official, who reassures residents of the
smaller city (pop. ~80,000) that such an event is much less likely for
Lawrence, since it has two water treatment plants on nearly opposite sides
of the city.


WA public schools switching to risky new system?

<Phil Kos <PhilK@solthree.com>>
Wed, 23 May 2001 18:56:14 -0700

An AP article dated 22 May that I read on aol.com
  http://my.aol.com/news/news_story.psp?type=1&cat=0100&id=0105220240181754
says that Washington state's public school IT cooperative (WSIPC,
<http://wsipc.org/>) is spending $20M on a new system from a company called
Skyward (<http://www.skyward.com/>) that seems to raise more questions than
it answers.

Among other things, student grades and attendance data will now be available
on the Internet, and the system will supposedly also integrate such
functions as administration, accounting, and scheduling.  Some pretty
specious comments are made regarding system security, e.g.:

"Skyward uses the same security measures that online retailers like
Amazon.com use for credit card purchases over the Internet.  The system also
resists tampering because teachers continually revise the site."

Securing a system like this is theoretically possible, if the software
itself is written well.  But will security actually be implemented? Do the
schools have people who are knowledgeable enough to administer the systems
without leaving gaping security holes? I kinda doubt it.  And if holes do
pop up, the results could get pretty ugly.  A lot of students will probably
do their darnedest to hack the system.  I expect that in most schools it
won't turn out to be very hard, either by exploiting poor network
configurations or just through basic social engineering, to find a back
door.

The biggest problem I see with this change, however, is that it is likely to
become an attempt to replace a somewhat unwieldy but functional system (the
current "old-technology" interface between parents and school officials)
with one that has a totally different set of usage assumptions and failure
modes, many of which will be confusing to anyone who wasn't involved in the
production of the system.  There's a strong possibility that schools
adopting the new systems will try to switch over to using them exclusively,
leaving technophobic--or just "unconnected"--parents out in the cold.

There's a logic error common in technological industries these days that
says that a new technology will make any similar older technology obsolete,
but this rarely works out the way the new tech evangelists think it will.
In reality new technologies tend to co-exist with the old ones rather than
replace them.  (When I mentioned this error in response to yet another
glowing futurist tech article predicting the death of CRTs, one of my
colleagues here astutely replied "That's true, I heard it this morning on my
radio.")

I sincerely hope that WSIPC gets things right here and this effort isn't a
washout.  $20M is a fairly large wager when you're playing techno-craps.
(Note that WSIPC headquarters is on Casino Road in Everett, WA... ;)


The World Bank meets on the Internet

<Andres Silva <asilva@fi.upm.es>>
Mon, 28 May 2001 14:35:35 +0200

Fearing anti-globalization activists, the World Bank announced the
cancellation of its 2001 Annual Bank Conference on Development Economics
(ABCDE) in Europe, originally slated to be held in Barcelona on 25-27 Jun
2001.

This is the WB news release on the subject:

 http://wbln0018.worldbank.org/EURVP/web.nsf/068c530cca07c3bac12569ed005af420/
 6b160161ef128660c1256a1e00541840?OpenDocument  [URL SPLIT]

As the Barcelona streets seem dangerous, they are planning to move the
conference to a "safer" place: the Internet (!).  In the news release
they say that "Fortunately the internet means that academic debates can now
take place on line" and that "plans are being made for an on-line
discussion".  OK.  Let's see...

Andrés Silva  http://www.ls.fi.upm.es/UDIS/miembros/asilva

  [I thought about retitling this "The World Bank Meets The Internet",
  but thought better of it.  PGN]


Eurocops want seven-year retention of all phone, Net traffic

<"Hawkins Dale" <hawkins_dale@watsonwyatt.com>>
Tue, 22 May 2001 17:44:06 -0700

Civil liberties publication Statewatch claims to have obtained leaked
documents from the Council of the European Union (the 15 EU governments),
which recommend the long-term retention of "every phone call, every mobile
phone call, every fax, every e-mail, every website's contents, all internet
usage, from anywhere, by everyone, to be recorded, archived and be
accessible for at least seven years."

See http://www.statewatch.org/soseurope.htm .

It gets scarier!

The law enforcement agencies, argues the proposal, must have access to "user
addresses, equipment identities, user name/passwords, port identities, mail
addresses etc."  The agencies are also to be provided with "the full name of
the person (company), the residential address and credit card details."

Are they mad?  One barely knows where to start enumerating the risks of such
an undertaking.

Hawkins Dale  hawkins@REMOVEMEpobox.com


McDonald's testing cashless payments

<"NewsScan" <newsscan@newsscan.com>>
Tue, 29 May 2001 08:43:31 -0700

McDonald's Corporation has begun testing the use of a cashless payment
system that uses the kind of radio transponder technology that was first
developed by state highways to allow motorists to drive through toll plazas
without having to stop to make a payment. McDonald's customers will wave the
"Speedpass," a small transponder, at a drive-through window or at a device
inside the restaurant, and their transaction amounts will be immediately
deducted from a "FreedomPay" account they've established by phone or
Internet, backed by a major credit card.  Similar systems have been used at
Mobil gas stations and at some other fast-food restaurant chains.
(Reuters/*The New York Times* 28 May 2001; NewsScan Daily, 29 May 2001;
  http://www.nytimes.com/reuters/technology/tech-leisure-mcdonald.html)

  [To ensure that no one can forge the cards, I imagine they will use a BIG
  MAC (that is, a long Message Authentication Code, perhaps also serving as
  Mandatory Access Control).  However, MACs tend to be much weaker than
  cryptographic checksums security-wise.  Of course, in the spirit of FAST
  FOOD, if you are really in a hurry to pay for your meal with your
  Speedpass and then ingest it rapidly, you might phone ahead on your car
  phone to order your burger and fries in a liquid form: a BIG-MAC SHAKE,
  with liquified meat, cheese, and bun (and perhaps including your cold
  drink all in the same convenient take-out cup), which you could then wolf
  down in one big gulp while driving and talking on your hands-free phone.
  (I understand that vegetarians in some places already have a beef with
  their fries.  <Pun intended.>)  But, given this new opportunity for
  ULTRA-FAST-FOOD, I think I'd rather FAST.  After all, speed (with multiple
  meanings) can often lead to arrest (with multiple meanings).  PGN]


Re: The Faith-Based Missile Defense

<Brian Clapper <bmc@WillsCreek.com>>
Thu, 24 May 2001 08:49:28 -0400

Does anyone else find George W. Bush's global missile shield proposal eerily
reminiscent of Reagan's Strategic Defense Initiative (a.k.a., "Star Wars")?

Here are some excerpts from a recent Bush speech (as transcribed on the
Brookings Institution's web site [1]):

  We must seek security based on more than the grim premise that we can
  destroy those who seek to destroy us. This is an important opportunity
  for the world to rethink the unthinkable and to find new ways to keep
  the peace. Today's world requires a new policy, a broad strategy of
  active nonproliferation, counter-proliferation and defenses.  [...]

  We also recognize the substantial advantages of intercepting missiles
  early in their flight, especially in the boost phase. The preliminary
  work has produced some promising options for advanced sensors and
  interceptors that may provide this capability. If based at sea or on
  aircraft, such approaches could provide limited but effective defenses.

  We have more work to do to determine the final form the defenses might
  take. We will explore all of these options further. We recognize the
  technological difficulties we face, and we look forward to the
  challenge. Our nation will assign the best people to this critical
  task. We will evaluate what works and what does not.

  We know that some approaches will not work. We also know that we'll be
  able to build on our successes. When ready and working with Congress,
  we will deploy missile defenses to strengthen global security and
  stability.

For comparison, here's a quote from Reagan's March 23, 1983, speech, which
kicked off the SDI effort (as transcribed on the Federation of American
Scientists web site [2]):

  What if free people could live secure in the knowledge that their
  security did not rest upon the threat of instant U.S. retaliation to
  deter a Soviet attack, that we could intercept and destroy strategic
  ballistic missiles before they reached our own soil or that of our
  allies?

  I know this is a formidable, technical task, one that may not be
  accomplished before the end of this century.

  Yet, current technology has attained a level of sophistication where
  it's reasonable for us to begin this effort. It will take years,
  probably decades of effort on many fronts. There will be failures and
  setbacks, just as there will be successes and breakthroughs. And as we
  proceed, we must remain constant in preserving the nuclear deterrent
  and maintaining a solid capability for flexible response. But isn't it
  worth every investment necessary to free the world from the threat of
  nuclear war? We know it is.

  In the meantime, we will continue to pursue real reductions in nuclear
  arms, negotiating from a position of strength that can be ensured only
  by modernizing our strategic forces. At the same time, we must take
  steps to reduce the risk of a conventional military conflict escalating
  to nuclear war by improving our nonnuclear capabilities.

Surely, there are differences between the two initiatives, but it's the
similarities that strike me.

The very first issue of the Risks Forum Digest contains a news item from
*The New York Times* announcing the resignation of David L. Parnas from an
advisory panel on anti-missile defense. Parnas essentially asserted that the
SDI would never work. [3]

Parnas' essays on the topic were ultimately collected and published in
Communications of the ACM [4]. Here's an excerpt from Parnas' introduction
to the CACM collection of essays:

  The individual essays explain:

  1. The fundamental technological differences between software engineering
     and other areas of engineering and why software is unreliable;
  2. The properties of the proposed SDI software that make it unattainable;
  3. Why the techniques commonly used to build military software are
     inadequate for this job;
  4. The nature of research in software engineering, and why the
     improvements that it can effect will not be sufficient to allow
     construction of a truly reliable strategic defense system;
  5. Why I do not expect research in artificial intelligence to help
     in building reliable military software;
  6. Why I do not expect research in automatic programming to bring
     about the substantial improvements that are needed;
  7. Why program verification (mathematical proofs of correctness)
     cannot give us a reliable strategic defense battle-management
     system;
  8. Why military funding of research in software and other aspects
     of computing science is inefficient and ineffective.

Have we really made sufficient advances in software engineering--in the way
we build large systems, in reliability, in safety, in testability--so that
this kind of project is more workable now than it was 18 years ago? Would
David Parnas be less likely to resign from such an advisory panel today?

Perhaps my perspective is skewed from reading RISKS for 16 years, but I
doubt we're substantially more prepared to build such a missile shield today
than we were in the 1980s.

Brian Clapper, bmc@WillsCreek.com

References:

[1] http://www.brookingsinstitution.org/fp/projects/nmd/bush20010501.htm
[2] http://www.fas.org/spp/starwars/offdocs/rrspch.htm
[3] The Risks Digest, Volume 1, Issue 1 (1 August 1985),
    http://catless.ncl.ac.uk/Risks/1.01.html#subj6.1
[4] Communications of the ACM, Volume 28, Issue 12 (December, 1985),
    pp. 1326-1335. (ACM members can obtain a copy of this article through
    the ACM Digital Library.)


Re: Parnas's book on software (Horning, RISKS-21.42)

<John Graley <jgraley@arm.com>>
Tue, 29 May 2001 14:13:01 +0100

A better experiment is to try it out and see if it works.

Without going into the highly debatable specifics, there's no doubt that we
currently have a number of programming "paradigms" propagating around the
software world purely or mainly because they are hard to argue with.  I
suspect other disciplines may be seeing this too.

Methodologies that are hard to argue with are likely to propagate through
means such as: when people read books, when people study for qualifications,
when they are trained by an employer, or when an employer instigates new
procedures. OTOH, schemas that work in practice are typically propagated
through experimentation and shared experience.

That the former process is currently outpacing the latter suggests that
there remains something "unfocussed" about the way we approach methodology
these days... maybe too much talking and not enough doing.  That's a risk,
for me anyway.

  [In fact, Parnas *has* tried out many of these ideas in
  practice, for many years, with considerable success.  PGN]


Bugless = utopia

<Andrew Fleisher <andrew8@start.com.au>>
Fri, 25 May 2001 13:48:49 +1000

> In this case, I can't resist wondering how one can debug complex software
> before deploying it. The danger is more in assuming one can and not
> preparing for failure than in not doing complete debugging. This doesn't
> mean one should not do any testing, just that the limits must be
> recognized.

A source of, or corollary to, the danger you cite is the expectation many
people have that testing can prove there are no bugs.  Testing can only
prove the presence of bugs, never their absence.

> I'm a great admirer of MIR — the ability to keep it going with just the
> "chewing gum and bailing wire" (to use an old metaphor) impresses me more
> than a design which is "perfect".

In my opinion, a practical person expects a 'perfect' design to include very
easy repairability and maintainability. This is a significant source of risk
reduction.

Andrew


Another fear of Risks

<"Bob Frankston" <rmf2gOther@bobf.Frankston.com>>
Wed, 23 May 2001 20:49:38 -0400

I'm using IE 6.0 and it works pretty much like 5.0. With one notable
exception — UPS explicitly checks for it and doesn't let me use their
service with an unapproved browser.  I presume they feel it is better for
them to lose customers than to risk ... risk what?

I had a similar problem with IE 4=>5 with both UPS and Fleet. Fleet paid a
price for this because they were totally unprepared for IE 5 when it shipped
and it took a few days to fix their bugs.

UPS loses two ways.  They force me to use other services, and they lose the
value of users doing testing for them. They can warn me that they haven't
tested with my browser but disallowing it is not only short-sighted, it
represents a basic misunderstanding of the PC and the large effort put in to
assure compatibility with previous versions of programs. Old MIS (before
they were called IT) departments did have a great fear of upgrades since
each mainframe system was extensively patched. But that reasonable fear is
now a phobia.
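
As a sketch of the failure mode (hypothetical server-side check, not UPS's
actual code): an exact-match allow-list of "tested" browser versions
necessarily turns away every future compatible version.

```python
def naive_block(user_agent: str) -> bool:
    """Return True if the browser should be turned away."""
    # Allow-list of versions that were tested -- anything newer fails
    approved = ("MSIE 4.0", "MSIE 5.0", "MSIE 5.5")
    return not any(v in user_agent for v in approved)

ie6 = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
print(naive_block(ie6))  # True: a compatible newer browser is rejected
```

A warning plus feature detection would fail open; the allow-list fails
closed, which is exactly the short-sightedness described above.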

Bob Frankston  http://www.Frankston.com


Re: Word file turns into two disjoint texts (Page, RISKS-21.40)

<"Jeanne Sheldon" <jeannes@microsoft.com>>
Tue, 29 May 2001 08:39:15 -0700

  [This item is an out-of-band response to Clive that is included here with
  the permission of Jeanne Sheldon.  It provides an interesting case history,
  especially with the Unicode wrinkle, and seems RISKS-worthy even if it
  may seem like an old problem.  PGN]

Here's a summary of what I've been able to determine about the document.

The document was created in Word 97.

Word was set to allow "Fast Saves", which is a non-default setting that
performs incremental rather than complete saves.  It is a feature
intended to speed the save operation.  More information on fast save can
be found in Microsoft Knowledge Base articles:
Q71999 WD97: "How to Disable the FastSave Option in Word for Windows"
Q190733 WD97: "Opening Word Document in Text Editor Displays Deleted
Text" (this was first documented in Q113052, created 23-MAR-1994)
Q192480 WD97: "Frequently Asked Questions About 'Allow Fast Saves'"

The document was saved three times; the second save was to a different
filename.  Because the second save initiates a second pass over the
document, Word was able to compress the Unicode so that it was readable
as ASCII characters and all incremental changes that were Fast Saved
were collapsed.  The first letter was then deleted and the letter to Dr.
Page was composed.  A single save was then performed to a local
(non-network) drive using the same filename.  Because "Fast Save" was
enabled, the deleted text stream was identified but not actually
deleted.  Because a single save is a single pass and Unicode compression
requires a second pass, the text remained as uncompressed Unicode.  On
Unicode compression, see: Q168967  "File Size Twice as Big When Compared
to Earlier Version."  While a non-Unicode aware tool would be unable to
read the second set of text (the letter to you), it is actually quite
readable on a Unicode-enabled text reader.
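
A rough sketch of the effect (a hypothetical Python scanner, not a
Microsoft tool): the same bytes yield different text depending on whether
the reader extracts ASCII runs or UTF-16LE runs, which is why the second
letter was invisible to non-Unicode-aware tools.

```python
import re

def visible_strings(raw: bytes, min_len: int = 6):
    """Return printable runs found as ASCII and as UTF-16LE text."""
    # ASCII runs: what a plain text editor would show
    ascii_runs = [m.group().decode("ascii")
                  for m in re.finditer(rb"[ -~]{%d,}" % min_len, raw)]
    # UTF-16LE runs: printable chars interleaved with NUL bytes, which a
    # non-Unicode-aware tool renders as gibberish
    u16_runs = [m.group().decode("utf-16-le")
                for m in re.finditer(rb"(?:[ -~]\x00){%d,}" % min_len, raw)]
    return ascii_runs, u16_runs

# A fast-saved file can carry "deleted" text in either form:
sample = b"Dear Dr. Page,\x00junk" + "private draft".encode("utf-16-le")
a, u = visible_strings(sample)
print(a)  # ['Dear Dr. Page,']
print(u)  # ['private draft']
```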

Extra notes:  The document contains a unique identifier, indicating that
the version it was authored on did not include the fix which removes
that identifier.  See Q222180  Unique Identifiers and Microsoft Office
97 Documents.

The document title, under properties, is generated automatically from
the first line of the document on the first save.  It is not
subsequently updated, so it may contain text that is no longer in the
document.

Comprehensive information on the topic:
Q223790  How to Minimize Metadata in Word Documents.

  From Word 97 online documentation:
  The difference between a fast save and a full save
  If you select the Allow fast saves check box on the Save tab in
  the Options dialog box (Tools menu), Word saves only the changes to a
  document. This takes less time than a full save, in which Word saves the
  complete, revised document. Select the Allow fast saves check box when
  you are working on a very large document. However, a full save requires
  less disk space than a fast save. If you are working on a document over
  a network, clear the Allow fast saves check box. Fast saves cannot be
  performed over a network.

  You should do a full save in the following situations:
  * Before you share a document with other people
  * When you finish working on a document and save it for the last time
  * Before you begin a task that uses a lot of memory, such as
    searching for text or compiling an index
  * Before you transfer the document text to another program
  * Before you convert the document to a different file format

  Note:  If you select the Always create backup copy check box on the Save
  tab in the Options dialog box (Tools menu), Word clears the Allow fast
  saves check box, because backup copies can be created only with full saves.

... Clive, thank you very much for the time and effort that you have put
into this.  Although the Word setting that caused the document to be
created in such a manner goes back to a time when electronic document
exchange was not the norm (and, over the past 7 years, much effort has
gone in to attempting to assure that private information is not
accidentally included), it is humbling and daunting to realize once again
how difficult it is to correct the mistakes of past versions with software
patches, bulletins and product documentation.

Jeanne Sheldon, Microsoft Corporation


REVIEW: "Demystifying the IPsec Puzzle", Sheila Frankel

<Rob Slade <rslade@sprint.ca>>
Mon, 28 May 2001 15:58:15 -0800

BKDMIPSP.RVW   20010511

"Demystifying the IPsec Puzzle", Sheila Frankel, 2001, 1-58053-079-6,
U$75.00
%A   Sheila Frankel sheila.frankel@nist.gov frankel@artechhouse.com
%C   685 Canton St., Norwood, MA   02062
%D   2001
%G   1-58053-079-6
%I   Artech House/Horizon
%O   U$75.00 800-225-9977 fax: 617-769-6334 artech@artech-house.com
%P   273 p.
%T   "Demystifying the IPsec Puzzle"

With its reference to the dim and distant past when Bill Gates was
working on his fifth billion, the first sentence of the first chapter
makes you suspect that this book will be a fun read.  Which is a very
strange thing to think about a security text.  But the readability
aspect becomes understandable when the author points out that this is
not solely a work designed to turn out IPsec implementors (who may
need additional references), but to inform purchasers and users.

IPsec is both a part of the "next generation" IPv6 standard, and a
security option (or add-on) in the current IPv4.  It is governed by
some two dozen Internet RFCs (Request For Comments documents).  While
other security measures work only with specific programs, or at the
transport layer, IPsec functions at the IP (Internet Protocol) or
network layer, in order to address the widest range of applications
and problems.  It can address both confidentiality and authentication,
as well as dealing with a number of denial of service (DoS) attacks
that other security systems cannot.

Chapter one provides a general introduction, and a brief and apposite
background of the Internet and IP layer functions.  The author has
culled a minimal foundation from the normal barrage of design and
history, and even the description of IP headers is clear and important
to the matter at hand.  The Authentication Header (AH), which assures
the detection of corruption or modification en route, is discussed in
chapter two.  The material also introduces basic structures such as
the security association (SA) database, and provides some detail on
implementation issues and concerns.  The Encapsulating Security
Payload (ESP) is described in chapter three, although not quite as
lucidly as was the case for prior material.  However, there is also an
excellent section outlining design considerations for the protocol.

Chapter four details the symmetric key algorithms used for AH and ESP
operations, but does not go deeply into the asymmetric systems used by
the Internet Key Exchange (IKE).  IKE itself is discussed, in general
in chapter five, with respect to remote users in chapter six, and
listing additional options in chapter seven.  The PF_KEY application
programming interface for IPsec is described in chapter eight.
Chapter nine deals with issues of policy and policy enforcement.  An
overview of PKI (Public Key Infrastructure) is given in chapter ten.
Chapter eleven looks at the special problems of multicast.

The book finishes off as many others start, with an analysis of
whether IPsec can be the right solution to the problem.

The title of this tome is quite appropriate.  It provides a clear
outline and, if it isn't always articulate about the implications of
portions of the system, it does a good enough job that the persistent
reader will be able to work out other aspects.  Not a book for the
masses, perhaps, but for those who need either to purchase IPsec, or
to choose between IPsec and other technologies, a very useful guide.

copyright Robert M. Slade, 2001   BKDMIPSP.RVW   20010511
rslade@vcn.bc.ca  rslade@sprint.ca  slade@victoria.tc.ca p1@canada.com
http://victoria.tc.ca/techrev    or    http://sun.soci.niu.edu/~rslade
