The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 18 Issue 89

Wednesday 12 March 1997


o President's Commission on Critical Infrastructure Protection
o Alberta Stock Exchange Shuts Down
Mich Kabay
o Hot and cold running randomness
Dan Wing
o Vietnam will censor Internet content
David Farber
o More RISKS-relevant ACM awards
o The Ariane 5 explosion: a software engineer's view
Robert L. Baber
o Usability and Security re: Authenticode
Mary Ellen Zurko
o CaptiveX/Authenticode
Henry G. Baker
o Continual Risk/Benefit Analysis
Benedikt Stockebrand
o Re: Trusting the software vendor
David Collier-Brown
Daniel Hicks
o ActiveX Security for Dummies
Peter Gutmann
o The real goal of Authenticode
Mark Seecof
o CFP: DIMACS Workshop on Formal Verification of Security Protocols
Catherine A. Meadows
o Info on RISKS (comp.risks)

President's Commission on Critical Infrastructure Protection

"The President's Commission" <COMMENTS@PCCIP.GOV>
Thu, 27 Feb 1997 13:01:29 -0500
The President's Commission on Critical Infrastructure Protection (PCCIP)
advises and assists the President of the United States by recommending a
national strategy for protecting and assuring critical infrastructures:
telecommunications, transportation, electric power, oil and gas, banking
and finance, water, emergency services and continuity of government services.

The PCCIP Web site contains background on the Commission and information
about its activities and mission.

The site's home page may be accessed at <URL:>.

Submitted to RISKS by the PCCIP Comments Desk,

  [The Commission is now in full swing.  Seven of the commissioners plus
  some staff members attended a Workshop on Protecting and Assuring Critical
  National Infrastructure at Stanford University on 10-11 Mar 1997,
  sponsored by the Stanford University Center for International Security and
  Arms Control and co-sponsored by the LLNL Center for Global Security
  Research.  Many other luminaries also participated (e.g., Bill Perry --
  former co-director of CISAC and former SecDef).  For more information on
  this and earlier CISAC workshops, contact Sy Goodman, Center for
  International Security and Arms Control, 320 Galvez, Stanford University,
  Stanford, CA 94305-6165 <>.  Sy hopes they
  will have two reports on the most recent workshop out in a couple of
  months.  The workshop was a feast for the various RISKS devotees who had
  been invited.  (Note: Sy is a member of the ACM CCPP group that sponsors
  RISKS.)  PGN]

Alberta Stock Exchange Shuts Down

"Mich Kabay [NCSA]" <>
Wed, 12 Mar 1997 12:49:40 -0500
Angela Barnes and Brent Chang of the *Globe and Mail* ("Canada's National
Newspaper") report today (97.03.12, p. B1) that (quoting):

``For the second time in six sessions and the third time this year, the
Alberta Stock Exchange has lost a day of trading because of problems with
its leading-edge computerized trading system.''

The authors make the following key points:

* System failure occurred at opening of trading on 97.03.11 at 07:30 MST.

* Technicians worked until 13:00 and tried to restart software but programs
  failed immediately.

* Bug fixes continued all night.

* Previous software errors stopped trading on the ASE for an entire day on
  4 March and during January; two other day-long halts occurred in 1996
  after the software went online in May.

* Brokers depend on the software to trade through modem links from their
  offices.

* The trading floor is supposed to be closed permanently by 21 March.

* Consequences of the breakdowns include lost commissions, lost business
  opportunities, and loss of confidence in the ASE.

* EFA Software Services of Calgary is responsible for trading software for
  other exchanges around the world, including the Palestine Securities
  Exchange.

I personally called EFA public relations and left a message requesting
further details.  More if and when I receive them.

Mich: M. E. Kabay, PhD, CISSP (Kirkland, QC), Director of Education
National Computer Security Association (Carlisle, PA)

Hot and cold running randomness

Dan Wing <dwing@Cisco.COM>
Mon, 10 Mar 1997 13:10:36 -0800
TBTF's 9 Mar 1997 issue carried this item:

#..Hot and cold running randomness
#    Perhaps for the first time, anyone with an Internet connection can
#    tap a source of true randomness. The creator of HotBits [16], John
#    Walker <>, describes it as
#      > an Internet resource that brings genuine random numbers,
#      > generated by a process fundamentally governed by the inherent
#      > uncertainty in the quantum mechanical laws of nature, directly
#      > to your computer... HotBits are generated by timing successive
#      > pairs of radioactive decays... You order up your serving of
#      > HotBits by filling out a [Web] request form... the HotBits
#      > server flashes the random bytes back to you over the Web.
#    Walker modified an off-the-shelf radiation detector to interface to
#    a PC-compatible serial port, and ran a cable three floors down from
#    his office to a converted 70,000-litre subterranean water cistern
#    with metre-thick concrete walls, where the detector nestles with a
#    60-microcurie Krypton-85 radiation source.
#    If you're in the mood for an anti-Microsoft rant of uncommon eloquence,
#    Walker can supply that too [17].
#    Thanks to Keith Bostic <> for the word on this
#    delightful service.
#    [16] <URL:>
#    [17] <URL:>

An interesting idea, but hopefully no one will use it -- it is too easily
spoofed via DNS, and the host itself could be hacked to return the same
'random' number all the time.  (Maybe after we have IPsec, SecDNS, _and_ you
trust the host, we could use services like this on the Internet.)
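
Dan's caveat suggests an obvious mitigation: treat remote "random" bytes as
untrusted input and mix them with local entropy, so that a spoofed or
compromised server can at worst contribute nothing, never weaken your pool.
A minimal sketch in modern Python (the function name and parameters are
invented for illustration, not part of any HotBits interface):

```python
import hashlib
import os

def mix_entropy(remote_bytes: bytes, n: int = 32) -> bytes:
    """Derive n bytes by hashing untrusted remote input with local entropy.

    Even if remote_bytes is attacker-controlled (DNS spoofing, a hacked
    server replaying the same value), the output is no weaker than the
    local entropy source alone.
    """
    local = os.urandom(32)                       # trusted local entropy
    digest = hashlib.sha256(local + remote_bytes).digest()
    return digest[:n]

# Identical (e.g., replayed) server responses still yield distinct outputs,
# because each call folds in fresh local entropy:
a = mix_entropy(b"\x00" * 16)
b = mix_entropy(b"\x00" * 16)
assert a != b and len(a) == 32
```

The design choice is the usual one for entropy gathering: never *replace*
your pool with an external source, only *fold it in* through a one-way
function.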

Dan Wing

Vietnam will censor Internet content

David Farber <>
Wed, 12 Mar 1997 06:15:30 -0500
All information coming into Vietnam through the Internet will be censored.
The Vietnamese government announced on 11 Mar 1997 that it will control who
has access to online services.  It also will limit the gates through which
Internet servers in Vietnam are linked to the Internet.

A senior official at the General Department of Post and Telecommunications
told Reuters the measures will take effect this month, but added they are
still working on details and planned to meet in two months time to discuss
implementation.  The controls were issued in a decree by Prime Minister Vo
Van Kiet, who said information servers must be based in Vietnam.  This will
ensure that information entering and leaving Vietnam goes through a
government-filtered gateway, the Communist Party newspaper, The People,
reported.

  [To our Vietnamese subscribers, please try to let us know
  if you ever fail to receive any particular issues of RISKS --
  or if your subscriptions seem to have vanished.  PGN]

More RISKS-relevant ACM awards (Re: RISKS-18.87)

"Peter G. Neumann" <>
Fri, 7 Mar 97 7:46:21 PST
Having mentioned in RISKS-18.87 that the Kanellakis award was given to six
crypto luminaries at ACM '97, I might as well note that two other ACM '97
awards also have RISKS relevance.

* Amir Pnueli is the recipient of the A.M. Turing Award, for his ``seminal
work introducing temporal logic into computing science and for outstanding
contributions to program and system verification.''  This work has been
characterized as ``the most important contribution to program verification
in the last 20 years.''  He is at the Weizmann Institute of Science in
Israel.

* Peter J. Denning is the winner of the Karl V. Karlstrom Award, cited for
his core-curriculum work and for communicating the intellectual substance of
computer science to other scientists and engineers.  Peter has long been
concerned with the role of fostering responsible and ethical behavior within
the core curriculum.  Denning has just become Vice Provost for Continuing
Professional Education at George Mason University.  (He was previously Chair
of the Computer Science Department, and for a few more months will still be
Associate Dean for computing.)  The award cited his long standing efforts to
shape our field and convey its nature to computer scientists and to the
broader scientific community, and noted that his vision, leadership and
early writings on operating systems played a key role in making that area a
respected part of the core curriculum.  He is a charter RISKS contributor,
going back to volume 1 number 1.

The Ariane 5 explosion: a software engineer's view

"Robert L. Baber" <>
Tue, 11 Mar 1997 11:43:38 +0200
  [Relevant to formal methods (e.g., the preceding item in this issue
  on Amir Pnueli) and RISKS, Robert Baber's message is timely.  PGN]

My web page "The Ariane 5 explosion as seen by a software engineer"
shows how the software anomaly that caused the destruction of the Ariane 5
and its payload (a DM 1200 million loss) could have been avoided by a simple
application of correctness-proof techniques.  It also highlights the
importance of strict preconditions and the inadequacy of ordinary
preconditions for practical applications.
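
The inquiry report traced the Ariane 501 failure to an unprotected
conversion of a 64-bit floating-point value (the horizontal bias) into a
16-bit signed integer, which overflowed on Ariane 5's steeper flight
profile.  The strict precondition Baber argues for can be sketched in a few
lines of modern Python (names and the sample values are illustrative only):

```python
def to_int16(x: float) -> int:
    """Convert a float to a 16-bit signed integer, with an explicit
    precondition.

    The Ariane 501 failure was an unprotected conversion of exactly this
    kind.  Stating the precondition makes the proof obligation visible:
    the caller must show the value stays in range for every flight profile.
    """
    if not (-32768 <= x <= 32767):               # precondition: value fits
        raise OverflowError(f"value {x} outside 16-bit range")
    return int(x)

assert to_int16(1234.9) == 1234
try:
    to_int16(65536.0)        # a value outside the proven envelope
except OverflowError:
    pass                     # detected at the boundary, not mid-flight
```

Whether the reaction to a violated precondition is an exception, a clamp,
or a design-time proof that it cannot occur is exactly the engineering
decision the Ariane team never revisited for the new vehicle.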

Prof. Robert L. Baber, Computer Science Dept, University of the Witwatersrand
Johannesburg, 2050 Wits, South Africa  +27-11-716-3794

Usability and Security re: Authenticode (Atkinson, RISKS-18.85)

Mary Ellen Zurko <>
Wed, 12 Mar 1997 12:55:08 -0500
In Bob's useful and interesting explication of Authenticode's
design goals he states:

> IMHO, the most important innovations of Authenticode on prior general
> practice in the industry lie in the area of usability, especially as related
> to the user's understanding of and administration of trust.

He goes on to discuss this paradigm and some UI details.  However,
compelling and logical explanations of how usable some software is or will
be don't make it so.  If intelligent engineers arguing about usability
produced it, we would have solved the problem years ago :-).  It's a good
start (always better to consider the user than not), but there are a bunch
of techniques for designing and testing software and other interfaces for
its usability.  Those approaches do not guarantee usability (see the
previous comment on being solved), but they do get us closer and provide
more compelling evidence of the claim, just as performance evaluation
requires tests to prove the results of a promised innovation.  I would
expect safety-critical systems to use these techniques regularly.  So, since
usable security is of particular interest to me, I'd like very much to hear
about what design techniques and user testing were used to produce and
verify this innovation in understanding trust.  I know Microsoft has both
the resources and the expertise to run these tests.

CaptiveX/Authenticode

Henry G. Baker <>
Sat, 8 Mar 1997 08:31:58 -0800 (PST)
A major risk I have seen with respect to CaptiveX/Authenticode discussions
is the effect it is having on unsophisticated users.  These discussions
happen not only among sophisticated people on comp.risks, but in the New
York Times, the Wall Street Journal, MSNBC, CNN, etc.

In my not-very-scientific survey of people that I talk to, most people don't
expect perfection from software, and therefore aren't very surprised, much
less particularly outraged, by problems in CaptiveX.  They appreciate the
information about the problems, make notes to themselves to check for the
fix, and go on with their lives.

We thus have the following situation: the 'man-on-the-street' thinks that
CaptiveX is actually _more_ secure/reliable than Java, because 'more of the
bugs have been found & fixed'.  In other words, your average bloke does not
respond to news of CaptiveX problems by using Java.  I.e., all of the clever
thinking by Java people is wasted, since the customer doesn't appreciate it.

In fact, all of the discussions of IE problems are just more free publicity
for Microsoft.

Here in La-La land, we've known for nearly a century that 'the only bad
publicity is an obituary'.  Thus, if companies are to be 'punished' for bad
products, a more sophisticated approach will be required than the present
one.

Henry Baker

Continual Risk/Benefit Analysis (Re: McCurley, RISKS-18.86)

Benedikt Stockebrand <>
09 Mar 1997 16:07:41 +0100
I quite agree that it is essential to understand the risk involved with
actions to be taken.  However, I consider the assumption of a "continual
risk/benefit analysis" to be far too optimistic --- it falls for the
well-known risk of assuming that people always behave rationally.

How many real world people don't fasten their seat belts while driving,
smash their thumb with a hammer too large for the job or trip over their
shoe laces they didn't bother to retighten?  How many companies connect to
the Internet without a firewall?  How many IE users have their IE security
level lowered to "medium" (which should rather be called "next-to-none"
anyway) or even below?  How many people will happily allow an ActiveX applet
in if it promises some interactive video thing showing something
sufficiently naughty?

Too many people will only perform the "risk/benefit analysis" *after*
they got bitten.

Of course, everybody is free to decide and subsequently has to live with the
consequences of the decisions made.  But any responsible designer will take
this effect into consideration.  And of course s/he will try to minimize the
risk/benefit ratio or even abandon a project if the risk can't be lowered to
an acceptable point.

Luring people into buying and using an unnecessarily insecure system and
telling them "to be careful" is plain irresponsible.  Selling a super-fast
extra-shiny car without seat belts is negligent no matter whether the manual
tells you not to drive without making sure you won't get involved in an
accident.

Benedikt Stockebrand, Dortmund, Germany

Re: Trusting the software vendor (Welsh, RISKS-18.88)

David Collier-Brown <davecb@Canada.Sun.COM>
Mon, 10 Mar 1997 09:52:30 -0500
> ...  Although with the advent of Java we are starting to
> see these ideas in the mainstream, they're not particularly new.

  Indeed, they're rather old, says the Multician(:-))

However, both the ActiveX and Java communities can improve on the situation
considerably, by picking up some of the old, good ideas and applying them to
current technology.

If I want to run a semi-trusted program with access to a chunk of my disk
space, I can say so with only a small extension to currently-available ACLs
(Access Control Lists, available on many Unix variants and Windows NT).

Assume that I have the ability to define a specific instance of myself, and
give it a name visible to the ACL processor.  Call the instance ``browser''.
Then I can say
    /home/davecb/catbox rw  davecb.browser
and run a Java applet or ActiveX control and let it access the catbox.

Let's expand on this a bit: add a list of authenticode signatures and a
notation for representing them in the ACL file, and I can say
    /home/davecb/fred   w   davecb.signed(fred)
in order to allow an applet signed by fred to write to /home/davecb/fred.
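
The notation above is hypothetical, but evaluating it is straightforward.
A minimal sketch in modern Python of a checker for such ACL subjects (every
name here is invented to match the examples, not any real ACL syntax):

```python
import re

def acl_allows(entry, user, instance=None, signer=None):
    """Check one hypothetical ACL subject against the current context.

    'davecb.browser'       -> user davecb acting as instance 'browser'
    'davecb.signed(fred)'  -> user davecb running code signed by 'fred'
    'davecb'               -> user davecb, no qualifier
    """
    m = re.fullmatch(r"(\w+)\.signed\((\w+)\)", entry)
    if m:                                        # signature-qualified entry
        return user == m.group(1) and signer == m.group(2)
    m = re.fullmatch(r"(\w+)\.(\w+)", entry)
    if m:                                        # instance-qualified entry
        return user == m.group(1) and instance == m.group(2)
    return entry == user                         # plain user entry

assert acl_allows("davecb.browser", "davecb", instance="browser")
assert acl_allows("davecb.signed(fred)", "davecb", signer="fred")
assert not acl_allows("davecb.signed(fred)", "davecb", signer="mallory")
```

Note that the checker is the easy part; as the next paragraphs point out,
the real risk lies in what rules users are *permitted* to write.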

Alas, this doesn't come without risks of its own.  If there is no mandatory
access control ("MAC") mechanism, I can allow untrusted controls to access
anything I have access to... just by writing a bad acl rule.  Even worse, I
might write a rule that allows anyone else who has write access to my files
to run any applet whatsoever with write access to my files.

The risk here is a classical one: there are lots of ways to achieve access
control, and many of them are ill-considered.  A reading of the literature
on MAC and ambiguous interfaces is highly desirable before looking into ways
of giving more access to arbitrary programs. (Darn!)

David Collier-Brown, 185 Ellerslie Ave., Willowdale, Ontario  N2M 1Y3 CANADA

Re: Trusting the software vendor (Welsh, RISKS-18.88)

Daniel Hicks <hotlicks@VNET.IBM.COM>
Mon, 10 Mar 1997 10:27:27 -0600 (CST)
Having been associated with the IBM S/38 & AS/400 for twenty years, I have
to agree with Mr. Welsh.  The AS/400 uses a "trusted translator" which is
conceptually very similar to the Java trusted verifier/interpreter approach.
Years of experience have shown that this technique can be used to maintain
integrity in a system without the need for elaborate hardware protection,
while at the same time giving programmers relatively free rein to produce
the sort of code they want.

I have had the opportunity to examine the Java JVM spec in considerable
detail, and, based on my experience with similar concepts in the AS/400, it
appears to be sound.  Similarly, the security manager scheme appears sound,
though I haven't examined it in as much detail.

As always, there is ample opportunity for bugs, both in the specs and in
the implementations, but it seems wise to at least BEGIN with the
concept of a secure system and then fix the bugs, rather than begin with
a hopelessly insecure system and struggle to make it secure.
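
The shared idea behind the AS/400 trusted translator and the Java verifier
is to check a program against a safety policy *before* it runs, rather than
police it afterward.  A toy sketch in modern Python (the instruction set and
all names are invented; real verifiers check types, stack depth, and control
flow, not just opcodes):

```python
# Toy "trusted verifier": a program is a list of (opcode, arg) pairs and is
# checked against a policy before execution, in the spirit of the AS/400
# trusted translator and the Java bytecode verifier.

SAFE_OPS = {"push", "add", "print"}              # the vetted instruction set

def verify(program):
    """Reject any program using an instruction outside the policy."""
    for op, _ in program:
        if op not in SAFE_OPS:
            raise ValueError(f"rejected: unverified opcode {op!r}")
    return program

def run(program):
    stack, out = [], []
    for op, arg in verify(program):              # verification gates execution
        if op == "push":
            stack.append(arg)
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "print":
            out.append(stack.pop())
    return out

assert run([("push", 2), ("push", 3), ("add", None), ("print", None)]) == [5]
try:
    run([("poke_memory", 0xFFFF)])               # outside the verified set
except ValueError:
    pass                                         # rejected before running
```

Augmenting the applet environment, as Hicks suggests, then amounts to
growing the vetted set one instruction at a time as its hazards are
understood.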

Re Wayne Gerdes' complaints about the limitations of the Java Applet
environment, these are of a transient nature.  It seems clear that the
Applet environment can be augmented to provide more capabilities, but,
unlike ActiveX, this can be done in a fine-grained incremental fashion as
experience brings to light both the need for and the hazards of these
additional capabilities.

Dan Hicks

ActiveX Security for Dummies (Re: RISKS-18.85-86)

Peter Gutmann <>
Wed, 12 Mar 1997 06:12:48 (NZDT)
The recent messages on ActiveX/Authenticode security have prompted me to
submit the following simple description of Authenticode security and why it
doesn't work.  It's very non-technical, and doesn't require any knowledge of
digital signatures or anything similar.  It's been tested on the local
ActiveX glee club, and seems to work:

Imagine a large, security-conscious office building.  At the main entrance
is a security desk where anyone entering the building is required to present
some form of ID like a drivers license, and sign in.  If you don't have your
ID, the security guards have the option of turning you away.  Once you've
signed in for the first time, you're allowed free run of the building.  You
can take anything you want into and out of the building and roam the
building at will; as long as you flash your drivers license at the security
guard, no one ever checks anything else.

One day a huge explosion rocks the building, destroying most of it and
killing a great many people.  There is no evidence left after the explosion
which can be used to find out exactly what happened.

Scenario 1 (less likely):

  The security guards have logs of everyone who entered, a total of nearly
  3000 people in the last few months (remember that there is *no* other
  evidence).  How are these logs going to help pinpoint who caused the
  explosion?

Scenario 2 (more likely):

  The logs were destroyed during the explosion along with everything else.
  How do you find out who caused the explosion?

I think the parallels with ActiveX and Authenticode are obvious.
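
The analogy can be put in code form: a valid signature answers *who*, and
says nothing about *what*.  A sketch in modern Python, using an HMAC as a
stand-in for Authenticode's public-key signatures (the key and payloads are
invented for illustration):

```python
import hashlib
import hmac

VENDOR_KEY = b"demo key, stands in for a CA-backed identity"

def sign(code: bytes) -> bytes:
    """Produce a signature over the code (HMAC as a stand-in)."""
    return hmac.new(VENDOR_KEY, code, hashlib.sha256).digest()

def signature_ok(code: bytes, sig: bytes) -> bool:
    """Verify the signature.  This authenticates the signer -- nothing more."""
    return hmac.compare_digest(sign(code), sig)

benign    = b"draw_window()"
malicious = b"format_disk()"

# The check passes equally well for both payloads: verification confirms
# origin, not the safety of what was signed.
assert signature_ok(benign, sign(benign))
assert signature_ok(malicious, sign(malicious))
```

Like the building's sign-in log, the signature only matters after the
damage is done, and only if the evidence survives.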


The real goal of Authenticode

Mark Seecof <>
Tue, 11 Mar 1997 22:24:01 -0800
In RISKS-18.85 Bob Atkinson gave us quite a bit of insight into the thinking
behind the design of MS-Authenticode.  In RISKS-18.86 a number of experts
analyzed and/or criticized Authenticode from a technical point of view.
While I learned quite a bit from both issues of RISKS, I think one very
important factor which may have motivated Microsoft to ship Authenticode was
not aired.

I suspect that Authenticode may serve as a competitive weapon for Microsoft
in ways which MS has not chosen to discuss.  Below I will suggest a simple
observational test for my hypothesis.

As all experts have pointed out, Authenticode does not even try to protect
the end-user from malicious code.  All it can do is identify the source of
some code with a degree of reliability heretofore unavailable (I ignore any
question of bugs in Authenticode as implemented).

Now, Microsoft does not, generally speaking, purvey malicious code.  Sure,
it ships a lot of bugs and occasionally a virus, but MS is basically
trustworthy.  Certainly all the users of Windoze and MS-* trust MS, and
that's perfectly reasonable.  For this reason, users who download code
signed *by Microsoft* can use it with reasonable confidence.

Users will have less confidence in code signed by others.  Even if those
others are well-known and trusted vendors (e.g., Borland), users may not be
able to verify their signatures.  Authenticode ships with verification for
MS' signatures.

Imagine the user who calls (at MS' per-incident rate of $95 paid in advance)
for help:  "Windows bombs every time I press the START button!"

"Have you been exploring(tm) the Internet?"  "Yes.  For weeks."

"Did you run any software from the Internet?"  "Yes.  Lots."

"Was it all signed by Microsoft?"  "No."

"Well, I'm afraid you'll have to call the vendors who supplied any software
you got from the Internet.  If they can't help you, though, you should try
reinstalling Windows.  Good luck!  [Click.]"

See, it doesn't matter who signed the software if it wasn't Microsoft.  Your
warranty is void.

Therefore it does not matter if Authenticode helps anyone to avoid malware
from arbitrary suppliers or not (the technical reasons why it won't have
already been aired).

MS states clearly (and Atkinson confirms) that MS expects end-users to take
responsibility for bad results--since they "trusted" someone they oughtn't
have.  This is a good position for Microsoft, which refuses categorically to
improve the security and stability of its OS software (or even to document
it thoroughly--some people think this is so that competitors in the
application arena will have more trouble shipping compatible software).  Why
should MS take bad press for OS holes when it can blame everything on
careless users?

Authenticode exists chiefly to authenticate *Microsoft software* and thereby
aid Microsoft marketing.  Whenever Authenticode pops up to warn a user that
he is about to risk trashing his PC by installing something that (gasp) is
*not from Microsoft* that user gets a little stab in the psyche: code which
is *not from Microsoft* is *dangerous*.

Now, I did promise a test.  If MS announces any sort of co-branding or other
arrangement under which, for a fee (or for early product intelligence), MS
will sign *other people's* code, you will know that my hypothesis is
correct.  (It might be correct anyway; I'm just offering one possible test.)
I would regard any offer by Microsoft to validate signatures from other
vendors through MS' networks as suggestive though not conclusive proof--MS
could get only smaller fees and less valuable intelligence from such a
service, and offering it would reduce their marketing benefits.

Mark Seecof

  [I have selected just a few recent representative contributions on
  this topic, in the hopes of not overwhelming our readers.  PGN]

CFP: DIMACS Workshop on Formal Verification of Security Protocols

Catherine A. Meadows <>
Wed, 12 Mar 1997 15:37:12 -0500 (EST)
DIMACS Workshop on Formal Verification of Security Protocols, 3-5 Sep 1997
Organizers: Hilarie Orman, DARPA and Catherine Meadows, Naval Research Lab.

As we come to rely more and more upon computer networks to perform vital
functions, the need for cryptographic protocols that can enforce a variety
of security properties has become more and more important.  Thus it is no
surprise that in recent years a number of new protocols have been proposed
for such applications as electronic credit card transactions, Web browsing,
and so forth.  Since it is notoriously difficult to design cryptographic
protocols correctly, this increased reliance on them to provide security has
become cause for some concern.  This is especially the case since many of
the new protocols are extremely complex.

In answer to these needs, research has been intensifying in the application
of formal methods to cryptographic protocol verification.  Recently this
work has matured enough so that it is starting to see application to
real-life protocols.  The goal of this workshop is to facilitate this
process by bringing together those who are involved in the design and
standardization of cryptographic protocols, and those who are developing and
using formal methods techniques for the verification of such protocols.  To
this end we plan to alternate papers with panels soliciting new paths for
research.  We are particularly interested in paper and panel proposals
addressing new protocols with respect to their formal and informal analysis.

Other topics of interest include, but are not limited to:

- Progress in belief logics
- Use of theorem provers and model checkers in verifying crypto protocols
- Interaction between protocols and cryptographic modes of operation
- Methods for unifying documentation and formal, verifiable specification
- Methods for incorporating formal methods into crypto protocol design
- Verification of cryptographic API systems
- Formal definition of correctness of a cryptographic protocol
- Arithmetic capability required for proofs of security for
  number-theoretic systems
- Formal definitions of cryptographic protocol requirements
- Design methodologies
- Emerging needs and new uses for cryptographic protocols
- Multiparty protocols, in particular design and verification methods

We encourage attendees to bring tools for demonstration.  Information about
availability of facilities for demonstration will be posted later.

To submit a paper to the workshop, submit a one or two page abstract, in
Postscript or ASCII to both organizers at the e-mail addresses given below by
June 16, 1997.  Authors will be notified of acceptance or rejection of
abstracts by July 1.  Full papers will be due by August 1.  Copies of papers
will be distributed at the workshop.  We also plan to publish a proceedings.

Participation in the workshop is *not* limited to those giving presentations.

If you would like to attend the workshop, a registration form is available
online, as is information on accommodations, travel arrangements, and the
workshop in general.

Hilarie Orman               Catherine Meadows
DARPA ITO               Naval Research Laboratory
3701 N. Fairfax Drive           Code 5543
Arlington VA 22203-1714         Washington, DC 20375
phone: (703)696-2234            phone: (202)-767-3490
