The RISKS Digest
Volume 19 Issue 03

Thursday, 3rd April 1997

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

New Zealand Police system
Richard A. O'Keefe
RISKS of disconnecting without first connecting
Bryan O'Sullivan
Re: UK TTP licensing proposals
Michael Bacon
Ross Anderson
Another Y2K Problem for Banks
Bruce Horrocks
All-ways green lights ... it's all in the timing
Richard Cook
Info on RISKS (comp.risks)

New Zealand Police system

Richard A. O'Keefe <ok@cs.rmit.edu.au>
2 Apr 1997 13:17:13 +1000
The following information is extracted from an article "Plans to boost
police in doubt", on the front page of the Wednesday April 2 1997 "New
Zealand Herald".  I've edited and re-arranged a bit to compress it.

  [In the run up to last year's MMP election] the NZ First law and order
  policy document, which was weaved (sic) into the coalition deal, promised
  500 extra police jobs on top of existing staff levels.  [This] pledge
  ... appeared to be a distant memory last night amid revelations that
  overworked police are hiring security guards to help fight crime.

Here's the computer-related bit:

  The Minister of Police, Jack Elder, ... said that 540 jobs had to be axed
  to fund the multimillion-dollar crime fighting computer system, Incis.
  ... He [was] requesting a report on the new computer system out of concern
  it has been dogged by delays.

  [A spokesman for the police association] said that 180 police officers
  throughout the country had already lost their jobs because of the _likely_
  efficiencies of the new crime-busting computer system.  A further 180 were
  expected to go from June.

There are a number of things that leap to the eye.

(1) The computer system is described as "crime fighting" and "crime-
    busting".  This is manifest nonsense, and I suspect that phrases
    like this may be at the heart of the problem.  It's _people_ that
    fight crime, and at present a computer system is at best a clerical
    assistant.  If you had the best possible police computer, but didn't
    have enough eyes and hands out there on the streets, you wouldn't be
    able to _do_ anything, and it sounds as though that's what's about
    to happen.

(2) Jobs are being cut *now* because of the "likely" efficiencies of a
    computer system that isn't yet in place.  Winding down the old
    system before the new one is fully operational is such a classic
    systems botch that I can't understand how it's happened _again_
    after so many examples in the past.

(3) The system is "dogged by delays" (a familiar story to RISKS readers).
    The criminals are not.  Surely they had _some_ contingency plans for
    fighting crime if the computer system was held up?

(4) The people in government in NZ now have been in power for several
    years (they split into several parties and then reformed as a
    coalition, but it's mostly the same _people_) so the delays should
    not have come as any kind of surprise.

(5) The promised _increase_ of 500 in police numbers will be a _decrease_
    of 40, if we're lucky.  Economic rationalism at work.

I hope someone with more detailed information will follow this up.

Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.


RISKS of disconnecting without first connecting

"Bryan O'Sullivan" <bos@serpentine.com>
Tue, 1 Apr 1997 18:25:04 -0800 (PST)
At my workplace, a number of engineers make use of dialup ISDN connections
to work from home.  Since ISDN is an expensive service to run on a large
scale, even in the garden of technological delights that is Silicon Valley,
provision of service is usually limited to engineers who have a clear need,
either in terms of responsibilities or seniority.

Some time during the past few days, our network service organisation
received a request to terminate service for a particular employee; let us
call the employee in question Jane Wilkinson.  The liaison responsible for
forwarding such requests to our local telco, unable to find a database entry
for Jane Wilkinson, decided that whoever had submitted the request must have
been referring to Jane Wilkins (not her real name), a coworker of mine who
has (or, rather, had) ISDN service.

Following a pattern that will be familiar to RISKS readers, our liaison sent
a disconnect order to the telco, without first stopping to check with either
the submitter of the order or my coworker.  As a result, my coworker is now
without ISDN service, and it will be at least two weeks before the various
bureaucracies within our company and the telco grind through her reconnect
order.  Meanwhile, her group is going through a software release cycle, and
she needs to monitor builds and regression tests from home in order to
ensure that their schedule doesn't slip.
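
The failure pattern is easy to reproduce in miniature.  Here is a
hypothetical sketch (the records and the matching rule are invented for
illustration, not drawn from our actual system) of a lookup that silently
falls back to a near-match before a destructive action:

  # Hypothetical: an exact-match lookup that quietly settles for the
  # closest-looking entry, then proceeds with a destructive action.
  subscribers = {"Jane Wilkins": "ISDN circuit 555-0137"}

  def find_subscriber(name):
      if name in subscribers:
          return name
      for candidate in subscribers:          # RISKY fallback: guess
          if candidate.split()[0] == name.split()[0]:
              return candidate
      return None

  target = find_subscriber("Jane Wilkinson")   # returns "Jane Wilkins"!
  if target is not None:
      print("Disconnecting", subscribers[target])  # wrong line goes down

A failed exact match on a destructive order should halt and query the
submitter, not guess.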

It is rather beyond my comprehension that our telco liaison did not think to
ask anyone about the obvious name mismatch before sending out a disconnect
order.  Please pardon me while I curl up under my bed.


Re: UK TTP licensing proposals (Anderson, RISKS-18.95)

"Michael Bacon" <Streaky_Bacon@msn.com>
Sun, 30 Mar 97 15:29:58 UT
Ross Anderson makes some interesting comments about the UK government's
consultation paper "Licensing of Trusted Third Parties for the Provision of
Encryption Services".  Unfortunately, he appears to make several leaps of
imagination and to draw a number of conclusions that are not justified by
the paper.

Now, before I go on, let me indicate my personal position on the
consultation paper and related matters.  I am in favour of an open
discussion.  I am against mandated encryption systems.  I am in favour of an
appropriate licensing regime.  I am against any restrictions on freedom of
choice of encryption mechanisms or key lengths.  I am in favour of
maintaining national security and the prevention and detection of crime.  I
am against unnecessary government interference in the privacy of the
individual or the need for confidentiality, integrity and availability of
corporate data.  Some of these views can be found in "Whispering in the
Wires" - the paper I gave at CompSec91.

If that seems an irreconcilable list, I don't believe it is; nor do I
believe that the consultation paper contains a set of proposals wholly
incompatible with achieving those ideals.

Returning to Prof Anderson's remarks, I offer the following comments of my
own, based upon his statements as they appeared in RISKS-18.95.

1.  The consultation paper was not 'sneaked out'.  Its publication was
publicly known, and it is available both on paper from the DTI and by
download from the net at http://dtiinfo1.dti.gov.uk/pubs/ .

2.  Whether the DTI server was down or not, I doubt it was 'convenient'.  Prof
Anderson seems to imply that this was in some way deliberate - perhaps to
prevent access and therefore comment.  If so, this is unjustified, and I
doubt the DTI will keep its server down until 30 May 1997 - the end of the
consultation period.  Additionally, this comment serves to set the tone for
his critique - tending towards paranoia.

3.  The paper addresses 'the provision of encryption services', not encryption
per se, nor the use of encryption.  Indeed, it specifically excludes the
latter (para 45).  Thus the proposals as they stand would not "ban PGP".  The
personal use of PGP on a user-to-user basis will still be allowed.

4.  Annex B to the proposals does not set any requirement for a national or
formal licensing or registration regime to be applied by any foreign country.
There is no requirement on other countries to use 'key escrow'.  There is no
requirement on a UK body to use only UK-licensed TTPs; foreign TTPs can be
used.  Thus Holland (sic) and Denmark will not be "cut out of the Superhighway
economy".

5.  Distributed, secure systems can still be built.  They can extend beyond
the UK, beyond Europe and into countries (subject to local laws on use of
encryption) that do not operate TTPs, or key escrow arrangements.  The
proposals do not seek to prevent this.  If secure interfaces with external
bodies are required and encryption services are needed for this (not always
the case), in the UK a licensed TTP must be used.  There should be no great
difficulty in operating both with an external, licensed TTP and an internal,
unlicensed TTP in the same organisation.

6.  Para 46 states explicitly that "there is ... no intention ... to access
private keys used only for integrity functions".  Thus Prof Anderson appears
incorrect in claiming that "there are no let-outs for services providing
only authenticity (presumably 'authentication') and non-repudiation
... services".

The thing to bear in mind when reviewing the consultation paper is that
there is a world of difference between enforced compliance and compliance
for practical reasons.

I have often stated that "I know I'm paranoid, but am I paranoid enough?",
and I believe in a healthy paranoia when considering government access to
sensitive data.  I also note a growing tendency by governments to greater
access and thus less privacy and confidentiality.  Nevertheless, in his
short review of the consultation paper, Prof Anderson goes too far.

Michael Bacon [Disclaimer...]


Re: UK TTP licensing proposals (Bacon, RISKS-19.02)

Ross Anderson <Ross.Anderson@cl.cam.ac.uk>
Tue, 01 Apr 1997 13:10:47 +0100
Michael Bacon's comments on my RISKS posting are not merely abusive but
highly inaccurate.  Curiously, they echo almost word for word the messages
that I (and others) have been getting from civil servants seeking to justify
the government's policy.

> The consultation paper was not 'sneaked out'.

Untrue.

This consultation paper, and its predecessor last year, and the report on
encryption services in the National Health Service, were handled in exactly
the same way:

(1)  The report is initially made available on a Friday afternoon
 or just before a holiday or at some other time when people are
 rushing to get away, and to a number of journalists who don't
 understand the significance of what's being announced.

(2)  Those people who are involved in the issues, such as myself,
 receive their copies only days later and often via a third party.
 In the case of the current DTI document, the responsible minister
 (Ian Taylor) promised me in a letter of the 16th December that `I
 will ensure that you receive a copy'. I still haven't got a paper
 copy - the electronic copy I posted was forwarded to me by Caspar
 Bowden at Scientists for Labour.

(3)  By the time people in the know realise what's going on, the
 whole matter is `yesterday's news' and the press aren't
 interested any more.

This is standard UK government `news management' - a practised routine that
swings into play whenever a minister wants to do something he knows to be
unpopular, stupid or shameful. I repeat and stand by my statement that the
paper was sneaked out.

> The personal use of PGP on a user-to-user basis will still be allowed.

The proposal as it stands would make it an offence for me to sign anyone's
key. It would prevent the University from signing the keys of faculty and
students, as we have done on demand for some years.  The only way to make it
legal would be to get a license, but the paper makes clear that licenses
will only be granted to organisations that provide a full range of services
and that are trusted by the Secretary of State.

> There is no requirement on a UK body to use only UK-licensed TTPs;
> foreign TTPs can be used.  Thus Holland (sic) and Denmark will not be
> "cut out of the Superhighway economy".

It will be illegal for these foreign TTPs to offer or provide their services
to UK nationals. I expect that the cipherpunks will still provide encryption
services to us poor Brits in defiance of the DTI, but I can't see Verisign
or Surety or Citibank or IBM or Microsoft - or anyone else with a lot to
lose - defying the UK government. We're still 4.5% of the world economy.

> Distributed, secure systems can still be built.  They can extend beyond
> the UK, beyond Europe and into countries (subject to local laws on use of
> encryption) that do not operate TTPs, or key escrow arrangements.  The
> proposals do not seek to prevent this.

But they will prevent sensible and economic secure systems engineering,
because they insist on the centralisation of trust.

Years ago, I worked for a bank that ripped out seven regional computer
centres and replaced them with a single mainframe. One of the problems they
hit was that the personnel were managed in seven regional head
offices. Maintaining robust communications between the regions and the RACF
administrators at the central site was hard. It took about 30 people to just
about cope - and the bank was a fairly small one (only 25,000 staff).

The lesson I learned there is that you need to manage trust where the
personnel management is done; otherwise the logistic overhead becomes
insupportable.

Yet in Britain's National Health Service, where the UK government is trying
to pilot its encryption ideas, they claim to believe that a single TTP (plus
one backup) with eight staff working 9 to 5 will be able to manage keys for
about a million people, who are managed in no fewer than 12,000 different
locations and who undergo about two million personnel changes each year.
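
The arithmetic alone makes the point.  A back-of-envelope sketch (the
working-time figures - 250 working days a year, 8-hour days - are my
assumptions, not from the NHS documents):

  # Back-of-envelope check of the proposed NHS TTP staffing.
  changes_per_year = 2_000_000    # personnel changes, figure from above
  staff = 8
  staff_hours = staff * 250 * 8   # assumed 250 working days, 8-hour days
  print(changes_per_year / staff_hours)   # => 125.0

That is 125 key-management actions per staff-hour, every working hour of
the year, with no allowance for verifying that a caller is who they claim
to be.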

We have pointed out again and again to the government that this is
engineering madness. The only people who know who's employed today at the
Trinity Street surgery are the people at the Trinity Street surgery. If they
have to phone up BT or EDS every time they hire a locum or an agency nurse
for the afternoon, then nothing will ever get done. Like the Russian army
at Tannenberg, they'll throw away the cipher system and send the traffic in
clear.

This may be what GCHQ actually wants to happen. However, then we need a
quite different agency to take charge of the defensive information warfare
aspects of UK policy.

> If secure interfaces with external bodies are required and encryption
> services are needed for this (not always the case), in the UK a
> licensed TTP must be used.

The voice of the regulator :-)

> There should be no great difficulty in operating both with an external,
> licensed TTP and an internal, unlicensed TTP in the same organisation.

But only large companies will be allowed to have a licensed TTP.
Small companies will have to buy the service in. However, interfacing
with an external TTP means that they have to be licensed, which they
can't be because they aren't big enough. So it looks like you will
either have to pay a licensed TTP to manage all your key material -
even your internal-use Kerberos keys - or else say goodbye to any
secure working with the outside world.

> Para 46 states explicitly that "there is ... no intention ... to access
> private keys used only for integrity functions".

However both RSA and DSA keys can be used for encryption as well as
signature. So if the law enforcement function is to be provided at
all, then RSA keys destined for use in signature must still be
escrowed. Observe the wording: `no intention ... to access private
keys used only for integrity functions' rather than `no intention
... to require the escrow of private keys used only for integrity
functions'.
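
The dual-use point is mechanical rather than rhetorical: one and the same
RSA private key can both issue signatures and decrypt traffic.  A minimal
sketch (in modern Python with the 'cryptography' package, purely as an
illustration - no such tooling existed in 1997):

  # One RSA key serving both roles: escrowing it "for integrity only"
  # still surrenders the confidentiality capability.
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa, padding

  key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

  # Used as a signing key ...
  sig = key.sign(b"signed message",
                 padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                             salt_length=padding.PSS.MAX_LENGTH),
                 hashes.SHA256())

  # ... and, with no change of key material, as a decryption key.
  oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                      algorithm=hashes.SHA256(), label=None)
  ct = key.public_key().encrypt(b"secret", oaep)
  assert key.decrypt(ct, oaep) == b"secret"

Whoever holds the escrowed `signature' key therefore also holds the means
to read traffic encrypted to it.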

Presumably the official intent is that such keys will only be touched
if there is evidence that they have been abused for confidentiality.
However, the police could claim falsely that such an abuse had taken
place, and the user would never find out; his signature could then be
forged by the police. So nonrepudiation is lost.

> Prof Anderson appears incorrect in claiming that "there are no
> let-outs for services providing only authenticity (presumably
> 'authentication') and non-repudiation ... services".

Not so - the report insists that a TTP should provide all services.
On the face of it, this means that Verisign won't be licensed (as
they don't provide a timestamping service) and neither will Surety
(as they provide only a timestamping service).

It is also of interest that the requirements for escrow include
access to keys at both ends - i.e., in both the sender's and
receiver's jurisdiction. This appears to mandate the use of the GCHQ
protocol (Kerberos would also do, but we can't export it from the
USA). The GCHQ protocol is a dreadful piece of engineering that
no-one in his right mind would use unless forced to (see my Eurocrypt
paper: http://www.cl.cam.ac.uk/ftp/users/rja14/euroclipper.ps.gz).

> The thing to bear in mind when reviewing the consultation paper is that
> there is a world of difference between enforced compliance and compliance
> for practical reasons.

This again is the official line, but the reality expressed in the DTI
document is that if I offer a licensable encryption service without a
license - such as signing a student's PGP key, or even signing a message
with the date and time in it (timestamping) - then I will be committing a
criminal offence.

This looks much more like `enforced compliance' than `compliance for
practical reasons'.

> I have often stated that "I know I'm paranoid, but am I paranoid enough?",
> and I believe in a healthy paranoia when considering government access to
> sensitive data.  I also note a growing tendency by governments to greater
> access and thus less privacy and confidentiality.  Nevertheless, in his
> short review of the consultation paper, Prof Anderson goes too far.

It's not an issue of paranoia but of fact. I have been involved for over two
years now in the safety and privacy of medical information in the UK. During
that period I have been lied to at least once by most of the officials
involved. I even forced an apology from a minister (John Horam) after he
lied in answering a parliamentary question I had caused to be asked.

Officials have tried using every trick they could think of to prevent
effective protection being made available for medical records.  Recently,
for example, when one pilot project used software that would have signed and
encrypted EDI messages for pathology and radiology from the hospital to
primary care physicians, the government demanded that the software be
changed so that the keys were generated centrally rather than by the
physicians themselves.

The latest trick is to rename key escrow as key recovery. A senior official
(Brian Molteno, director of the NHS's Information Management Centre) claimed
in a letter of the 18th February (which I just got a copy of today) that if
the health service encryption report had advocated key escrow, it would not
have been accepted by ministers.  But the letter goes on to explain that two
of the three encryption pilots are `looking at the procedures required to
recover from lost or damaged keys'.

I am not against key escrow per se. If you read the BMA's security policy,
which I wrote (*), you will see we recommend that partners in a general
medical practice should share a single decryption key for their general
clinical traffic or, equivalently, have access to each others' decryption
keys. If a patient turns up requiring emergency treatment while his own
doctor is absent, any of the other doctors must have access if need be to
any relevant traffic in the mailbox, such as recent pathology results.

Such an arrangement merely re-implements the current paper procedures in an
electronic form. It does not materially affect the trust relationships
between professionals and patients.

However the DTI proposal, for a small number of large centralised escrow
agents that would give surreptitious access to agencies like GCHQ, would
have an extremely grave effect on these trust relationships.

No doubt it would be convenient for the spooks if, when seeking access to
medical records, they could simply `dial-a-wiretap' rather than send a
special branch officer to the surgery with the paperwork, as they do at
present. But there is no way that doctors will accept surreptitious access
to personal health information, and this has been made clear on numerous
occasions to the Department of Health.  (The recent announcement that GCHQ
will assist in social security fraud investigation will make matters worse.)

I expect that lawyers, patent agents, accountants and other professionals
will take a similar line once the issues are brought before them. Lawyers in
particular will not relish the loss of their notary business,

Ross


Another Y2K Problem for Banks

Bruce Horrocks <Bruce.Horrocks@gecm.com>
Wed, 02 Apr 1997 12:09:38 -0800
I can foresee another potential Y2K related problem that could spell bad
news for the banks:

By about Nov/Dec 1999 the fear of being stranded without cash because of
failing ATMs and credit cards will induce many people to draw out as much
cash as possible in order to tide them over until things settle down. The
danger here is that the ATM network could become overloaded and therefore
crash earlier than it might or might not have done...

... and if significant numbers try to draw cash over the counter then there
might even be a shortage of cash itself, prompting a run on the banks.

I recommend that the Treasury print a few extra bills in 1999, just in case.

Bruce Horrocks, EASAMS Limited, Waters Edge, Riverside Way, Watchmoor Park,
Camberley, Surrey, GU15 3PD, UK  +44 1276 693043  Bruce.Horrocks@gecm.com


All-ways green lights ... it's all in the timing (RISKS-19.01)

Richard Cook <ri-cook@uchicago.edu>
Tue, 1 Apr 1997 10:10:33 -0600
In response to RISKS postings regarding street traffic control signals
failing in ways that led to simultaneous green lights in both directions,
Mr. Summit wrote that he assumed that relays were used as a safety device to
prevent all-ways green traffic lights from occurring at intersections...

Unfortunately, relays themselves fail in various ways and are probably less
reliable than solid-state components in most industrial applications of this
sort. Such 'relay logic' is not demonstrably more reliable than software
logic but is simply more easily understood and explained. Enormously complex
relay driven systems are also fraught with potential failure, especially
where timing arrangements are concerned.

Cursory examination will demonstrate that for most systems there is some
value in a delay inserted between the yellow-to-red transition in one
direction and the red-to-green transition in the other. All these
relationships involve time and its measurement. Such timing problems were
integrally involved in the train vs bus crash in the Chicago suburbs a few
years ago (see NTSB report Highway/Railroad Accident Report
NTSB/HAR-96/02). Indeed, the entire system function is related to
timing. And there are deeper messages, too.
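
Before turning to those, the clearance-interval point can be made concrete.
A toy sketch (my own construction, not modeled on any real controller) of a
two-phase cycle with an all-red interval separating the conflicting greens:

  # Toy two-phase signal cycle.  The ALL-RED entries are the clearance
  # interval: both approaches held at red while the junction empties.
  import time

  CYCLE = [("NS green  / EW red",    25),
           ("NS yellow / EW red",     4),
           ("ALL RED",                2),   # clearance before conflict
           ("NS red    / EW green",  25),
           ("NS red    / EW yellow",  4),
           ("ALL RED",                2)]   # clearance before conflict

  for _ in range(3):               # run a few cycles
      for state, seconds in CYCLE:
          print(state)
          time.sleep(seconds)      # a skewed or stuck timer here is
                                   # exactly the hazard described above

Remove or mis-time the ALL RED entries and a conflicting green stops being
a freak failure and becomes part of the design.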

Complex systems generally are derived from simpler systems in order to
accommodate multiple goals. In most cases, the tradeoffs between complexity
and failure required to achieve these goals seem reasonable or even prudent:
using new technology offers us the possibility of improving system
performance (e.g. allowing light synchronization in a city to ease the burden
of rush hour traffic) and simultaneously improving 'safety' through the use
of more reliable components, mass production, etc. In a sense, these goals
are indeed achieved.

But adopting new technology shifts the nature of failure away from frequent
but low consequence events and towards rare but high consequence ones. This
shift is significant because most of us will look at the new system and see
that it has reduced the occurrence of the previously well known, well
understood, and frequent failure. But the cost of this new technology is the
production of new forms of failure, ones that are rare but generally
catastrophic. The exact nature of these is difficult to predict and even
more difficult to defend against.

For political reasons, new technology is always described as improving
safety and in one sense it does. But new technology is not used simply to do
the same old things more safely but rather to do new things (or old things
in new, more efficient ways) in new ways. There are numerous examples:
aviation technology, especially proposed changes to air traffic control,
digital communication networks, and a host of others. Because it is
important to make things 'safer' but also 'better', designers inevitably are
placed in the position of trading off performance against the rate of
catastrophic failure. When the rate of catastrophic failure is low
(e.g. commercial aviation) it is exceptionally hard to keep the tradeoffs
informed - the accident rate is too low to provide real information about
the effects of change on system reliability. But it is still very clear that
we cannot add substantially to the cost of new systems without also adding
performance, and we fool ourselves by claiming that the new systems are both
better and safer than their predecessors. Nature, to paraphrase Feynman, is
not fooled, however.

This is not to say that there is nothing good about new technology.  These
new systems are, on the whole, preferable to their predecessors.  The
problem is not the fact that these systems sometimes fail
dramatically. Rather the problem is the social, organizational, and
institutional need to characterize these failures as arising from errors
made by system operators.

In nearly all cases of large, complex system failure, the system is regarded
as having failed because of 'human error'. This is a convenient explanation
for the airplane crashes, nuclear plant mishaps, and medical accidents
because it absolves the designers and creators of systems of blame and
lodges the fault narrowly in individuals. If individual operators are the
source of failure then we only need to get rid of these bad operators in
order to have successful, safe systems.

Especially in systems where the potential for catastrophic failure is
recognized, operators are stationed at the final output of the system and
charged with preventing catastrophe. After failures we are able to find
fault with these individuals, and nearly always do.  Immediately after the
train vs. bus crash there was a flurry of speculation in the press about the
bus driver: she was an irregular, she was inexperienced, she had been taking
medicines and was impaired, she had recently had a death in the family and
was inattentive or stressed. Only after a huge effort by the NTSB did it
become clear that this accident was waiting to happen - _indeed had happened
before_ - because the timing relationships for the lights and
the train indicators and the stopping areas did not permit the bus to
escape. But the effort needed to uncover these relationships was extreme and
nothing like it is expended on most of the failures that occur with the
complex systems of everyday life.

It is convenient to have operators to blame. At the risk of being
inflammatory, we are perpetuating the Soviet system. After disasters in the
Soviet Union (e.g. failure to meet the goals of a five year plan in Stalin's
day) the failure would always be attributed to the 'bad' individuals whose
sabotage was responsible. After all, the system was perfect and so failure
must be derived from human frailty. Of course, nothing changed with the
literal execution of these individuals - it wasn't bad actors but bad system
that generated the failure. A look at the failures of complex systems in our
own perfect economy shows quite a similar pattern. The difference is that it
is modern technology that is the perfected thing and human operator frailty
that generates the failures.

This is not, unfortunately, the sort of problem that a few relays will fix.

Richard I. Cook, MD, Dept of Anesthesia and Critical Care, University of
Chicago; 5841 S. Maryland Ave., MC 4028; Chicago, IL 60637 1+773-702-5306

  [There are many pending messages on this topic.  Later?
  Henry G. Baker <hbaker@netcom.com> offered us Kermit's
    "It isn't easy being green."   PGN]
