The RISKS Digest
Volume 18 Issue 66

Thursday, 12th December 1996

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Instant money
Debora Weber-Wulff
Digital Equipment Corp loses repetitive-strain injury suit
PGN
RISKS of using Adobe Acrobat Reader under Unix
Peter T. Breuer
The risk of system administrators not understanding enough
Matt Barrie
Denver airport baggage system simulations
Luis Fernandes
A visit from the Goon Squad: computer evidence
Nick Brown
Discussion of `Computer errors' causes hernia
Peter Ladkin
re: "Plane crashes" — corrections
Martin Minow
Re: Aviation Accident Rates
Peter Ladkin
Re: Don't touch this switch!
Bear Giles
Harlan Rosenthal
4th ACM Conference on Computer and Communications Security
Mike Reiter
Info on RISKS (comp.risks)

Instant money

Debora Weber-Wulff <weberwu@tfh-berlin.de>
10 Dec 1996 10:51:26 GMT
[from *Stars & Stripes*, 7 Dec 1996, a daily newspaper for military and such
overseas.  Excerpted by DWW.]

A soldier is under investigation after allegedly transferring more than a
half-million dollars into his checking account from his savings account. The
problem was, it wasn't his money.  On 29 Nov 1996, he unsuccessfully
attempted to withdraw money from his overdrawn accounts. Then he conducted a
transfer of $600,000.21 from his savings to his checking account at an ATM
(banks were closed for the day after Thanksgiving). Because of a defect in
the ATM computer system, the transfer was completed without verification of
whether funds were available in the soldier's account.  Over the next few
days, the soldier withdrew $300 in cash and deposited $30,005
into a newly opened credit union savings/money market account. On 2 Dec he
attempted to wire $60,000.15 to California. Officials noticed the error and
stopped the transaction.  The soldier was apprehended, but has not been
charged and is not in custody.
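
The missing check, in miniature: a transfer should be refused when the
source account lacks the funds. A minimal sketch in Python (illustrative
only; the account names and balances are hypothetical, not the bank's
actual system):

    class InsufficientFunds(Exception):
        pass

    def transfer(accounts, src, dst, amount):
        # The verification the ATM reportedly skipped: confirm the funds
        # actually exist before posting the transfer.
        if accounts[src] < amount:
            raise InsufficientFunds(
                "%s holds %.2f, needs %.2f" % (src, accounts[src], amount))
        accounts[src] -= amount
        accounts[dst] += amount

    accounts = {"savings": 12.34, "checking": -5.00}  # hypothetical balances
    transfer(accounts, "savings", "checking", 600000.21)  # raises InsufficientFunds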

A spokeswoman for the bank said the incident was the result of a one-time
glitch in the bank's computer system.  An anonymous customer service
representative said there have been problems with the bank's computer
accounting system since 8 Nov 1996 - the day the data from Bank A was
converted to Bank B [the military awards banking contracts for a limited
time. They have problems at every change-over, it seems.] "From that point
on, we've just been trying to fix messes," the bank employee said, noting
that the problems range from lost data to false duplication.

[I wonder if the "uneven" amounts contributed to the mess, or if it
was a foul-up in general. - dww]

Debora Weber-Wulff Technische Fachhochschule Berlin, Luxemburger Str. 10,
13353 Berlin GERMANY weberwu@tfh-berlin.de <http://www.tfh-berlin.de/~weberwu/>


Digital Equipment Corp loses repetitive-strain injury suit

"Peter G. Neumann" <neumann@csl.sri.com>
Thu, 12 Dec 1996 10:14:46 PST
A Brooklyn federal jury awarded nearly $6 million to three women who had
sustained arm, wrist, and hand injuries apparently resulting from use of a
Digital LK201 keyboard: $5.3m to Patricia Geressy, $306,000 to Jill Jackson,
and $278,000 to Jeanette Rotolo.  This is the first such case in which the
computer manufacturer lost.  Earlier cases involved Compaq and IBM.  Digital
will appeal.  [Source: Diana B. Henriques in *The New York Times*, 10 Dec
1996, C1, also in *San Francisco Chronicle* same day, A1, PGN Abstracting.]

  [Many years ago my keyboard had a very rough touch that caused severe
  pain in the little finger of my left hand (the *emacs* finger).  I had two
  foot pedals installed for the control and meta keys, and the problem went
  away.  It also helped my organ-playing technique, speaking of manual
  systems.  But, clearly it is good advice that you should not sit at your
  keyboard for long unbroken sessions.  Unfortunately, I don't take my own
  advice enough.]


RISKS of using Adobe Acrobat Reader under Unix

"Peter T. Breuer" <ptb@it.uc3m.es>
Wed, 11 Dec 1996 20:02:03 GMT
The latest (free) beta release for Linux of the Adobe Acrobat .pdf reader
contains an interesting risk-enhancing feature, according to its
documentation.

from MapTypes.pdf:

                 If the file cannot be opened as a PDF file, the viewer
 examines the UNIX file permissions.  If the file can be executed by the
                                      *********************************
 user who launched Acrobat Reader or Exchange, the viewer attempts to
 *******************************************************************
 launch the file as an application.  If the file cannot be launched as an
 *********************************
 application, the viewer returns an error message.

The risk - of death by execution - is obvious. I have to admit I haven't
had time to try it. I suspect I am the only person in some radius to have
time to read the documentation! I report this to you as I find it.
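
The documented dispatch logic, reconstructed as a sketch (Python here,
purely illustrative; this is one reading of the documentation, not
Adobe's actual code):

    import os
    import subprocess
    import sys

    def open_document(path):
        # Reconstruction of the dispatch described in MapTypes.pdf.
        with open(path, "rb") as f:
            if f.read(5) == b"%PDF-":
                print("rendering", path, "as a PDF")  # the normal case
                return
        if os.access(path, os.X_OK):
            # The risky branch: an executable file is simply launched,
            # with the privileges of whoever started the viewer.
            subprocess.call([os.path.abspath(path)])
            return
        print("error: cannot open", path, file=sys.stderr)

So a mailed "document" named report.pdf that is really an executable
script would be run, not rejected.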

Peter T. Breuer, Universidad Carlos III de Madrid, Butarque 15, E-28911
Leganes SPAIN +34 1 624 9947 ptb@it.uc3m.es <http://www.it.uc3m.es/~ptb>


The risk of system administrators not understanding enough

Matt Barrie <Matt_Barrie@oti.com> (SYD)
Wed, 11 Dec 1996 00:02:10 -0500
I've noticed increasingly that a lot of system administrators place
incredible reliance on "out of the box" security products (firewalls, etc.).

These products tend to provide a fair degree of auditing, and it seems that
quite a few administrators don't understand what it all really means.  I get
the impression that the response every time a message is logged is a typical
"Quick, call Chuck, we've got a situation down here!"  (Why would these
security products log messages if there weren't a threat?)

The first example: a friend in Luxemburg e-mailed me saying that he had a
new job and that this was his e-mail address. He doesn't know much about
computers, and asked how I could chat to him. I asked if he was on a Unix
machine; if he had chat, we could use that. He said he thought it was Unix
but wasn't sure, so as a quick check I telnetted to his site and pressed
ctrl-D, since most machines announce their OS in the login banner. The
machine was TCP-wrapped, but answered SunOS anyway. I could have found out
another way (nslookup, etc.), but this was quick and simple. It turned out
that the machine was the helpdesk for a major bank of sorts. The connection
spawned a return finger to @mymachine, which answers for a group of local
machines; the reply told them that five people were currently logged on to
a bunch of local machines.
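
That quick check, as a sketch (Python; the host name is hypothetical,
and this is an illustration rather than the actual session): connect to
the telnet port and read whatever banner the machine volunteers.

    import socket

    def grab_banner(host, port=23, timeout=5.0):
        # Many machines announce their OS in the telnet login banner; a
        # TCP wrapper may log the probe but often prints a banner anyway.
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            try:
                return s.recv(1024).decode("ascii", "replace")
            except socket.timeout:
                return ""

    print(grab_banner("helpdesk.example.lu"))  # hypothetical host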

The log should have been read as "You had a connection attempt on port 23,
which was denied; it _may_ have come from any of these five users."

Apparently the bank admins, who had supposedly forgotten the root password
the day before, read the log as something like "you have five people
simultaneously breaking in to your machine from a variety of places:
machine1.au, machine2.au, ...". The admins freaked and ran around the bank
looking for Australians working there, waved the logs in their faces, and
asked whether they knew any of these people, and why they would be using
all these accounts to try to break in. My friend called frantically from
Belgium at 1 a.m. and had to send me the log so I could explain what it
meant. Sheesh.

Another quick example: I am the technical administrator for xxxx.com. We
all know that spamming is on the rise, and it seems xxxx.com is an ideal
choice as a fake source domain, being a fairly generic-sounding name. The
risk is that admins who wish to stop the spamming bar the domain, or send
complaints to postmaster@xxxx.com asking for the spamming to stop, when
anyone who read the headers properly would see that the mail surely didn't
come from the real xxxx.com. I've even had TOSEmail at AOL send complaints;
aren't these guys meant to be experienced admins? Surely AOL receives more
than its fair share of spamming, so these people should know what they're
about by now. I know places like cybervalue.net are having the same sort
of problems.
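
Reading such headers properly, in sketch form (Python; the message below
is invented for illustration): the From: line is trivially forged, while
the Received: trail records the relays the message actually traversed.

    import email

    raw = """Received: from mx.example-isp.net by mailhost.example.org; ...
    Received: from dialup42.spam-haven.example by mx.example-isp.net; ...
    From: Make.Money.Fast@xxxx.com
    Subject: $$$

    ..."""
    msg = email.message_from_string(raw)
    print("claimed sender:", msg["From"])
    for hop in msg.get_all("Received"):
        print("actual hop:   ", hop)  # no mention of the real xxxx.com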

matt


Denver airport baggage system simulations

luis fernandes <elf@ee.ryerson.ca>
Wed, 11 Dec 1996 17:21:02 -0500
The January 1997 issue of "Dr. Dobbs Journal" has an article in which the
author reports that his software simulation of the automatic baggage
handling system of the Denver airport mimicked the real-life situation.

In his conclusion, he notes that the consultants did perform a similar
simulation and had recommended against the installation of the system
currently in place. The city, however, overruled the consultants' report
(the contractors who were building the system never did see the report) and
gave the go-ahead.
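
The article's model is not reproduced here, but the flavor of such a
simulation is easy to convey. A toy sketch (Python; every number below
is invented): when bags arrive at a merge point faster than carts can
clear them, the backlog grows without bound, the kind of capacity
problem a simulation can expose before anything is built.

    import random

    random.seed(1)
    ARRIVAL_PROB = 0.6  # chance a bag reaches the merge point each tick
    SERVICE_PROB = 0.5  # chance an empty cart is available each tick

    backlog = 0
    for tick in range(10000):
        if random.random() < ARRIVAL_PROB:
            backlog += 1
        if backlog and random.random() < SERVICE_PROB:
            backlog -= 1
    print("bags waiting after 10000 ticks:", backlog)  # ~1000 and climbing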


A visit from the Goon Squad: computer evidence

"Nick BROWN" <Nick.BROWN@DCT.coe.fr> (Tel +33 388412674)
11 Dec 1996 14:49:12 +0000
A field service engineer for a major computer company relates the following
recent true story:

One day, he was working in a government office, when the police arrived with
a search warrant.  They asked, "Which desk does Mr. X work at?"  On being
shown the desk, they proceeded to remove the Macintosh from the desk and
took it away with them.  Mr. X was not there to protest, having been
incarcerated that morning on charges of forging identity papers using
PageMaker, a Macintosh, and a high-quality color laser printer.

As the police left, everyone burst out laughing, not least at the thought
that almost all the tools Mr. X was using (software and data files stored on
the server downstairs, networked printer in the next-door office) were still
sitting in the office.  I don't know if anything incriminating was ever
found on "his" Macintosh; perhaps he'll be found guilty of nothing more
serious than possession of an unlicensed copy of AfterDark.

The RISKS?  Well, apparently it's getting worthwhile (but not _perfectly_
so!) to forge official documents at work, on company time, using
taxpayer-funded high-quality printing tools; and secondly, of course, law
enforcement has a way to go in its understanding of how technology works,
but then I think we might already have suspected that.

Nick Brown, Strasbourg, France


Discussion of `Computer errors' causes hernia (Minow, RISKS-18.65)

Peter Ladkin <ladkin@TechFak.Uni-Bielefeld.DE>
Wed, 11 Dec 1996 20:42:15 +0100
In a piece entitled "Computer errors cause several plane crashes", Martin
Minow summarised in RISKS-18.65 an article from Aftonbladet suggesting the
above.  There ensued a lengthy discussion between Martin, myself, and Danny
Kohn in Sweden, about the null-, whole-, or half-truth of most of the
assertions.  Of the 11 assertional sentences contained in Martin's quote, I
found 10 misleading or false, including, especially, the title.  PGN
suggested I *briefly* summarise the discussion.  Faint hope.

I would argue:
 (a) against sensationalism and for precision;
 (b) against the common habit of picking on one of many contributory causal
     factors and calling it *the cause*;
 (c) for a classification of problems into computer-per-se/requirements/HCI;
 (d) against the assumption that even if computer behavior is directly
     causally involved, the *type* of error must be a computer error
     (I myself was disabused of this notion by Mary Shafer).

Martin would agree with me on (a) and (b), worries that the classification
proposed in (c) may lead one to miss some important aspects of system
failure, and is prepared to consider (d): he was struck by the similarity
many of the (pre-heavy-automation) accidents discussed by Perrow (`Normal
Accidents') bear to the (potentially computer-related) accidents we
discussed.  Martin went to some trouble to distance himself from the views
expressed in the article.

Martin would also like to point out that he has absolutely no training or
competence in aircraft-safety and thus cannot judge the accuracy of the
article, but definitely distances himself from the sensationalistic
"death-trap" tone of the writing.

On the other hand, Danny thinks it is `fully correct', and cites Reason's
book on `Human Error'.  He considered some military aviation accidents, such
as the two accidents with the Gripen fighter (I agree with the official
position that these were control-system design problems, not computer
problems per se). I also separate military aviation concerns from commercial
aviation.  Two cheers for diversity, I guess. But just in case the gentle
reader doesn't believe in diversity, here's my view on journalistic
sloppiness.

The Aftonbladet article suggested that new-technology airplanes are
death-traps (the statistics actually show that they have a far better
accident rate per departure than older-technology airplanes or even
wide-bodies); claimed that `technicians and engineers' are the people who
test-fly the aircraft (patently absurd. In any case, pilots are very
involved in the cockpit design of all new transports); and also seemed to be
restricting its comments to the autopilot ("The critical point is when the
computer system should be disconnected" - you'd hardly want to disconnect
your control system, or your navigation information system, or your air data
system, even were it to be possible).

The article apparently cited four accidents to justify its claim: AA,
Cali 1995; Lauda Air, Thailand, 1991; Gottröra (SE), 1991; Svalbard
(NO), 1996. I consider these briefly. (Thanks to Martin for translations.
He warns they're a little dog-eared, but I don't think this affects my points.)

Cali: "The accident investigation concluded that the contact between the
       pilot and computer system broke down [failed]."

None of the conclusions said any such thing. Readers may check the report
for themselves in `Computer-Related Incidents with Commercial Airplanes' at
       http://www.rvs.uni-bielefeld.de
(This is not to deny the serious HCI issues that arose.)

Bangkok: "A computer error that caused one of the plane's engines to
         reverse could have caused the accident, concluded the investigation."

It is not possible for a `computer error' alone to cause an in-flight thrust
reverse on a B767. As I understand it, there is a hydraulic interlock which
prevents this. It was determined in tests that there exists a failure mode
of this interlock on high-time engines. No potential explanation other than
(interlock failure *plus* pilot-uncommanded in-flight thrust-reverse
request) has been found.  No probable cause was determined.

Gottro"ra: "Both motors broke up. One of the better supported
        theories about the cause was problems in the computer system. The
        throttle [?] in one of the motors stopped working. The computer
        ran the other motor at full power and destroyed it completely."

It is my understanding that no probable cause was determined.  Danny pointed
out that this is a hot example amongst pilots in Sweden at the
moment. However, I would caution against `theory' which supposes computers
are the modern form of gremlins. A synopsis of a PhD thesis on this accident
may be found at
     http://info.admin.kth.se/info/pressmeddelanden/1995/0517-1.html
This Swedish press-release was an invitation to Dr. Martensson's
doctoral thesis defense.

Svalbard: "[..] a Russian passenger plane flew straight into a mountain
        [..] The accident cause is unclear, but there is a suspicion
        that the computer system broke."

They said it themselves: the cause is unclear.

Does anyone actually know of *any* accident to a commercial airplane
in which some sort of `computer error' is cited as the probable cause?
Example and direct quotes, please. Caveat: `probable cause' and
`contributing factor' are semi-technical terms.

Peter Ladkin


re: "Plane crashes" — corrections (RISKS-18.65)

Martin Minow <minow@apple.com>
Wed, 11 Dec 1996 21:47:57 -0800
There are two small errors that should be corrected in my item in
RISKS-18.65:

1. The correct URL for the Aftonbladet article is
   http://www.aftonbladet.se/nyheter/dec/06/flyg.html
   "aftonbladed" -> "aftonbladet" .  As always, note that
   newspaper archives may not be kept for a long time.

2. The article title, "Computer errors cause several plane crashes" would
   be better translated as "computer errors a cause of several plane
   crashes." My translation made a sensationalistic Swedish article
   rather more sensationalistic than necessary.

There is an important "risk" here: that technically sophisticated readers
give excessive weight to specific terms appearing in the popular press.
Journalists often use "terms of art" without the precision and accuracy
expected of scientific writing. In this case, readers must also
understand that terms are translated from English to Swedish to
popular-press Swedish and back to English, and have probably lost much of
the subtlety and gradation of the original.

Martin Minow, minow@apple.com


Re: Aviation Accident Rates (Kabay, RISKS-18.63)

Peter Ladkin <ladkin@TechFak.Uni-Bielefeld.DE>
Wed, 27 Nov 1996 10:53:11 +0100
Mitch Kabay quotes Reuters quoting Michael Bagshaw, head of aviation
medical services at British Airways, in an address to the Royal Society:

>    "If we examine the accident rate by type of aircraft, it
>    can be seen that although the overall trend is down ... new
>    highly-automated types have a relatively higher accident rate."

Such statements need careful qualification. Boeing produces a statistical
report on worldwide aviation accidents each year, from 1959-[present year].
My 1995 copy (courtesy of Pete Mellor) divides fatal accidents worldwide
into three classes: Second generation (B727, Trident, VC-10, BAC 111,
DC9, B737-100/200, F28), Widebody (B747-100/200/300, DC10, L1011, A300),
and `New' (MD80/90, MD11, B737-300/400/500, B747-400, B757, B767, B777,
A300-600, A310, A320/A321, A330, A340, BAe146, F100). Since 1983, the
annual rate of fatal accidents per million departures for the new types
has been lower in every year than that of either the second-generation
or widebody classes, except for 1988, 1994, and 1995, when widebodies
had zero rates and new types non-zero rates.
The new types are all `glass cockpit' aircraft of the sort to which Dr.
Bagshaw's comments are relevant.

So what can Dr. Bagshaw's comment as reported by Reuters be referring
to? Maybe a different measurement. One that may seem prima facie
reasonable is passenger deaths per passenger-mile. However, this
measurement can be misleading, because 91.2% of fatal accidents occur
in the takeoff or landing phases of flight, and only 8.2% in
cruise. The widebodies spend most of their miles in cruise, and the
new types are mostly short-haul aircraft which may only spend an hour
to two hours in flight - similar to the second generation, whose
accident statistics overall are worse. The different characteristics
of airplane use could be reflected in the year-to-year spread of the
figures: the annual rates, rounded to whole numbers, for widebodies from
1985-95 are 1, 0, 1, 0, 3, 1, 2, 5, 1, 0, 0. Those for new types,
rounded, are 0, 0, then all 1's. Those for second generation are 1, 1,
1, then higher than the new types' but less than 2, except for 1994 (2).
But these statistics
do not reflect the vast differences in `safety cultures' across the
world's airlines. These differences can be large enough that the
US prohibits aircraft operated by airlines in certain countries from
flying into the US - and this system may be extended to Europe.
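
The divergence between measures is easy to see with a toy calculation
(Python; the fleet figures are invented for illustration and are not
Boeing's data): the two denominators scale so differently that the same
fleets can rank in opposite orders under the two measures.

    # Invented figures, for illustration only.
    fleets = {
        "new short-haul": dict(fatal=2, departures_m=4.0,
                               deaths=150, pax_miles_b=20.0),
        "widebody":       dict(fatal=2, departures_m=1.0,
                               deaths=300, pax_miles_b=400.0),
    }
    for name, f in fleets.items():
        per_dep = f["fatal"] / f["departures_m"]   # per million departures
        per_mile = f["deaths"] / f["pax_miles_b"]  # per 10^9 passenger-miles
        print("%-14s %4.2f fatal/10^6 departures, %5.2f deaths/10^9 pax-miles"
              % (name, per_dep, per_mile))

Here the hypothetical widebody fleet looks four times worse per
departure but ten times better per passenger-mile.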

So, on these measures, the `new technology' aircraft seem to be safer
overall than other types. Dr. Bagshaw's measurements lead him to a
different conclusion. The moral is: beware of facile quotation of statistics.

Dr. Bagshaw's comments on human factors are pertinent, and in fact
justifiable independently of statistics.  His comments focus on the
cognitive capabilities of pilots, as `information processors'. Such a
view lies behind much current research in aviation human factors. And
there have been some recent accidents which have focused attention on
the pilot's awareness of the system state and its proper functioning:
A320, Bangalore 1990; A320, Warsaw 1993; A300-600, Nagoya 1994; B757,
Cali 1995; B757, Puerto Plata 1996. The reason for this attention is
that these new systems allow failure modes (such as pilot's faulty
awareness of system state) that simply didn't exist in older
aircraft. Thus we need to pay attention to these problems and solve
them, no matter what comparative accident rates are.

Solving such problems will, of course, affect accident rates, no matter
how measured. Airline accident rates (pick your measure) are the lowest
in history, but they no longer appear to be falling. To bring them down
further, it is reasonable to suppose that research and new techniques will be
needed. And since nearly 60% of fatal accidents worldwide are primarily
due to flightcrew behavior (Boeing again, 1985-95), human factors issues
must form a major part of this program. There are two aspects to this:
establishing expectations of flight crew; and ensuring these
expectations are met. The first is the focus of Dr. Bagshaw's comments:
what can we reasonably expect from flight crew? The second is the business
of appropriate and thorough training (that `safety culture' again).

To summarise: it is appropriate, and appears to be necessary, to focus
on human factors to reduce the rate of airline accidents (however
measured).  Recent accidents with `new technology' have uncovered
human factors issues that could not arise with older-generation
aircraft. These new issues must therefore be addressed.  However, I am
not aware that accident rates as currently measured and analysed
enable us to judge whether `new technology' aircraft are inherently
`safer' or `less safe' than either second-generation or widebody
aircraft (in Boeing's classification). They're `differently safe'.

Peter Ladkin


Re: Don't touch this switch! (Simpson, RISKS-18.65)

Bear Giles <bear@indra.com>
Wed, 11 Dec 1996 19:57:54 -0700 (MST)
Why didn't the original sign state "Emergency Power Shutoff"?  Who would
casually press a button so labeled, especially in the offices of a "Major
Computer Company."

Instead of being a bunch of dumb sheep, let's think about this situation.
(There's a lot of computer hardware and software which commits the same sin,
after all.)  _Why_ would "MCC" leave that switch in place, since it has
obviously caused them much grief in the past?

The answer, of course, is it's an important piece of safety equipment.
Computer rooms need emergency power shutoffs, and for those shutoffs to be
useful in an emergency, they can't be in the computer room itself.  As I
vaguely recall, building codes won't even allow them in the doorway of the
computer room; there has to be at least a doorway between the cutoff switch
and the heavy electrical wiring.

Yet this important piece of safety equipment is effectively useless (and
actually slightly harmful) since someone posted a paternalistic "do not
touch" sign instead of posting a sign explaining the purpose of the switch.

It would be too much to suggest that we have a professional duty to press
every unlabeled red button we encounter, but the problem in this auditorium
isn't the emergency power cutoff switch or the hard-to-find screen controls.
It's the paternalistic author who thinks most people can't be trusted to
know not to touch the cut-off switch if they know what it is — and that
they can't be trusted to know when it _is_ important to use it.

Bear Giles  bear@indra.com


Re: Don't touch this switch! (Simpson, RISKS 18.65)

"Rosenthal, Harlan" <rosenthh@dialogic.com>
Thu, 12 Dec 1996 9:08:38 -0500
This is not a computer risk; it is the even more common human risk of people
who seem to think that signs, lights, guide ropes, barricades, etc., apply
to everyone but them.  There is no way to design around such attitudes other
than removing all potentially dangerous articles from their reach.  Like
child-proofing. :-)

-harlan

  [As I have noted in the past, our forum deals with COMPUTER-RELATED RISKS
  (not coincidentally, the title of my book).  Unfortunately, a lot of
  events that seem not to be computer risks are computer-related risks.
  When your computer wipes out, for whatever reason, that is certainly
relevant here.  People-tolerant systems are still a good research area. PGN]


4th ACM Conference on Computer and Communications Security

Mike Reiter <reiter@research.att.com>
Wed, 11 Dec 1996 17:02:13 -0500 (EST)
    Fourth ACM Conference on Computer and Communications Security
         (Preliminary Technical Program, Abridged for RISKS)
                     Zurich, Switzerland
                       1-4 April 1997
                   Sponsored by ACM SIGSAC

For more information, including registration and hotel information,
see: http://www.zurich.ibm.ch/pub/Other/ACMsec/index.html .

TUESDAY, 1 APRIL

4 half-day tutorials in two parallel tracks:

            Theory Track                  Practice Track

Morning     Cryptography                  CERT and Practical Network Security
            Jim Massey, Ueli Maurer       Tom Longstaff
            (ETH Zurich)                  (Software Engineering Institute)

Afternoon   Internet Security             Info-Wars
            Refik Molva                   Paul Karger
            (Eurecom)                     (IBM TJ Watson)

WEDNESDAY, 2 APRIL

09:00-09:30 Introduction and Opening Comments
            Richard Graveman (Bellcore)
            Phil Janson (IBM Zurich Lab)
            Li Gong (JavaSoft)
            Clifford Neuman (Univ. of Southern California)

09:30-10:30 Invited talk 1: To Be Announced

10:30-11:00 Coffee Break

11:00-12:00 Session 1: Fair Exchange of Information

* Fair Exchange with a Semi-Trusted Third Party
  Matthew Franklin, Mike Reiter (AT&T Research)

* Optimistic Protocols for Fair Exchange
  N. Asokan, Matthias Schunter, Michael Waidner
    (IBM Zurich Lab and Univ. Dortmund)

14:00-15:30 Session 2: Language and System Security

* Static Typing with Dynamic Linking
  Drew Dean (Princeton University)

* Secure Digital Names
  Scott Stornetta, Stuart Haber (Surety Technologies)

* A Calculus for Cryptographic Protocols: The Spi Calculus
  Martin Abadi, Andrew D. Gordon (DEC SRC and Cambridge)

16:00-17:30 Panel 1: Programming Languages as a Basis for Security
  Chair: Drew Dean (Princeton), Panelists: To Be Announced

THURSDAY, 3 APRIL

09:00-10:30 Session 3: Authentication

* Authentication via Keystroke Dynamics
  Fabian Monrose, Avi Rubin (New York Univ. and Bellcore)

* Path Independence for Authentication in Large-Scale Systems
  Mike Reiter, Stuart Stubblebine (AT&T Research)

* Proactive Password Checking with Decision Trees
  Francesco Bergadano, Bruno Crispo, Giancarlo Ruffo (Univ. of Turin)

11:00-12:00 Invited talk 2: To Be Announced

14:00-15:30 Session 4: Signatures and Escrow

* Verifiable Partial Key Escrow
  Mihir Bellare, Shafi Goldwasser (UC San Diego and MIT)

* New Blind Signatures Equivalent to Factorisation
  David Pointcheval, Jacques Stern (ENS/DMI, France)

* Proactive Public-Key and Signature Schemes
  Markus Jakobsson, Stanislaw Jarecki, Amir Herzberg,
    Hugo Krawczyk, Moti Yung (IBM TJ Watson and Bankers Trust)

15:30-16:00 Coffee Break

16:00-17:30 Panel 2: Persistence and Longevity of Digital Signatures
  Chair: Gene Tsudik (USC/ISI), Panelists: To Be Announced

Banquet Dinner

FRIDAY, 4 APRIL

09:00-10:30 Session 5: Commerce and Commercial Security

* A New On-Line Cash Check Scheme
  Robert H. Deng, Yongfei Han, Albert B. Jeng,
    Teow-Hin Ngair (National University of Singapore)

* Conditional Purchase Orders
  John Kelsey, Bruce Schneier (Counterpane Systems)

* The Specification and Implementation of 'Commercial' Security
    Requirements including Dynamic Segregation of Duties
  Simon Foley (University College, Cork, Ireland)

11:00-12:30 Session 6: Cryptography

* On the Importance of Securing Your Bins: The Garbage-Man-in-the-Middle Attack
  Marc Joye, Jean-Jacques Quisquater (Univ. Louvain)

* Improved Security Bounds for Pseudorandom Permutations
  Jacques Patarin (Bull)

* Asymmetric Fingerprinting for Larger Collusions
  Birgit Pfitzmann, Michael Waidner (Univ. Hildesheim and IBM Zurich Lab)
