The RISKS Digest
Volume 15 Issue 5

Thursday, 30th September 1993

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

An oxymoron?
Jim Horning
Unfriendly fire
Tim Steele via Jim Thompson
RISKS BY FAX
Lauren Weinstein
Risks of Flying Toasters?
Jim Griffith
Ilyushin Il-114 crash
Urban Fredriksson
Cancer Treatment Blunder
Brian Randell
RFI from phones (was "EuroDigital")
Mich Kabay
Fungible microprocessors
Mich Kabay
Re: Security holes and risks of software ...
A. Padgett Peterson
Josh Osborne
John Hascall
DCCA-4 Advance Program
Teresa Lunt

An oxymoron?

<horning@src.dec.com>
Fri, 24 Sep 93 18:05:50 -0700
An announcement recently posted at an installation that shall remain nameless:

    Subject: IMPORTANT: All machines will be down

    ALL MACHINES WILL BE DOWN!

    When: Saturday morning (9 to 13) September the 25th
    Why: Maintenance of the UPS (Uninterruptible Power Supply)


Unfriendly fire

Jim Thompson <jim@Tadpole.COM>
27 Sep 1993 13:41:14 GMT
From our system administrator in the UK:

  Date: Mon, 27 Sep 93 11:28:00 BST
  From: tjfs@tadtec.co.uk (Tim Steele)
  To: ttp@tadtec.co.uk
  Subject: Macintosh Users Please Read

  Last week we had another Mac SE catch fire.  We were lucky that it
  was during the day and could be switched off quickly.

  As a general principle, if you can switch your equipment off at night
  you should do so.  Unix systems are a bother to shut down, and are
  normally designed to run 24 hours a day, so it's OK to leave them on.

  Mac SE (and SE/30) users should consider this mandatory, as they have
  a known propensity to catch fire.

  Thanks,   Tim


RISKS BY FAX

Lauren Weinstein <lauren@vortex.com>
Sun, 26 Sep 93 21:21 PDT
Please change the voice contact number for fax information to (818) 225-2800.
The fax number should be changed to (818) 225-7203.  We've moved.  Thanks.

--Lauren--


Risks of Flying Toasters?

Jim Griffith <griffith@fx.com>
Thu, 30 Sep 93 13:29:23 PDT
Heard on the radio that Berkeley Software, the people who brought you the
flying toaster screen saver, have filed a lawsuit against Del Rio of Canada
("Del Rio" probably misspelled - radio, remember?) regarding that company's
new "Opus and Bill" screen saver.  The screen saver in question features Our
Favorite Penguin using a shotgun to blast, you guessed it, flying toasters.
The lawsuit asks for a "cease and desist" order against the sale of the
product.

Berkeley Software apparently tried to negotiate a settlement, and the lawsuit
signifies that their efforts failed.

    Jim
               [Added LATER note, from Mark Brader: It was Delrina Corp.
               They lost the suit, and proposed to change to toasters
               with helicopter rotors.   msb]


Ilyushin Il-114 crash

Urban Fredriksson <urf@icl.se>
Mon, 27 Sep 1993 18:31:08 GMT
The reason for the fatal crash of the Ilyushin Il-114 prototype regional
turboprop was that the digital engine-control system inadvertently commanded
one propeller to feather just after takeoff on 5 July. The pilots couldn't
compensate for the resultant yaw.  [Flight International, 15-21 Sept 1993]

Urban Fredriksson  urf@icl.se


Cancer Treatment Blunder

<Brian.Randell@newcastle.ac.uk>
Thu, 30 Sep 1993 15:05:07 +0100
Today's Independent carries a front-page story headed:

"Cancer Blunder Will Cost Health Authority Millions", by Celia Hall and
Jonathan Foster.

It is based on a report, by Dr Thelma Bates and Dr David Ash, commissioned
by the relevant Health Authority into the accidental mistreatment of a
large number of cancer patients at the North Staffordshire Royal Infirmary.
Here are some quotes from the article:

"A Health Authority last night faced the prospect of finding millions of
pounds from its patient budget to pay compensation after a mistake which
led to more than a thousand people getting the wrong cancer treatment.
...
It is impossible to be sure how many of the patients have died because of
the underdosages - but the number is likely to be in the tens rather than
in the hundreds.
...
Details of the errors were disclosed after a clinical inquiry by senior
radiologists who examined the cases of all 1045 patients who had radiation
doses of up to 35% less than prescribed. Their report blamed human error by
Margaret Grieveson, a physicist, who unnecessarily programmed a correction
factor into the radiography computer in 1982."

Dept. of Computing Science, University of Newcastle, Newcastle upon Tyne,
NE1 7RU, UK  Brian.Randell@newcastle.ac.uk   PHONE = +44 91 222 7923


RFI from phones (was "EuroDigital")

"Mich Kabay / JINBU Corp." <75300.3232@compuserve.com>
11 Sep 93 15:09:46 EDT
In RISKS 15(4) Brian.Randell@newcastle.ac.uk wrote about the radio-frequency
interference (RFI) generated by new digital phones.

This issue raises serious security questions.  Winn Schwartau of Inter.Pac has
been writing about high-energy radio-frequency guns ("HERF guns") for years.
Pulses of electromagnetic radiation can hang or crash microprocessors from a
distance.  They can cause processing and memory errors.  It now seems anyone
will be able to buy a phone which emits high-energy radio-frequency pulses
sufficient to interfere with the microprocessor in a modern hearing aid.

Would RISKS readers be on the lookout for the results of tests of the effects
of such phones on microprocessors?  A short time ago in RISKS we read about
cellular phones causing motorized, computer-controlled scenery to shift about
in one of Andrew Lloyd Webber's productions in London.  I wonder what happens
to a spreadsheet when one of these noisy phones goes off while we're busy
trying to compute?

Michel E. Kabay, Ph.D., Director of Education, National Computer Security Assn


Fungible microprocessors

"Mich Kabay / JINBU Corp." <75300.3232@compuserve.com>
11 Sep 93 15:10:20 EDT
A story delivered from CompuServe's Executive News Service newswires through
my topic-filters into the "Security" in-box caught my eye yesterday afternoon:

"OTC 09/10 1606 Violent computer chip takeovers worry officials

SAN JOSE, Calif.  (Sept.  10) UPI - The lucrative trade in computer chips has
captured the attention of the state's street gangs, luring them to
California's Silicon Valley where the armed takeover of supply warehouses has
become a common occurrence, authorities said Friday."

The article includes an interview with Julius Finkelstein, deputy district
attorney in charge of Santa Clara's High Tech Crime unit.  Mr Finkelstein
thinks that there is a trend towards violent robberies of computer processors
in Silicon Valley because of the high demand for these chips.  One of the
reasons the chips are so lucrative on the gray market is that they have no
serial numbers and cannot be traced to a stolen lot.  The chips are as
valuable as cocaine on a weight-for-weight basis, he said.

The most recent case occurred on Thursday, 9 Sept 93, when six thieves
attacked Wyle Laboratory Inc. in Santa Clara in a well-planned, precise
operation which netted thousands of dollars of Intel CPUs.  Apparently the
thefts have reached one a month so far, with signs of worsening as criminal
street gangs realize how low their risks of capture, successful prosecution,
or sentencing are.

***

CPU chips, like pennies but not dollar bills, are fungible.  That is, they are
indistinguishable and equivalent.  When a manufacturer buys gray-market CPU
chips, there is no way to identify them as stolen because there is no way to
tell which chips came from where and how they got there.

How long will it be before this kind of RISK to workers and loss for
manufacturers leads to a cryptographically-sound system for imposing serial
numbers on microprocessors?  In this case, a unique ID could not only save
money, it could save some innocent person's life.

Could the chip manufacturers engrave a unique ID on their chips during the
wafer stage using their normal electron-beam/resist/UV/acid production phase?
Each chip in a wafer would have a sequence number, and each wafer might have a
wafer number.  For such ID to be effective in reducing the fungibility of
microprocessors, each manufacturer would have to keep secure records of their
products and where they shipped them, much as pharmaceutical manufacturers and
many others do.  Would such an engraved number be readable once the chip were
encapsulated?  Does anyone know if X-rays, for instance, could pick up the
engraved numbers?

Another approach might be to integrate a readable serial number into the
physical package in which the CPU is embedded.  Perhaps a unique, IR-readable
identifier could be molded into the plastic or epoxy-resin package using
technology that has already been applied successfully to producing
access-control cards.  Other technology that might be applicable includes the
Wiegand effect, where the orientation of ferromagnetic spicules in a plastic
matrix produces a characteristic and individual response to a radio-frequency
electromagnetic beam.  Perhaps it would be wise for the industry to agree on
some standards to make it easier to read such numbers using a simple,
inexpensive technique.
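
Returning to the "cryptographically sound" part of the question: one possible
meaning, sketched here purely as an illustration, is that the engraved ID
carries a tag derived from a secret manufacturing key, so that anyone the
manufacturer trusts with the key can check whether a claimed ID is genuine
rather than invented. The serial format, the key, and the use of HMAC below
are all assumptions made for the sketch, not a description of any real scheme
(builds against OpenSSL with cc id.c -lcrypto):

    /* Hypothetical: the engraved ID is the serial plus a truncated
     * keyed hash (HMAC) of the serial under a secret factory key. */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>

    #define TAG_LEN 8   /* truncated tag; enough to defeat casual forgery */

    static void make_tag(const char *key, const char *serial,
                         unsigned char tag[TAG_LEN])
    {
        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int md_len;

        HMAC(EVP_sha256(), key, (int)strlen(key),
             (const unsigned char *)serial, strlen(serial), md, &md_len);
        memcpy(tag, md, TAG_LEN);
    }

    int main(void)
    {
        const char *key = "factory-secret-key";     /* hypothetical key */
        const char *serial = "WAFER0042-DIE117";    /* hypothetical serial */
        unsigned char tag[TAG_LEN];
        int i;

        make_tag(key, serial, tag);
        printf("engraved ID: %s-", serial);
        for (i = 0; i < TAG_LEN; i++)
            printf("%02X", tag[i]);
        printf("\n");
        return 0;
    }

Note that such a tag only makes valid-looking IDs hard to forge; it does not
by itself stop thieves from selling chips bearing genuine IDs. That is where
the secure shipping records mentioned above would come in.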

How much would all this engraving and record-keeping cost?  Surely the costs
would ultimately be borne by consumers; therefore, individual companies may
balk at identifiers because they could derive a short-term competitive edge by
continuing to manufacture fungible chips.  In the long run, however, if theft
continues to increase, plants producing identical chips may become the
preferred targets of chip thieves.

Michel E. Kabay, Ph.D., Director of Education, National Computer Security Assn


Re: Security holes and risks of software ... (Ranum, RISKS-15.03)

A. Padgett Peterson <padgett@tccslr.dnet.mmc.com>
Fri, 10 Sep 93 21:23:12 -0400
> ...It seems to me that in designing a complex program that requires
> privileges, the complex part and the privileged part should be separated...

The interesting part is that for the first time we are approaching the point
where true separation is possible. Not in a mainframe, nor in a UNIX machine
but in the client-server network (not peer-peer though).

The problem is that the traditional architecture is a single-state machine
and each operating condition is linked to every other condition. Security
is built on top of what starts out as a single privileged user state.

Until now, the client-server relationship has been viewed simply as a
collection of such single-state machines (and a peer-peer network is exactly
that). More and more we are starting to see security and integrity products
(anti-virus NLMs were the first) that consider the synthesis of clients and
servers as a multi-state machine, with the clients unable to influence the
server (well, clients could flood the net, but this does not affect the
server itself).

IMHO this changed world-view is going to cause the single greatest change in
information security that we have ever seen. Networks will cease being
"unsecurable" and become the only accepted means for protection of data.

                        Padgett


Re: Security holes and risks of software ... (Ranum, RISKS-15.03)

Josh Osborne <stripes@uunet.uu.net>
Fri, 10 Sep 93 23:14:43 -0400
>Following the example given, classical UNIX provides only the setuid mechanism
>for increasing the access of a program, and setuid always applies to an entire
>program.  Thus, if a program must run partially as root, the only way to avoid
>having it ALL run as root is to divide it up into communicating processes.

Not true.  A program may return from its effective uid to its real uid at any
time (on BSD systems it may swap the euid & ruid, which isn't a great
improvement - homework question, why?).  I have a program that needs to use a
raw socket to do its job, only root can get open a raw socket.  Any uid can
_use_ a raw socket.  The very first thing my program does is open the raw
socket (and check for errors).  The very second thing it does is set the euid
to the ruid.  It does this before even parsing command line options (which
anoys me, you can't get it to print the usage message if you don't run it as
root, or set-uid it).  If the program has a bug somewhere else in it that can
be used as a security hole, the damage is limited to allowing an unprivileged
user access to a raw socket.  How bad is that?  Well they can send almost
anything onto the net with one, simulating connections from privileged ports,
or other (normally near-by) hosts.  If the site allows .rhost files it could
be in for some trouble.  It will take alot of programming effort to achieve
this 'tho - an entire TCP/IP stack in user space, and a rlogin that uses it.

[...]
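
The pattern Osborne describes might look roughly like this (a hypothetical
sketch, with error handling abbreviated; seteuid() is assumed to behave as on
BSD-derived systems, and the program is assumed to be installed set-uid root):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(int argc, char **argv)
    {
        /* The ONE operation that needs root: opening a raw socket. */
        int raw = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
        if (raw < 0) {
            perror("socket(SOCK_RAW)");  /* not root and not set-uid root */
            exit(1);
        }

        /* Set the effective uid back to the real uid before doing
         * anything else - even before parsing argv.  A bug in the
         * rest of the program now yields, at worst, use of this one
         * already-open socket, not a root shell. */
        if (seteuid(getuid()) < 0) {
            perror("seteuid");
            exit(1);
        }

        /* ... parse options and do the complex, untrusted work here,
         * using the already-open descriptor `raw' as needed ... */
        return 0;
    }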


Re: Security holes and risks of software ...

<john@iastate.edu>
Fri, 10 Sep 93 23:13:31 -0500
mjr@tis.com (Marcus J. Ranum) writes:
> ...It seems to me that in designing a complex program that requires
> privileges, the complex part and the privileged part should be separated...

geof@aurora.com (Geoffrey H. Cooper) writes:
}I agree with this statement, but find another conclusion/RISK.  This is the
}risk of having security mechanisms that are too cumbersome to be used easily.

}Following the example given, classical UNIX provides only the setuid mechanism
}for increasing the access of a program, and setuid always applies to an entire
}program.  Thus, if a program must run partially as root, the only way to avoid
}having it ALL run as root is to divide it up into communicating processes.

   Not true, but perhaps a good idea, just the same.

}If you want security, you have to make it easy to be secure.  For example, if
}a setuid program had to explicitly enable and disable the setuid access
}(running otherwise as the user who invoked it), the body of code that needed
}to be carefully checked to verify security would be significantly diminished;
}a loophole in another part of the program could not compromise the entire
}system's security.

   Under BSDish flavors of Unix (at least), it is indeed possible to turn
   "setuid" on and off using the "setreuid()" call, although some network
   file systems can make it a little less straightforward than it might be...

        /*
         * Do a fancy jig (dance around AFS & NFS): network file
         * systems check the effective uid, so become the user
         * before touching the home directory.
         */
        setreuid(0, pwd->pw_uid);       /* ruid = root, euid = user */
        chdir(pwd->pw_dir);
        setreuid(pwd->pw_uid, 0);       /* swap: ruid = user, euid = root */
        /*
         * can access files as user here: access(2) checks the real uid
         */
        if (!quiet) quiet = (access(qlog, F_OK) == 0);
        if (Xflag) doXauth();
        /*
         * back to root
         */
        setreuid(0, 0);

From: John Carr <jfc@Athena.MIT.EDU>
>Ideally, the worst the complex code can do to you is give you a core dump

}[...the classic fingerd buffer overwrite...]
}A few months later everyone learned that there were worse side effects.
}
}Think of a core dump as the best thing that can happen when your program goes
}wrong, not the worst.  If you want a program to dump core under certain
}conditions, call abort().  Don't depend on memory corruption to do the job
}right.

I think this illustrates that separating the complex part from the
privileged part is more subtle than would appear at first blush.
It does little good to have the privileged part carefully segregated
into an easily understood module if the complex part can scribble all
over it or its data.
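
One classical way to keep the complex part away from the privileged part is
to give the privileged part its own address space: a tiny privileged parent
honors one narrowly defined request over a pipe, while the complex, untrusted
work runs in an unprivileged child. The sketch below is hypothetical - it is
not taken from any of the programs discussed above - and assumes a
set-uid-root program on a BSDish system:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    int main(void)
    {
        int sv[2];
        pid_t pid;

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
            perror("socketpair");
            exit(1);
        }
        if ((pid = fork()) < 0) {
            perror("fork");
            exit(1);
        }

        if (pid == 0) {              /* child: complex and unprivileged */
            close(sv[0]);
            setuid(getuid());        /* drop root before doing anything */
            /* ... parse untrusted input here; a compromise of this code
             * can only send requests down the pipe - it cannot scribble
             * on the parent's memory ... */
            write(sv[1], "REQ", 3);  /* ask for the one privileged op */
            exit(0);
        }

        /* Parent: privileged, deliberately tiny and dumb.  It accepts
         * exactly one well-formed request and nothing else. */
        close(sv[1]);
        {
            char req[3];
            if (read(sv[0], req, 3) == 3 && memcmp(req, "REQ", 3) == 0) {
                /* perform the single privileged operation here */
            }
        }
        return 0;
    }

Because parent and child are separate processes, the memory-corruption
scenario above stops at the pipe: the worst the complex code can do is send
requests the parent was already prepared to honor.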

John Hascall, Systems Software Engineer, Project Vincent
Iowa State University Computation Center  +  Ames, IA  50011  +  515/294-9551


DCCA-4 Advance Program

Teresa Lunt <lunt@csl.sri.com>
Thu, 30 Sep 93 15:09:01 -0700
              Advance Program and Registration Information
                 DCCA-4: Fourth IFIP Working Conference
           on Dependable Computing for Critical Applications
                            January 4-6, 1994
           Catamaran Resort Hotel, San Diego, California, USA

Organized by the IFIP Working Group 10.4 on Dependable Computing and
Fault-tolerance, in cooperation with:
  IFIP Technical Committee 11 on Security and Protection in Information
       Processing Systems
  IEEE Computer Society Technical Committee on Fault-tolerant Computing
  EWICS Technical Committee 11 on Systems Reliability, Safety and Security
  University of California at San Diego

This is the fourth Working Conference on this topic, following the successful
conferences held in August 1989 at Santa Barbara (USA), in February 1991 at
Tucson (USA), and in September 1992 in Mondello (Italy). As evidenced by
papers that were presented and discussed at those meetings, critical
applications of computing systems are concerned with service properties
relating to both the nature of proper service and the system's ability to
deliver it. These include thresholds of performance and real-time
responsiveness, continuity of proper service, ability to avoid catastrophic
failures, and prevention of deliberate privacy intrusions.

The notion of dependability, defined as the trustworthiness of computer
service such that reliance can justifiably be placed on this service,
enables these various concerns to be subsumed within a single conceptual
framework. Dependability thus includes as special cases such attributes
as reliability, availability, safety, and security. In keeping with the
goals of the previous conferences, the aim of this meeting is to
encourage further integration of methods and tools for specifying,
designing, implementing, assessing, validating, operating, and
maintaining computer systems that are dependable in the broad sense. Of
particular, but not exclusive interest, are presentations that address
combinations of dependability attributes, e.g., safety and security or
fault-tolerance and security, through studies of either a theoretical or
an applied nature.

As a Working Conference, the program is designed to promote the exchange of
ideas by extensive discussions. All paper sessions end with a 30 minute
discussion period on the topics covered by the session. In addition, three
panel sessions have been organized. The first, entitled "Formal Methods for
Safety in Critical Systems", will explore the role of formal methods in
specifying and assessing system safety. The second, entitled "Qualitative
versus Quantitative Assessment of Security?", debates the role that methods
based on mathematical logic and stochastic process theory ought to play in
assessing system security.  The third panel, "Common Techniques for
Fault-tolerance and Security", explores techniques that are useful for
attaining both fault-tolerance and security.

ADVANCE PROGRAM

Monday, January 3

  7-10pm  Welcome Reception

Tuesday, January 4

  9:00-9:15am Opening Remarks
  F. Cristian, General Chair
  G. Le Lann, T. Lunt, Program Co-chairs

  9:15-10:45am Session 1: Formal Methods for Critical Systems
  Chair: M. Melliar-Smith (U of California, Santa Barbara, US)

    W. Turski, Warsaw University, Poland: On Doubly Guarded
      Multiprocessor Control System Design

    G. Bruns, S. Anderson, U of Edinburgh, UK: Using Data Consistency
      Assumptions to Show System Safety

  10:45-11:00am Break

  11:00am-12:30pm Panel Session 1: Formal Methods for Safety in Critical
    Systems
  Moderator: Ricky Butler (NASA Langley, US)
  Panelists: S. Miller (Rockwell Collins, US), M. J. Morley (British
    Rail/Cambridge, UK),  S. Natarajan (SRI International, Menlo Park,
    US)

  12:30-1:30pm Lunch

  1:30-3:00pm Session 2: Combining The Fault-tolerance, Security and
    Real-time Aspects of Computing
  Chair: C. Landwehr (NRL, Washington DC, US)

    P. Boucher et al, SRI International: Tradeoffs Between Timeliness
      and Covert Channel Bandwidth in Multilevel-Secure, Distributed
      Real-Time Systems

    K. Echtle, M. Leu: Fault-Detecting Network Membership Protocols for
      Unknown Topologies

  3:30-4:00pm Break

  4:00-6:00pm Session 3: Secure Systems
  Chair: P. G. Neumann (SRI International, Menlo Park, US)

    J. Millen, MITRE: Denial of Service: A Perspective

    R. Kailar, V. Gligor, S. Stubblebine, U of Maryland: Reasoning About
      Message Integrity

    R. Kailar, V. Gligor, U of Maryland, L. Gong, SRI: On the Security
      Effectiveness of Cryptographic Protocols

Wednesday, January 5

  9:00-10:30am Session 4: Assessment of Dependability
  Chair: W. Howden (U of California, San Diego)

    C. Garrett, S. Guarro, G. Apostolakis, UCLA: Assessing the
      Dependability of Embedded Software Using the Dynamic Flowgraph
      Methodology

    A. Aboulenga, TRW and D. Ball, MITRE: On Managing Fault-tolerance
      Design Risks

  10:30-11:00am Break

  11:00am-12:30pm Panel Session 2: Qualitative versus Quantitative
    Assessment of Security
  Moderator: T. Lunt (SRI International, Menlo Park, US)
  Panelists: M. Dacier (LAAS, Toulouse, France), B. Littlewood (City U,
    London, UK), J. McLean (NRL, US), C. Meadows (NRL, US), J. Millen
    (MITRE, US)

  12:30-1:30pm Lunch

  1:30-3:00pm Session 5: Basic Problems in Distributed Fault-tolerant
    Systems
  Chair: F. B. Schneider (Cornell U, Ithaca, US)

    C. Walker, M. Hugue, N. Suri, Allied Signal Aerospace: Continual
      On-Line Diagnosis of Hybrid Faults

    A. Azadmanesh, R. Kieckhafer, U of Nebraska: The General Convergence
      Problem: A Unification of Synchronous and Asynchronous Systems

  3:30-4:00pm Break

  4:00-6:00pm Session 6: Specification and Verification of Distributed
    Protocols
  Chair: R. Schlichting (U Arizona, Tucson, US)

    O. Babaoglu, U of Bologna, Italy, M. Raynal, IRISA, France:
      Specification and Verification of Behavioral Patterns in
      Distributed Computations

    P. Zhou, J. Hooman, Eindhoven Univ, The Netherlands: Formal
      Specification and Compositional Verification of an Atomic
      Broadcast Protocol

    H. Schepers, J. Coenen, Eindhoven Univ, The Netherlands: Trace-Based
      Compositional Refinement of Fault-Tolerant Distributed Systems

  6:30-10pm: Banquet, with a talk by Peter Neumann

Thursday, January 6

  9:00-10:30am Session 7: Design Techniques for Robustness
  Chair: J. Meyer (U. Michigan, Ann Arbor, US)

    N. Kanawati, G. Kanawati, J. Abraham, U of Texas: A Modular Robust
      Binary Tree

    R. Rowell, BNR, V. Nair, SMU, Texas: Secondary Storage Error
      Correction Utilizing the Inherent Redundancy of Stored Data

  10:30-11:00am Break

  11:00am-12:30pm Panel Session 3: Common Techniques in Fault-Tolerance and
    Security
  Moderator: K. Levitt (U of California, Davis, US)
  Panelists: Y. Deswarte (LAAS, Toulouse, France), B. Littlewood(City U,
    London, UK), C. Meadows (NRL, US), B. Randell (U of Newcastle upon
    Tyne, UK), K. Wilen (U of California, Davis, US)

  12:30-1:30pm Lunch

  1:30-3:00pm Session 8: Real-Time Systems
  Chair: L. Sha (SEI, Pittsburgh, US)

    M. Goemans, I. Saias, N. Lynch, MIT: A Lower Bound for Faulty
      Systems without Repair

    S. Ramos-Thuel, J. Strosnider, CMU: Scheduling Fault Recovery
      Operations for Time-Critical Applications

  3:30-4:00pm Break

  4:00-6:00pm Session 9: Evaluation of Dependability Aspects
  Chair:  K. Trivedi (Duke U, Durham, US)

    G. Miremadi, J. Torin, Chalmers Univ, Sweden: Effects of Physical
      Faults on Some Software-Implemented Error Detection Techniques

    J. Dugan, Univ of Virginia, M. Lyu, Bellcore: System-Level
      Reliability and Sensitivity Analysis for Three Fault-Tolerant
      System Architectures

    J. Carrasco, U Politecnica de Catalunya, Barcelona, Spain: Improving
      Availability Bounds Using the Failure Distance Concept

REGISTRATION INFORMATION

Registration fees are $445 before December 4, 1993 and $495 afterwards.  We
will accept a check if it is drawn on a US bank. You may also wire money to
the DCCA bank account: Wescorp Federal Credit Union, ABA 122-04-12-19, credit
account of USE Credit Union: 32-22-81-691S-025, account UCSD 4TH DCCA,
142665100.

If you register by mail, please make the check out to DCCA-4 and mail
with the following registration form to:

    DCCA-4
    University of California, San Diego
    Department of Computer Science and Engineering
    9500 Gilman Drive  m/s 0114
    La Jolla, CA  92093-0114   USA

If you wire money, then please follow up with a letter to the above
address, or an e-mail message to dcca@cs.ucsd.edu, or a fax to +1-619-
534-7029, or a telephone call to Keith Marzullo, +1-619-534-3729.

   --------------REGISTRATION FORM----------------------------------------

Name:

Affiliation:

Address:

Telephone number:

E-mail address:

Dietary restrictions:

Registration fees are $445 before December 4, 1993 and $495 afterwards.
Included in these fees are all lunches, the banquet, and the reception for
one person, and proceedings. You may purchase additional tickets at the
following prices:

    number additional reception tickets ($35 each):_____
    number additional banquet tickets ($65 each):_____
    number additional lunch tickets ($60 each):_____
   -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

HOTEL INFORMATION

The conference will be held at the Catamaran Resort Hotel on Mission Bay (3999
Mission Blvd., San Diego, CA 92109). Please call the hotel to make your room
reservations. A block of rooms has been reserved until December 4, 1993 at a
rate of $90 for one or two persons, and $15 for each additional person above
2. The phone numbers are +1 619 488-1081, fax +1 619 490-3328.

There are flights from most major US cities to the San Diego airport.
Transportation from the airport is provided by a company called Super Shuttle,
which can be reached by using the courtesy phone at the Reservation Board
near the baggage claim area to call the Catamaran Resort Hotel. The fare is $6
per person, one way. You can also take a taxi, which should cost $10-$15 one
way.

All conference events except the banquet will be at the Catamaran Resort
Hotel, and lunches will be served at the hotel. The banquet will be
aboard the authentically recreated 1800s sternwheeler William D. Evans,
which docks near the hotel. Conference participants may receive
telephone calls at the hotel: +1 619 488-1081, +1 619 488-0901 fax.

The hotel is right on the beach on Mission Bay. Winters in San Diego are mild,
with daytime temperatures in the low 70's (22-24C) and nighttime temperatures
around 50 (10C). Winter is the season in which San Diego gets most of what
little rain it gets, so bring an umbrella just in case there is a shower.

CONFERENCE ORGANIZATION

General Chair
F. Cristian, U of California, San Diego, USA

Program Co-chairs
G. Le Lann, INRIA, France
T. Lunt, SRI International, USA

Local Arrangements/Publicity Chair
K. Marzullo, U of California, San Diego, USA

Program Committee
J. Abraham, U of Texas at Austin, USA
A. Avizienis, U of California, Los Angeles, USA
D. Bjoerner, UNU/IIST, Macau
R. Butler, NASA Langley, USA
A. Costes, LAAS-CNRS, Toulouse, France
M-C. Gaudel, LRI, France
V. Gligor, U of Maryland, USA
L. Gong, SRI International, USA
H. Ihara, Hitachi, Japan
J. Jacob, Oxford U, UK
S. Jajodia, George Mason U, USA
J. Lala, CS Draper Lab, USA
C. Landwehr, NRL, USA
K. Levitt, U of California, Davis, USA
C. Meadows, NRL, USA
J. McLean, NRL, USA
M. Melliar-Smith, U of California, Santa Barbara, USA
J. Meyer, U of Michigan, Ann Arbor, USA
J. Millen, MITRE, USA
D. Parnas, McMaster U, Canada
B. Randell, U of Newcastle upon Tyne, UK
G. Rubino, IRISA, France
R. Schlichting, U of Arizona, Tucson, USA
J. Stankovic, U of Massachusetts, Amherst, USA
P. Thevenod, LAAS-CNRS, Toulouse, France
Y. Tohma, Tokyo Inst. of Technology, Japan

Ex-officio
J-C. Laprie, LAAS-CNRS, France
IFIP WG 10.4 Chair
