The RISKS Digest
Volume 19 Issue 66

Thursday, 9th April 1998

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Stanford business school hit by [Windows] computer 'disaster'
PGN
More Windows Magic
Bob Frankston
LA county pension fiasco
Richard Schroeppel
AOL Stock Charts Posted Erroneously Due To "Malfunction"
Irvin Jay Levy
STOVEACT - Oops, Wrong Number... Gridlock!
Jeremy Leader
Re: EMI and TWA800
Peter B. Ladkin
Re: Phone scam alert: Social Engineering 101
PGN
Rice University spammed too!
Scott Ruthfield
Re: Funding for a new software paradigm
Nick Rothwell
Fred Cohen
Erann Gat
"Web Security: A Step-by-Step Reference Guide", Lincoln D. Stein
Rob Slade
ICDCS-18 cfp
Teruo Higashino
Info on RISKS (comp.risks)

Stanford business school hit by [Windows] computer 'disaster'

"Peter G. Neumann" <neumann@csl.sri.com>
Thu, 9 Apr 98 14:56:47 PDT
The Stanford University Graduate School of Business underwent a serious
computer system breakdown on 7-8 Mar 1998 (during the weekend that GSB was
hosting an entrepreneurship conference on ``The Technology of Success''),
and some folks are still trying to recover.  Under the guise of ``routine
maintenance'' to add storage capacity to two network file servers, disaster
struck.  The files were unreadable.  The Admin server could be restored from
backup tapes, but the other server, holding faculty and student files, was
clobbered when backup tapes were loaded: the original contents were
overwritten, and the backups themselves could not be restored.  At least 10 faculty
members and Ph.D. candidates have still not been able to recover their files
-- in some cases representing work over the past three years.  The article
notes that ``many of the faculty members and students were shielded from the
disaster because they used Apple computers or Unix mainframes [sic] --
rather than the Windows-based PCs served by the business school network.''
[PGN Abstracting from an article by Scott Herhold, *San Jose Mercury News*,
8 Apr 1998 (http://www.sjmercury.com/business/center/stanford09.htm)]

  [Noted by several readers.]


More Windows Magic

<Bob_Frankston@frankston.com>
Tue, 7 Apr 1998 13:10 -0400
Under Windows 95 I had a database "C:\abc\abc.mdb".  I decided to move it to
the server (Y:\x\Data\abc\abc.mdb) and updated my toolbar link. To be safe I
renamed the old directory "C:\abc.x".  All was fine.

I then decided to go back to the old location and renamed abc.x back to abc,
moved the updated database back and renamed the Y directory to abc.z to be
safe. All seemed fine.

By habit I clicked the toolbar link and it worked.

In a little while I realized it shouldn't. W98 had updated the link on my
behalf to "y:\x\data\abc.z\abc.mdb". Huh? Nice favor but not at all what I
wanted. I then renamed the database itself to "y:\x\data\abc.z\abcz.mdb". But,
again Windows was smarter than me and updated the link.

(Actually, this was under Windows 98, beta 3 but I presume the behavior is the
same on Windows 95).


LA county pension fiasco

"Richard Schroeppel" <rcs@CS.Arizona.EDU>
Wed, 8 Apr 1998 10:37:03 MST
<summarized from Nando --rcs>

L.A. County's pension missing $1.2 billion from computer error

LOS ANGELES (8 Apr 1998) Because of a computer programming gaffe, the
nation's most populous county failed to contribute $1.2 billion to its
pension fund over 20 years.  The mistakes, discovered when pension
administrators brought in an outside auditing firm to look at the books,
will likely force cash-strapped Los Angeles County to spend an additional
$25 million annually to make up for insufficient fund contributions, the Los
Angeles Times reported Wednesday.  [http://www.nando.net]

[Curious that they saved $60M/year by not contributing, and it will
only cost $25M/year to recover, but the fund has benefited from the
stock market run-up. --rcs]
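
A quick sketch of that arithmetic, using only the figures quoted above
(neither independently verified):

  #!/usr/bin/perl
  # The arithmetic behind the aside above; the dollar figures are those
  # reported in the article, not independently verified.
  my $shortfall = 1.2e9;                     # total missed contributions
  my $years     = 20;
  printf "Missed: \$%.0f million/year; catch-up cost: \$25 million/year\n",
         $shortfall / $years / 1e6;          # prints 60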

<pension officials flabbergasted>
<software firm acknowledges error>
<is being replaced for other reasons>

<my editorial:>
Two issues here:  Computational Complexity & Data Availability

Can you verify the accuracy of the deductions made from your paycheck?
How about your mortgage payment, or the interest on your savings account?
Is that $5.25 for dental insurance, or the United Fund?

I've tried to check savings account interest: it isn't easy, because
the banks use various fudge factors along the way.  The tellers
(remember tellers?) don't know the formula, and the branch manager has
to look it up.

We should demand, as a correct business practice, that all calculations
like this should include sufficient details for independent checking.
For a payroll withholding tax deduction, this would be the formula used
("$227 + 28% (paycheck-$1250), from line 37 in Weekly Paycheck table
of IRS publication E, available at http://blackhole.gov") and pub E
would indicate that the numbers are calculated by dividing the statutory
annual rates by 52.
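
To make the point concrete, here is a minimal sketch of such a check.  The
bracket is the illustrative formula quoted above (not a real IRS table
entry), and the gross-pay figure is invented:

  #!/usr/bin/perl
  # Recompute a weekly withholding amount from a published formula.  The
  # bracket ($227 + 28% of the excess over $1,250) is the illustrative
  # figure quoted above, not a real IRS table entry; the gross pay is made up.
  use strict;
  use warnings;

  my $gross    = 1_800.00;                        # assumed weekly gross pay
  my $expected = 227 + 0.28 * ($gross - 1_250);   # formula from the stub
  printf "Expected withholding: \$%.2f\n", $expected;   # prints $381.00
  # Compare $expected against the amount actually withheld; a mismatch is
  # the kind of gross systematic error that casual sampling would catch.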

If we make checking easy enough, then those of us who are numerically
inclined (and either bored or paranoid) will do some checking, part of
the time.  This sampling will catch the gross systematic errors, which
is the first step toward correction.  (We will also need an arithmetic
ombudsman to force corrections, since we usually are dealing with
organizations more powerful than ourselves.)

Following the same principle, all code used to calculate pensions,
mortgages, etc. should be required to be public.  [It wouldn't hurt for
x-ray machines and air-traffic control either, but that's another story.]

Public accounts should be published as soon as practicable.  Pension
funds need have no secrets.  Imagine if the Orange County derivative
position had been posted on the web each night.

Rich Schroeppel   rcs@cs.arizona.edu


AOL Stock Charts Posted Erroneously Due To "Malfunction"

IJL <IJL@gordonc.edu>
Thu, 9 Apr 1998 16:42:32 -0400 (EDT)
AOL calls this a "chart problem."  One wonders what an investor who took the
data seriously might call it.  Irvin Jay Levy, Gordon College

"CHART PROBLEM, THURSDAY 4/9/98

Erroneous data was posted to many MNC charts Thursday, April 9, 1998, starting
at approximately 9:30 am ET and ending at approximately 10:45 am ET.  Please
disregard data posted on the charts in this period.

The problem was caused by a computer malfunction. Affected charts include the
stock indexes and intraday stock charts.

We deeply regret any inconvenience this causes.

America Online

Transmitted: 4/9/98 11:55 AM"


STOVEACT - Oops, Wrong Number... Gridlock!

Jeremy Leader <jleader@alumni.caltech.edu>
Fri, 3 Apr 1998 19:49:24 -0800 (PST)
A local network news show recently reported on a new system called STOVEACT
(STOlen VEhicle ACTivation), which would allow police to shut down a fleeing
car.

Highlights:

- State DMV computer would have, in the record for
  a STOVEACT-equipped vehicle, the vehicle's "STOVEACT
  number"; a standard police query of the DMV database
  would display this info.

- Police could trigger the device by a phone call.
  They would wait until the vehicle was in a safe
  place to stop.

- Upon triggering, the vehicle flashes its lights,
  honks its horn, and announces over a loudspeaker
  that it's a stolen vehicle and is about to shut
  down.

- After 2 minutes of flashing/honking/etc., the
  device does a 10 second countdown (displayed
  on the dash and spoken over the loudspeaker),
  and shuts off the engine.

- The reporter mentioned the idea of _requiring_
  this device on the cars of convicted drunk
  drivers.
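
To make the abort question below concrete, here is a purely hypothetical
sketch of the described sequence.  The real device's logic is not public;
every routine here is a placeholder:

  #!/usr/bin/perl
  # Hypothetical sketch only: models the warn / countdown / shut-off
  # sequence described above, with an abort hook -- whether the real
  # device has one is exactly the open question.
  use strict;
  use warnings;

  sub abort_requested { return 0 }   # placeholder for a police "cancel" signal
  sub safe_to_stop    { return 1 }   # placeholder: not on a railroad crossing, etc.

  sub stoveact_sequence {
      print "WARNING: vehicle reported stolen; engine shutdown imminent\n";
      for (1 .. 120) {                      # two minutes of lights/horn/speaker
          return 'aborted' if abort_requested();
          sleep 1;
      }
      for my $n (reverse 1 .. 10) {         # ten-second countdown, dash/speaker
          return 'aborted' if abort_requested();
          print "Engine shutdown in $n\n";
          sleep 1;
      }
      return 'held' unless safe_to_stop();  # does the real device re-check this?
      print "Engine off\n";                 # cut_engine() in a real device
      return 'shut down';
  }

  print stoveact_sequence(), "\n";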

Looking at a few of these steps, the "obvious" risks
(ignored by the news report) seem to include:

- What if an unauthorized person gets a vehicle's
  STOVEACT number?

- How secure is the phone number? Against mis-dials?
  Against hackers?

- Can the shut-down be aborted, if during the
  two-minute warning the car ends up someplace
  unsafe to stop (on a railroad crossing, e.g.)?

- How easily could a criminal disable the device?

- How likely is the device to spontaneously activate?

Jeremy Leader <jleader@alumni.caltech.edu>


Re: EMI and TWA800

"Peter B. Ladkin" <ladkin@rvs.uni-bielefeld.de>
Mon, 06 Apr 1998 21:25:15 +0200
Piers Thompson contributed a pithy comment in RISKS-19.65 on the article by
Elaine Scarry (*New York Review of Books*, 9 Apr 1998, also at
http://jya.com/twa800-emi.htm as announced by Woods, RISKS-19.64) on whether
electromagnetic interference (EMI — she prefers the acronym HIRF, which she
says stands for High-Intensity {Radio Frequency | Radiated Fields} for those
who read BNF) could have been `the cause' of TWA800's crash in July
1996. She posits military activity in the area as potentially responsible
for this HIRF. She identifies as possible sources a Black Hawk and an HC-130
within 5 miles horizontally and two miles vertically below TWA800; a P3
6,000+ft above; a C-141 and C-10 `in the vicinity', a Coast Guard cutter 15+
miles distant (and of course 13,700 ft = 2.5 miles below); an Aegis cruiser
180+ miles distant; and three submarines 70-200 miles south.

I find myself in sympathy with Thompson's comment and would like to
contribute a few comments on Scarry's actual argument.

Omitting the surrounding packaging, she actually gives two concrete
scenarios (1 and 3) and one supposition (2):

(1):  Arcing from high-voltage to low-voltage wires, caused by a `pulse
      of energy' from outside the aircraft, caused the central fuel tank
      explosion;
(2): "Whatever evidence in the plane made lightning a possible candidate
      [for consideration as energy source for ignition] should make HIRF
      a candidate as well";
(3): "A sudden pulse of energy from a military jammer or countermeasures
      system could have acted to knock the plane out of control"

She wants the possibility of HIRF to become part of the TWA800
inquiry.  Let's save the trouble and do it right here. One can show
that investigating (1) won't lead anywhere; and (2) and (3) are completely
implausible. Before the arguments, some background.

The breakup sequence of Flight 800 was initiated by the breakup of the Wing
Center Section (http://www.ntsb.gov/events/twa800/exhibit.htm Exhibit 18A,
Metallurgy/Structures Sequencing Group Chairman's Report, Section 7.3) whose
breakup sequence itself showed signs of an early `overpressure event'
(op. cit. Section 5.2.3). This means a central fuel tank explosion.
Accordingly, one searches for the origin of the explosion, and this has not
definitively been identified, so the investigation is still open.  Hugh
Chicoine has described to me (in another context in private conversation)
that three factors must converge to form an `Ignition Sequence': available
oxygen, a combustible, and a competent ignition source. I understand that
the first two have been identified in the case of TWA800, and have led to
the extensive research into flammable fuel vapors in central wing tanks of
commercial aircraft. The search for a competent ignition source is open.

When inquiring about the possible effect of EMI on aircraft systems, it is
important to distinguish, as Scarry does not appear to, between the various
kinds of electrical systems on board aircraft: fly-by-wire controls are
different from navigation electronics, which are different from fuel pump
electrics, which are different from the ovens used for heating passenger
meals. She refers to a certain number of accidents to support her case:
these occurred to Black Hawk helicopters (see e.g., RISKS-5.56, 5.58, 5.59
from a decade ago) and according to Scarry to F111s during the U.S. raid on
Libya. I understand these accidents are believed to have emanated from
EMI-FBW interference.

There is as yet no definitive incident with reproducible symptoms in which
EMI is known to have interfered with commercial aviation navigation systems
in navigable airspace, as far as I am aware, although there are plenty of
plausible `anecdotes' (Ladkin, RISKS-19.24; see also my essay at
http://www.rvs.uni-bielefeld.de --> Publications --> Electronic Journalism
--> RVS-J-97-03). There is the possible exception of cases in which aircraft
violate airspace restrictions — stay away from those microwave antennas
:-). Also as far as I am aware there has been as yet no suspected incident
of EMI interfering with electronic control (`fly-by-wire' or FBW) on
commercial aircraft, but in the case of TWA800 this question is moot since
the Boeing B747-100 is a `classic' aircraft with hydraulic and mechanical
controls.

Electrics are generally more robust than electronics. The main potential
ignition sources that have been considered are mechanical electrical
sources; a pre-existing fire below the central wing tank; a bomb; a missile
(http://www.ntsb.gov/events/twa800/exhibit.htm Exhibit 20A, Fire and
Explosion Group Chairman's Report, Section 3, p5).  There was no evidence of
a pre-existing fire, a bomb or a missile found. Potential sources explored
include the electrical fuel gauging system; electrical power to the fuel
pumps; a static electric charge/discharge; and `other systems' (op. cit. p6).
"No evidence of electrical arcing or other mechanical failure signature has
been noted on the hardware" (op. cit. Section 3, final sentence, p9).

Finally, one should note that an aluminium aircraft hull acts as a
significant barrier in each direction to electromagnetic radiation on
radio frequencies. The original response to questions of EMI from passenger
electronics pointed out that the nav receiver antennae were outside the
hull, but the potentially damaging signals were supposed to come from
inside the aircraft, and no one could see a way that those signals could
have interference strength outside - they simply couldn't be powerful
enough. Later inquiry has suspected imperfect or degraded interior
avionics wiring connections (RVS-J-97-03, op. cit., from RTCA SC-177).
As far as I know, no one has published estimates of what the field strength
would have to be *outside* the aircraft in order to create that requisite
field strength *within* the aircraft hull sufficient to cause arcing
in any component of the fuel gauging system, fuel pumps or other such
systems. Note that since no evidence of arcing was found, any arcing that
did occur must have occurred in an item that was not recovered, despite
an unprecedentedly thorough search — and will not be recovered because
the search has stopped.
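
For what it is worth, the shape of such an estimate is simple; every number
in the sketch below is an assumed placeholder, not a measured or published
value:

  #!/usr/bin/perl
  # Back-of-envelope only.  Shielding effectiveness SE (in dB) relates the
  # field strengths by SE = 20*log10(E_outside/E_inside), so
  # E_outside = E_inside * 10**(SE/20).  Both inputs below are placeholders.
  use strict;
  use warnings;

  my $e_inside  = 200;   # V/m assumed sufficient to provoke arcing (placeholder)
  my $se_db     = 40;    # dB of hull/wiring attenuation assumed (placeholder)
  my $e_outside = $e_inside * 10 ** ($se_db / 20);
  printf "External field required: about %.0f V/m\n", $e_outside;  # 20000 V/m
  # Whether any of the sources Scarry lists could produce such a field at the
  # stated ranges is what a real calculation would have to establish.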

This leads to the following commentary on Scarry's three suggestions.

ad 1: The inquiry has looked; no evidence of arcing was found; no evidence
      therefore will be found; the only thing that can be done is to
      calculate roughly the kind of field strength outside the aircraft
      that would be required to cause sufficient arcing inside the aircraft
      in the suspect but missing components. Any answer is going to be
      very rough and could not be correlated with any physical evidence;
      I suspect it could be calculated to a sufficient level of accuracy
      by some engineering graduate student with a little data from the
      component manufacturers who have already carried out such arcing
      tests. I would not expect the answer to lend any plausibility to the
      supposition that HIRF could have caused arcing. Whether or not,
      supposition it would remain since physical evidence there is not.

ad 2: A lightning strike contains enough energy to kill people it hits.
      It does not contain enough energy to kill people 100 yards away from
      a strike, unless one happens to be standing on a conduit without
      rubber soles. I've been this close to mountain lightning strikes
      twice without apparent arcing :-) A mile away from a lightning
      strike is even less of a problem. I don't know that even the
      military would consider discharging a Van de Graaff generator
      on a P3, even if they could fit one large enough in the fuselage.
      And I don't see how that remote and relatively mild event a mile away
      could be compared with a direct lightning strike on an aircraft.
      I find such a comparison .... um, implausible.

ad 3: The control on this aircraft is via cables and hydraulics. HIRF
      affects these not one jot. This is a truly stupid supposition
      [oh dear, that was rude.... it just sorta slipped out....sorry].

So apart from finding the graduate student to do the calculation for the
first supposition, what are the action items on Scarry's list?  To get the
`men and women in nearby planes and ships [to] describe the instruments in
use that night'; to have the USAF and DoD release classified studies they
have done on how EMI affects military planes and ships. I'd judge she has a
vastly underwhelming case - but then, she's the expert on the general theory
of value, not I.

Peter Ladkin, Univ. Bielefeld, Postfach 10 01 31, D-33501 Bielefeld, Germany
ladkin@rvs.uni-bielefeld.de http://www.rvs.uni-bielefeld.de +49(0)521-106-5326


Re: Phone scam alert: Social Engineering 101 (RISKS-19.64)

"Peter G. Neumann" <neumann@csl.sri.com>
Thu, 9 Apr 98 17:02:12 PDT
Quite a few readers insisted that this case was a scam, quoting various
newsgroups.  However, an AT&T Web site notes that it affects only PBXs and
not residential customers.  <http://www.att.com/features/0398/90pound.html>
It is an old problem, by the way.  Thanks to all of you who wrote in.


Rice University spammed too!

"Scott Ruthfield" <indigo@owlnet.rice.edu>
Tue, 31 Mar 1998 21:54:45 -0600
In line with the post on Cornell's spam issue: Rice University had the same
problem last week, when an academic department (somehow) obtained the e-mail
addresses of all 2600 undergraduate students, and sent a message with all
the addresses in the To: block. At least five students responded to the
whole group: at some point, Information Services began locking the accounts
of those who were sending mail. Several of the responses, though, came from
non-Rice addresses (or faked addresses).

Interestingly, the day after this incident, some student(s) put up flyers
all over campus, encouraging students to send angry mail to the obviously
clueless department that sent the original mail, and providing the e-mail
address. (Like they hadn't heard it already.) And for extra fun: the e-mail
talked about a schedule change for an introductory Latin class, and the
flyer mentioned how we should thank the department for their information
about a dead language.

Scott Ruthfield, Graduate Student, Computer Science, Rice University


Re: Funding for a new software paradigm (Moran, RISKS-19.64)

Nick Rothwell <nick@cassiel.com>
8 Apr 1998 12:55:07 -0000
> 1) Devise a language that fails safely (where safety has programmer
>    adaptable defaults and values) so that failures "do the right
>    thing". I think that Perl and Basic come pretty close to this.

I wasn't sure whether this was a follow-up spoof to the original spoof
at first. My knowledge of Basic is pretty basic, but I don't see how
anyone can claim that Perl "fails safely."

One should distinguish between apparent runtime errors and incorrect
behaviour. While a Perl program might not often exit with an error code, it
is one of the most error-prone languages I have ever used. The identifier
binding is essentially purely dynamic; the scoping rules for identifiers are
rather obscure (non-local by default, for instance, last time I checked);
there are huge numbers of highly ad-hoc overloaded primitive operations
based upon the contextual occurrence of identifiers and expressions (partly
alleviated by prefixes like "#", "$", "@" and so on). There are large
numbers of obscure reserved tokens ($', $|, $_, $` and so on). The language
freely mixes regular-expression lexical rules with high-level syntactic
rules (example: "$x" and '$x' are different, but "x" and 'x' are the same)
and there are large numbers of proprietary regular expression
constructs. (\E and \Q surprised me, and Perl 5 now outlaws "@", or gives it
some meaning which escapes me.) And the scoping rules for file objects are
obscure in the least; as I recall they occupy a totally different namespace
with different dereferencing rules, such that the Perl 5 man page contains
specific hacks to be employed when passing them around.
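
A miniature of two of these traps, the quoting rules and the
non-local-by-default scoping (my example, not Nick's):

  #!/usr/bin/perl
  use warnings;            # note: no "use strict" -- and it still runs

  my $x = "interpolated";
  print "$x\n";            # double quotes interpolate: prints "interpolated"
  print '$x', "\n";        # single quotes do not:      prints "$x"
  print "x", 'x', "\n";    # but "x" and 'x' are the same: prints "xx"

  sub tally { $count++ }   # no "my": $count becomes a package global
  tally(); tally();
  print "$count\n";        # prints 2 -- non-local by default, as noted above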

I use Perl heavily, and love what it can do. But it does all the
right things in all the wrong ways.

On the other hand, if Fred is spoofing then I've made a fool of myself.

Nick Rothwell, CASSIEL  http://www.cassiel.com


Re: Funding for a new software paradigm (Rothwell, RISKS-19.66)

Fred Cohen <fc@all.net>
Wed, 8 Apr 1998 17:41:15 -0700 (PDT)
Perl fails relatively safely in lots of circumstances, but it also has lousy
syntax and semantics, poor language discipline, heavily overloaded
operators, and lots of other problems. I agree with many of Nick's comments,
but I don't think they invalidate my point that many unanticipated failures
result in program termination with an error message. Even more importantly,
my comments were intended to have some humorous elements to them, and Nick
correctly identified them. All of this notwithstanding, it appears that Nick
agrees that we should have programming languages with better default error
handling.
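
As a concrete (and invented) illustration of the default error handling in
question: the script below terminates with a useful message only because the
programmer asks for it.

  #!/usr/bin/perl
  # Illustration only (mine, not from the original posts).  Drop the
  # "or die" and the failed open continues silently; at best, "use warnings"
  # complains when the script later tries to read from the filehandle.
  use strict;
  use warnings;

  open(my $fh, '<', '/no/such/file') or die "cannot open: $!\n";
  while (my $line = <$fh>) { print $line }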

Sandia National Laboratories at tel:510-294-2087 fax:510-294-1225
Fred Cohen & Associates: http://all.net - fc@all.net - tel/fax:510-454-0171


Re: Funding for a new software paradigm (Cohen, RISKS-19.65)

Erann Gat <gat@binkley.jpl.nasa.gov>
Fri, 3 Apr 1998 12:09:55 -0800 (PST)
Several "failure-safe" languages exist, and they all have the same problem:
providing safety exacts a cost in performance.  All else being equal, code
written in a failure-safe language will be slower than code written in an
unsafe language.  This cost is constantly in your face even when there are
no errors, which is most of the time.  Using a failure-safe language is like
flood insurance.  People think they can get by without it because the costs
are obvious but the benefits rarely manifest themselves.

There is another problem: as a result of this fundamental cost driver, we
have built up an enormous infrastructure based on unsafe architectures.
(Two-digit date representations are a prime example.)  This infrastructure
now permeates our society.  CS courses teach people that programming is
synonymous with writing C++ code.  As this infrastructure grows it gets
harder and harder to go back and fix it at its core.

You'd think that if there were any organization that would be receptive to
the use of failure-safe languages it would be NASA, but in fact the exact
opposite is true.  Failure-safe languages like Java or Lisp (or, God forbid,
Haskell or ML) are viewed with suspicion at best.  At worst, their advocates
(both of us ;-) become pariahs.  It seems this is unlikely to change until
there is a major disaster that impacts enough people to make it on the
evening news.  Without prejudging the desirability of this event, I predict
that it is only a matter of time before it happens.

Erann Gat <gat@jpl.nasa.gov>


"Web Security: A Step-by-Step Reference Guide", Lincoln D. Stein

"Rob Slade" <rslade@sprint.ca>
Wed, 8 Apr 1998 07:57:47 -0800
BKWEBSEC.RVW   980201

"Web Security: A Step-by-Step Reference Guide", Lincoln D. Stein,
1998, 0-201-62489-9, U$29.95
%A   Lincoln D. Stein stein@genome.wi.mit.edu
%C   P.O. Box 520, 26 Prince Andrew Place, Don Mills, Ontario  M3C 2T8
%D   1998
%G   0-201-62489-9
%I   Addison-Wesley Publishing Co.
%O   U$29.95 416-447-5101 fax: 416-443-0948 bkexpress@aw.com
%P   448 p.
%T   "Web Security: A Step-by-Step Reference Guide"

As it happened, this book came off the stack on a night when I wanted
nothing more than to wander off to bed.  Despite my sleep deprivation I
managed not only to finish the book, but even to enjoy it.  Any technical
book with security in the title that can hold interest like that has to have
something going for it.

The book covers all aspects of Web security, as laid out in chapter one: the
client or browser concern for privacy and safety of active content, the Web
server concern for availability of service and prevention of intrusion, and
the concern that both share for confidentiality and fraud.  Chapter two
provides a brief but accurate overview of cryptography as the backbone of
secure systems operating over unsecured channels.  (There is only one oddity
that I noted, when 512 bit RSA public key encryption was compared in
strength with 40 bit RC2 and RC4 systems.)  More of the basics like Secure
Sockets Layer (SSL) and Secure Electronic Transactions (SET) are described
in chapter three, along with various forms of digital cash.

Part two looks at client-side security, with further discussions of the use
of SSL in chapter four.  Chapter five details active content, with
particular attention to ActiveX and Java.  "Web Privacy," in chapter six, is
an excellent and practical guide to the realities and myths about
information that can be gleaned from your browsing activities.  Included are
practical tips about keeping your system from finking on you.  (Windows
users should note that the files referred to are not always in the paths
specified, due to the variety of ways that Windows programs can be
installed.)

The bulk of the book, as might be expected, deals with server-side security,
this being the slightly more complex side of the issue.  Chapter seven
provides an overview of the various vulnerabilities and loopholes to watch
and plug.  UNIX and Windows NT servers are dealt with in chapters eight and
nine respectively.  These chapters don't assume much familiarity with the
system security functions of the systems, but do stick primarily to the
server specific topics.  Access control is a major part of any security
setup, and is covered in chapter ten.  Encryption and certificates are
revisited in chapter eleven, concentrating on use in access control.  CGI
(Common Gateway Interface) scripting has been a major source of Web security
risks, and chapter twelve points out safe, and unsafe, practices in
programming scripts.  Chapter thirteen discusses remote authoring and
administration.  Firewalls are often seen as the be-all and end-all of
Internet security, and Stein covers the reality in chapter fourteen.

Each chapter contains references to both online and printed sources of
information, and these resources are all of high quality and useful.

As noted, the book is not only readable, but even enjoyable.  The writing is
clear and accurate, giving the reader both concepts and practical tasks in
minimum time with maximum comprehension.  Although the bulk of the book is
for Webmasters, the casual user can not only read it but get a great deal of
value from it.  Any ISP that does not have it on their customer support
bookshelf should be held criminally negligent.

copyright Robert M. Slade, 1998   BKWEBSEC.RVW   980201


ICDCS-18 cfp

by way of Teruo Higashino <taki@takilab.k.dendai.ac.jp>
Mon, 30 Mar 1998 11:24:30 +0900
See http://ICDCS.fernuni-hagen.de/welcome.html for full program.

                       Final Program for
                            ICDCS-18
 The 18th International Conference on Distributed Computing Systems
                  May 26 (Tue.) - 29 (Fri.)
           Hotel Mercure, Amsterdam, The Netherlands
    Sponsored by IEEE Computer Society, TC on Distributed Processing
               URL http://ICDCS.fernuni-hagen.de/welcome.html
