The RISKS Digest
Volume 12 Issue 48

Friday, 11th October 1991

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Police raid wrong house — for second time
David B. Benson
Crypto Public Policy
Bill Murray
Re: Security Criteria, Evaluation and the International Environment
Henry Spencer
PGN
Re: "Safer Flying through Fly-By-Wire
Arnd Wussing
Mary Shafer
Re: Computers and missile control
Eric Prebys
Re: Software migration at Johnson Space Center
Bob Frankston
Doug Burke
Guy J. Sherr
Human error: once more, with feeling
Don Norman
Re: AT&T outage
Bob Colwell
Mark Seecof
Bob Niland
Martyn Thomas
A step towards adopting DefStan 00-55
Vicky Stavridou
Digital Retouching on the Telephone
Chuck Dunlop
Info on RISKS (comp.risks)

Police raid wrong house — for second time

David B. Benson <dbenson@yoda.eecs.wsu.edu>
Fri, 11 Oct 91 09:47:50 pdt
Lewiston Tribune/Friday, October 11, 1991, page 6C

Associated Press

FEDERAL WAY, Wash. — King County Police, confounded by a typographical error,
mistakenly descended on the home of Terry and Dean Krussel this week — for the
second time this year.  At least this time they didn't break the door
down.
    When the officers from the narcotics unit raided the Krussel
home in May, they kicked in the door, ordered Terry Krussel, 57, to
get down on the floor and held her at gunpoint while they searched the
house.
    County officials replaced the door at a cost of $2000 and
apologized profusely.
    When the Krussels got a letter from the county prosecutor's
office on Sept. 11, addressed to the person officers had sought in
the May raid, they worried that their address was still on file as
a den of iniquity and dangerous drugs.
    King County police scrambled to delete their address from
the department's computer files, and deputy prosecutor Judith
Callahan assured the Krussels in a Sept. 17 letter of the county's
good intentions.
    "Our office is truely concerned that Mr. and Mrs. Krussel
not feel that they are victims of county bureaucracy," she wrote.
    Unfortunately, the Krussels' address remained in the drug
dealer's file — and that's what the officers pursuing the dealer
Tuesday night were working from.
    The officers didn't leave until Dean Krussel showed them
Callahan's letter.  "This thing just won't go away," he said
after the couple's latest run-in with King County's finest.


Crypto Public Policy (C. Weissman)

<WHMurray@DOCKMASTER.NCSC.MIL>
Fri, 11 Oct 91 09:42 EDT
>The debate is international in applicability.  However, U.S. policy on
 encryption appears most severe, so I urge a U.S. National debate to begin the
 dialog, and start with some questions.

I agree with Clark that debate is indicated.  There is no proper forum for this
debate.  The present policy has ancient origins.  They are older than the Cold
War, though the Cold War has been used to justify them since the National
Security Act of 1947.  The current policy dates from the Great War and was
placed in law without public debate in 1943.  That law, passed in war time, has
been used since to suppress any further debate.

>Do we gain more by strengthening our
 commercial competitiveness and products, upon which the military is
 increasingly dependent, than we lose by permitting international commonality
 in cryptographic services, which may weaken military capabilities?

While it is difficult to state the issue, proper debate requires that it be
stated clearly.  I do not think that Clark's question properly frames it.  I
think that the issue is more one of the trust and confidence required for
commerce than it is one of "competitiveness."  This country needs trade.  The
most efficient way to mediate trade in the modern world is electronically.
Trust and confidence in electronically mediated trade requires secret codes
which both parties can trust.  That is one interest.  I submit that it is far
more compelling than mere "competitiveness."

I also understand the contending issue differently.  Rather than relative
"military capability," the issue is one of the cost of intelligence gathering.
Even in a peaceful world, security requires that we gather intelligence.
Prudence suggests that we gather it not simply about "adversaries," but about
everyone.  History screams that any political instability causes people to
choose sides.  Therefore, it behooves us to know as much as we can about what
is going on in the world.  If the ether begins to fill with "random appearing"
data, the cost of intelligence gathering will rise as a geometric function of
the quantity of that data.  Therefore, the second interest is to discourage
that data to the extent that we can.  It is not simply one of effectiveness; we
cannot hope to discourage all use of secret codes.  Rather it is one of
efficiency; how much can we discourage and at what price.
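
To make the scaling claim concrete (the formalization is mine, not Murray's):
if n measures the volume of "random appearing" traffic in the ether, a
geometric cost model for collection reads

    C(n) = C_0 \cdot r^n, \qquad r > 1

so each added unit of such data multiplies, rather than adds to, the cost of
intelligence gathering.  The efficiency question is then where on this curve
discouraging encrypted traffic is still worth its price.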

Neither of these interests is trivial.  Each is worth defending.  They do
conflict.  To date they have been debated only in secret proceedings.  I am
concerned that in those debates, the latter interest has prevailed and that the
former may not have been properly appreciated.  I do not believe that either
interest will be seriously compromised by a more public debate.

William Hugh Murray, Executive Consultant, Information System Security
21 Locust Avenue, Suite 2D, New Canaan, Connecticut 06840     203 966 4769


Re: Security Criteria, Evaluation and the International Environment

<henry@zoo.toronto.edu>
Fri, 11 Oct 91 14:08:31 EDT
One note of caution here...

>     Criteria for secure time-sharing systems will not "make it" in the
 nineties, but it is not clear that we know enough to write evaluation
 criteria for networks, data bases or applications...

I think I see the Wheel Of Reincarnation operating here in several ways.
Time-sharing systems are passé, but everyone is busy rediscovering the same
old issues in the context of networks, databases, etc.  What, exactly, is the
fundamental difference between a time-sharing system and (say) a heterogeneous
network?  Answer: there isn't one, unless you insist on thinking of
time-sharing systems in terms of a narrow stereotype that has never described
all time-sharing systems.  (As a case in point, note that the Plan Nine
experimental operating system at Bell Labs is aimed specifically at making a
heterogeneous network look pretty much like a time-sharing system.  They're
succeeding fairly well.)  What, exactly, is the difference between the access
controls enforced by a shared database and the ones enforced by a time-sharing
kernel?  Answer: while there is a different flavor to some of it, the problems
and solutions are often very similar.  And so on.
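
Spencer's equivalence can be shown in miniature (the sketch is mine, not his;
all names are hypothetical): the decision procedure a time-sharing kernel
applies to a file and the one a shared database applies to a row can literally
be the same function, with only the guarded objects differing.

    # Sketch: one reference-monitor check mediates both a "time-sharing"
    # file access and a "database" row access.  Illustrative names only.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Subject:
        user: str
        groups: frozenset

    def permitted(subject, acl, wanted):
        """Does any ACL entry for this subject grant the wanted right?"""
        for principal, rights in acl:
            if principal == subject.user or principal in subject.groups:
                if wanted in rights:
                    return True
        return False

    # The "kernel" case: an ACL guards a file.
    file_acl = [("henry", {"read", "write"}), ("staff", {"read"})]
    # The "database" case: the very same mechanism guards a row.
    row_acl = [("payroll", {"read", "update"})]

    alice = Subject("alice", frozenset({"staff"}))
    assert permitted(alice, file_acl, "read")       # time-sharing flavour
    assert not permitted(alice, row_acl, "update")  # database flavour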

"Those who do not remember history are condemned to repeat it."  If we continue
to discard past experience with multi-user systems as obsolete, we will
continue to rediscover issues and make the same old mistakes when building new
multi-user systems.  Criteria for secure time-sharing systems deserve very
careful examination, as much of that experience should be applicable to
networks, databases, applications, etc., given some caution in the presence of
shifts in the underlying concepts.
                                   Henry Spencer at U of Toronto Zoology


Re: Security Criteria, Evaluation and the International Environment

Peter G. Neumann <neumann@csl.sri.com>
Fri, 11 Oct 91 12:10:14 PDT
Ah, but the people who wrote the Orange Book (TCSEC) years ago were thinking
not in terms of generic functionality for trusted distributed systems, but
primarily in terms of isolated-system security kernels.  They wrote the
criteria in an overly-specific manner that makes the applicability to networks
and distributed systems very difficult/uncharted/unclear/...  Nevertheless, the
Red Book tries...  See also the European ITSEC.  But in principle any sensible
operating system concept could be distributed in a nice clean invisible way; in
practice there are LOTS OF PROBLEMS, some of which are indeed different from
the old ones (such as distributed authentication).
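
A small illustration of why distributed authentication is one of the genuinely
new problems (my sketch, not from the TCSEC or the Red Book): on one machine
the kernel knows who the caller is, but across a network identity must be
proven over an untrusted wire, for instance by challenge-response against a
shared secret.

    # Minimal challenge-response sketch.  Real protocols add replay
    # protection, key management, and mutual authentication.

    import hashlib, hmac, os

    secret = os.urandom(16)       # shared out of band between the two hosts

    def challenge():
        return os.urandom(16)     # verifier's fresh nonce

    def respond(secret, nonce):
        return hmac.new(secret, nonce, hashlib.sha256).digest()

    def verify(secret, nonce, response):
        return hmac.compare_digest(respond(secret, nonce), response)

    nonce = challenge()
    assert verify(secret, nonce, respond(secret, nonce))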


Response to "Safer Flying through Fly-By-Wire (Spencer,RISKS-12.45)

Arnd Wussing <AW@PRI-CE.Prime.COM>
11 Oct 91 10:14:36 UT
>The USAF/NASA Advanced Fighter Technology Integration test aircraft is doing
 flight evaluations of a system to help pilots cope with disorientation: push a
 button on the stick and the computer automatically brings the aircraft back to
 level flight.

As an active aerobatic pilot, I've had the experience several times of
complete disorientation: the horizon cannot be interpreted or seen, and the
G-forces acting on the body lead to incorrect conclusions regarding the
attitude of the aircraft.  Although a mechanical device to recover from an
unnatural flight situation would be of immense benefit, the process of
achieving level flight from a given spatial orientation can be quite complex,
involving judgements regarding G-forces, rudder & aileron coordination (or
dis-coordination in some cases), airspeed (both indicated & true), aircraft
red-line and stall characteristics, etc.  These factors can for the most part
be vectorized into a given computer system/program, assuming that *ALL* of the
sensors are functioning correctly.  The consequences otherwise are
devastating: going over red-line and getting flutter due to a partially
blocked Pitot tube; going into an unrecoverable stall because the aircraft
isn't balanced correctly on this flight (perhaps the cargo shifted) and the
recovery software wasn't informed; or stalling because there is icing and the
stall warning is inoperative.
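
Wussing's caveat amounts to a design rule: the recovery system should verify
its sensors before trusting them.  A minimal sketch (mine; the names and the
threshold are invented):

    # Cross-check the sensors an automatic recovery mode depends on, and
    # refuse to engage, alerting the pilot, when they disagree.

    def sensors_consistent(indicated_airspeed, inertial_groundspeed, wind_est):
        """Crude plausibility check for a partially blocked Pitot tube."""
        expected = inertial_groundspeed - wind_est
        return abs(indicated_airspeed - expected) < 15.0   # knots, invented

    def try_auto_recovery(sensors, engage_recovery, alert_pilot):
        if not sensors_consistent(sensors["ias"], sensors["gs"],
                                  sensors["wind"]):
            alert_pilot("AUTO-RECOVERY UNAVAILABLE: airspeed data suspect")
            return False
        engage_recovery()
        return True

    try_auto_recovery({"ias": 90.0, "gs": 140.0, "wind": 10.0},
                      engage_recovery=lambda: None, alert_pilot=print)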

The risks inherent in such a system would be outweighed by the benefits when
an emergency situation occurs, assuming the pilot has no recourse; but the
knowledge that the aircraft is equipped with "a device which will get me out
of any situation" might make a pilot take more risks and thus induce exactly
that situation where the system must be used; somewhat akin to the RISKS
article about the warning signs for the Virginia drawbridge.


Re: Safer flying through fly-by-wire (Schwartz, RISKS-12.47)

Mary Shafer <shafer@skipper.dfrf.nasa.gov>
Fri, 11 Oct 91 08:31:09 PDT
The AFTI/F-16 is a completely instrumented airplane and has several
accelerometer packages.  Level flight just turns into setting a_x and a_y to
zero, with a_n = -a_z = 1.0 g.  You can couple in the rate gyros and set p, q,
and r to zero too.  This is a pretty simple little feedback system.
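
A toy rendering of that feedback law (my sketch; the gains, sign conventions,
and surface mapping are invented, not the AFTI/F-16's):

    # Proportional control driving body rates p, q, r and lateral
    # accelerations toward zero, holding a_n = -a_z = 1.0 g.

    def level_flight_commands(s, k_rate=0.8, k_acc=0.4):
        """s: dict with p, q, r (rad/s) and a_x, a_y, a_z (g)."""
        return {
            "aileron":  -k_rate * s["p"] - k_acc * s["a_y"],
            "rudder":   -k_rate * s["r"],
            "elevator": -k_rate * s["q"] - k_acc * (s["a_z"] + 1.0),
            "throttle": -k_acc * s["a_x"],
        }

    cmd = level_flight_commands({"p": 0.2, "q": -0.1, "r": 0.05,
                                 "a_x": 0.1, "a_y": -0.05, "a_z": -1.3})

As Shafer says, it is a pretty simple little feedback system; the hard part
is the sensing, which the AFTI/F-16 already has.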

>I can't see how this device is better than your basically-trained
>IFR pilot, and it may be worse (mortal failures under strange
>instrument failure modes).

Quinine, in the form of tonic water, doesn't give the accelerometer package
vertigo like it does the pilot's vestibular system (being discussed in
rec.aviation right now).  Accelerometers don't get the leans, either.

Actually all F-16s have similar accelerometer and rate gyro packages, the
AFTI/F-16's are just tied to the instrumentation package as well.  Modern
fighters are somewhat more heavily instrumented than are general aviation
aircraft (which, by the signature, is what the poster, a private pilot, is
familiar with).

The system was first proposed to deal with GLOC (g-induced loss of
consciousness).  The F-16 is notorious for having such a high instantaneous
rate of g onset that pilots in combat are at risk of GLOC.  The question is
really whether this system is better than an unconscious pilot.  (To further
tie this to a thread in sci.military, the F-20 had the same high g onset rate
and many people believe that GLOC led to at least one of the prototype
crashes.)

Mary Shafer  DoD #0362  NASA Ames Dryden Flight Research Facility, Edwards, CA
         shafer@skipper.dfrf.nasa.gov  shafer@pioneer.arc.nasa.gov


Re: Computers and missile control

Eric Prebys, CERN-PPE/OPAL <prebys@vxcern.cern.ch>
Fri, 11 Oct 91 18:56:11 +0100
All technical issues aside, the first (obvious) question that comes to mind is:

  Who gets to design, build, program, install, verify and maintain this system?

If all countries got along well enough to settle that question, the whole
issue would become moot (perhaps that's the real idea).

But another very real question is:

  Would it really be an improvement over the existing situation?

Maybe I'm missing something, but wouldn't it just make Mutually Assured
Destruction even more "mutually assured"?  Unless, of course, the idea
is to give the victim enough time to completely destroy the attacker at
the outset.  In that case, it would be as "realistic" (and a lot cheaper)
to get countries to agree to just blow themselves up if they ever get angry.

What I really don't understand is, if (a huge "if") it WERE possible to
establish central, tamper-proof control over ALL countries' abilities to launch
ALL nuclear weapons (as the article suggests), why not go the one (IMHO small)
step further and make it impossible to launch them at all?  ...maybe through
the use of a "beneficial virus" (just kidding).  Personally, I think it would
be very sad if the world could achieve the sort of trust and cooperation
necessary to implement this system, and still not manage to do away with the
things entirely.
                Eric Prebys, CERN, Geneva, Switzerland


Re: Software migration at Johnson Space Center

<frankston!Bob_Frankston@world.std.com>
10 Oct 1991 22:57 -0400
I can't vouch for the details of the arguments, but this is a good example of
trying to decide the scope of a solution.  Is it more important to maintain a
given system in its own cocoon, or to take the risk of change in order to get
the benefits of what, over a decade, has emerged as a standard?  We can argue
the technical benefits (and I would think that 10 years of change has produced
some improvements, though nowhere near as much as it might have) but there are
larger issues, such as switching into a more cost-effective/price-competitive
market.  There is also the benefit of standardization in terms of being able
to take advantage of common knowledge and tools.

Risks are a necessary part of evolution.  It is important to be aware when one
is taking a risk, and of its consequences, and not to be naive.  But not
taking a risk can be a bigger risk.

Again, I claim no knowledge or insight about this particular instance and
I'll admit to a bias in favor of rampaging PC's.


Re: Software migration at Johnson Space Center

"Doug Burke, Shell Account Spec., Malaysia" <doug.burke@msa.mts.dec.com>
Thu, 10 Oct 91 20:53:05 PDT
I used to use UNIVAC 1100 series computers too.  However, I would like to cast
some doubt on one of the statements, and refute another made under this topic
in RISKS-12.47.

First of all, there were other companies who had well-developed realtime
software processing more than 12 years ago, although perhaps not on a
processor the size of, say, a UNIVAC 1108.  For example, one machine and
operating system that comes to mind is the PDP-11 running RT-11.
Then there is the VAX...

And speaking of the VAX, it is a system sold by another vendor (Digital
Equipment Corporation) which has as large a range of compatible processing
power as the UNISYS 1100 series, if not more.  Since I am a software
specialist, I'll spare the sales pitch...

Doug Burke, Senior Software Specialist, Digital Equipment (Malaysia),


Re: Software Migration at Johnson Space Center (Bouchard, RISKS-12.47)

"Guy J. Sherr" <0004322955@mcimail.com>
Fri, 11 Oct 91 17:24 GMT
>Unisys 1100-series equipment, from the smallest (2200/100, desk sized small
 business system) to the largest (2200/600, big mainframe), runs the same
 software across the entire line with NO modifications required.  Such a large
 range of compatible processing power is unavailable from any other vendor (the
 Unisys A-series has a somewhat wider range).

I must take exception to this.  The VAX family processor will faithfully
execute programming which makes no installation-dependent call, provided that
the VMS linker was used to link it and that the VMS executive is at the same
release point or a later release.  I believe DEC is not owned by Unisys.

Also, will the Unisys equipment take the executing image of the code, or must
the source be recompiled?  The VAX family processor, for example, executes the
exact same executive no matter what model it runs on.  Actually, I do recall one
release of VMS where that was not the case, but then DEC fixed it anyway.

Guy Sherr, Lab Configuration Mgr, MCI Reston, VA 0004322955@mcimail.com


Human error: once more, with feeling

Don Norman <norman@cogsci.ucsd.edu>
Fri, 11 Oct 1991 07:40:10 -0800
Perhaps our moderator, Peter Neumann, should just keep a copy of this on
hand and reissue it as needed.  This is long, but needed periodically.
(Maybe Peter should add a brief version to the masthead of RISKS!)

The real RISK in computer system design is NOT human error.  It is designers
who are content to blame human error and thereby wash their hands of
responsibility.

In RISKS:
   The AT&T failure
   The truck driver and the bridge

In Aeronautics Digest 3.22  (Oct. 10, 1991)
   Traffic collision avoidance system failures:  the Federal Aviation
Administration (FAA) ordered a shutdown of 200 of the 700 units that had
been installed.  The 200 systems were seeing phantom aircraft and
instructing pilots to evade planes that simply were not there.
  "We had a simple human error where an engineer misclassified the changes
in the software"

Human error is almost always a result of system and design error.  It has
to be taken into account in the design and in the work procedures.

Lots of people in RISKS have proposed design procedures that will help.
Even the manufacturer of the TCAS system (in the last incident above) said:
. To prevent similar omissions, Collins now requires that a committee of
. software engineers review changes before a program is released.  "More than
. one pair of eyes must review these things and make a decision"

That will not guarantee correctness (if, for example, the specifications
are incomplete or inappropriate — as they almost always are — the
committee will simply verify that the program meets the wrong
specifications) but it will help.  Committees are also subject to various
kinds of group decision processes that sometimes propagate errors.  It is a
first step, but it still does not indicate that the designers are sensitive
to the nature of error and will take design pains to avoid it.

Example: if only the truck driver had been attentive, the accident would not
have happened.  True. But also if only the signs had been working, or if the
procedures required traffic to stop elsewhere, or if only the drawbridge hadn't
been raised.  In any accident, there are always dozens of "if onlys".

NO HUMAN IS 100% ATTENTIVE.  Designers assume perfect human attention, which is
fallacious.  (My restatement is that humans are excellent at switching
attention among competing demands.  Alas, the demands of modern technology are
not always compatible with the evolutionary structure of the human.)  The
design error is ascribing inappropriate properties to humans and assuming they
can perform in ways that are foreign and unnatural — truly, biologically
determined, "hard-wired," unnatural.

We design to allow equipment to work in the face of noise and even
component failure, certainly in the face of out-of-tolerance components.
We should do the same for people.  It is no excuse to blame training,
attention, attitude, or "human nature."  These things happen so much that
they have to be designed for.   And we even know how to do so.  The real
problem is the attitude of the design community, even among those who read
RISKS.
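
One concrete instance of designing for out-of-tolerance people (my example,
not Norman's): make a destructive operation require explicit confirmation and
make it reversible, so that a single attentional lapse cannot complete it.

    # Treat operator input as noisy, the way we treat sensor input.
    # Invented names, for illustration only.

    def delete_records(records, confirm, trash):
        """confirm: callable returning True only on an explicit 'yes'."""
        if not confirm(f"Delete {len(records)} records?  Type 'yes':"):
            return records            # a lapse leaves the system unchanged
        trash.extend(records)         # recoverable, not destroyed
        return []

    trash = []
    left = delete_records([1, 2, 3], confirm=lambda msg: False, trash=trash)
    assert left == [1, 2, 3] and trash == []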

The other problem is the training of the design community: engineering and
computer science departments teach technology, program verification, and the
like, but offer no expertise in human and social issues.  Computer scientists
cannot
turn overnight into social scientists, nor should they. The design of systems
for people requires design teams consisting of computer scientists, cognitive
and social scientists, (and representatives from the user community).

Technology alone cannot provide the answers when we deal with human activities.

"What has this to do with computer science?  Nothing, directly, but
indirectly it means a lot.  The same computer that makes so much possible,
also sets up the conditions for human error.  And if this is not
understood, the systems will fail, and the failure will be blamed on "the
computer" or even on "those computer programmers and scientists."
(rephrased from Norman, in press)

SEE:
Perrow, C. (1984). Normal accidents.   New York: Basic Books.

Norman, D. A. (1990). Commentary: Human error and the design of computer
systems. Communications of the ACM, 33, 4-7.

Norman, D. A. (in press, 1991). Collaborative computing: Collaboration
first, computing second. Communications of the ACM, 34.

Donald A. Norman, Department of Cognitive Science, University of California,
San Diego La Jolla, CA 92093-0515                        dnorman@ucsd.bitnet


Re: AT&T (RISKS-12.47)

<colwell@ichips.intel.com>
Fri, 11 Oct 91 10:40:45 -0700
   Some people seem to want to blame human weaknesses for the AT&T failure, other
   people seem to want to blame the technology.  What I haven't seen anyone point
   out is that, every time AT&T (or most other people) does something to "improve"
   their system, they end up more and more centralized.

Actually, that's partly what I was trying to point out in my earlier post.
When you install the same software (do they still call them "generics"
inside Bell?) everywhere, you have an implicit single-point-of-failure
across the whole network. Yes, when they route too much through a physical
single point of failure, that's bad, and they know it (or should).

But it appears to me that

   - the historical system availability target of 2 hrs outage in 40 years
     (see the arithmetic sketched after this list) is no longer being met,
     even though it once was with much lower-tech hardware

   - the reason may be related to this implicit single-point-of-failure not
     being made explicit in the way the code is written, or the development
     project is run.
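
For perspective, the arithmetic behind that historical target (my calculation
from the figures in the first bullet, not AT&T's):

    \frac{2\ \mathrm{h}}{40 \times 365 \times 24\ \mathrm{h}}
        = \frac{2}{350400} \approx 5.7 \times 10^{-6}

that is, about 99.9994% availability, or roughly three minutes of expected
downtime per year.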

Perhaps the attitude they took with the Space Shuttle computers needs to be
transferred to the phone company. Yes, do a superlative job in programming
the four on-board computers, write it to the most exacting specifications,
then test the heck out of the code. But oh-by-the-way, here's a fifth
computer with completely alien hardware AND SOFTWARE in order to obviate
any implicit, unanticipated, yet catastrophic single-point-of-failure
modes.

I don't know that this solution can or should be adopted wholesale; but the
shuttle designers' frame of mind, the recognition that this class of problems
exists, is what appears to be lacking in the current design of the phone
system.
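
To make the shuttle analogy concrete, here is a toy rendering of dissimilar
redundancy (my illustration, not AT&T's or NASA's design): two independently
written implementations of one routing decision, cross-checked, with a
conservative action on disagreement.

    # Identical bugs are unlikely in independently coded implementations,
    # so a disagreement signals trouble.  Names invented for illustration.

    def route_primary(links):
        """Primary implementation: lowest-cost usable link."""
        usable = [l for l in links if l["up"]]
        return min(usable, key=lambda l: l["cost"])["id"] if usable else None

    def route_backup(links):
        """Independently written backup: different style, same spec."""
        best, best_cost = None, float("inf")
        for link in links:
            if link["up"] and link["cost"] < best_cost:
                best, best_cost = link["id"], link["cost"]
        return best

    def route(links, fail_safe="HOLD_AND_ALARM"):
        a, b = route_primary(links), route_backup(links)
        return a if a == b else fail_safe   # disagree => conservative act

    links = [{"id": "NYC-CHI", "up": True, "cost": 3},
             {"id": "NYC-ATL", "up": True, "cost": 5}]
    assert route(links) == "NYC-CHI"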

Bob Colwell, Intel Corp.  JF1-19, 5200 NE Elam Young Parkway, Hillsboro, Oregon
97124     colwell@ichips.intel.com  503-696-4550


man-machine interface (was AT&T outage)

Mark Seecof <marks@capnet.latimes.com>
Fri, 11 Oct 91 11:13:06 -0700
I think we're in danger of missing a key element in the AT&T outage.  Yes, the
technicians were lax (if understandably so); yes, AT&T had routed too much
stuff through the one switch w/o any backup path (which I think was the chief
screwup); yes, the alarm system was inadequate (which AT&T has promised to
address).

But the real problem is that the power system and its alarm system were
designed under the assumption (now vitiated) that technicians would be there to
supervise it.  Recall that AT&T says the rectifier failure was discovered only
when a technician happened upon an alarm registering at a location (away from
the power equipment) which was not ordinarily manned.  That location would have
been manned before AT&T riffed many of its technicians.  The alarm system was
not adequate to alert the present human supervisory regime-- perhaps the old
technicians should have been kept on the job until AFTER the promised new alarm
system was installed?
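
The design rule implicit in Seecof's account is that an alarm should know
whether its intended audience still exists.  A minimal sketch (mine; all
names invented):

    # An alarm that escalates past an empty room instead of glowing in it.

    def raise_alarm(message, panel_location_manned, light_panel, page_on_call):
        light_panel(message)              # what the old system did
        if not panel_location_manned:     # the step the outage suggests
            page_on_call(message)         # escalate to a human who exists

    raise_alarm("RECTIFIER FAILURE",
                panel_location_manned=False,
                light_panel=lambda m: print("panel:", m),
                page_on_call=lambda m: print("pager:", m))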

Beefing up drills as suggested by some is an inadequate response to the
design-constraint/reality gap evident in the description of the AT&T setup.
As hard as it is to get the humans to meet the needs of the system, or the
system to meet the needs of the humans, if we don't try to match them at the
interface as best we can, failure is certain.  Building a machine which needs a
supervisor, then firing that supervisor and expecting all to be well is
foolish.
                         Mark Seecof <marks@latimes.com>


Re: "AT&T `Deeply Distressed' (Colwell, RISKS-12.43)

Bob Niland <rjn@hpfcso.fc.hp.com>
Fri, 11 Oct 91 12:31:23 mdt
>This seems equivalent to the question of how much override a pilot of a
 fly-by-computer airplane should be able to exert; when the flight computer
 refuses to pull too many G's because the wings may overstress, but the
 pilot knows he'll hit a mountain otherwise, it's a bit clearer who should
 outrank whom.

Perhaps in that specific case, but in the general case it's not that clear.
I haven't studied the statistics (if anyone even has any along these lines),
but what if the data show that more people die because the crews override
when they shouldn't than because they can't/don't override when they should?

We have already had a couple of Airbus losses in which a suspected cause is
the crew inappropriately overriding the flight computer and riding the
aircraft into the ground (e.g.  Toulouse airshow).  Have we lost any because
the crew failed to override?  Have we lost any other air transport types
because of inability to override?

Speaking as a pilot myself, emotionally, I always want to have total
authority over the craft, but if statistically I am more likely to live
longer by not having (or at least not exercising) that authority, my
preference is not completely obvious.

Perhaps the
  "automatic | manual"
override switches need to have big legends above those descriptions, stating
  "PROBABLY  | USUALLY
   SURVIVE   | PERISH "

Bob Niland, 3404 East Harmony Road, Ft Collins CO 80525-9599
     Internet: rjn@FC.HP.COM       UUCP: [hplabs|hpfcse]!hpfcrjn!rjn


Keeping people in the loop (Bellovin, RISKS-12.45)

Martyn Thomas <mct@praxis.co.uk>
Fri, 11 Oct 91 12:47:09 +0100
We could make the humans the prime operators, and use the computers as a
back-up. This preserves the motivation - no one wants to be caught making
mistakes - and gives many of the desired benefits. Of course, we still
cannot predict the reliability of the overall system, but that's another
problem :-(
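
A sketch of the inversion Thomas proposes (mine; names invented): the human
acts, and the computer cross-checks and advises rather than commands.

    # "Human primary, computer backup": the operator's action stands, but
    # an automated checker raises advice on disagreement.

    def operate(human_action, compute_recommendation, warn):
        recommended = compute_recommendation()
        if human_action != recommended:
            warn(f"checker would have chosen {recommended!r}")
        return human_action       # the human stays in command

    operate("open-valve-B", lambda: "open-valve-A", warn=print)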


A step towards adopting DefStan 00-55

Vicky Stavridou <victoria@cs.rhbnc.ac.uk>
Thu, 10 Oct 91 07:09:40 BST
Although 00-55 is an interim standard, it seems that there is real progress
towards its development and eventual adoption. About a year ago, we produced a
VDM specification of the safety requirements for an ammunition control system
(ACS) which is used by the Directorate of the Proof and Experimental
Establishment of the MOD for managing the ammunition holdings of some ranges.
I understand that the appropriate MOD authority intends to issue our
specification as a part of the Operational Requirements draft for the next
generation of the system.  I believe that the intention is to provide an
improved statement of the safety requirements during the tendering process.
Although this is a long way from full application of 00-55/56, it is certainly
an encouraging and a very welcome step in that direction.

We have a technical report for anyone who is interested.

Victoria Stavridou

PS. If you want to follow up on this topic, please email me direct because our
news server is down at the moment.


Digital Retouching on the Telephone

<Chuck.Dunlop@ub.cc.umich.edu>
Fri, 11 Oct 91 01:28:16 EDT
The latest Hammacher Schlemmer catalog advertises a "Voice-Changing Telephone"
that
      uses digital signal processing technology to realistically alter
      the sound of the user's voice, even changing male speech to female,
      child to adult and vice-versa, to completely disguise identities
      and discourage unwanted calls.  Perfect for people living alone
      or children at home by themselves . . .

Yes, and perfect also for abusive or threatening telephone calls, imposters'
scams, and sexual harassment.
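
For the curious, the crudest form of such voice alteration is plain
resampling, which shifts pitch but, unlike the advertised product presumably,
also changes duration.  A toy sketch, certainly not the vendor's method:

    # Naive pitch shift: linear-interpolation resampling of a sample list.
    # Real voice changers preserve duration and formants; this toy doesn't.

    def resample(samples, ratio):
        """ratio > 1 raises pitch (and shortens); < 1 lowers it."""
        out, pos = [], 0.0
        while int(pos) + 1 < len(samples):
            i, frac = int(pos), pos - int(pos)
            out.append((1 - frac) * samples[i] + frac * samples[i + 1])
            pos += ratio
        return out

    deep_voice = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5] * 4
    childlike = resample(deep_voice, 1.5)   # up roughly a fifth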

Even if used in the way that the advertisement suggests, some peculiar
scenarios emerge.  E.g.,

      Deep Male Voice:  Mommy and Daddy aren't home right now.
