The RISKS Digest
Volume 12 Issue 45

Wednesday, 9th October 1991

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

TCAS — good news / bad news
Martin Minow
PGN
Safer flying through fly-by-wire
Henry Spencer
Friendly (?) viruses
Paul Smee
Computers and missile control
Walt Thode
Re: Known plaintext attacks
Ted Rodriguez-Bell
Clive Feather
Re: AT&T "Deeply Distressed"
Steve Bellovin
Bob Colwell
Re: Schiphol Airport
Henry Spencer
Re: RISKS of Highway warning signs
K. M. Sandberg
Joe Morris
Dominic G. Flory
Michael Cook
Risks of computerized typesetting
Paul Wallich
Joe Smith
Re: Ada Code Formatters (or the dangers of old software)
Kent Mitchell
Re: Computerized typesetting and character sets
Richard S. D'Ippolito
Info on RISKS (comp.risks)

TCAS — good news

Martin Minow <minow@ranger.enet.dec.com>
Sun, 6 Oct 91 06:31:41 PDT
From the Associated Press, via the Boston Globe, Sunday, Oct. 6, 1991, in full:

    3-way Plane Crash Averted in Illinois.

Chicago - An error by air traffic controllers nearly caused three passenger
jets to collide Thursday, the Federal Aviation Administration said Friday.  A
warning from one of the planes' safety systems and a quick turn by the pilot
averted disaster near Midway Airport, an FAA spokesman said. The near-collision
involved a Southwest Airlines Boeing 737, a Northwest Airlines DC-9, and a
Midway airlines DC-9. The Midway pilot saw the Southwest plane after a warning
from his plane's Traffic Alert and Collision Avoidance System, the FAA said.
(AP)

Martin Minow    minow@ranger.enet.dec.com


Air Controllers Blast New Safety System

"Peter G. Neumann" <neumann@csl.sri.com>
Wed, 9 Oct 91 10:09:57 PDT
Air traffic controllers urged Congress yesterday [08 Oct 91] to suspend
installation of the new Traffic Alert and Collision Avoidance System in the
cockpits of commercial aircraft, contending that it causes chaos in control
towers.  But Federal Aviation Administration officials said problems with the
system — including a tendency to warn of ``phantom'' aircraft that do not
exist — either have been resolved or are near resolution.  Barry Krasner,
president of the national Air Traffic Controllers Association, told the House
Transportation Subcommittee on Investigations that from May to September,
pilots and controllers reported 590 incidents of malfunctioning involving the
alarms, which are designed to warn pilots when they are on a potential
collision course with another aircraft.  [San Francisco Chronicle, 09 Oct 91,
p.A5]


Safer flying through fly-by-wire

<henry@zoo.toronto.edu>
Tue, 8 Oct 91 01:13:27 EDT
Interesting minor item in the August 5th issue of Aviation Week:

The USAF/NASA Advanced Fighter Technology Integration test aircraft is doing
flight evaluations of a system to help pilots cope with disorientation: push a
button on the stick and the computer automatically brings the aircraft back to
level flight.
                    Henry Spencer at U of Toronto Zoology  utzoo!henry

        [That would have done wonders for Jonathan Livingston Seagull.  PGN]


Friendly (?) viruses

Paul Smee <P.Smee@bris.ac.uk>
Tue, 8 Oct 1991 09:42:21 GMT
Some years ago, a similar discussion took place in comp.sys.atari.st — would
it be sensible to write ST 'benign viruses' to 'fix' known bugs in the
operating system by having them patch themselves into the system call vectors.
Thankfully, we managed (I think) to convince people it was a bad idea.  One
very major problem, of course, is 'which version of the OS?'  How do you tell
the thing to stop when a bugfixed version of the OS is installed in the
machine?  How do you teach it about the various national versions of OSes?  How
do you prevent it from interacting destructively with user programs which knew
about the OS bugs and contain their own workarounds?

(Of interest in many real-world situations, how do you keep it from interfering
with old applications which actually rely on the bug to make them work?  I know
that's a bad habit, but there are a lot of businesses running old versions of
OSes on old machines for precisely that reason.)

I maintained (and still do) that it is an (at least) antisocial act to cause
anything to run on my machine without my knowledge, and with no action on my
part to indicate that I want it.  I cannot think of ANY useful piece of
software of ANY sort — even standard commands — which does not have the
potential for screwing things up amazingly in SOME contexts.  The possibility
that I might have to debug a machine which is running some benign monster that
has snuck itself on, and that no-one asked for and no-one is aware of, is
frightening — even if it is implemented 'correctly', never mind the
possibilities of error.

I am not up on the state of US law, but over here even such a 'benign virus'
would be a violation of the law.  I believe that 'inciting someone to break a
law' is itself an offense.


Computers and missile control

Walt Thode <thode@nprdc.navy.mil>
7 October 1991 1306-PDT (Monday)
PENTAGON CONSIDERING LAY ANALYST'S PEACE PATENT
By PAIGE St. JOHN
Associated Press Writer

     TRAVERSE CITY, Mich. (AP) [7Oct91] Raoul Revord's international missile
control system reads so much like science fiction the patent papers include a
reference to a "Star Trek" episode. But the Defense Department is listening.
The Pentagon has assigned Revord's Mutual Missile Control System to a strategic
weapons group for study — making the Marquette, Mich., attorney a rare breed of
lay defense analyst.
   [general paragraphs about DoD unsolicited proposals omitted - wt]
   Revord's patent, No. 5,046,006, lays out a plan for a central computer to
control the nuclear arsenals of adversaries.  Essentially, the computer gives
first-strike rights to the intended victim. And it blows up, on the spot, any
missile a country tries to tamper with.  "They all say they want to do away
with nuclear weapons. What they need is a way to do it," he said Friday.  "If
they mean what they say, then all we need to do is show them a way."
   Revord sent his technical document to the Defense Secretary in early
September, and it was forwarded to the Undersecretary for Defense Acquisitions.
That office assigned it for study, and the staff has until Oct. 25 to respond.
   Pentagon spokeswoman Jan Walker said it isn't all that unusual for someone
like Revord — who lives in the remote woods of Michigan's Upper Peninsula —
to put together a nuclear defense system the Pentagon would consider. But she
added that most of the ideas are not developed enough to be taken seriously.
Revord, who designed corporate security systems before becoming a lawyer, said
he used his security systems background to tackle the problem of nuclear war.
   In 1987, he got his answer: the control computer.
   When one country tries to fire its arsenal, the computer would first alert
the target. Then it would wait, perhaps for 20 minutes or so, to give both
sides time to back out.  If the attacking country still wanted to fire, the
computer instead would unlock the launch codes for the intended victim to fire
first.  "I focused on that, to make so devastating the penalty that it would
take away the initiative to risk making the first strike," he said.
   Revord had an engineer map out the schematics. The U.S. Patent Office
approved the patent, but noted it somewhat resembled a "Star Trek" episode.
The episode, "A Taste of Armageddon," featured weaponless wars where
governments ordered the executions of their citizens when the enemy announced
attacks.
   Even if the Pentagon says no, Revord is proud of his peace patent.  "I'll
still sleep good at night," Revord said.  "I've spoken how I feel."

I won't bother mentioning the obvious risks.  I did like the reference to the
"Star Trek" episode, although I wondered if the reference shouldn't instead
have been made to an old movie that I think was titled "Colossus: The Forbin
Project."  In that one, the world's computers got intelligent, banded together
and took control of weapons (and the rest of the world) away from humans.
That's not the risk I'd worry about, but it's probably more appropriate than
the one in "A Taste of Armageddon."

Walt Thode    thode@nprdc.navy.mil     {everywhere_else}!ucsd!nprdc!thode


Re: Known plaintext attacks (RISKS-12.43)

Ted Rodriguez-Bell <ted@goldberry.uchicago.edu>
Tue, 8 Oct 91 16:04:23 CDT
In Risks Digest 12.43, D.W. Roberts quotes the London Daily Telegraph:
>[A known plaintext attack] helped the Americans to win the Battle of Midway in
>1942.  An American base radioed falsely that its water supplies had broken
>down.  The Japanese then reported the message in a cipher.  The Americans
>simply compared the two texts and learned to read secret enemy traffic.

It was more complicated than that.  The U. S. Navy was reading the Japanese
code all along.  They got the battle plans for an attack on a location like
Point PH.  They knew that the location was based on a map grid, but they
weren't sure that the grid point was actually Midway.  The island's garrison
sent a message in the clear that their fresh water generator had failed; it was
duly intercepted by the Japanese and reported back to headquarters.  It was
the fact that the code was already being read that allowed American
cryptanalysts to attach a meaning to Point PH.  There was a big difference
between breaking JN-25 and confirming that those two letters meant Midway
Island.  I would not be surprised if the practical difficulties of cracking DES
have been similarly understated.

This is not to say that known-plaintext attacks don't work.  Towards the
end of World War I, Allied intercept services were able to read messages in the
German Army's low-level code quite easily; guessing the messages was one of the
ways they got in.  The code was changed at midnight every night, and some radio
operator would invariably send out test messages containing proverbs like "The
early bird gets the worm."  There was also usually someone who didn't get the
new code, and so a few messages would be repeated verbatim in the old (solved)
one.  More examples can be found in my source for these: David Kahn's
fascinating and thorough _The_Codebreakers_.

[  You may want to delete this to save space.  ]

On the topic of the Battle of Midway, only marvelous luck got the Japanese
battle plans read.  The Japanese had planned to change to a new version of
their code on April first.  A submarine that was transporting new codebooks ran
aground and was sunk; the newly distributed books were recalled and a new set
had to be printed.  That was supposed to go into effect on May first.  The
effective date was postponed until June first because the new codebooks could
not be distributed on time.  The change did take effect on June first, but by
that time the plans were laid and the ships were at sea.  The attack on Pearl
Harbor could not have been detected the same way.  The ships involved were
docked when plans were being formulated, and communications were done by
telegraph.

Ted Rodriguez-Bell, U. of Chicago Astronomy ted@borgil.uchicago.edu


Re: Demise of DES (Roberts, RISKS-12.44)

Clive Feather <clive@x.co.uk>
Tue, 8 Oct 91 7:43:50 BST
Dave Roberts <dwr@datasci.co.uk>, quoting the Daily Telegraph, says:

| [A known plaintext attack] helped the Americans to win the Battle of Midway
| in 1942.  An American base radioed falsely that its water supplies had broken
| down.  The Japanese then reported the message in a cipher.  The Americans
| simply compared the two texts and learned to read secret enemy traffic.

This is not what happened. At this point, the cipher had been mostly broken.
However, map references were enciphered in a separate system, and then
inserted into the message, and the cryptanalysts had not completely broken the
map system. In particular, they did not know where AJ was, and they did not
know the coordinates of Midway. They decided not to take the risk of assuming
the one was the other, but to get the Japanese to confirm it!

Instructions were sent (in cipher) to the base at Midway, telling them to radio
in clear [unenciphered] a message stating that their desalination plant was out
of order. A few days later, a deciphered Japanese message stated that "AJ is
short of water".   [Source: Kahn - The Codebreakers]

Clive D.W. Feather     | IXI Limited
clive@x.co.uk          | 62-74 Burleigh St.
Phone: +44 223 462 131 | Cambridge   CB1 1OJ
(USA: 1 800 XDESK 57)  | United Kingdom


Re: AT&T "Deeply Distressed" (Colwell, RISKS-12.43)

<smb@ulysses.att.com>
Tue, 08 Oct 91 13:25:19 EDT
     It's clear that the human component of the AT&T outage was
     what caused the outage. That's the link that needs more
     attention in the engineering and in the analysis of failure.

Yes.  Yes, but.  Many of the problems we discuss on RISKS are caused by
attempts to engineer humans out of the loop.  People make mistakes, get bored,
get distracted, panic, etc.  Machines don't have any of those failings.  So we
build an automated system that doesn't have any of those problems — except
that it was built by people who make mistakes, get bored, etc.

It's not hard for me to envision a scenario a few years hence where some long
distance network melts down (please, I hope it's not AT&T's again...)  because
someone misplaced a ``break'' statement in the automatic warning system that
was intended to check the power alarms.  And the human backup checks won't be
meaningful because no one will really look very hard; after all, the computer
is doing the real job, and probably getting it right most of the time.  We've
discussed this many times, especially in the context of fly-by-wire systems,
but it's a universal problem.  Not that I know what to do about it — we can't
build automated systems that are reliable enough (and flexible enough) that we
can dispense with people, we can't build systems that use humans as a backup
because that doesn't work, and we don't want to rely on purely manual systems
because of the mistakes that the operators make....
                                                --Steve Bellovin


AT&T "Deeply Distressed" (RISKS-12.43)

<colwell@ichips.intel.com>
Tue, 8 Oct 91 10:51:34 -0700
Yes, I agree, it's possible to err on the other side, and (arguably) it may
even be worse to do that. This seems equivalent to the question of how much
override a pilot of a fly-by-computer airplane should be able to exert; when
the flight computer refuses to pull too many G's because the wings may
overstress, but the pilot knows he'll hit a mountain otherwise, it's a bit
clearer who should outrank whom.

It just struck me that the AT&T exec might have been exhibiting an attitude
that worries me a lot — "doggone humans, there they go again screwing up
our system, where do we get these people, ah well, back to business." That
attitude won't result in a better system.

Phones are phones; that's mostly just lost money for somebody (save 9-1-1,
and even there the danger to the population is limited). It's the nuclear
reactors and chemical plants that worry me. Maybe we should start a
research colloquium entitled "How To Design Very Dependable Systems With
Wacko Humans In The Loop."

Bob Colwell  colwell@ichips.intel.com  503-696-4550
Intel Corp.  JF1-19  5200 NE Elam Young Parkway  Hillsboro, Oregon 97124

PS. I worked at BTL for a few years, and I know a little about the system
reliability goals and their previous track record. Remember the old TelCo
saw that "the most reliable switcher is tended by a man and a dog; the man
is there to feed the dog, and the dog is there to keep the man away from
the equipment." Is it my imagination, or have the risks increased that a
common-mode failure in the ESS software will take down an entire network,
and very quickly at that? Bell Labs used to worry a lot about how to design
a switcher to meet certain availability goals; it seems to me a
qualitatively different endeavor to design switcher software to meet
certain *network* availability goals. Is Bell approaching it that way?


Re: Schiphol Airport

<henry@zoo.toronto.edu>
Tue, 8 Oct 91 14:08:01 EDT
>  Confusing?  Definitely.  Solution?  Perhaps using distinct names for
>the piers instead of letters would be the best method.  I seem to recall
>one airport which used color coded piers, such as Red pier and Green pier...

One has to avoid getting carried away with the possibilities of colors,
however, bearing in mind that some modest fraction of the population is
partially color-blind.  For example, however tempting it might be to
save screen space by simply coloring the symbols on color monitors, it
really is necessary to spell out the names.

There is also some potential for humans to interpret the color in their minds
and confuse it with a similar color, even if the colors are used only as names
and there is no actual use of color-coding.  This isn't much of an issue for
red and green, but when you start getting beyond the first half-dozen colors it
may become a problem.
                          Henry Spencer at U of Toronto Zoology   utzoo!henry


Re: RISKS of Highway warning signs (RISKS-12.44)

K. M. Sandberg <kemasa@ipld01.hac.com>
8 Oct 91 19:42:55 GMT
The accident, IMHO, was in no way caused by the signs. The simple fact is
that the truck driver was not paying attention, and that was the cause (based
on what was said above). The cars could have been stopped for many reasons
other than the bridge: an accident, a road block to pull objects from the
road, or anything else. The sign helps to warn those not paying attention,
but it does not (and should not) excuse people from paying attention. I have
seen cars stuck in the road in the dark with no lights on, no emergency
flashers, or anything else, but that does not mean that I would bear no
responsibility if I hit them, as I should be looking for objects in my path.
Be warned: objects sometimes stop without notice.

>The obvious fix here is that if the signs are broken or not notified in
>time, the bridge should not be allowed to raise.
>                                                       J.B.Hofmann

The obvious fix is for *people* to pay attention, not to add more and more
"fail-safe" systems. Otherwise, how about a radio transmitter that kills the
ignition of all normal (non-emergency) vehicles in an area where an accident
has occurred?  It would prevent any additional accidents and save lives. :-)
                         Kemasa.


RISKS of Highway warning signs (Hofmann, RISKS 12.44)

Joe Morris <jcmorris@mwunix.mitre.org>
Tue, 08 Oct 91 14:03:24 -0400
In RISKS-12.44, Jim Hofmann reports on a highway accident on the Woodrow
Wilson bridge in which warning signs were not activated to tell drivers
that the bridge was being raised, and Peter Neumann added some notes from
news reports.  A couple of points seem to have been missed...

(1) The bridge is located in a cusp of political jurisdictions.  One end
of the bridge is in Virginia, the other is in Maryland, and just for fun,
the District of Columbia is responsible for the bridge itself.  Can you
say "finger pointing"?  I thought so.

(2) According to news reports (and I have no other source of info on this)
all of the signs on the approaches to the bridge are controlled from a
central site in Virginia.  This includes the signs on the Maryland side; the
accident occurred in the Maryland-to-Virginia lanes.  The Virginia center
normally closes at 9:30 PM (daily?) but the staff there says that they
will stay on duty until after midnight if they have been notified that
the bridge will be opened.  The signs themselves are just text messages —
if there are any flashing lights or such to draw attention to them I
haven't seen them.

(3) The bridge is part of the Washington Beltway, which itself is part
of Interstate 95...an Interstate highway with a drawbridge.  Since the
traffic in this area is difficult to describe without using words which
would be unacceptable to Miss Manners, it takes little imagination to see
that bridge openings for any reason — commercial freighters or pleasure
craft — are not appreciated.  Thus there are significant restrictions on
when the bridge will be opened which weren't in place when the bridge
was built (many years ago) or when the sign system was designed (somewhat
more recently).

Part of the problem, of course, is having a drawbridge on an Interstate
highway, especially one which is a major north-south corridor for trucks.
That's hardly a computer RISK.

The other part is the all-too-familiar problem of changing part of a system
without realizing that other parts of the system are affected.  In this
case, the severe restrictions on bridge openings (more severe on pleasure
craft than on commercial freighters) make the late-night openings more
likely, but the control of the signs was left in the hands of a facility
that normally closes in the evening.

The signs in question, incidentally, are located some ways back from the
bridge, so that if you see that the bridge will be open you've got a chance to
get off at the last exit before the bridge.  There's been no indication of any
malfunction (or nonfunction) of the overhead flashing lights (about 1/4 mi (?)
before the traffic gates) or the gates and red lights at the draw itself; these
are directly controlled by the bridge tender and for years were the only
warning devices.
                                        Joe Morris


Re: RISKS of Highway warning signs

<dusty.henr801e@xerox.com>
Tue, 8 Oct 1991 12:28:28 PDT
> The implication is that had there been no signs, the driver might have been
> more cautious.  But since there WAS a sign and it was not flashing a warning
> signal, the driver did not slow down.

The risk here was not that the signs weren't working; the risk was the truck
driver obviously driving recklessly.  There's too much of a tendency to blame
something or someone else when things happen.  Bottom line is, if the trucker
had been driving safely, the accident WOULD NOT HAVE HAPPENED, sign or no sign.

dusty flory


RE: RISKS of Highway warning signs

"GVA::MLC" <mlc%gva.decnet@consrt.rockwell.com>
8 Oct 91 12:58:00 PST
Here in Cedar Rapids, Iowa, interstate highway I-380 passes through the
downtown area.  The highway has a large "S" turn in it to avoid tall buildings
that already existed when the highway was built.  During the winter months,
snow and ice on the highway cause accidents occasionally as drivers drive
faster than conditions allow.  A few years ago, it was proposed to install
electrically illuminated signs over the roadway that would warn when "icy
conditions may exist".

As I remember, there were discussions about who would be liable when a car had
an accident in bad weather when the sign was off.  The interstate is a federal
highway, but running through a municipality.  The wording of the sign itself
was discussed.  Who would be responsible for determining when the sign was
turned on?  What if the sign were turned on, but didn't illuminate properly?
What if an accident occurred due to icy conditions, but not on the portion of
the highway that was regulated by the sign?

The sign is in place.  I have seen it illuminated at times.  But I'm not sure
how the liability issues were resolved.  As far as I know, there have been no
lawsuits (yet) regarding accidents on that portion of the interstate highway.
At least some front-end thinking did take place.

And of course, in many locations there will be permanent signs that are
obscured in one way or another, whose visibility or lack thereof will play a
part in an accident.

Michael Cook   Internet: MLC%GVA.DECNET@CONSRT.ROCKWELL.COM


Risks of computerized typesetting

Paul Wallich <well!pwallich@fernwood.UUCP>
Thu, 3 Oct 91 15:13:14 pdt
Fonts with slightly different characters are a subtle problem, but there are
more egregious risks of computer typesetting as well. Excerpts from a memo
follow:

"Please be advised we have experienced system related software problems with
Quark Xpress during the November issue . . . We are reinstalling software ...
this may cause some articles/departments to reflow from the current proofs..."

Translated, that means that lines will break differently on the printed page
than on the screen. If you're lucky, you end up short; otherwise the last few
lines of an article vanish mysteriously. It may be possible to catch this on
real proofs, but for a magazine (or, even worse, newspaper) just catching it is
not enough, as there may be no way to fix the problem in the time remaining
before press date.


Re: Risks of computerized typesetting

Joe Smith <jms@tardis.Tymnet.COM>
Thu, 3 Oct 91 18:16:56 PDT
In regard to having backquotes (`) come out the same as apostrophes ('), I have
been consistently confounded by the Courier font built into the PostScript
cartridge of the HP LaserJet printer.  The two characters do show up as two
distinct glyphs, but only under a magnifying glass.  They are both tilted about
30 degrees clockwise of vertical.  The only difference is that the apostrophe
is 10% thicker at the top, and the backquote is 10% thicker at the bottom.

I fear that this set of indistinguishable characters is the standard straight
from Adobe.

Joe Smith (408)922-6220, BTNA Tech Services TYMNET, PO Box 49019, MS-C51
San Jose, CA 95161-9019  SMTP: jms@tardis.tymnet.com


Re: Ada Code Formatters (or the dangers of old software) (RISKS-12.41)

"Kent Mitchell" <KDM@rational.com>
Fri, 4 Oct 91 08:40:15 CDT
I read with interest the report in RISKS-12.41 on the risk of Ada
pretty-printers.

While this unfortunate mis-formatting did happen in an older release, the
Rational Environment Ada editor has not exhibited this behavior for some time.
The Environment now (for the past three releases I could test under) correctly
flags this as a syntax error.

Perhaps the real risk here is using old, out-of-date software.  Like any
software company, we occasionally have bugs in our software and produce periodic
releases to address them.  We cannot, however, make people upgrade to these new
releases.

I sympathize with the amount of debugging effort this behavior may have caused
Mr. Hash, but it is comforting to note that some risks are dealt with more
easily than others.

Kent Mitchell, Rational Technical Representative, kdm@rational.com


Re: Computerized typesetting and character sets

<rsd@SEI.CMU.EDU>
Fri, 04 Oct 91 19:44:44 EDT
Well, gentle readers — there's another risk here!  The character that appears
in my last name (when I can coax programmers to do it right — it took the
IEEE, of all organizations, several years to replace the spaces and commas with
the correct character) is NOT a quote, forward, backward, north, south, single,
double or any direction or quantity you choose — it's an apostrophe!  Its use
as a special character in computer programs has caused me a lot of grief!

We have a library system here at CMU where my publications can't be found due
to the non-acceptance of the apostrophe.  The response from the library manager
was, "Sorry, that's the way the software is, and it's not a high priority."
Folks with names like mine also tend to disappear from phone lists, and, when
we appear at all, appear incorrectly alphabetized.  For some reason, the local
Bell phone company always gets it right in the directory, though the CMU
people don't.  Maybe Bell directory printers never use "C"!

Additionally, the use of fonts whose capital "I"s have no serifs and look like
lower-case "l"s (like the one you're reading this with?) are another source of
confusion.

My guess is that the author of the subject programming language and operating
system had a very simple name like..."Richie".  >Sigh<  It's almost enough to
make me wish I _were_ a number!

Richard S. D'Ippolito                                       rsd@sei.cmu.edu

  [ZIPPO!  You ARE a number!
     And then there were the old Multics days when Corby (see the September
     91 CACM) had the string `` Corbato[backspace]' '' for his account name,
     to get the overstruck character out of Multics' innovative character-
     position canonical form that permitted arbitrary overstrikes!  PGN]
