The RISKS Digest
Volume 6 Issue 21

Saturday, 6th February 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator



o Delta Air Lines "Computer" Mistake
Chris McDonald
o Missouri Voting Decision
Charles Youman
o Re: Whistle-blowing
Bob Ayers
o Re: RISKS in Cable TV?
Svante Lindahl
o Time base on cable TV info
o Signals on power lines
Peter da Silva
o The risk of LOJACK
Johnathan Vail
o Risks of helpful news software
Henry Spencer
o "My country's misguided technology transfer policy"
Hugh Davies
o Info on RISKS (comp.risks)

Delta Air Lines "Computer" Mistake

Chris McDonald STEWS-SD 678-2814 <cmcdonal@wsmr10.ARPA>
Wed, 3 Feb 88 7:28:19 MST
Last week the news media reported that Delta Air Lines had determined that its
"computer" had erroneously issued 750 frequent-flier certificates for free or
reduced fare flights to individuals who had not earned them.  A Delta spokesman
stated that "we know who these people are" and that the certificates would not
be honored.  It was also revealed that 3,000 other frequent fliers who should
have received credits had not.

This week Delta reversed its decision.  It will now honor the "unearned"
certificates.  Apparently 200 people will receive a free trip anywhere in the
USA; an additional 550 people will be able to fly for 50% off when a companion
buys a full-fare ticket.  The cost of the "error" will not be known until
individuals redeem the certificates.  All individuals who should have
received credits have similarly received their just due, according to Jim
Lundy from Delta.

I wonder who ultimately pays for Delta's decision.  On the assumption that
Delta officials feel confident the "error" was unintentional and not a
deliberate act by--dare I say--an insider, may we not adopt the maxim "computer
errors do pay!"

Chris McDonald, White Sands Missile Range

Missouri Voting Decision

Charles Youman <>
Thu, 04 Feb 88 08:51:29 EST
The January 1, 1988 edition of the St. Louis Post-Dispatch contained
a follow-up article on the Missouri voting decision previously reported
in RISKS 6:4.  The article by Tim Poor, titled "Blunt Says Ruling
Could Make Punch-Card Voting 'Unworkable'", appears on page 9A and is
quoted without permission:

"Missouri Secretary of State Roy Blunt said Thursday that a recent federal
court decision could 'make punch-card voting unworkable' and delay the
results of statewide elections.

Blunt called the ruling by U.S. District Judge William L. Hungate 'unfair'
because it requires a manual review of ballots on which some votes have
gone uncounted by St. Louis' automatic tabulating equipment.

He said as many as 60,000 ballots--half of all cast--might have to be
counted by hand because of the ruling. . . .

Hungate said the board's failure to review the ballots violated the 
Federal Voting Rights Act.  In addition to the manual review, he told the
board to target for voter education those wards from which more than 5
percent of the ballots were uncounted. . . .

Blunt said he agreed with the board's position that a manual review of
ballots on which some votes were uncast would be unworkable.  There would
be too many ballots to review; on lengthy ballots, many voters skip some
issues, he said.

The ruling 'encourages voters to vote on things they're not interested in,'
Blunt said.  He explained that people might vote on all items on the ballot
if they think that their ballot will be manually inspected if they don't. . . .

And he questioned the ability of election officials to determine for whom
a voter wanted to vote on ballots that are uncounted because they are
improperly punched.

'Engaging in speculation by looking at scratch marks, indentions or double
punches requires guessing as to what the voter is thinking,' he said.
'No group of election workers is qualified to do that.'"

There appear to be two distinct categories of votes that are not being
counted: (1) those with the "scratch marks, indentions or double punches"
and (2) those where the voter didn't vote on every issue.  It's difficult
to tell from the article how many fall into category (1) and how many fall
into category (2).  I would not expect a computer program to be able to
make the judgements needed to deal with those in (1).  On the other hand,
if a substantial number of votes are in category (1) something is seriously
wrong with the overall system design that causes voters to make this error.
I see no reason why a computer program couldn't accurately count those
votes that fall into category (2).  In fact, I would go further and say 
that a program that makes that kind of error should not be allowed to be
used.  Perhaps legislation to that effect is in order.
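To illustrate the point that category (2) needs no human judgement, here is a minimal sketch (modern Python, purely illustrative; the issue and candidate names are invented) of a tally that simply treats a skipped issue as an abstention rather than a reason to reject the ballot:

```python
# Hypothetical ballot tally: each ballot maps issue -> choice, and a
# skipped issue is simply absent.  Skipping is an abstention, not an
# error, so the voter's remaining choices still count.
from collections import defaultdict

def tally(ballots):
    """Count votes per issue; ballots may omit any number of issues."""
    totals = defaultdict(lambda: defaultdict(int))
    for ballot in ballots:
        for issue, choice in ballot.items():
            totals[issue][choice] += 1
    return {issue: dict(counts) for issue, counts in totals.items()}

ballots = [
    {"mayor": "Smith", "prop_a": "yes"},
    {"mayor": "Jones"},                    # voter skipped prop_a
    {"mayor": "Smith", "prop_a": "no"},
]
results = tally(ballots)                   # second ballot counts for mayor
```

Nothing here requires guessing the voter's intent; only category (1), the ambiguous punches, calls for human review.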

It also appears that the judge was willing to accept a 5% rate of uncounted
votes. A lot--A LOT!--of elections are decided by less than 5% of the vote.

I'm not sure how votes in category (1) are dealt with in a manual system.
Is the entire ballot voided, or only those issues where the voter's
intent is not clear?

It also appears that there need to be extensive procedural controls to
prevent someone from voiding ballots by making additional punches after
the vote is cast.  You could void all the votes that didn't go the way
you wanted them to.  Does this mean that a checksum needs to be computed
and punched into the ballot at the time it is cast?
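The checksum idea above can be sketched as follows (a hypothetical 8-bit scheme, not any real ballot standard): derive a check value from the punch positions at the moment the ballot is cast, so that a punch added later will, with high probability, no longer match.

```python
# Hypothetical scheme: fold the punch positions present at casting
# time into an 8-bit check value punched onto the ballot itself.
# A later punch changes the computed value (barring rare collisions),
# exposing post-cast tampering.
def ballot_checksum(punches):
    """Fold the sorted punch positions into an 8-bit check value."""
    check = 0
    for pos in sorted(punches):
        check = (check * 31 + pos) % 256
    return check

cast = {3, 17, 42}                 # positions punched by the voter
recorded = ballot_checksum(cast)   # punched onto the ballot when cast

tampered = cast | {55}             # extra punch added after casting
assert ballot_checksum(tampered) != recorded   # mismatch reveals tampering
```

Of course the check value itself would have to be protected from re-punching, which only moves the procedural-control problem rather than eliminating it.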
                                                       Charles Youman

Re: Whistle-blowing (RISKS-6.20)

Bob Ayers <>
Wed, 3 Feb 88 12:34:05 pst
In Risks 6.20, Ronni Rosenberg (in a whistle-blowing discussion) remarks that

  We have the right to make mistakes only if we (1) acknowledge up front
  that this is the way we have to work, and (2) do not put a [computer]
  system into use, particularly in a critical application, if we are not 
  sure that it works.

What does "sure that it works" mean here?  If it means "certain that it
meets the specifications and never delivers anomalous results" then I
have to admit that I've never met such a computer system.

It is partly an issue of comparative risk -- something that other posters
have previously mentioned.  Is it better to have a computerized system --
knowing that it is not perfect -- or to have a non-computerized system --
which also will not be perfect, though its faults will be different?

Would you use a computer system if, on each use, it had a one in 10^9 chance
of killing you?  You use such [non-computer] systems every day.  I recommend
the book (also mentioned before) On Acceptable Risk.

Re: RISKS in Cable TV?

Svante Lindahl <>
Fri, 05 Feb 88 03:33:12 +0100
In RISKS 6.18 marty moore <> writes:
>I always thought this had great possibilities for unscrupulous TV station
>programmers.  ("Let's buy some commercials through a dummy on the other 
>stations...we'll bury the signal to change to our stations in the commercials.
>The audience will never know the difference.")

The Swedish television monopoly shut down their slave transmitters by
sending a short series of beeps from the masters. This signal is heard
from the TV just before the screen gets blurred.

A few years ago a news program showed a film of a television set
filmed just as the broadcasts were terminating for the night.
The beeps were sent out in the middle of the news broadcast from this
"recursively" shown TV set. This caused all transmitters to turn off
this station nationwide right in the middle of prime-time news...

I believe this has been fixed so that the same mistake wouldn't
happen again.
                            Svante Lindahl

Time base on cable TV info

Kekatos <moss!ihuxv!>
3 Feb 88 22:01:03 GMT
Re: (The second of) Two recent stories with lessons to be learned 
    (Rich Kulawiec) [RISKS-6.17]

The time (and date) info is digitally encoded into the "back porch"
of the TV signal of an "unused" or "local cable guide" channel. I think
the "control" packets for the boxes are also sent via the wasted bandwidth
of such a channel.
The time signal is ALWAYS there, being generated by some central clock.
It is probably not coming from a "general purpose" computer, but
rather from a piece of special hardware that is part of the distribution
equipment.

(Disclaimer: I have little knowledge of actual Cable TV electronics)

Ted G. Kekatos
backbone!ihnp4!ihuxv!tedk                     (312) 979-0804
AT&T Bell Laboratories, Indian Hill South, IX-1F-460
Naperville & Wheaton Roads - Naperville, Illinois. 60566 USA

Signals on power lines

Peter da Silva <nuchat!peter@uunet.UU.NET>
3 Feb 88 12:46:49 GMT
I hope they shove the signal even higher than 19 kHz. Some of us can hear
that high.

The risk of LOJACK

Wed, 3 Feb 88 17:48:12 EST
This concerns the implications (risks!) of the LOJACK (sp?) anti-car-theft
system.  My information on this subject is based on a sales pitch and
brochure when I bought my new car.

The LOJACK system is designed to quickly retrieve a stolen car and apprehend
the thief before serious damage has occurred to the car.  When a person buys
a new car they can, for about $500, have a LOJACK system installed in a
random hidden place (inside frame members, etc.) in their car by the dealer.
When the person realizes that their new car is missing, they call the LOJACK
office toll-free, presumably supplying an authentication code.  The
operator then calls up the relevant info (presumably plate number, make,
model, color, etc.) and broadcasts this info on radio transmitters around the
state or area.  The LOJACK unit in the stolen car responds and starts
transmitting a locating beacon.  The police, with special LOJACK finders in
their cruisers, also receive the information on a small display; if they
are within range of the stolen car, then directional (and range?)
information is displayed as well.  Thus they can quickly locate the stolen
car.  All fine and dandy.

This system is installed and operating in Massachusetts.  Supposedly
every state police cruiser and at least 1 cruiser in every town is
equipped with the LOJACK equipment (you can tell by the 4 18" whips in
a diamond pattern on the roof of the cruiser).  I don't know how
effective this has been lately but in testing I was told they found
autos in different parts of the state in an average of 7 minutes!

The risks with this system should be obvious to the RISKS reader.
Suppose big brother wants to arrest Joe Citizen (to assist the
ministry of information with certain inquiries, of course).  Big
brother simply broadcasts his LOJACK code and the cops bring 'em in.
Or just keeps an eye on him.  I think that the LOJACK people control
the data and in theory it doesn't work that way _today_.

I would be interested in hearing what other people think about this system,
and if anyone has any technical information (frequencies, etc.) I would be
particularly interested.  One quick note: although I didn't buy this system
(I don't live in the People's Republic of Massachusetts), a friend did
buy one, and it never even occurred to him that it could be used this way.  I
think _that_ is one of the greatest risks of this kind of a system
(double-edged blade).

Johnathan Vail (603) 862-6562

Risks of helpful news software

Wed, 3 Feb 88 05:40:15 EST
This one is old news on Usenet, but may not be so well-known elsewhere.
Normal Usenet newsgroups are "unmoderated", i.e. anyone at a Usenet site
may post contributions without having to route them through a moderator
for approval.  Postings propagate via a "flooding" broadcast protocol:
when a site receives a new posting, it sends the new posting to ALL other
sites that exchange news with it.  There are some other provisions that
break loops and prevent duplications.  Normally, this works pretty well;
it is much more efficient than point-to-point mailing lists for traffic
that is read by many people.  (A minor variation on this method is now being
used on parts of the Internet as well.)
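The flooding scheme with loop-breaking can be sketched as a toy model (site names invented; real Usenet keys on message IDs kept in a history file, but the principle is the same):

```python
# Toy model of Usenet's flooding broadcast: each site forwards a new
# article to all its neighbours, and a "seen" set breaks loops so that
# each site accepts only one copy no matter how many paths reach it.
def flood(links, origin):
    """Deliver one article starting at origin; return sites reached."""
    seen = set()            # sites that already accepted this article
    queue = [origin]
    while queue:
        site = queue.pop()
        if site in seen:
            continue        # duplicate copy arriving via another path: drop
        seen.add(site)
        queue.extend(links.get(site, ()))   # forward to all neighbours
    return seen

# A small net with a cycle A-B-C-A plus a leaf D; every site still
# ends up with exactly one accepted copy.
links = {"A": ["B", "C"], "B": ["C", "A"], "C": ["A", "D"], "D": []}
reached = flood(links, "A")
```

The efficiency claim in the text follows from this shape: one copy crosses each link, rather than one copy per recipient as with a point-to-point mailing list.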

Relatively recently, an attempt has been made to provide better support
for moderated newsgroups, which still use the flooding protocol but which
do clear all submissions through a human moderator first.  (Some Arpanet
mailing lists are gatewayed onto Usenet as such groups.)  Modern versions
of the news software will either post a user's followup or mail it to
the moderator, depending on the nature of the newsgroup.  Now, the older
versions did not do this, and Usenet's lack of central authority makes it
impossible to enforce coordinated software upgrades, so there are backwaters
of the net where this doesn't work.  Like the phone company, Usenet has to
be backward compatible nearly forever.  To minimize loss of submissions at
boundaries between new software and old, while enforcing the all-postings-
via-moderator rule, the new software also mails to the moderator (rather
than posting) when an article arriving from another site is in a moderated
newsgroup and is not marked "approved by moderator".
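The routing rule just described reduces to a small decision procedure; the sketch below is illustrative (the field name "Approved" follows Usenet convention, the return values are invented):

```python
# Sketch of the new software's routing rule for moderated groups:
# a submission without the moderator's approval mark is mailed to the
# moderator rather than posted, whether it was entered locally or
# leaked in from an old-software site.
def route(article, group_is_moderated, arrived_from_feed):
    """Decide what to do with an article: 'post' or 'mail-to-moderator'."""
    if not group_is_moderated:
        return "post"                      # unmoderated: post directly
    if arrived_from_feed and article.get("Approved"):
        return "post"                      # moderator already approved it
    return "mail-to-moderator"             # local followup, or unapproved
                                           # article leaked from an old site

# The failure mode in the text: an unapproved article posted at an
# old-software site hits this branch at EVERY new-software boundary,
# so the moderator receives one mailed copy per boundary crossed.
assert route({}, True, True) == "mail-to-moderator"
```

The N-copies problem follows directly: the rule is applied independently at each boundary site, with no shared memory of which copies have already been mailed.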

Of course, this means that if such an article somehow gets posted at an
old-software site with several paths to new-software sites, the poor
moderator gets N copies of it.  This can be anything from a nuisance to a
disaster, depending on the value of N and how frequently it happens.  Some
Usenet moderators nearly quit in disgust shortly after the new software
first came out, when new-old boundaries were common.  It's less of a problem
now, but still crops up on occasion:  due to a complex combination of
mistakes on my part, a routine contribution to Risks from me got posted
instead of mailed here (we run new software but in an unusual configuration),
and PGN got six copies of it at last count.  (Sorry about that, Peter.)

When thousands of sites run software that is willing to send network mail
automatically to specific individuals, those individuals can have a very
rough time of it if the software does something unexpected...

Henry Spencer @ U of Toronto Zoology {allegra,ihnp4,decvax,pyramid}!utzoo!henry

   [The volume of barfmail continues to be quite painful, particularly
   from addresses that have worked consistently in the past.  I am therefore
   instituting a more Draconian policy of simply not trying to track down
   these problems.  If I don't hear from you when you STOP getting RISKS, I
   can only assume that you don't care.  (But don't panic if a week goes by
   without your RISKS FIX.  There are weeks when I cannot get to it.)  

   A sample of recently barfed addresses includes, ...@VLSI.JPL.NASA.GOV,, 
   ...@ADS.ARPA, ...@JPL-MIL.ARPA, ...@ACATT1.ARPA, and
   <BBOARD>RISKS.TXT@ECLC.USC.EDU (No such mailbox!).  PGN]

"My country's misguided technology transfer policy"

3 Feb 88 01:05:11 PST (Wednesday)
One of my colleagues has a Compaq 386/20 portable. He recently went on a
training course abroad and wanted to take it with him.  He had to spend 2
whole days raising export documentation, including a technology export license
required by the UK Department of Trade under an 'agreement' (did they actually
'agree' to this?) with the US.  Where was he going?  Oh, I forgot to mention.

Where is the RISK in this? Well, the US technology export legislation is
unpopular enough in Europe as it is (where it is seen mainly as a means by
which US computer manufacturers can have the Eastern European market to
themselves), but when it leads to nonsense like having to obtain a license to
export the technology back to the country it came from, it brings the
legislation into disrepute, and people will just start ignoring it...
