The RISKS Digest
Volume 11 Issue 77

Friday, 31st May 1991

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Yet another "stupid computer" example
Allan Duncan
Re: kremvax
Steve Bellovin
Douglas W. Jones
Russ Nelson
Re: Vote-by-Phone
David Canzi
Erik Nilsson
Re: the FBI and computer networks
Steve Bellovin
Two books on privacy that may be of interest
Tim Smith
Credit reporting
Paul Schmidt
More on Lossy Compression
David Reisner
Info on RISKS (comp.risks)

Yet another "stupid computer" example

Allan Duncan <a.duncan@trl.oz.au>
Fri, 31 May 91 15:06:09 EST
More in the continuing saga of man versus machine.

Here are a couple of letters that appeared in The Age (Melbourne,
Australia).  There is a sting in the tail.

Allan Duncan, Telecom Research Labs, PO Box 249, Clayton, Victoria, 3168,
Australia  (+613) 541 6708  {uunet,hplabs,ukc}!munnari!trl.oz.au!a.duncan

        = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

                            15 May 91
    I have discovered that The Bed of Procrustes still exists and,
daily, people are being stretched or lopped to fit.  Nowadays, it is
called the Computer.
    At birth, I was named Charles Edward Williams and, since my father
was also named Charles, my family called me Ted, as an abbreviation for
Edward.  Consequently, I muddled along for 60 years as Ted/C.E./Charles
Edward Williams, the exact term depending upon the formality of the occasion.
    This state of affairs came to an abrupt end a few years ago, when
the Department of Veterans Affairs tucked me under their wing.  Immediately
I became Charles E. Williams, and my feeble protests were brushed aside.
    At first it mattered little, as the postman still delivered their
mail, and the bank accepted their cheques.  But as I found myself spending
more time in waiting rooms, it became quite embarrassing to wake up and
realise that the "Charles Williams"  they were calling for the fourth time
was me.
    Recently, I asked DVA that, if they must adopt this American usage,
would they at least employ it correctly, ie would they in future please
address me as C. Edward Williams?
    A few days ago I received a phone call from a young lady at DVA who
said that, if I wished, they would address me as Edward C. Williams.  In
horror, I drew her attention to the vast legal implications of such an act.
She stated, categorically, that the first given name must be the one spelled
out, otherwise their computer could not cope.
    Now I am not vindictive, but I believe any EDP manager who recommended
the procurement of a computer system so inflexible that it cannot handle such
a small variation from the commonplace deserves to be publicly executed.

                        C. Edward Williams


                            30 May 91
    Always give credit where it is due.  After my letter (15/5) a very
"computer literate" gentleman from Veterans Affairs contacted me to say
that henceforth, I will be addressed as C Edward Williams, instead of the
present Charles E Williams.  All that remains to be done is for their
computer to inform all of its fellows of my change of name.  DVA regrets
that they cannot follow the C with a full stop.  An attempt to do this
causes the machine to have a fit of electronic epilepsy.

                        C. E. Williams


Will the real kremvax please stand up (Windley, Moscow ID)

<smb@ulysses.att.com>
Thu, 30 May 91 21:22:56 EDT
kremvax.hq.demos.su. is quite real.  There is no direct IP connectivity;
mail to the USSR is routed through assorted gateways selected via MX
records.  The good folks at Demos (a ``co-operative'') are quite aware
of the story of kremvax; its use as a machine name is hardly co-incidental.

But really -- you're questioning the authenticity of such a machine name, when
you're allegedly posting from a place named ``Moscow, Idaho''.  A likely story.

--Steve Bellovin


kremvax.hq.demos.su (Windley, RISKS-11.76)

Douglas W. Jones <jones@pyrite.cs.uiowa.edu>
31 May 91 02:44:11 GMT
> The Naval Investigative Service (NIS) knows about it.  I told them.

And I spoke to my congressman about it back in October '90 when the link to
Moscow first came to my attention.  Most important of all, though, the people
at DEMOS registered the .su domain with the people at the Defense Data Network
Network Information Center who manage assignment of top-level domain names.
Yes, the government knows, but as is common with organizations that large,
there is a sense in which they probably don't know that they know.

kremvax.hq.demos.su is in the headquarters of DEMOS software in Moscow (the
domain name tells you most of that).  DEMOS is acting as the gateway to the
west for the RELCOM network in the USSR.  Soon after the initial connection
between DEMOS and the rest of the world, I sent a copy of Piet Beertema's great
kremvax/kgbvax hoax to Vadim Antonov at DEMOS, and he said it was great fun.  I
gather it was fun enough that they copied some of the machine names.
                                                     Doug Jones

    [Similar observations to those above from:
        Michael O'Dell <mo@gizmo.bellcore.com>
        David Fetrow <fetrow@orac.biostat.washington.edu>
        Charles (C.A.) Hoequist <HOEQUIST@bnr.ca> .
    I was going to wait to put out this issue, but decided to do it now in
    order to stave off a further flood of responses.  PGN]


kremvax (Windley, RISKS-11.76)

Russ Nelson <nelson@sun.soe.clarkson.edu>
Fri, 31 May 91 10:24:56 EDT
The Soviets are neither stupid, ignorant nor humorless.  DIG says:

;; ANSWERS:
kremvax.hq.demos.su.    345600  MX  100 fuug.fi.
kremvax.hq.demos.su.    345600  MX  120 nac.no.
kremvax.hq.demos.su.    345600  MX  200 mcsun.eu.net.

In my experience, the folks at hq.demos.su are hackers in the AI lab
tradition, and not above a little tomfoolery.
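
For anyone who has not watched MX routing in action, the preference values in
the answer above are all a mailer needs: it tries the lowest-numbered gateway
first and falls back to the others.  A minimal sketch in Python, with the
records hardcoded from the DIG output (a real mailer would query DNS itself):

    # Order the gateways for kremvax.hq.demos.su by MX preference.
    # Records copied from the DIG answer above; lower values are tried first.
    mx_records = [
        (100, "fuug.fi"),
        (120, "nac.no"),
        (200, "mcsun.eu.net"),
    ]

    def delivery_order(records):
        """Hosts in the order a mailer should attempt delivery."""
        return [host for preference, host in sorted(records)]

    for host in delivery_order(mx_records):
        print("try relay:", host)
    # -> fuug.fi first, then nac.no, then mcsun.eu.net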


Vote-by-Phone - Promises and Pitfalls (Saltman, RISKS-11.75)

David Canzi <dmcanzi@watserv1.waterloo.edu>
Fri, 31 May 91 00:19:34 EDT
Sure, the computer can be programmed to protect the voters' privacy.  It can
also be programmed not to, and the voters told otherwise.  They will have no
way of knowing.  You can even show them source code for a vote-collecting
program that respects their privacy.  They have no way of knowing that that is
the program that will actually be run.

If both the verification of the voter's identity and the collecting of votes
are done by phone, I don't see how there can be any secret ballot without
depending excessively on the honesty of those running the polls.

(I don't know how many things Americans typically get to vote on in an
election.  Assuming that the voter's social security number and vote can be
encoded in 15 bytes or less and that there are about 150,000,000 voters, it'll
all fit on one Exabyte tape.)
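
That back-of-the-envelope figure is easy to check.  A quick sketch, where the
2.3-gigabyte capacity of an 8mm Exabyte cartridge is an assumed round figure:

    # Back-of-the-envelope check of the one-tape claim above.
    voters = 150_000_000          # estimated number of U.S. voters
    bytes_per_record = 15         # SSN plus encoded vote, per the assumption
    total = voters * bytes_per_record        # 2,250,000,000 bytes
    tape = 2.3e9                  # assumed 8mm Exabyte capacity, in bytes
    print(total, "bytes;", "fits" if total <= tape else "does not fit")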

David Canzi


Vote by Phone (RISKS-11.76)

Erik Nilsson <erikn@boa.mitron.tek.com>
Thu, 30 May 91 19:23:41 PDT
AURKEN@VAXC.STEVENS-TECH.EDU writes:

> voting by phone enables a citizen to verify that his/her vote is
> actually counted, which is something that is practically impossible to
> do with existing election technologies.

All of the vote verification schemes that I am familiar with will work for
paper, DRE (see below), and phone ballot methods.  Any method that works for
phone balloting can in principle be made to work for the other methods, using
the following technique: For each ballot, the counting board calls a computer
and re-votes the ballot by phone, thus using whatever verification scheme is
proposed.  Obviously, practical systems would probably omit the phone :-).

Vote verification is of little value without assurance that people who
didn't vote weren't counted.  Since voting by phone has no signatures,
voting the dead is much easier.  An inside operator could also monitor
who has voted, and have confederates jam in last-minute calls using
the IDs of people who hadn't voted, stuffing the ballot box.  A wide
variety of vote fraud techniques are facilitated by vote-by-phone, and
a few new ones are doubtless created.

> voting transactions can be time-stamped to help guard against fraud

How does this guard against fraud?  I'd much rather have a paper trail,
which takes a large amount of effort in a short time to wedge undetectably,
than a time-stamp that can be easily mass-produced by a computer.
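
To make that concrete, here is a sketch of how cheaply such stamps come; the
election date and the volume are invented for illustration:

    # Fabricating plausible "voting" time-stamps wholesale: ten thousand
    # stamps, spread uniformly across a 13-hour polling day, in an instant.
    from datetime import datetime, timedelta
    import random

    polls_open = datetime(1991, 11, 5, 7, 0)    # hypothetical election day
    stamps = sorted(polls_open + timedelta(seconds=random.randrange(13 * 3600))
                    for _ in range(10_000))
    print(stamps[0], "...", stamps[-1])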

> allowing voters to vote for "none of the above" is an improvement on
> the normal method of voting

What has this got to do with voting by phone?


doug@NISD.CAM.UNISYS.COM (Doug Hardie) writes:

> The question is can it fit acceptably into our society.  For example,
> there has always been an opportunity for poll watchers to challenge
> the registration of specific voters and their right to vote.

This is one of the things I don't like about vote-by-mail.  With
elections officials pushing for easier permanent absentee ballots,
many states that don't have vote-by-mail are headed toward de facto
vote-by-mail.  At least with vote-by-mail, you have a piece of paper
with a signature to challenge.

> how do you protect the computer collecting the votes from tampering by
> its users?

This is a problem even without vote-by-phone, as Hardie points out.
Vote-by-phone just makes it worse.


Martin Ewing <ewing-martin@CS.YALE.EDU> writes:

> I have used a number of VRS systems.  The most complicated is Fidelity
> Investments FAST system

I have also used this system, although mainly just to change my default
password to make snooping on me harder.  I'm a technical computer person, and I
found the system annoying because it asks for the same information over and
over.  Vote-by-phone would have to be even more complicated, and would probably
have fewer financial resources behind it than FAST, resulting in a less robust
design.

> The standard telephone ... is an extremely limited computer I/O interface.

You said it.

> In Pasadena, we used the (sigh!) Hollerith Card voting system ....
> In Connecticut, we now use voting machines.  These inspire a lot less
> confidence for me.

See papers by Roy Saltman, Lance Hoffman, Howard Strauss, and myself on the
problems with both of these systems.  Ronnie Dugger had an article in The New
Yorker, and Eva Waskell has written several articles.  Send me personal mail if
you'd like to see a more complete bibliography.

> I am sure that an electronic interface, based perhaps on ATM technology,
> could be developed to handle the authentication and the logical details of
> voting.

It already has been developed.  They are called Direct Recording Electronic
(DRE) machines.  The current crop suffers from many of your mechanical
complaints.  A carefully designed second generation of these machines would
ease many of our vote counting worries.

- Erik Nilsson erikn@boa.MITRON.TEK.COM


Re: the FBI and computer networks (D'Uva, RISKS-11.76)

<smb@ulysses.att.com>
Thu, 30 May 91 22:44:00 EDT
I fear we're wandering very far afield; I'll attempt to get in one last
letter before our esteemed moderator cuts us off....                   [Tnx.P]

I spoke a bit loosely when I used the phrase ``probable cause''; that's a
technical term, and does not (quite) apply.  Nevertheless, I stand by the
substance of what I said.  The FBI is not allowed to engage in systematic
monitoring of speech without reasonable suspicion of criminal activity.  The
intent of the ban is twofold: to prevent the chilling effect of constant police
surveillance on legal but controversial opinions, and to guard against abuses
of official authority.  The record amply supports both fears.

Certainly, a police officer may take appropriate action if he or she happens to
observe an illegal act.  If that act is observed through legitimate
participation in a public forum, the officer is not thereby disqualified from
acting.  But monitoring without reasonable grounds to suspect *illegal* speech
-- and precious little speech is illegal -- is not permitted.  (Let me amend
this statement a bit.  Monitoring is allowed if demonstrably relevant to an
actual criminal investigation.)

I'm not sure what you meant when you dragged in the war on drugs in D.C.  The
activists I mentioned in my initial letter were making controversial political
statements, very much ``anti-establishment''.  The monitoring was an outgrowth
of the same mentality that produced local police ``Red Squads''.

To be sure, I'm not 100% certain whether these bans are statutory, regulatory,
or judicial.  I'll do a bit of checking and see what I can learn.  Or perhaps
one of the attorneys reading this list can provide a precise citation (or
contradiction)?
                            --Steve Bellovin


Two books on privacy that may be of interest

<ts@cup.portal.com>
Thu, 30 May 91 23:03:00 PDT
Here are two books that may be of interest to those who have been
following the privacy discussion on RISKS:

"Your Right To Privacy" (subtitled "A Basic Guide To Legal Rights
 In An Information Society"), ISBN 0-8093-1623-3, published by
 Southern Illinois University Press, is one of a series of books
 from the ACLU.  This is the second edition, and was published
 in 1990, so should be reasonably up-to-date.

 One of the authors is Evan Hendricks, editor/publisher of "Privacy
 Times", and the other two authors are ACLU lawyers who work in the
 area of privacy.

 Here's the table of contents:

    Part I. Collection, Access, and Control of Government Information

        I.    Government Information Practices and the Privacy Act
        II.   Access to Government Records
        III.  Correction of Government Records
        IV.   Criminal Justice Records
        V.    Social Services Records
        VI.   Social Security Numbers
        VII.  Electronic Communications
        VIII. School Records

    Part II. Personal Information and the Private Sector

        IX.   Employment Records, Monitoring, and Testing
        X.    Credit Records and Consumer Reports
        XI.   Financial and Tax Records
        XII.  Medical and Insurance Records
        XIII. Viewing and Reading Records
        XIV.  The Wild Card: Private Detectives

The other book of interest is called "Low Profile" (subtitle: "How to Avoid the
Privacy Invaders"), by William Petrocelli, published by McGraw-Hill Paperbacks
in 1982, ISBN 0-07-049658-7.  It includes interesting sections on bugs and on
annoying people who you think are taping a meeting (whenever you would normally
say "yes" or "no" in response to something, just shake your head — this will
drive the person taping you up the wall).  Otherwise, it covers much the same
ground as the ACLU book above (warning: I haven't finished either book yet, so
I may be mistaken here!), although more emphasis is placed on protecting
privacy from hostile people such as competitors.
                                Tim Smith


Credit reporting

Paul Schmidt <prs@titan.hq.ileaf.com>
Fri, 31 May 91 08:04:13 EDT
  It strikes me that there is a fairly easy way for people to find out who is
accessing their credit reports: require that the data bank administrators
immediately send mail to the person describing what was accessed, and for whom.
These letters could easily be computer-generated, and the mailing costs
included in the charge for the credit report.
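
A sketch of what such a notification hook might look like (every name below is
hypothetical; no credit bureau exposes such an interface):

    # Sketch: every credit-report access immediately queues a letter to
    # the subject.  The function and the in-memory "mail queue" are
    # hypothetical stand-ins for the bureau's real systems.
    from datetime import date

    mail_queue = []          # stand-in for a letter-printing system

    def report_access(subject, requester, items):
        """Record an access and queue the notification letter."""
        letter = ("Dear %s:\nOn %s, %s obtained the following items from "
                  "your credit file: %s.\n"
                  % (subject, date.today(), requester, ", ".join(items)))
        mail_queue.append(letter)    # mailing cost billed with the report fee

    report_access("J. Q. Public", "Acme Auto Loans", ["payment history"])
    print(mail_queue[0])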

  I imagine things will get interesting when people start to discover how
many places their credit history is being sent.


More on Lossy Compression (ACM SIGSOFT SEN v16n2 p.5)(SEE RISKS-11.21)

David Reisner <synthesis!dar@UCSD.EDU>
Thu, 30 May 91 00:11:25 PDT
In the editor's note at the end of "Medical image compression & fertile ground
for future litigation" [in _Software Engineering Notes_, adapted from
RISKS-11.21 and subsequent commentary], PGN suggests that a lossless image
compression algorithm may become lossy in the presence of noise.  I presume Mr.
Honig will comment on this, but just in case he doesn't, I hope this input will
be useful.

There are, in fact, lots of compression algorithms that ARE lossy.  They do not
reconstruct the original data upon decompression, but rather data which is
"acceptably close" by some measure.  Obviously, these are not appropriate
methods for traditional computer data (e.g.  databases, text), but may be
perfectly acceptable for other types of data (e.g. images, sound), depending on
the application.
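
As a toy illustration of the distinction (my own example, not one of the
published schemes), a coder that keeps only the top four bits of each 8-bit
sample halves the data but can never restore it exactly:

    # Toy lossy coder: keep only the top 4 bits of each 8-bit sample.
    # Decompression yields data "acceptably close" to, but not identical
    # with, the original -- the defining property of a lossy scheme.
    def compress(samples):
        return [s >> 4 for s in samples]           # 8 bits -> 4 bits

    def decompress(codes):
        return [(c << 4) | 0x08 for c in codes]    # mid-bucket reconstruction

    original = [17, 130, 200, 255]
    restored = decompress(compress(original))
    print(original)    # [17, 130, 200, 255]
    print(restored)    # [24, 136, 200, 248] -- close, but the detail is gone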

For example, the "fractal" image compression technique which has received a
fair bit of attention over the past 18 months reconstructs only an
approximation of the original image.  The algorithm can be applied iteratively,
and will produce a "better" image after each iteration, up to some limit.  The
relatively recent (and very useful) JPEG image compression standard is also
"lossy" - only approximates the original image.

Both of these systems exhibit losses or "infidelities" that are mathematical in
nature; they are at a level which is considered acceptable, but are not
specifically engineered to be innocuous in the application domain.  Mr. Honig's
comment, "discussion about how compression schemes could exploit the (finite)
resolution of the human visual system to send only the information that was
perceivable", strongly suggests that he is considering application-domain-specific
types of compression and loss.

A current example of such a scheme is the Digital Compact Cassette (DCC)
compression scheme developed by Philips, which uses about 1/4 the data rate of
a Compact Disc (CD) to reproduce sound which is hoped to be indistinguishable
from the CD.  Philips developed an algorithm which is heavily based on human
psychoacoustics (particularly hearing threshold curves and masking phenomena).
The algorithm was then refined using the evaluations of trained listeners -- a
very unusual step (as opposed to using only "mechanistic" objective
measurements).
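
The masking idea can be caricatured in a few lines.  The sketch below is an
illustration of the general principle only; the threshold curve is invented,
and Philips' actual algorithm is far more elaborate:

    # Caricature of threshold-based perceptual coding: transform, discard
    # spectral components too quiet to hear at their frequency, transmit
    # only the survivors.  A faint high-frequency tone is silently dropped.
    import numpy as np

    rate = 8000
    t = np.arange(rate) / rate
    signal = np.sin(2*np.pi*440*t) + 0.001*np.sin(2*np.pi*3000*t)

    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0/rate)
    amplitude = np.abs(spectrum) / (len(signal) / 2)

    threshold = 0.0005 * (1 + freqs/1000)   # toy curve: rises with frequency
    audible = amplitude > threshold
    transmitted = np.where(audible, spectrum, 0)
    restored = np.fft.irfft(transmitted)    # close to, not equal to, the input
    print("components kept:", int(audible.sum()), "of", audible.size)  # 1 of 4001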

Stepping back more directly to RISKS, when using these "lossy" methods, it may
be difficult to know what data will be reproduced inaccurately (particularly
when viewed from a domain-dependent "structural" perspective).  Philips
attempted to test and "measure" the quality of their algorithm in (an
approximation of) the actual application domain, but they cannot KNOW that it
will be successful for any given listener or program (music/sound) signal.  For
medical image compression, losses which are either not detected or determined
to be unimportant in the testing and development process COULD cause injury or
loss of life if they are of consequence during (potentially atypical) actual
application.  Thus, it would seem far safer to use the most rigorous imaging
and transmission schemes possible, only trading off the costs of inability to
transmit (due to financial or technical factors) versus "imperfect"
transmission.

If "lossy" schemes are used, designers and users will need to understand (to
the extent feasible) the quality/reliability of the information they are
actually dealing with.  Unfortunately, as when the calculator supplanted the
slide rule, and as computer packages with preprogrammed algorithms supplant the
calculator, people tend to lose a "gut feel" for the accuracy, and even
reasonableness, of their answers (as well as losing sufficient understanding
to be able to innovate -- a whole other type of "risk").

 David Reisner, Consulting, Los Angeles CA USA (213)207-3004
