The RISKS Digest
Volume 11 Issue 63

Wednesday, 8th May 1991

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Validation and Verification Issues for KB Systems
Paul Tazzyman
Disclosure of customer information (AT&T)
Lauren Weinstein
Quirk in British Computer Privacy Laws
Ed Ravin
Smart alecks vs. the census computer
Les Earnest
UK Interim Defence Standard 00-55
Martyn Thomas
Patriot vs Scud
Mike Schmitt via Mark Jackson
Some views about the debate sparked by Dutch Hackers story
Olivier M.J. Crepin-Leblond
Re: Gary Marx Comments at the Computers, Freedom and Privacy Conference About Kids Holding Phone up to TV
Sanford Sherizen
Update on S.618
John Gilmore
Re: Fences, bodyguards, and security (of old O/S)
Rick Smith
TMP Lee
Re: The new California Drivers License
Alan Nishioka
Info on RISKS (comp.risks)

Validation and Verification Issues for KB Systems

Paul Tazzyman <paul@pandc.rta.oz.au>
Wed, 08 May 91 16:13:22 +1000
This article follows on from the more general discussion earlier this year
about validation and verification of knowledge. That discussion tended to
concentrate on the technical aspects of V&V, whereas the question we are faced
with relates to the legal defensibility of decisions reached as a result of
applying an expert system to a problem, and the establishment, in legal terms,
of the "expertness" of the knowledge base. In cases where the knowledge is used
in a system which is subject to public scrutiny, and consequently subject to
scrutiny by the courts when the eventual output of KB systems is implemented,
the heuristics may be challenged by other "experts".

The issue that this raises is: how is the source of the knowledge used by a KB
system given expert status by a court charged with hearing legal challenges to
decisions based on that system? Under most legal systems the "expert" must
satisfy the court that they are in fact suitably qualified in the topic. In the
case of a KB system the heuristics may be the result of many "experts'" input
to the system, and therefore there is no single "expert" who can be produced to
give expert evidence.

This implies that the knowledge-gathering process must first identify, and
document, the sources of the knowledge, and must be able to establish, with
sufficient clarity for the courts, how the KB system's rulebase was developed.
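
Purely as an illustration of what we mean by documenting provenance (the
field names below are hypothetical, a sketch rather than anything we have
built), each rule in the rulebase could carry a record of where it came from:

    from dataclasses import dataclass, field

    @dataclass
    class Rule:
        """A knowledge-base rule that carries its own provenance,
        so each heuristic can be traced to a documented source."""
        rule_id: str
        condition: str       # e.g. "pavement_age_years > 20"
        conclusion: str      # e.g. "schedule resurfacing assessment"
        sources: list = field(default_factory=list)      # experts consulted, with qualifications
        elicited_on: str = ""                            # date of the knowledge-acquisition session
        reviewed_by: list = field(default_factory=list)  # independent reviewers who signed off
        references: list = field(default_factory=list)   # supporting standards and literature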

Presumably the courts have reasonably well established guidelines for the
admission of "expert" witnesses and the qualification of such witnesses.

How is a witness given "expert" status, and how must the documentation of the
knowledge-acquisition phase of an expert system be undertaken in order to allow
the system to withstand legal challenge to decisions based on its inferences?
Obviously, differences between the legal systems of the US and Australia will
produce differing opinions, but because the technology is, from the legal
standpoint, in its infancy, any decision will be cited as a precedent.

A discussion of these issues within this forum will help us, and no doubt other
organisations, establish sounder procedures.

Shaun Gray shaung@xwdev.rta.pandc.oz.au  paul@xwdev.rta.pandc.oz.au
Roads and Traffic Authority of NSW, Information Services Branch
3rd Floor, 260 Elizabeth Street, SURRY HILLS. NSW. Australia 2010


Disclosure of customer information (AT&T)

Lauren Weinstein <lauren@vortex.com>
Mon, 6 May 91 12:40:56 PDT
  [Also sent to TELECOM]

Like others reading the TELECOM digest, I was amazed to see the recent message
where an AT&T Communications employee apparently used his access to customer
data to conduct a "private" investigation of a "contest/telemarketing"
operation, then published the "results" via TELECOM.

Immediately after seeing his original message, I sent the author private email
asking for an explanation.  Of particular interest to me was whether he was
acting in violation of AT&T confidentiality rules, or whether the rules would
have permitted such actions.

I received a reply from him today.  In essence, he says that he made a
mistake in making the information public, and that AT&T rules do *not* permit
such disclosures from customer data.  He also says that some of what he said in
that message was obtained directly from a conversation with the telemarketer.

In any case, it is obvious from his original message that he did access the
customer records of the firm in question, and did obtain information regarding
long distance calling patterns and telephone number usage information from
those records.  However obnoxious some may feel the firm to be, their telecom
records are still deserving of the same security and confidentiality we all
(should!) expect, and should not be subject to "private" investigations and
disclosures outside of official channels.

This is unfortunately symptomatic of the growing range of situations where the
data collected on individuals and organizations in the course of their normal
business is available to too many persons without authorization or "need to
know".  The amount of information that can be obtained with essentially no
security controls, or often at the best semi-useless, pseudo-controls such as
social security number, is vast and growing.

In the telecommunications arena, the problem has grown greatly with the breakup
of the Bell System--it seems like customer telephone data is floating around
almost freely between the local telcos and the private long distance carriers
these days.  But the same sorts of problems exist in many other areas of our
lives, and only seem to be getting worse, not better.

I believe that the time has come for another look at the Privacy Act in terms
of how it does, or does not, protect consumer (both individual and business)
information and who (both inside and outside of the firms collecting the data)
has access to that information.  I believe that meaningful, uniform minimum
standards must be established for automated systems that allow consumers to
access various account balances or similar data by telephone.  The excuses of
the firms providing these systems that it would be "too difficult for
consumers" to remember a passcode or even know their account number (i.e. the
ongoing Sprint account information case) must be treated as the unacceptable
responses that they are.

Consumers need protection both from the employees of the firms who maintain the
data (whether or not such employees act with malicious intent is not the issue)
and from outside persons who can gain access to such data through the often
non-existent security of these systems.

Many of the companies involved state that they are providing all of the
security required by law.  OK then--if they don't feel a need to go beyond the
current law to a meaningful level of protection, the time has come to improve
the laws to take into account the realities of the information age.  And there
isn't a moment to lose.
                                           --Lauren--


Quirk in British Computer Privacy Laws

Unix Guru-in-Training <elr%trintex@uunet.UU.NET>
Tue, 7 May 91 15:27:34 EDT
- Quote without comment from The Economist, May 4-10:

"Computers and Privacy — The eye of the beholder"

...A common feature of privacy law is that governments tend to treat
information held on a computer as fundamentally different from that held on
paper.  ...But technology is making the distinction less and less tenable--
with, for example, devices that can scan text from a printed page straight into
a computer.  ...Because British law lets individuals look at computerised
information that others may hold about them, British journalists turn from
their computer terminals to typewriter and paper when they write obituaries of
elderly (but not yet deceased) public figures.  When the subject is no longer
in a position to demand his right to freedom of information, the obituary is
then put into the computer for publication.
                                                 Ed Ravin  philabs!trintex!elr


Smart alecks vs. the census computer

Les Earnest <les@dec-lite.stanford.edu>
Tue, 7 May 91 19:53:23 -0700
A news report indicates that an increasing number of Americans are
thumbing their noses at the Federal government's mindless insistence
on classifying everyone into traditional ethnic categories ["Census
menu contained lots of spice," San Jose Mercury News, 5/7/91, p. 1A].
The feds are reportedly fighting back with advanced computer technology.

Several years ago I pointed out in this forum that no one had yet
devised a scheme that would reliably and unambiguously assign each
individual to a particular racial or ethnic category.  Those postings
were later developed into an article arguing that all statistical
studies of racial or ethnic categories and the governmental programs
based on them, especially Affirmative Action, rest upon unstable
foundations ["Can computers cope with human races?" CACM, Feb. 1989].
I advocated answering questions about one's ethnicity with "mongrel"
and a number of people later told me that they had used that answer
or similar ones in the 1990 Federal census.

Today's news story says that the Census Bureau received many answers
such as "a little bit Norwegian," "a little bit of everything,"
"California boy,"  "Heinz 57," "a fine blend," and "steak sauce."
They somehow decided that most of these responses came from people of
Hispanic descent, according to Roderick Harrison, chief of the Race
Statistics Branch of the Census Bureau.  Other answers included "child
of God," "none of your business," "NOYB," and "NOYFB."

The article goes on to say:
  "This year, for the first time, spiffy new technology enabled the
  census to decipher each and every write-in answer to the race
  question.  (In 1980, only a small sample was read.)  The census
  computer was able to sort and assign about 85 percent of those
  `unique responses' to a racial group."
This is truly a remarkable claim: apparently the computer has
somehow figured out not only how to classify individuals into ethnic
classes, but how to do it even when they give ambiguous or misleading
answers.  If this claim holds up under scrutiny, I will nominate it
as the first example of true artificial intelligence.

The article goes on to mention that there are limits to its classification
abilities:
  "But the computer could not match about 200,000 quirky, smart-aleck
  and just plain weird answers such as `golden child,'
  `extraterrestial,' `alien,' `exotic hybrid,' `exchange student,'
  `half and half,' `fat pig,' `father adopted, race unknown,' `all of
  the above,' `handicapped,' and `exquisite.'"

Despite these limitations, it appears that the Census Bureau is well ahead
of the rest of the world in computer science.  ;-)

Les Earnest                          UUCP: . . . decwrl!cs.Stanford.edu!Les
12769 Dianne Dr.         Los Altos Hills, CA 94022     Phone:  415 941-3984


UK Interim Defence Standard 00-55

Martyn Thomas <mct@praxis.co.uk>
Wed, 8 May 91 18:11:44 BST
UK Defence Standards 00-55 (Procurement of safety-critical software) and
00-56 (hazard analysis) have been reissued as Interim Standards by the UK
MoD. They have been revised, and are no longer *draft* interim standards.
They will be used in procurement at the discretion of individual project
managers until enough experience of their efficacy has been gained; then
they will be revised as necessary and issued as full standards.

I have only skimmed 00-55 as yet, but it seems to have come out of revision
improved. The requirement for formality throughout development is still
there (strengthened somewhat, it seemed to me), but contentious material,
such as the list of proscribed practices (assembler, recursion, floating-point
...), has been revised and moved to a separate "guidance" section, outside
the normative part of the standard.

Copies of the standards may be obtained, free of charge, by writing to:

Director of Standardisation, STAN 1, Kentigern House, 65 Brown Street
Glasgow G2 8EX  Scotland

...... and *not* to me!
Martyn Thomas, Praxis plc, 20 Manvers Street, Bath BA1 1PX UK.
Tel:    +44-225-444700.   Email:   mct@praxis.co.uk


Patriot vs Scud

<Mark_Jackson.wbst147@xerox.com>
Wed, 8 May 1991 04:34:58 PDT
Can't follow up quickly as they don't carry /Army Times/ in our Technical
Information Center. . .

Date: 7 May 91 22:42:35 GMT
From: bcstec!shuksan!major@uunet.UU.NET (Mike Schmitt)
Subject: Patriot vs Scud
Keywords: Software glitch
Organization: The Boeing Co., MMST, Seattle, Wa.
Extracted-from: sci.military Digest   V7 #19

According to the latest issue of 'Army Times' the Scud that struck the barracks
and killed 28 soldiers was not detected by the Patriot missile battery guarding
the area.  It was described as "a glitch in the internal working of the
system."

The fault was contained in a very complex problem-solving portion of the
system's software.  The incoming Scud was picked up internally by the battery's
computers, but was 'never filtered through the computational process,' and
therefore never showed up on the screen.

Once the fault was traced, it was "a simple fix."  The Scud was intact on
impact, and had not broken up as it descended.
                                                      mike schmitt


Some views about the debate sparked by Dutch Hackers story

"Olivier M.J. Crepin-Leblond" <UMEEB37@vaxa.cc.imperial.ac.uk>
Tue, 7 May 91 19:00 BST
I'd like to put forward a few views about the debate concerning the Utrecht
hackers, computer security, etc. (Note that I have deliberately not named
anyone: I don't want to flame *anyone* in particular. Indeed, RISKS is not the
place for flames - keep that in mind.)

1. Someone suggested locking the Utrecht site out of the Internet. This
is nonsense. Take a similar example: if such a problem had happened via
the UK's NSFNet-relay, would you lock it out? You would be locking out
most of the UK's academic users of the Internet, and all this because of
one hacking incident. (Note here that I am *not* saying that hacking is okay.)

2. What level of security: why is it always a NASA (apologies for mentioning
a name - no hard feelings), or military, or US government site which gets
hacked? Why isn't it an undergraduate university computer facility, or
the computer system of an obscure company manufacturing shoes? (note: I
have nothing against shoe-makers)
Possible answers:
either... 1. only break-ins at NASA sites are reported in the press
or...     2. NASA sites seem more attractive to hackers
Hence: isn't it time that one started setting security standards for
"important" sites? I'm amazed that logging into a US government
computer follows the same procedure as logging into a local
workstation on campus here. Physically speaking, US government
computers are protected by perimeter fences, guards, systems of personal
I.D., etc. etc. etc. (I've never looked into this any further than that.)
Here, one can walk in during the day, in broad daylight, and with a bit
of luck switch off the computer systems by turning the key on the front
panel. In the case of the Utrecht hackers, why was there so much security
to reach the computer physically, and so little to reach it virtually?
Follow the open-door policy described later (in 5) and you might as well
open up your perimeter fences, your classified areas, and generally the
path leading to your computer room.

3. Someone suggested that some systems can only run their specialised
software on old operating systems. Well, why are they connected to the
Internet then? FTP? Mail? Wouldn't it be more appropriate to use a more
recent machine/Operating System (OS) for FTP and mail? There are quantities
of machines in the world running an old OS and specialised software, not
connected to any outside network, and hence never hacked!

4. Passwords: assuming there is no bug in the OS, the hacker must use a valid
user password. Why not use two passwords at "risky" sites? Indeed, why not use
a hard token (to be plugged into a terminal), or make some accounts reachable
only from specified terminals? Why not record every unusual access failure?
Indeed, do systems managers ever read these log files of failures? (A sketch
of such a check follows below.)
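
As a sketch of the sort of routine check I have in mind (the log format
here is entirely hypothetical; every system has its own), a few lines of
Python would do:

    from collections import Counter

    def summarize_failures(logfile):
        """Tally failed logins per account, from a (hypothetical) log
        whose lines look like:  date time FAILED-LOGIN user terminal"""
        counts = Counter()
        with open(logfile) as f:
            for line in f:
                fields = line.split()
                if len(fields) >= 4 and fields[2] == "FAILED-LOGIN":
                    counts[fields[3]] += 1
        for user, n in counts.most_common():
            if n >= 3:      # repeated failures deserve a closer look
                print("%s: %d failed logins" % (user, n))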

5. Someone compared a hack of a computer account with trespassing on a
property whose door was open. Well, to answer this idea in the
same stupid way:
"If the door is open, there is no sign saying -NO TRESPASSING-, and
the doormat says -Welcome-, then if I enter and get arrested by the police
for trespassing, I can always plead not guilty, because I was IGNORANT
(ie: DID NOT KNOW) of the fact that I was not allowed there and was
breaking the law."

In short: 1. if there's a convincing warning notice at (or before) login,
and an unauthorised user is caught, then YES, PROSECUTE !
          2. if NO NOTICE, but just a "welcome" message, then NO, I'm not
buying the idea of prosecution.
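
Purely as a hypothetical example, a notice along these lines displayed
before the login prompt would remove any doubt:

      *** WARNING ***
      This is a private computer system.  Unauthorised access is
      prohibited and offenders may be prosecuted.  All activity on
      this system is logged.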

Olivier M.J. Crepin-Leblond, Communications research student,
Electrical Engineering Dept., Imperial College of Science, UK.


Re: Gary Marx Comments at the Computers, Freedom and Privacy Conference About Kids Holding Phone up to TV

Sanford Sherizen <0003965782@mcimail.com>
Tue, 7 May 91 20:39 GMT

Since there were so many postings about Gary Marx's comments at the San
Francisco Conference and he is a friend, I faxed him in Belgium and told him
that he was on the RISKS wanted list. I asked him about the source for his
comments.

He recalls that the example about kids being told to hold the phone up to the
TV set was cited in a Congressional hearing, and he thinks it was for a candy
company.  Unfortunately, his documentation is not with him while he is in
Belgium.

He's certainly someone who has a lot to contribute to our understanding of
technological fallout issues.  For those who haven't read his most recent book,
it is worth reading UNDERCOVER: POLICE SURVEILLANCE IN AMERICA (U. of
California Press, 1988).  It is an objective analysis of policing dilemmas and,
even though not its major focus, a primer on how to frame some of the central
arguments about cyberspace and law abiding policing.

Sanford Sherizen, Data Security Systems, Inc., Natick, MA 01760 USA
MCI MAIL:   SSHERIZEN  (396-5782)  PHONE:      (508) 655-9888


Update on S.618

<gnu@toad.com>
Mon, 06 May 91 12:27:19 -0700
I phoned the Senate about S.618, the "Violent Crime Control Act of 1991".
Guess who sponsored it?  Our dear friend Senator Biden again.  You can call his
Judiciary Committee staff at +1 202 224 5225 to receive copies of the bill.

The esteemed Senator seems a lot more interested in controlling privacy than in
controlling violent crime or terrorism — since he seems to be seizing on any
excuse he thinks the public will swallow to do it.
                                                    John Gilmore


Re: Fences, bodyguards, and security (of old O/S) (Estell, RISKS-11.62)

Rick Smith <smith@SCTC.COM>
Tue, 7 May 91 12:09:04 CDT
Bob Estell writes:

>To pursue my physical world analogy, should the next President wear a
>bullet proof vest, a visored helmet, carry a .357 Magnum, and be a
>martial arts expert? Or can we still rely on the Secret Service?

From what I understand, the President wears a bulletproof vest for
public appearances. At least, Reagan did after his earlier experience.

Anyway, physical world analogies don't always work when thinking about
computer security. The Vault and platoon of guards represent classic
physical security. The Trojan Horse is the classic threat in computer
security, and you don't have a serious threat of that kind in most
physical security situations.

Maybe the Trojan Horse program is a computer virus, or maybe it's
just some sneaky code that the author hid in your text editor. What it
does is make secret copies of your most secret files, putting them
where a spy can reach them. This easily bypasses "classical" OS
security, since *you* run the text editor, giving it access to *your*
files. The implications in a network environment are staggering.
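
To make the point concrete, here is a sketch (hypothetical and
deliberately minimal) of what such a Trojan Horse amounts to; note that
nothing in "classical" discretionary security stops it, because the
editor runs with the user's own privileges:

    import os, shutil

    def edit(path):
        # ... the editor's legitimate work happens here ...
        # ... and meanwhile it can quietly do this:
        stash = os.path.join("/tmp", ".ed-" + os.path.basename(path))
        shutil.copy(path, stash)   # secret copy of the user's file,
        os.chmod(stash, 0o644)     # left where a "spy" can read it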

From what I understand, our technology for producing physical "bugs"
just doesn't compare; we still trail James Bond (and even the Man
from UNCLE) by decades. On the other hand, hardly any routine computer
users would be able to tell the difference between "bugged" or even
virus-infested software and trustworthy software. Software is too
opaque, and does things that you can't really observe.

Rick Smith, SCTC, Arden Hills, Minnesota.


... old O/S

<TMPLee@DOCKMASTER.NCSC.MIL>
Tue, 7 May 91 12:38 EDT
In RISKS-11.62 Bob Estell wrote "UNIVAC's O/S 1100 ...  which I understand is
B2 now ...  thanks to ...  TMP Lee et al."

Although I will take some small credit for security enhancements to the
Univac/Sperry/Unisys OS 1100, but only a very small credit, I must point out
that the system has only a B1 rating, alas.  Although it was only superficially
looked at, I think it is fair to say that doing what needed to be done to reach
B2 was not in the cards.  I've not kept close touch recently, but even if I had
it would be improper for me to comment on or speculate on what might be
happening now.
                                           Ted


Re: The new California Drivers License

Alan Nishioka <atn@cory.Berkeley.EDU>
Tue, 7 May 91 13:41:10 -0700
I thought I would send this in since I first heard about the new California
license on comp.risks and I also just looked up the back issues there.

I just got my new California driver's license.  No, I'm not 17, but I take the
bus a lot.

It has a holographic plastic laminate of "DMV" and the California Seal.

My color picture was digitized into an IBM computer, as were my thumb print
and my signature.  The mag stripe on the back has three tracks.

Just for fun, I thought I'd try to read it.  I had previously been able to read
bank cards (with help from sci.electronics).  I found that the information
encoded is basically just what is printed on the card.  Kinda uninteresting.
Of course I couldn't figure out what little extra information was encoded....
(marked unidentified below)

It took me a little while to figure out the format, and I suppose it is
documented somewhere (anyone know where?), but it was fun.

Bank Cards — conform to ANSI/ISO 7810-1985 ($10)
Track 1:    6 bit word with 1 bit parity.  LSB first.
            code offset 32 below ASCII code.
Track 2:    4 bit word with 1 bit parity.  LSB first.  Numbers only.

Driver's License --
Track 1:    6 bit word with no parity.  Otherwise same as Bank Card.
Track 2:    Same as Bank Card.
Track 3:    ?
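
To make the format concrete, decoding a track amounts to something like
the following sketch (it assumes the raw bits have already been clocked
off the stripe, and the format details are just my guesses, as above):

    def decode_track1(bits, parity=True):
        """Decode a track of 6-bit characters, sent LSB first and
        offset 32 below ASCII; bank cards add a 7th (parity) bit,
        the license's track 1 apparently does not."""
        step = 7 if parity else 6
        text = []
        for i in range(0, len(bits) - step + 1, step):
            word = bits[i:i + step][:6]   # drop the parity bit if present
            value = sum(b << n for n, b in enumerate(word))   # LSB first
            text.append(chr(value + 32))
        return "".join(text)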

California Driver's License:

Track 2:    (low density)
   8 unidentified digits
   License Number
   Separator
   Expiration Date (YYMM)
   Separator
   Date of Birth (YYYYMMDD)

Track 1:    (High density)
            DALAN TAKEO NISHIOKA                                       $
            974 TULARE AVE               ALBANY
   Name
   Address
   City

Track 3:    (High density.  Can't reposition read head. )

It looks like there is space for a 58 character name (since someone
was worried earlier), a 29 character address and a 13 character city.

I suspect the third track contains the rest of the information from
the front of the license.

Alan Nishioka      KC6KHV      atn@cory.berkeley.edu      ...!ucbvax!cory!atn
974 Tulare Avenue, Albany CA 94707-2540     37'52N/122'15W    +1 415 526 1818
