The RISKS Digest
Volume 1 Issue 20

Wednesday, 9th October 1985

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Risks using robots in industry
Bill Keefe
Re: Computer databases
Matt Bishop
Registrar's databases; Database risks - census data
Hal Murray
2 messages
The winners of evolution...
William McKeeman

Risks using robots in industry

Bill Keefe <keefe%milrat.DEC@decwrl.ARPA>
Tuesday, 8 Oct 1985 07:26:03-PDT
I don't know who to credit with typing this in.  I was going to summarize,
but it's too easy to take some points out of context. It brings up many 
questions as to who bears the responsibility (liability?) to protect people 
from such occurrences.
 
                   --------------------------------

            "In The Lion's Cage"      [Forbes, Oct. 7, 1985]

On July 21, 1984, at about 1 p.m., a worker at Diecast Corp. in Jackson, Mich.
found Harry Allen, 34, a diecast operator, pinned between a factory pole and the 
back of an industrial robot. But Allen's co-worker couldn't come to his aid. 
Using the robot's controller, the company's director of manufacturing finally 
unpinned Allen, who was alive but in cardiac arrest. He died in a hospital 
five days later.

Allen had entered a restricted area,  presumably to clean up scrap metal from 
the floor. While there, he got in the way of the robot's work, and thus became 
the first - and so far only - U.S. victim of an industrial robot-related 
accident.

That's not a bad safety record, considering that 17,000 robots are now 
installed in the U.S. But the bet is he won't be the last. The Japanese, who 
lead the world in robot installations, also lead in robot-related fatalities: 
There have been reports of at least 5, and possibly as many as 20, such deaths 
in Japan.

That's only fatalities. In this country, companies are not required to report 
injuries related to specific equipment, so no reliable data are available. But 
in Sweden, a pioneer in the use of industrial robots, one study estimates that 
there is 1 accident a year for every 45 robots. By 1990, when the number of 
robots installed in American industry could climb as high as 90,000, the 
number of injuries could climb accordingly. That's because robots move quickly 
and are programmed to go through a series of motions without stopping. A 
worker who gets in the way can be struck, pushed aside, crushed or pinned to 
a pole as Allen was.

How will industry minimize the risk to its workers? Probably with difficulty. 
Robots don't easily accommodate safeguards. Whereas most machinery operates 
within a fixed set of boundaries, robots have a large "striking distance" - the 
reach of their mobile arms within three dimensions. In automotive assembly 
plants, maintenance workers often collide with robots adjacent to the ones 
they're servicing because they don't realize they are in another robot's work 
area. A robot may perform one task five times and then start on a completely 
different activity, and with it a different set of motions. Also, a robot can 
sit idly for a time and then come alive again, threatening injury to a 
worker who mistakenly thought it was shut down.

What's being done to make robots safer? Right now, not much. "The extent of 
most safety precautions are signs saying, 'Restricted Area: Keep Out,' or 
maybe a guardrail," says Howard Gadberry of the Midwest Research Institute in 
Kansas City, Mo. Indeed, the most common safeguards - perimeter barriers such 
as guardrails and electric interlocked gates, which automatically shut down 
the robot when opened - don't protect those maintenance workers and programmers 
who must enter the lion's cage. Presence-sensing devices, such as 
pressure-sensitive mats and light curtains, both of which automatically cut 
off a robot's power, also don't seem to offer as much protection as is needed, 
if only because workers are even more unpredictable in their movements than 
robots. They may not step on the mat when feeding parts to a robot, or they 
may not break a light curtain's beam.

That's not to say that robots can't be made safer. Researchers at the 
Rensselaer Polytechnic Institute, for example, recently completed a research 
prototype for several large U.S. companies of a four-sensor safety system that 
continuously monitors the area around a robot. Using ultrasonic, infrared, 
capacitance and microwave sensors, the RPI system is designed to stop a robot 
in its tracks if a worker gets too close. Cost?  Five thousand dollars 
in production, according to Jack Meagher, a senior project manager at RPI.

The National Bureau of Standards has also been working with ultrasonic sensors 
on robot arms, similar to the system at RPI. Both groups have developed a 
secondary, or watchdog, computer to monitor the actions of the robot and its 
microprocessor. After all, if the robot's computer goes berserk, how can it 
monitor itself? That's more important than you might think: 30% of robot 
accidents seem to be caused by runaways, according to John Moran, director of 
research at the National Institute for Occupational Safety & Health.
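
The watchdog idea is easy to sketch. The fragment below is not part of the
Forbes article and is not the RPI or NBS design; it is a purely illustrative
Python sketch, with invented sensor names and timings, of the decision an
independent monitor would make: call for a shutdown if any presence sensor
trips, or if the robot's own controller stops answering its heartbeat (a
possible runaway).

    import time

    HEARTBEAT_TIMEOUT = 0.5   # seconds of controller silence before shutdown

    def should_cut_power(sensor_readings, last_heartbeat, now):
        """sensor_readings maps sensor name -> True if a person is detected."""
        intrusion = any(sensor_readings.values())
        runaway = (now - last_heartbeat) > HEARTBEAT_TIMEOUT
        return intrusion or runaway

    # Example: the infrared sensor sees a worker inside the work envelope.
    readings = {"ultrasonic": False, "infrared": True,
                "capacitance": False, "microwave": False}
    print(should_cut_power(readings, time.monotonic(), time.monotonic()))
    # -> True, so the watchdog would open the power interlock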

While such systems slowly make the transition from research to the factory 
floor, industry is trying to put basic safety standards into practice. 
Recently, the Robotic Industries Association proposed a set of national safety 
standards for robots that could go into effect as early as next summer.

Would such standards have prevented Harry Allen's death? Maybe not. The robot 
at the Diecast plant was surrounded by a safety rail with an electric 
interlocked gate that automatically shut down the robot when the gate was 
opened. However, there were two gaps in the rail that allowed workers to 
easily bypass the safeguard; that has since been corrected by the company.

Says Allan Harvie, deputy director of the Michigan Department of Labor's 
bureau of safety and regulation, "I could only presume Harry Allen thought he 
could go in and do what he intended to do without having to shut the robot 
down."


Re: Computer databases

Matt Bishop <mab@riacs.ARPA>
8 Oct 1985 1123-PDT (Tuesday)
   I guess I'll start the ball rolling on this discussion.

   I think the greatest risk is not from the technological end but the
human end.  For instance, there was a case a couple of weeks back where
someone got stopped for a traffic ticket.  Call this gentleman John Lee
Jones (I've forgotten his real name.)  A routine computer check showed
James Lee Jones was a fugitive from an LA warrant, and the description
of James Lee Jones was pretty close to what John Lee Jones looked like.
So the SFPD hauled him downtown, and ran a fingerprint check to see
if there was anything else they could find out about John Lee Jones.
Turned out he had used several aliases in the past — so the SFPD
notified the LAPD they had arrested James Lee Jones, and would the LAPD
please come up and get him?  The LAPD obliged, took him down to LA,
and notified the prosecutors.

   Throughout all this, Mr. Jones was (vehemently) denying he was James
Lee Jones.  About a week after he had first been locked up, his public
defender persuaded the judge to order the police to compare John Lee
Jones' fingerprints with James Lee Jones' fingerprints.  They didn't
match.  End of case.

   What's so surprising is that the people throughout the whole
proceeding did not question whether the data the computer gave them was
relevant.  True, it was accurate (so far as I know.)  But it was used
incorrectly.  In other words, in this case the technology didn't fail;
the human safeguards did.  (Incidentally, in defense of the police,
when this came out an investigation was begun to see why the
fingerprint comparison was not made immediately; according to police
procedure it should have been.)  And no amount of database security can
guard against this type of breach of security.

[Caveat — I read the newspaper story I outlined above a couple
 of weeks ago in the S.F. Chronicle.  I have undoubtedly misremembered
 some of the details, but the thrust of it is correct.]

Matt Bishop

    [Add that to the database-related cases of false arrest reported in 
     RISKS-1.5.   PGN]


Registrar's databases

<Murray.pa@Xerox.ARPA>
Tue, 8 Oct 85 06:59:25 PDT
To: RISKS@sri-csl.arpa

Just mentioning grades, computers and risks, all in the same paragraph
instantly brings to my mind visions of hackers who are flunking freshman
English smiling anyway, knowing that they have figured out how to get an A.

I've always assumed that everybody "knew" that students and grades couldn't
really coexist on the same machine. Does anybody know of a school
brave/silly enough to do it?

It seems like a great opportunity for somebody who makes a secure system to
get a LOT of publicity, one way or the other. Has anybody ever been
confident enough to try it? What happened?

Changing the topic slightly... Security on an ethernet is clearly
non-existent unless you encrypt everything you care about. Our personnel
people upstairs take the problem seriously. The solution is simple. They
have their own section of coax. It's not even gatewayed to the rest of our
network.


Database risks - census data

<Murray.pa@Xerox.ARPA>
Tue, 8 Oct 85 07:15:28 PDT
To: RISKS@sri-csl.arpa

The Census Bureau distributes its data broken down to quite small areas. I
don't know the details, but I'm pretty sure it gets down to "neighborhood"
sized regions, and it may even get down to the block.  When the sample size
gets small enough, there are obviously opportunities for gleaning
non-statistical information by using carefully crafted queries to read
between the lines.
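
To make the concern concrete, here is a toy example of the classic
"differencing" trick (hypothetical households and figures, nothing to do
with any real census product): two innocent-looking aggregate queries over
a small block, subtracted, reveal one household's exact value.

    # Two aggregate queries over a tiny block, differenced to expose one record.
    households = [
        {"name": "A", "income": 21000},
        {"name": "B", "income": 34000},
        {"name": "C", "income": 27000},
        {"name": "D", "income": 250000},   # the target household
    ]

    def total_income(rows, keep):
        return sum(r["income"] for r in rows if keep(r))

    whole_block = total_income(households, lambda r: True)
    all_but_d   = total_income(households, lambda r: r["name"] != "D")

    # The difference of two "statistical" answers is one household's income.
    print(whole_block - all_but_d)    # 250000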

I remember somebody telling me that they worked pretty hard to make sure
this couldn't happen. Anybody know what they actually do? Is it written up
someplace? Does it work well enough? Any war stories? Are the techniques
simple once you know the trick? ....

   [As I noted in RISKS-1.19, Dorothy Denning's book is a good source.
    The Census Bureau tries to add phony data that preserves all of the
    overall statistics but that prevents inferences...   PGN]
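
A toy illustration of that perturbation idea (invented numbers only, and
certainly not the Bureau's actual procedure): nudge the per-block figures by
offsets chosen to sum to zero, so the overall total is preserved while no
individual block is exact.

    # Hypothetical per-block counts, perturbed so that the total is unchanged.
    block_counts = [30, 50, 20, 70, 40]
    offsets      = [ 2, -1, -3,  1,  1]    # chosen to sum to zero

    published = [c + n for c, n in zip(block_counts, offsets)]

    assert sum(published) == sum(block_counts)   # overall statistic preserved
    print(published)    # [32, 49, 17, 71, 41] -- individual blocks blurred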


The winners of evolution...

William McKeeman <mckeeman%wang-inst.csnet@CSNET-RELAY.ARPA>
Tue 8 Oct 1985 10:00:15 EST
To: risks@sri-csl

A recent submission included the following paragraphs on evolution
of morals...

     Now I think it fairly easy to see that the capacity to put
     group survival ahead of self-interest is an important genetic
     trait and that tribes of people that had this trait would be
     more likely to survive than tribes that didn't.  That is not
     to say that this moral capacity doesn't vary greatly from one
     person to the next, or even that it may not be more fully
     realized in one person than another because of upbringing.  It
     is even possible that, because of some genetic error, some
     people may be born without a moral capacity, just like they
     might be born without arms or legs.

     Moral progress means the evolution of survival customs more
     appropriate to the current context.  The trouble in recent
     centuries has been that our ability to evolve new technology
     has outstripped our capacity to evolve the appropriate
     morality for it.  There is a strong tendency to stick to the
     morality that one learns as a child, even if it [is] not
     appropriate to the current situation.

Evolution is being used with its Darwinian meaning but with an
interpretation that includes the more ordinary progress of mankind.  The
central mechanism of evolution is the failure of the less successful forms
to reproduce — often for failing to live long enough.  Evolution is never
fast enough to avoid bloodshed — it is bloodshed that activates it.  Until
disaster strikes, the adapted and unadapted survive undifferentiated.

My point is that if we treat the present sad state of affairs as a problem
in evolution rather than politics or technology, we are implicitly planning
on rebuilding a world with the (apparently) adapted survivors of WWIII.

W. M. McKeeman            mckeeman@WangInst
Wang Institute            decvax!wanginst!mckeeman
Tyngsboro MA 01879
