The RISKS Digest
Volume 12 Issue 41

Wednesday, 25th September 1991

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Ada Code Formatters pretty dangerous
Richard G. Hash
Risks of computerized typesetting
Simson Garfinkel
Galileo's Revenge - Junk Science in the Courtroom
Martin Minow
Readings in Judgement and Decision Making
Doug Jensen
Nintendo Lottery Is For Real
Jim Huggins
Radio Shack computerized mailing list problem
Joseph Poirier
Re: Security in software distribution
Kilgallen
Re: Bell V-22 Osprey
John Wodehouse
A. Padgett Peterson
Have you tested your machine lately?
K. M. Sandberg
Electronic Locks in Universities
Martin Ewing
Jim Huggins
Dean Rubine
Kraig Meyer
Mike Carleton
Info on RISKS (comp.risks)

Ada Code Formatters pretty dangerous

Richard G. Hash <rgh@shell.com>
Fri, 27 Sep 91 15:10:23 GMT
The Rational R1000 is a machine designed especially for Ada development.  It has
a very powerful APSE, including a really nice (?) formatter.  You can type the
entire program in, hit the 'format' key and wham-bang, the entire program is
beautifully formatted, according to your configuration rules. It's pretty much
the "Rolls Royce" of Ada environments.

Unfortunately it has a really nasty habit:

Meant to type:

    if Kind_Of_Test_Well (Well_Coordinates) /= Dry_Hole then
        Drill_Oil_Well (Amount_To_Spend => 4_000_000_000);
    end if;

But if you have been LaTeX'ing or regexp'ing a lot lately, and have
backslash on the brain, you might actually type:

    if Kind_Of_Test_Well (Well_Coordinates) \= Dry_Hole then
        Drill_Oil_Well (Amount_To_Spend => 4_000_000_000);
    end if;

and the Rational formatter silently turns it into

    if Kind_Of_Test_Well (Well_Coordinates) = Dry_Hole then
        Drill_Oil_Well (Amount_To_Spend => 4_000_000_000);
    end if;

Turning a syntactically incorrect, but semantically well-meaning program into a
syntactically correct but semantically wrong program without even a warning is
quite a risk!!  In fact, when I finally found the reason nothing was working
right, I assumed that I had just made a typo (which I had), so I fixed it, hit
format, and then "Whoooa!!"  Everything does exactly the opposite of what you
expect...

Some tools are just too smart(?) for their own good!
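
To illustrate the failure mode: Rational's internals are not published, so
this is purely hypothetical, but a formatter that silently "repairs" unknown
tokens might behave like this C sketch:

    /* Hypothetical sketch, NOT the Rational's actual code: a "forgiving"
     * lexer that repairs an unknown operator instead of reporting it.
     * Ada has no '\=' operator; dropping the stray backslash quietly
     * turns the intended inequality ('/=') into equality ('='). */
    #include <stdio.h>
    #include <string.h>

    /* Return a "repaired" operator token, with no warning to the user. */
    static const char *repair_operator(const char *tok)
    {
        if (strcmp(tok, "\\=") == 0)  /* unknown token: strip the backslash */
            return "=";               /* ...inverting the programmer's intent */
        return tok;                   /* known tokens pass through unchanged */
    }

    int main(void)
    {
        printf("typed '\\=' -> formatter emits '%s'\n", repair_operator("\\="));
        printf("typed '/=' -> formatter emits '%s'\n", repair_operator("/="));
        return 0;
    }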

Richard G. Hash, Shell Bellaire Research Center, 713-245-7311 rgh@shell.uucp
                   ...!{sun,bcm,rice,psuvax1,decwrl,cs.utexas.edu}!shell!rgh

[WWN HEADLINE: Ada in 4-Mat CollAPSE of Back Slashes, Hits Well, No Field.  PGN]


Risks of computerized typesetting

<simsong@nextworld.com>
Fri, 27 Sep 91 09:58:26 PDT
In June 1991, I published the book Practical UNIX Security with Gene Spafford.
After the book came back from the publisher, we discovered to our horror that
all of the backquotes (`) in the UNIX shell-scripts had mysteriously been
changed to forward quotes (').  Of course, this breaks all of the UNIX
shell-scripts.

Gene, the book's editor, and I were all greatly confused, because we had
seen backquotes in all of the galleys and, indeed, in the semi-final PostScript
files that we had seen, backquotes were specified.

We thought that somebody at the publishing house had run some sort of filter
over the text files (we used troff as our typesetting language) and changed the
quotes around.  But nobody would own up to the deed, and when we checked the
files, they didn't appear to have been modified.  So the next thing to suspect
was the troff programs themselves; ORA, the publisher, had just changed over to
a new version.  Perhaps that was to blame.

We printed new proofs of the book and discovered, much to our confusion, that
the problem was gone: all of the backquotes properly printed....

Three months pass and, thanks to a number of very kind reviews, ORA finds that
they need to do a second printing of Practical UNIX Security.  This time we
carefully look at both the galleys and the PostScript files themselves.  All of
the backquotes are as they should be.  We ship the files off to the printer,
which will take the PostScript, typeset to film, burn plates, and make the
books.

In the words of the editor, Lightning Strikes Twice in the Same Place.  Same
problem: all of the backquotes print as forward quotes.

It turns out that the printer has a slightly non-standard version of the
PostScript courier font.  In the spot where they should have a backquote, they
have a forward quote instead.
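
In other words, the byte in the files (0x60, the ASCII backquote) never
changed; the printer's font simply mapped that code to the wrong glyph.  A toy
illustration in C (both encoding tables here are invented):

    /* The PostScript was correct; the output device's font was not.
     * Hypothetical glyph lookups for the two Courier fonts involved. */
    #include <stdio.h>

    static const char *standard_courier(unsigned char c)
    {
        return (c == 0x60) ? "grave (`)" : "(other glyph)";
    }

    static const char *printers_courier(unsigned char c)  /* nonstandard */
    {
        return (c == 0x60) ? "quotesingle (')" : "(other glyph)";  /* oops */
    }

    int main(void)
    {
        unsigned char c = 0x60;  /* what the troff/PostScript files contained */
        printf("byte 0x%02x -> proofing printer: %s\n", c, standard_courier(c));
        printf("byte 0x%02x -> film typesetter:  %s\n", c, printers_courier(c));
        return 0;  /* same input byte, different glyphs: the files never changed */
    }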

"The nice thing about standards is that there are so many of them to choose
from."


Galileo's Revenge - Junk Science in the Courtroom

Martin Minow at Shiggy 27-Sep-1991 1629 <minow@ranger.enet.dec.com>
Fri, 27 Sep 91 13:27:40 PDT
From a book review by Louise Kennedy (Globe Staff) in the Boston Globe,
Sep 27, 1991 (shrunk by about 50%: I didn't mark interpolations and elisions,
but the tone of the review and choice of adjectives are Kennedy's)

    Galileo's Revenge: Junk Science in the Courtroom
    by Peter W. Huber,  Basic Books, 274 pp. $23

Everybody knows about the Audis that accelerate, or Bendectin, the drug for
morning sickness that causes birth defects, or "chemically induced AIDS."
Except that these things everybody knows simply aren't so.

Why do we think otherwise? Because lawyers managed to persuade juries
that their clients had been harmed. With the PR campaign of one plaintiff's
lawyer, the Audis even made it onto "60 Minutes."

All these cases, and the several others Huber studies in this ferocious and
highly readable book, are the product of what he calls "junk science."  The
plaintiffs' claims rely on the testimony of "expert witnesses," who, as Huber
documents with devastating clarity, often have no real expertise in the
relevant specialty.

A chief proponent of "chemically induced AIDS" has a medical degree but hasn't
practiced for 20 years; he has failed board exams in internal medicine five
times; he heads a firm that specializes in ... expert testimony.  In short,
he's expert only at being an expert — and his "research," Huber says, is as
shoddy as you would expect.

Why would any judge let him in a courtroom? The abandonment of traditional
rules requiring "experts" to be generally recognized in the relevant scientific
community allows unscrupulous trial lawyers to put almost any crackpot with a
degree on the witness stand.

In short, Huber — who holds degrees in law and mechanical engineering and is a
columnist for Forbes magazine — paints a disturbing picture of the modern
legal system, and his passionate argument for a return to "the rule of fact" is
eloquent and compelling. But there are moments when his faith in science seems
as risky as the superstitions and pseudo-scientific fads he so effectively
debunks: even good scientists can be wrong.

Huber also weakens his case by slipping into easy sarcasm — a temptation
that's understandable, given the chicanery of his villains, but nonetheless
distracting. But Huber is a lively enough writer (and his cited outrages are
extreme enough) to keep you reading — and, more importantly, thinking about
the junk in the courtrooms and how to clear it out.
                                                            Martin Minow


Readings in Judgement and Decision Making

Doug <xanax!edj@ARCHONS22.ARCHONS.CS.CMU.EDU>
Fri, 27 Sep 91 09:46:38 EDT
As a relatively recent reader of this forum, I don't know if these two
books have been mentioned before; if so, I'm sure Peter will filter this
message out. My favorite cognitive science references for judgment and
decision making with uncertain information are the two collections:
Judgment and Decision Making, Arkes and Hammond (Eds.), and Judgment Under
Uncertainty: Heuristics and Biases, Kahneman and Tversky (Eds.), both by
Cambridge University Press.

E. Douglas Jensen, Digital Equipment Corp., jensen@helix.dec.com


Nintendo Lottery Is For Real

<huggins@zip.eecs.umich.edu>
Fri, 27 Sep 91 19:03:47 EDT
George Anderson, director of the Minnesota State Lottery, was interviewed today
on NPR's _All_Things_Considered_ about the new system for playing the lottery
which Minnesota will be implementing next year.  As mentioned previously in
RISKS, Minnesotans with a Nintendo will be able to play the lottery using a
home Nintendo unit outfitted with special software and a modem.

Details that may not have appeared previously: users will be required to
deposit money in advance (no playing on credit) and will receive a PIN to
verify their identity when playing the lottery.  Any winnings will be issued by
check to the owner of the PIN, in an attempt (according to Anderson) to
minimize possible security risks from stolen PINs.  No security measures other
than the PIN were mentioned.

While this may remove the financial motive from stealing a PIN, since a thief
can't collect on winnings from a stolen PIN, it does nothing to protect against
the malicious person who just wants to spend someone else's money...
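
A sketch of the arrangement as described on the air (every detail below is an
assumption): the pay-by-check-to-the-registered-owner rule protects the
winnings, but not the prepaid balance:

    /* Hypothetical model of the lottery account.  Winnings are paid by
     * check to the registered owner only, so a stolen PIN yields the
     * thief no payout -- but it still lets him drain the deposit. */
    #include <stdio.h>

    struct account {
        const char *owner;  /* checks are made out to this name only */
        int pin;
        int balance;        /* prepaid deposit, in dollars */
    };

    static int place_bet(struct account *a, int pin, int amount)
    {
        if (pin != a->pin || amount > a->balance)
            return 0;             /* rejected */
        a->balance -= amount;     /* anyone holding the PIN can spend this */
        return 1;
    }

    int main(void)
    {
        struct account victim = { "Registered Owner", 1234, 100 };
        while (place_bet(&victim, 1234, 10))  /* thief who learned the PIN */
            ;
        printf("deposit left for %s: $%d\n", victim.owner, victim.balance);
        return 0;
    }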

Jim Huggins, Univ. of Michigan (huggins@eecs.umich.edu)


Radio Shack computerized mailing list problem

<ach@mentor.cc.purdue.edu>
Fri, 27 Sep 91 21:27:38 -0500
   I'm not sure if this has been mentioned in the past, but it seemed
interesting enough to send to comp.risks.

   I went to Radio Shack the other day.  When I tried to pay for my purchases,
the cashier asked for the last four digits of my phone number and my name and
address.  Evidently this is for their mailing lists.  I knew I was on the
mailing list, and I tire of having to give out all this information every time
I buy some bauble at Radio Shack, so I refused to give the employee such
information.
   Apparently, however, the Radio Shack computer cannot deal with someone
saying "no" to it.  Every transaction must have a name and address.  This
information is printed on your receipt.  So the employee typed in a random
phone number and that random person's name, address and phone number appeared
on my receipt.
   I'm glad Radio Shack doesn't ask for Social Security numbers!

   I mentioned this to friends and found out that employees at one Radio Shack
nearby routinely "log" local businesses' phone numbers when customers don't
want to give their name and address.  They say they "round-robin" these logs so
one business doesn't get logged with a lot of transactions.
   Currently, these transactions (hopefully!) are used only for their mailing
list — not for billing purposes.  But it certainly seemed to me like a RISK
waiting to happen.  Plus, does Radio Shack exchange this information?  The name
on my receipt was that of a person whose last four phone digits were "0000".  I
expect it looks like he buys a lot at Radio Shack.

   Can Radio Shack employees bypass this mandatory-information requirement?
Is it that the software itself cannot handle an unknown customer, or do the
employees avoid using such a bypass (because of a bad user interface, for
example)?  Did the programmers purposely put this in to force employees to get
customers on the mailing list?

   I find it hard to believe that such a simple case of an unknown customer
cannot be handled by Radio Shack's systems.
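
For illustration, the fix seems trivial (Radio Shack's actual software is of
course not public, so this is hypothetical): allow an empty customer field and
print a placeholder instead of somebody else's name:

    /* Hypothetical point-of-sale record with an optional customer. */
    #include <stdio.h>

    struct transaction {
        double amount;
        const char *customer;  /* NULL means "customer declined to answer" */
    };

    static void print_receipt(const struct transaction *t)
    {
        printf("amount: $%.2f  sold to: %s\n", t->amount,
               t->customer ? t->customer : "CASH CUSTOMER");
    }

    int main(void)
    {
        struct transaction t = { 12.95, NULL };  /* no name given, no problem */
        print_receipt(&t);
        return 0;
    }
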
                                                            Joseph Poirier
Internet:  jrp@cs.purdue.edu        UUCP:  ...!{ backbone }!purdue!jrp


Re: Security in software distribution (Morris, RISKS-12.32)

<Kilgallen@DOCKMASTER.NCSC.MIL>
Fri, 27 Sep 91 23:01 EDT
Last year I talked with someone who said he had been a target of such
an attack aimed at a VAX/VMS system.  Although I was not in a position
to examine the evidence, there was no reason for the individual to make
up such a tale to tell me.  His story is as follows:

The target and the perpetrator were both in California, and the perpetrator
went to the trouble to have the package mailed by a confederate in
Massachusetts (enhancing the impression that it came from DEC.)

The cover letter included in the package said "important VMS security
patch - install immediately".  In fact, the software was a recycled
cracker modification to introduce a trap door.

The bottom line was that the attack did not succeed, because the target
merely put the package into a pile of other software from DEC which had
not yet been installed.


Re: Bell V-22 Osprey (Wodehouse, RISKS-12.41)

"John Wodehouse" <w0400@usav01.glaxo.com>
28 Sep 91 12:12:00 EST
What worries me about triple redundancy and voting is not so much that two bad
units outvoted the good one in this case, but that the system design allowed
two aircraft with one bad unit each to continue to fly for some time without
alerting anyone to the problem.

If the same sort of system allowed an airliner to fly with only two of its
three units working correctly, and a further failure then occurred in
mid-Atlantic, I think passengers might give up flying.  From the USN report, we
are led to believe that this problem existed from the time the aircraft was
built, so the testing of the triple-redundant system itself must be flawed.  I
just cannot see how a system can be built without a pre-takeoff check that all
units are working correctly and providing correct data.  The facts show that I
am not correct here.

Lord John - the programming peer


Re: V-22 Osprey (Wodehouse, RISKS-12.41)

A. Padgett Peterson <padgett%tccslr.dnet@uvs1.orl.mmc.com>
Thu, 26 Sep 91 08:25:34 -0400
    Not having the wiring diagram, second-guessing is dangerous, but
consider the case in which the triple sensors are not "reverse-wired" but
cross-wired (e.g., sensor 2 is connected to input 1 and vice versa).  In this
case, with "all good" everything is fine, and if sensor 3 fails all is still
OK.  However, if 1 or 2 fails, the *other* is reported failed and voted out,
and an immediate mismatch occurs between 3 and the failed sensor (which is
still considered good).  The flight control must then rely on some other (and
often lesser) means of selecting the proper value, usually a calculated value
or a range checker.
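
    A toy 2-of-3 voter in C makes the reporting confusion concrete (the
channel mapping below is invented, consistent with the hypothesis above): the
voter correctly isolates the bad *input*, but the fault report names the wrong
*physical* sensor:

    /* Hypothetical cross-wired triplex voter: input i reads physical
     * sensor wiring[i].  Physical sensors 1 and 2 feed each other's
     * inputs, so the fault report blames the wrong unit. */
    #include <stdio.h>
    #include <math.h>

    #define BAD 999.0  /* hard-over value from a failed sensor */

    int main(void)
    {
        double sensor[3] = { 10.0, BAD, 10.0 };  /* physical sensor 2 failed */
        int wiring[3] = { 1, 0, 2 };             /* the cross-wiring */
        double input[3];
        int i;

        for (i = 0; i < 3; i++)
            input[i] = sensor[wiring[i]];

        /* vote out whichever input disagrees with the other two */
        for (i = 0; i < 3; i++) {
            int a = (i + 1) % 3, b = (i + 2) % 3;
            if (fabs(input[a] - input[b]) < 1.0 &&
                fabs(input[i] - input[a]) >= 1.0)
                printf("voted out input %d; maintenance pulls sensor %d, "
                       "but the failed unit is sensor %d\n",
                       i + 1, i + 1, wiring[i] + 1);
        }
        return 0;
    }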

    This is an inherent problem in any flight-critical design that relies on
detection of "first-fail".  In this case the failed sensor was evidently a
"second-fail" condition but was treated as "first-fail", and that is a very
real concern.

    Another concern not mentioned (and again merely hypothesized): from the
text, it would appear that at least two of the critical triplex sensor signals
are routed through a single connector, which is not a good idea since
connectors are one of the major failure areas.  (There are other equally
dangerous possibilities that also have to be considered; e.g., if the signals
have redundant routing, shouldn't that have caused a mismatch?)

    On the quadruplex AFTI-F16, one of our concerns that influenced
a number of routing decisions was the number of simultaneous faults that
could be caused by one 20mm cannon shell.

    Of course, it is all too easy to second-guess a design team after the
fact; on first flight everyone is crossing their fingers, and anyone who isn't
shouldn't be there.
                            Padgett


Have you tested your machine lately?

K. M. Sandberg <kemasa@ghost.hac.com>
26 Sep 91 15:48:20 GMT
We have an Alliant FX/800 computer, which uses the i860 processor.  Its
designers made a decision, which I think was wrong, that such things as
divide-by-zero should not be a fatal error by default; you can change this if
you want by making a call in your program.  By default no action is taken on
many of the exceptions.  Note also that you have to do this for each program,
not once for the machine (we should be receiving patches to make the default
values what *we* consider more reasonable).
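
As an illustration of the idea (the Alliant's own call is not shown here;
feenableexcept is a glibc extension found on modern Unix systems), a program
can ask that these exceptions trap instead of silently producing Infinity or
NaN:

    /* Request traps on the exceptions we consider fatal, rather than
     * silently getting NaN/Infinity.  glibc-specific. */
    #define _GNU_SOURCE
    #include <fenv.h>
    #include <stdio.h>

    int main(void)
    {
        /* after this call, 0.0/0.0 and 1.0/0.0 raise SIGFPE */
        feenableexcept(FE_DIVBYZERO | FE_INVALID);

        volatile double zero = 0.0;          /* defeat constant folding */
        printf("%f\n", 1.0 / zero);          /* dies here, not later */
        return 0;
    }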

How many people try *ALL* operations that should fail on a machine before using
it to get some work done?  It is documented, but who has the time to read all
of the documentation? This is not to say that the users do not bear some
responsibility in testing the machine to make sure that it correctly handles
error cases, but I think that companies selling computers should think about
what the users expect the machine to do and make that the default *if* it is
reasonable. I do not know the exact reasons for this decision, it might have to
do with performance, but who cares if the machine is fast if the answers are
wrong?

Also, notice in the output that the results differ depending on whether or not
you optimize.
                         Kemasa.

Script started on Thu Sep 26 08:24:24 1991
% cat t1.c
#include <stdio.h>

main()
{
 int    i = 0;
 int    j = 10;
 int    k = 0;
 int    ir = 0;
 float  f = 10;
 float  g = 0;
 float  fr = 0;

 for (i = 0 ; i < 20 ; i++) {
    printf("i: %3d j: %d k: %d ir: %d f: %f g: %f fr: %f\n",i,j,k,ir,f,g,fr);
    fflush(stdout);
    ir = j / k;
    fr = f / g;
    printf("       j: %d k: %d ir: %d f: %f g: %f fr: %f\n",j,k,ir,f,g,fr);
    fflush(stdout);
    j--;
    f--;
 }
}
% cc -g -o t1 t1.c
% t1
i:   0 j: 10 k: 0 ir: 0 f: 10.000000 g: 0.000000 fr: 0.000000
       j: 10 k: 0 ir: -1 f: 10.000000 g: 0.000000 fr: nan
i:   1 j: 9 k: 0 ir: -1 f: 9.000000 g: 0.000000 fr: nan
       j: 9 k: 0 ir: -1 f: 9.000000 g: 0.000000 fr: nan
    [...]
i:  10 j: 0 k: 0 ir: -1 f: 0.000000 g: 0.000000 fr: nan
       j: 0 k: 0 ir: -1 f: 0.000000 g: 0.000000 fr: nan
i:  11 j: -1 k: 0 ir: -1 f: -1.000000 g: 0.000000 fr: nan
       j: -1 k: 0 ir: -1 f: -1.000000 g: 0.000000 fr: nan
    [...]
i:  19 j: -9 k: 0 ir: -1 f: -9.000000 g: 0.000000 fr: nan
       j: -9 k: 0 ir: -1 f: -9.000000 g: 0.000000 fr: nan
% cc -O -o t1 -uniproc t1.c
% t1
i:   0 j: 10 k: 0 ir: 0 f: 10.000000 g: 0.000000 fr: 0.000000
       j: 10 k: 0 ir: 0 f: 10.000000 g: 0.000000 fr: +Infinity
i:   1 j: 9 k: 0 ir: 0 f: 9.000000 g: 0.000000 fr: +Infinity
       j: 9 k: 0 ir: 0 f: 9.000000 g: 0.000000 fr: +Infinity
     [...]
i:   9 j: 1 k: 0 ir: 0 f: 1.000000 g: 0.000000 fr: +Infinity
       j: 1 k: 0 ir: 0 f: 1.000000 g: 0.000000 fr: +Infinity
i:  10 j: 0 k: 0 ir: 0 f: 0.000000 g: 0.000000 fr: +Infinity
       j: 0 k: 0 ir: -1 f: 0.000000 g: 0.000000 fr: nan
i:  11 j: -1 k: 0 ir: -1 f: -1.000000 g: 0.000000 fr: nan
       j: -1 k: 0 ir: 0 f: -1.000000 g: 0.000000 fr: -Infinity
    [...]
i:  19 j: -9 k: 0 ir: 0 f: -9.000000 g: 0.000000 fr: -Infinity
       j: -9 k: 0 ir: 0 f: -9.000000 g: 0.000000 fr: -Infinity
% exit
%
script done on Thu Sep 26 08:25:47 1991

The best defense is insanity.

      [Foreshortened [...] for your reading pleasure by PGN.]


Electronic Locks in Universities

Martin Ewing <ewing-martin@CS.YALE.EDU>
Wed, 25 Sep 91 22:52:22 EDT
    [Moderator's note: This and the following messages are slipping through.
    There are some good general points.  But this is probably (more than)
    enough on the topic, so let me not encourage a barrage of spinoffs.  PGN]

    David Holland's story on Harvard's new e-lock system (RISKS-12.40)
struck a chord.  I can share a few experiences with systems at Yale.  (The
University is currently implementing a "college" [dorm] key system which is
different from the one I will discuss.)

    We control access to some of our student computing areas by means of
the Yale ID card, which has a bar-code cleverly hidden behind a visually
opaque/infrared transparent band that looks just like a magnetic credit card
strip.  The codes are not obvious, and I believe would be relatively hard to
forge.  (A screwdriver or hammer taken to the door would be a lot easier for
your typical perpetrator.)  I don't believe card forgery or duplication is a
problem for the more advanced card systems.  A lost card can be invalidated and
a new one assigned with a new tag digit. There is also no way for the finder of
a card to know what privileges are associated with it.  (Except probably the
right to check out library books, but that's not my problem!)  Keep in mind,
the typical risk we are protecting against is the one-time major intrusion.
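
    A sketch of how such a tag-digit scheme might work (the details below are
illustrative, not our production code): the door system honors only the most
recently issued tag for each person:

    /* Illustrative card-validity check: only the latest tag opens doors. */
    #include <stdio.h>

    struct record { int person_id; int current_tag; };

    static int card_valid(const struct record *db, int n, int person, int tag)
    {
        int i;
        for (i = 0; i < n; i++)
            if (db[i].person_id == person)
                return db[i].current_tag == tag;  /* old (lost) tags are dead */
        return 0;
    }

    int main(void)
    {
        struct record db[] = { { 4711, 2 } };  /* this card was reissued once */
        printf("lost card (tag 1): %s\n",
               card_valid(db, 1, 4711, 1) ? "opens" : "rejected");
        printf("new card  (tag 2): %s\n",
               card_valid(db, 1, 4711, 2) ? "opens" : "rejected");
        return 0;
    }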

    We have observed some of the problems that David worries about.  There was
an immediate protest when it became known that we log key accesses.  Students
were ultimately convinced that we would not routinely examine these files to
see who was doing his homework, etc.  We have had to consult the files on two
occasions of vandalism and theft, however. Having the records in these cases
was surprisingly unhelpful.  First of all, the police don't seem to have a clue
(PGN - forgive) what to do with electronic records like this.  Secondly,
evidence of room access is pretty weak when it comes to establishing who did
what and when.  (Students commonly enter two at a time, or let others in.)  At
best, I think such evidence will be circumstantial.

    A key problem has been administrative, keeping track of a large and
fluctuating student population.  The electronic system is better than the old
scheme of physical keys (or non-serialized cardkeys), which required cash
deposits, but one of our secretaries keeps very busy at the beginning of terms.
Inevitably, somebody's registration is wrong, or their card was manufactured
out of tolerance.  (The bar codes are applied by hand before the card is
laminated.)  It is very important to have good database programming and
production controls.  (Currently, our PC software takes the whole system
off-line when it's "re-orged" to include a new ID.)

    On a nostalgic note, how many people remember the MIT student IDs of
the late 60s/early 70s?  These had your ID number (SSN) punched as a Hollerith
code right through the plastic.  Not too tricky to forge, I'd say.  Were the
codes ever used?

Martin Ewing, Yale Science & Engineering Computing Facility,  Ewing@yale.edu


Electronic locks at Harvard

<huggins@zip.eecs.umich.edu>
Wed, 25 Sep 91 23:18:21 EDT
Though I'm not familiar with the Harvard Yard layout, I find it somewhat
amusing to think that computer-card locks will improve security at Harvard
dormitories.  Having lived in dorms and co-operative houses for the last 6
years, I can testify that the biggest causes of unauthorized entry for group
housing aren't the possession of keys by non-residents, but doors propped open,
doors with broken locks (usually broken by students who lost their door key and
didn't want to call security to gain entrance), and good-natured people who let
anybody in who knocks on the door.  None of these things will change by
installing computerized locks.

The idea itself isn't necessarily that bad — I worked for a summer at IBM in
Rochester, MN, where they used a security card system for employees to gain
entrance to the buildings.  The reason the system worked there, however, was
that everyone agreed (was ordered?)  to not only use the system, but to deny
entry to anyone who didn't have a valid computer card.  Just about everyone had
a story or two about leaving their ID card at home and having to get a
temporary card from security, having their card fouled up by a strong magnetic
field, and so on.  But the system worked the vast majority of the time, mostly
because people agreed to follow it.

The moral to the story is an old one: treating the symptoms of the disease
instead of the cause doesn't cure the patient.

Jim Huggins, Univ. of Michigan (huggins@eecs.umich.edu)


Re: Electronic locks at Harvard

Dean Rubine <dandb+@andrew.cmu.edu>
Thu, 26 Sep 1991 18:43:19 -0400 (EDT)
I work at the Information Technology Center, a 100% IBM supported research lab
at Carnegie Mellon University.  There are badge readers and electronic locks on
all the entry doors.  Last week a power glitch damaged the computer that runs
the locks (at around 6am) and as a result no one was able to enter the facility
that morning.  Even the campus police were locked out.  It was necessary to pull the
building fire alarm, which apparently releases the electronic locks.
Presumably if that didn't work, the next attempt would have been an ax to the
door or some bricks through the window.  So, as we consider the risks of too
many people having working keys, let us not forget the risk of no one at all
having working keys.

One nice feature of the system is that when a badge is reported lost or stolen,
it is invalidated and a replacement is issued.  Using the missing badge causes
alarms to go off.

Several days after the incident, an undergraduate employee told me of an
ingenious way to get into the facility without a badge.  So much for security.

        Dean Rubine / CMU ITC / Rubine@andrew.cmu.edu


Electronic locks at Harvard

<kmeyer@aero.org>
Thu, 26 Sep 91 15:58:30 PDT
In RISKS-12.40, David A. Holland (dholland@husc.harvard.edu) quoted a Harvard
Independent article describing how Harvard's dorms were being wired with
card-key systems.

Nearly all of the university-owned apartments and residence halls at USC have
had these for several years now, and my understanding is that Columbia has had
them even longer.  The card key is simple: it is merely the student ID card
with a magnetic strip on the back, similar to those used by residence hall
dining for a number of years now.

By each door, there is a card reader that controls the door lock: on some
doors, there is actually a solenoid-controlled latch; on others there is a very
strong magnet that literally keeps the door stuck closed.  To enter, you swipe
the card through the card reader; to exit, you press a touch-sensitive pad on
the door that unlocks the lock electronically.  The card readers are connected
via a network, and periodically a centralized computer downloads each card
reader with a list of cards that are valid for that building.

This system has some real advantages, namely that if an individual student
leaves the university or loses a card, that card is taken out of the database
and can no longer be used for building access.  Some cards, such as those
issued for on-campus conferences, are valid only on the days of the conference
and then become deactivated.

The risks of this system have been reported at great length in the Daily Trojan
over the last two years and have nothing to do with lock jimmying or card
duplication (not many college students know how to duplicate magnetic strips).
The first risk is that the magnetic strips on the cards become unreadable;
these cards are swiped through readers 2 or 3 times a day for entry, plus
another 2-3 times/day if the student is on a meal plan.  USC security also
claims that keeping the card in an eel-skin wallet will erase the magnetic
strip (anyone have any idea why this would be?)

The card readers were cleverly designed to store all valid card numbers in
local memory so that when the network or central computer was unavailable, it
would still allow access.  Unfortunately, the memory isn't large enough to hold
all of the valid cards for the larger buildings, and whenever the network goes
down, USC security has to dispatch a guard to each of the large buildings to
let folks in.
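
A sketch of that failure mode in C (the sizes here are invented): with a
fixed-size local cache and a building roster that exceeds it, a network outage
locks out perfectly valid residents:

    /* Hypothetical reader cache: it can hold only CACHE_SLOTS entries,
     * so entries beyond that are simply dropped when the list is loaded. */
    #include <stdio.h>

    #define CACHE_SLOTS 256  /* invented reader memory limit */

    static int offline_valid(const long *cards, int n_cards, long card)
    {
        int cached = n_cards < CACHE_SLOTS ? n_cards : CACHE_SLOTS;
        int i;
        for (i = 0; i < cached; i++)  /* entries past the cache were dropped */
            if (cards[i] == card)
                return 1;
        return 0;
    }

    int main(void)
    {
        long building[400];  /* 400 residents, only 256 fit in the reader */
        int i;
        for (i = 0; i < 400; i++)
            building[i] = 100000L + i;
        printf("resident #300, network down: %s\n",
               offline_valid(building, 400, 100300L) ? "enters" : "locked out");
        return 0;
    }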

In power failures, all of the doors with magnetic locks swing open (which has
happened on several occasions).
                                              --Kraig Meyer


An electronic lock that failed in an incorrect application

Mike Carleton 297-4114 MRO1-1/JK33 <mcarleton@vino.enet.dec.com>
Fri, 27 Sep 91 17:23:16 PDT
In RISKS 12.40 David Holland writes about the potential risks of the electronic
key-card locks on Harvard Union dormitories.  His mention of the potential for
electronic failure in these types of systems reminded me of another type of
failure that I had observed.

A large oil company in Dallas, TX, used a key-card access system to control
access to sensitive sections of their building.  The doors were 'locked' using
a large permanent magnet mounted on the door jamb and a large steel plate
mounted on the door.  The magnet was strong enough to hold the door closed
against substantial effort.  When the lock was electronically opened, a coil
running through the magnet was energized with AC current.  The field from the
AC disrupted the permanent magnetic field enough that the door could be opened
with little effort.  A small light near the doorway was lit to
indicate that the door had been unlocked.

The scheme seemed to work well with all but the main door to the lobby of the
building.  This door required a card-key at night but was open during business
hours.  They had programmed the system to leave the AC coil on the front door
energized all day.  The degaussing effect of the AC caused the permanent magnet
to lose its holding power, making the lock ineffective at night.

The normal nocturnal visitors to the site did not seem to notice the problem,
as they were in the habit of using the key card to open the door.  I expect
that if the problem had been noticed, maintenance workers would have replaced
the magnet, only to find the lock failing again soon afterward.  It seems
apparent that the
implementors of the locking system did not understand the technology of the
locks well enough to know that the scheme could never be made to work on a door
that must be unlocked all day.
                           Mike Carleton
