The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 11 Issue 64

Wednesday 8 May 1991


o Cable Zapping
John Sullivan
o Fences, trojan horses, and security
Bob Estell
o More on Almost Humorous Fly-by-Wire Glitch
Mary Shafer
o Old cases of Telco tieup and grade hacking
George Malits
o Re: Changing class grades
Adam Engst
o 9th Federal Reserve Bank Drowned
Brinton Cooper
o Denise Caruso reports on new anti-encryption bill: S.618
John Gilmore
o S.618 via FTP & mail server, instead of flooding Washington
Brendan Kehoe
o NYTimes article on EFF and John Barlow
John Sullivan
o Re: Disclosure of customer information
Steve Bellovin
Lauren Weinstein
o Re: The means justify the ends? (piracy probe)
Henry Spencer
o Info on RISKS (comp.risks)

Cable Zapping

Wed, 8 May 91 13:08:50 CDT
An earlier RISKS had excerpts from an April 25 NYTimes article about American
Cablevision 'zapping' chips inside illegal cable hookups.  Not included were
quotes at the end of the article from some customers who claimed they had only
regular cable service, but had been zapped anyway.  I don't know the laws
governing cable service, but it would disturb me if, say, the phone company
started sending destructive signals down the phone lines.  Perhaps, though,
you're allowed only to connect the cable company's own equipment to your cable.
Otherwise, the cable company sounds just as bad as Prodigy.

I basically feel I have a right to sense my environment.
Though the common disregard for speeding laws is disgraceful, I
believe radar detectors must be legal.  I am bothered by
claims that decoding satellite TV signals is illegal: though of course
I could not resell copyrighted programming, if someone broadcasts
radiation at me, and I'm clever enough to decode it, that should
be allowed.  Cable TV is different, though, since I have
signed an agreement with the cable company to get a hookup.

John Sullivan, Geometry Center

Fences, trojan horses, and security

"351M::ESTELL" <>
8 May 91 14:28:00 PDT
Just yesterday, I saw a video tape "movie" (docudrama?) offered by the NIS
(Naval Investigative Service) on "human intelligence."  (It is "security and
safety awareness" week here.)

The points I'd like to pass on, in reply to Rick Smith's posting about "trojan
horses" are: (1) NIS still believes that "trojan horses" are a SERIOUS threat
to government RDT&E labs; i.e., "moles" as well as employees who are angry,
frustrated, broke, greedy, or stupid.  The FBI et al. agree with this opinion.
(2) One reason that the threat is NOT more serious (i.e., that the damage
is usually limited) is that we've made serious efforts to reduce the risks.

Yes, it is hard to spot a trojan horse (or virus, etc.) in a software package.
It is also hard to spot a spy in a crowd.  For the last several years, they
have abandoned wearing trench coats, carrying daggers and magnifying glasses,
and speaking in obvious accents.  True, the physical analogy is limited; every
analogy is.  (That's true by definition of "analogy.")  But the typical
computer security person still has things to learn from it.

More on Almost Humorous Fly-by-Wire Glitch (RISKS-11.61)

Mary Shafer <>
Mon, 29 Apr 91 20:10:10 PDT
Joseph Nathan Hall writes:

      From the pages of Popular Science, April 1991:

   "...Spectators at the first flight of Northrop's [YF-23] prototype
   noticed its huge all-moving tails--each larger than a small
   fighter's wing--quivering like butterfly wings as the airplane
   taxied out to the runway.  Test pilot [Paul] Metz says this
   occurred because the early generation flight-control software
   didn't include instructions to ignore motions in the airframe
   caused by pavement bumps.  The answer, he adds, is inserting lines
   of computer code that tell the system not to try to correct for
   conditions sensed when the fighter's full weight is on its nose gear."

Talk about a pack of slow learners.  I remember sitting in the control room
watching the AFTI/F-16 waving its canards and tail at every expansion joint in
the taxiway.  They finally stopped it with a squat switch.  But I shouldn't
criticize the YF-23 team too much, because the X-29 didn't have one originally,
either.

   I'll grant that in the 1990s we can analyze wind-tunnel tests in a
   few hours (or less) and can even simulate untested airframes with
   some success.  In the 1950s pilots frequently flew prototypes
   before the final results of early wind-tunnel tests were completely
   analyzed--a process that sometimes took weeks or months.  But am I
   alone in thinking that in some respects it takes more chutzpah to
   test-fly one of these modern fly-by-wire wonders?  
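The weight-on-wheels fix Metz describes amounts to gating the control law on a
squat switch.  A minimal sketch of the idea -- purely illustrative, with a
hypothetical `surface_command` function and gain, not the actual YF-23 flight
code:

```python
# Hypothetical illustration of a "squat switch" gate: suppress
# flight-control surface corrections while weight is on the gear,
# so pavement bumps sensed during taxi are ignored.

def surface_command(sensed_rate: float, gain: float,
                    weight_on_wheels: bool) -> float:
    """Return the commanded correction for a control surface."""
    if weight_on_wheels:
        return 0.0              # on the ground: ignore airframe motion
    return -gain * sensed_rate  # airborne: normal rate damping

# Taxiing over an expansion joint: the bump is sensed but not acted on.
assert surface_command(5.0, 0.8, weight_on_wheels=True) == 0.0
# Airborne: the same disturbance is damped as usual.
assert surface_command(5.0, 0.8, weight_on_wheels=False) == -4.0
```

The design point is that the sensor data and the control law are unchanged;
one discrete input simply inhibits the output path while on the ground.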

Old cases of Telco tieup and grade hacking

George Malits <>
Mon, 6 May 91 18:19:25 EDT
1) On the topic of a computer failure at the Federal Reserve, RISKS-11.62.
Consider this.  Sometime in the mid-70's, a disgruntled ex-employee of the
phone company started a number of wastepaper-basket fires in various telco
facilities.  He got lucky at Irving Place and destroyed a switching facility.
A goodly chunk of Manhattan was without phone service for several MONTHS.  What
would the effect have been if he had chosen a different facility?  Like one
that serviced Wall St.?

Let me tell you what I remember.  The year would be in the 74-76 range, and the
facility attacked was the telco building on Irving Place in Manhattan (around
15th and Second).  In any case, it was quite the fire, and the switch was
rendered scrap.  What we (the telephone-dialing public) found out was that
these switches are normally custom-built and ordered years in advance.  The
solution was to ship whatever switch was under construction to Manhattan and
"make it fit".  This also involved shipping the people building the switch to
Manhattan so that they could continue to build the second half while other
techs were installing the first half.  Meanwhile, no phones.  Small
businessmen were using CB to communicate with friends and family outside of
the affected area, who would then relay the call.  Telco set up banks of
microwave-link phone booths at certain street corners, but...  In any case,
the key limiting factor turned out to be the space available in the cable
lockers in the basement.  They were so cramped that only one or two techs
could work at a time.  Also, it was so hot down there that they worked REAL
SHORT shifts.  Their progress was of course a hot topic in the news.  I
remember that there was some key piece of diagnostic equipment (?) and that
something like 3 of the 5 units in the US were in use in Manhattan.  I don't
have any good references for further details (I read about it in the Daily
News at the time), but it was quite a big to-do, so I suspect that it would
have been covered in some journal or other.

2) Computer hacking of grades, RISKS-11.62.  Back at Columbia, in the mid-70's,
several profs kept a list of the various test grades on line.  This was a
completely "unofficial" set of grades for their own use.  We discovered it and,
from looking around the directory a little, figured out that they were
computing "the curve" from this file and thus assigning final grades for the
class.  After MUCH discussion, we decided that the way to turn this to our
advantage was either to add non-existent students who did very poorly on all
of the tests or to lower the grades of real students that we didn't know/like.
The idea was to lower the curve but leave our own grades unaltered (no smoking
gun).  In the end, we wimped out, but I did add one line to the file that said
"do you know how tempted I was to change this file".  I figured I'd give the
prof something to think about.

Re: Changing class grades (RISKS-11.62)

Adam Engst <ace@tidbits.UUCP>
Mon, May 6, 1991 7:24:42 PM
>    Concannon, a database specialist at the university's statewide computer

Does anyone have any more information about this case? It strikes me as a
little odd that this guy would have changed a random student's grades so
radically. It seems more likely that there was some collusion between
Concannon and the student, at which point the management sorts might want to
think about why and how such collusion, if indeed present, became possible.
No mention is made of what happened to the student, if anything, which would
clear up my question slightly. Somehow I doubt that database people are
recompensed handsomely enough in general to prevent the occasional incident of
bribery in whatever form.

Just curious ... Adam Engst, TidBITS Editor

TMPLee: 9th Federal Reserve Bank Drowned

Brinton Cooper <abc@BRL.MIL>
Wed, 8 May 91 17:52:12 EDT

"On Monday April 8 the computer center at the Minneapolis Federal Reserve Bank
was flooded out of commission by a broken air-conditioning cooling water pipe
in the ceiling.  [I'll ignore the RISKs of such a design; the point of this
note is something else.]"

Let's not ignore this and similar risks.  In an effort to minimize the risks to
the environment, we are strongly urged not to use Halon to extinguish computer
room fires.  This puts us back to sprinkler (water) systems.  The advantage to
Halon was that it was unlikely to do secondary damage to the installation;
water has no such feature.

Now what?   _Brint

    [To stave off discussion on halon itself, let me point out to new RISKSers
    that we have had numerous discussions in RISKS in the past:
  Halon (Dave Platt, Steve Conklin, Jack Ostroff, LT Scott Norton, Scott Preece)
  Risks of Halon to the environment vs. risks of other fire protection
    (Dave Cornutt)
  Halon environmental impact citation (Anita Gould)
  Halon environmental impact citation (Jeffrey R Kell)
  Halon (Romain Kang)
  Halon agreement and the ozone models (Rob Horn)

     Just in case you really want to get gassed on this subject.  PGN.]

Denise Caruso reports on new anti-encryption bill: S.618

John Gilmore <>
Mon, 6 May 91 11:54:15 PDT
Denise Caruso wrote a great piece for her "Inside Technology" column of the
Sunday, 5 May 1991, SF Examiner, on page E-14.  It concerns the attempts
to outlaw encryption and why that is a bad idea.  She claims that there
is a second bill that has had anti-encryption stuff quietly slipped into it
last week by the FBI: S.618, "The Violent Crime Control Act of 1991".

I'll quote her closing paragraph to encourage you to get and read it all:

    "I want crime to stop.  I want terrorism to stop.  But do we
    want to secure the networks or not?  I have *never* seen
    evidence that power in the hands of government authority
    didn't corrupt.  I have never heard of a compromise-able
    network that didn't get compromised.  With increasing reliance
    on computer-based networks, back doors for law enforcement (or
    whoever else figures it out) make me afraid.  I don't think
    they're a good idea."

S.618 via FTP & mail server, instead of flooding Washington

Brendan Kehoe <>
Wed, 8 May 91 14:39:26 -0400
  Senate bill S.618 is available via anonymous FTP from: []
  in the file
  as part of the Computer Underground Digest archives.
  It's about 227k, so please try to do it after 5pm EDT.

  Hopefully this will save the taxpayers a few dollars.

     Brendan Kehoe - Widener Sun Network Manager -
  Widener University in Chester, PA                A Bloody Sun-Dec War Zone

NYTimes article on EFF and John Barlow

Wed, 8 May 91 13:20:49 CDT
The New York Times Magazine, April 21, 1991, had an article "In Defense of
Hackers" by Craig Bromberg.  The article is based substantially on discussions
with John Barlow (founder of EFF), pictured as an "electronic cowboy".  The
cases discussed (including rtm, Craig Neidorf's 911 memo, and Steve Jackson
Games) are familiar to RISKS readers, but it is nice to see a well-reasoned
discussion in the mainstream press.

John Sullivan,  Geometry Center

Re: Disclosure of customer information (AT&T)

Wed, 08 May 91 15:28:13 EDT
Lauren relates a story of someone posting confidential information based on
internal AT&T data.  So what?

As noted, misuse of such data is already against the rules.  It is quite likely
that the individual will be disciplined, possibly even fired.  I don't see that
there would have been any greater protection if a Federal law were involved.
If someone chooses to act unethically, rules, at whatever level, won't stop
them.

Talking about inadequate technical protection of data doesn't wash in this
case.  I haven't seen any evidence that the employee in question wasn't
authorized to retrieve that data -- maybe that person did have legitimate
access to that database.  This is not a situation where an outsider called up,
and was able to bluff or hack a way in.

Ultimately, security rests on people.  The most sophisticated technical means
in the world provide no protection against a suitably-placed, suitably-skilled,
dishonest person.  Organizations that care about security know this, of course;
critical objects (large sums of money, missile launch systems, etc.) are
protected by at least two individuals.  But for routine access, it would be
crippling to the organization to do that, and I don't think that that's
going to change.
                         --Steve Bellovin

Customer Info Disclosure (AT&T)

Lauren Weinstein <>
Wed, 8 May 91 13:31:33 PDT
I think Steve may have partially misinterpreted the thrust of my recent
message.  In no way did I mean to imply that a "technical" problem was at the
heart of the recent AT&T customer information disclosure.  In this particular
case, it is clear that a "people" (and possibly a policy) failure occurred.
Whether or not the employee in question had a legitimate "need to know" the
information in question, and so whether or not he *should* have had access to
the data, are important questions, however.

The reason I brought up the related issues of automated account interrogation
systems and the like is that information privacy is based on the triad of
policies, people, and technology.  No one element stands alone.  The current
lack of adequate standards relating to all of these areas is resulting in far
too much information being passed around, both within and between
organizations, without adequate controls.  Failure or inadequacy of elements in
any leg of the triad can have significant negative results as far as the end
effects are concerned.

A key point that technologists need to be concerned about is that the available
technology may result in policy and people failures that are much more
far-reaching than might have occurred without the speed and finality that these
systems allow.  One obvious example among the multitude: A single "people"
error in a credit entry, propagated through credit bureau databases through
lack of adequate policy controls and checks, can cause an individual incredible
hassles--or much worse.

More and more, the transactions of our lives are being viewed as the raw data
of targeted marketing.  The telcos and long distance carriers would like to
market calling pattern data to businesses who would use that information to
"target" customers of interest.  The credit card companies use customer buying
information to cross-promote other merchandise through outside firms.  Even if
you view these particular cases as relatively "innocuous" (though I would
disagree with you!), these are but the tip of the iceberg.

The issues related to customer information collection and the subsequent access
to, marketing of, and use of that data for purposes that the customers might
not even imagine, need to be addressed as broadly as possible.  Attempting to
deal with them purely from the standpoint of one or two legs of the triad won't
work.  The policies, the people, and the technology must all be considered, and
where appropriate, minimum standards for the handling and control of this
information must be mandated by law.

As others have pointed out, it starts to look more and more like the proverbial
Big Brother won't necessarily be a government entity (at least in this
country).  Rather, it might be Big Brother, Inc.

The means justify the ends? (piracy probe)

Sun, 5 May 91 00:45:37 EDT
>  The evidence was obtained by a consultant employed by DEC to attend
>  a Syntellect training course in February. He copied its system
>  software which was later examined by DEC. [...]
>... Surely the consultant ... didn't ask for permission to copy their
>system? In which case, is the evidence not inadmissable by virtue of
>being gained by illegal means?

What law has been broken?  That wasn't *their* software.  You don't own the
licensed software running on your system; if you inspect the license agreements
carefully, you will find that the authors retain ownership, and you have bought
only the right to use it.  You are usually required to protect it from
unauthorized copying, but copying done by the owner's representative, with or
without your knowledge, hardly qualifies.

                            Henry Spencer at U of Toronto Zoology utzoo!henry
