The RISKS Digest
Volume 9 Issue 36

Friday, 27th October 1989

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Bug in Intel 486 chip
PGN
UK Banking Error
Brian Randell
The Presentation of Risky Information
Joshua Levy
Hardware failure mimics hackers
Pat White
Andy Goldstein
Worms in a data stream
Rick Simkin
CERT Advisory on Sun RCP
J. Paul Holbrook
Warning About CERT Warnings
anonymous
Licensed users exceeded
Tim Steele
A lesson involving 'CRACKERS' (APPLE II)
Olivier Crepin-Leblond
Info on RISKS (comp.risks)

Bug in Intel 486 chip

"Peter G. Neumann" <neumann@csl.sri.com>
Fri, 27 Oct 1989 10:41:03 PDT
This morning's San Francisco Chronicle notes that there is a flaw in the
trigonometric functions in the new 486 chip.  It was discovered by Compaq in
testing its new system, expected to be announced on 6 November.  New chips
without the flaw will begin to appear next week, with volume production a few
weeks away.  This may delay some of the applications that are in the works.

The chip contains 1.2 million transistor equivalents, and at 12 to 15 MIPS is
three times faster than the 386 chip.  200 companies are reportedly actively
designing new computers using the chip.

                     [Which chippies could talk about a better MOS TRAP
                     with everyone making a beaten path to their door?]


UK Banking Error

Brian Randell <Brian.Randell@newcastle.ac.uk>
Fri, 27 Oct 89 16:54:02 BST
There has been a follow-up article in Computer Weekly (Oct 26, 1989, p.9) to
the story about the software error which caused £2 billion in duplicate
payments, which I posted to RISKS (UK Banking Error, RISKS 9.33) last week. The
new article, though quite lengthy, doesn't add much detail. However it does
reveal that the bank concerned was Citibank, that 95% of the £2bn payments were
reversed within the same day, that delays only occurred where Citibank was
"acting through intermediaries or other branches", and that all the money was
returned within three days.

The nicest bit in the new article is where it indicates that the story was
originally broken in a speech by Jim Reeves, technical manager of the Clearing
House Payment Systems (Chaps) to a gathering of bankers and other financial
industry delegates at the Compsec computer security conference:

 "Reeves struck observers as the most unlikely bean-spiller in the
 financial industry. His voice was so unexpressive that his Compsec
 speech left one immaculately dressed banker in the audience slumped
 across two seats, snoring loudly.

 The enormity of the £2bn error was lost on his audience. Yet Reeves
 was saying, in effect, that in a matter of minutes a systems or
 operator error could cost a bank more than its annual profits."

And the article ends:

 "Citibank insisted that there had been no financial loss to the bank or
 any third party as a result of the error. But this comment begs many
 questions which Citibank would not answer. What about the lost
 interest on the £2bn? Even a day's lost interest would amount to more
 than £500,000.
 ...
 To most people the idea that systems could be so vulnerable as to
 allow a criminal, perhaps organised criminals, to divert
 electronically hundreds of millions into false accounts would be
 inconceivable - just as incredible as the notion that a bank could pay
 out £2bn by mistake."

Brian Randell, Computing Laboratory, University of Newcastle upon Tyne, UK


The Presentation of Risky Information

Flame Bait <joshua@Atherton.COM>
Thu, 26 Oct 89 16:44:31 PDT
Background:
I have written a paper, "A Comparison of Commercial RPC Protocols" (RPC
stands for Remote Procedure Call, and it is a simple way of writing
distributed applications).  This paper contained two sections: one
comparing the speed of various RPC products, another comparing their
reliability.  This paper has been read by dozens (hundreds?) of people,
and I have gotten a lot of feedback about it.

Problem:
The problem is that everyone ignores the reliability section of the paper.
People are constantly discussing the relative speeds of the RPC systems,
but they ignore the differing reliability measures.

I am in the process of writing the second version of this paper, and I
want people to pay more attention to RPC reliability, but since I do not
know why they ignored reliability in the first paper, I do not know what
to change in the second paper.

Discussion:
In the first version of the paper the speed section came first and was
longer than the reliability section (3 pages vs. 1 page).  For the second
version, I plan to put the reliability section first, and try to run more
reliability tests.  Unfortunately reliability is, in many ways, simpler than
speed, so I doubt the reliability section will ever be as long as the speed
section.

A second possible problem is that my wording was too neutral or too clinical,
and that I should replace it with stronger or more emotional language.

I have a bad feeling that the reaction I am seeing is just a reflection
of the fact that programmers and programming companies care much more about
speed than reliability.  I'm worried that no matter how I present the
information, the reliability numbers will be ignored.

If you have any thoughts (answers or just comments) please email them to
me, even if you send them to RISKS.  (I do not seem to get every RISKS
issue.)  I will summarize responses sent to me.  Tell me if you do not
want your response summarized, or if you want to be anonymous.  Thanks.

Availability:
The paper in question, comparing RPCs, is available by sending email to
archive-server@joshua.atherton.com.  The body of the message should contain
the lines "send other comprpc.shar" or "send other comprpc.txt".  The first
line will give you the paper in troff's me macros, the second in plaintext.

Joshua Levy                      joshua@atherton.com   home:  (415) 968-3718
                    {decwrl|sun|hpda}!athertn!joshua   work:  (408) 734-9822


Re: Hardware failure mimics hackers (Rob Wright, RISKS-9.35)

<ain@sage.cc.purdue.edu>
Fri, 27 Oct 89 12:38:37 -0500
>Sure enough, when the FPA was replaced in our Geophysics machine the problems
>vanished. Apparently the FPA is used for password encryption. No one has
>satisfactorily explained the 'licensed users exceeded' phenomenon.

    If your FPA is like the one on our dual 780's, then it is also used to
speed up integer math — therefore, inode calculations and the like can be
affected.  Since file system damage was reported, this probably was the
case.  So, the 'licensed users exceeded' might be caused by the system looking
at the wrong disk sectors.
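
As a rough illustration (simplified arithmetic only, not the actual ULTRIX or
4.3BSD file system code, and with assumed constants), locating an inode on
disk involves integer multiplication of roughly this shape, so a multiplier
that returns wrong products sends the kernel to the wrong block:

/* Illustrative sketch only -- not the real ULTRIX/4.3BSD source.  The
 * constants are assumed values.  If the multiply in inode_block() is
 * performed by a faulty FPA, the kernel reads the wrong disk block. */
#include <stdio.h>

#define INODE_SIZE  128L   /* assumed on-disk inode size (bytes)     */
#define BLOCK_SIZE  512L   /* assumed file system block size (bytes) */
#define INODE_START  32L   /* assumed first block of the inode area  */

long inode_block(long inum)
{
    return INODE_START + (inum * INODE_SIZE) / BLOCK_SIZE;
}

long inode_offset(long inum)
{
    return (inum * INODE_SIZE) % BLOCK_SIZE;
}

int main(void)
{
    long inum = 1234;
    printf("inode %ld -> block %ld, offset %ld\n",
           inum, inode_block(inum), inode_offset(inum));
    return 0;
}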

>As to the failure mode of this subsystem, one presumes that those computing at
>the time of the failure would have some wrong answers, but further users were
>spared this worry by not being able to access the system.

    Yup, they do get wrong answers... and sometimes even notice it (which
is another computer "risk" — people assuming that computers *never* make
math mistakes).

Pat White


RE: Hardware failure mimics hackers (Rob Wright)

Andy Goldstein <goldstein%star.DEC@src.dec.com>
Wed, 25 Oct 89 15:08:02 PDT
A couple of points regarding the notes from Rob Wright and Tom Ivers
regarding wholesale password failures and multi-user licensing problems
caused by a faulty floating point accelerator:

(1) The FPA implements integer multiply as well as the floating point
    operations, since a fast integer multiply helps array references
    often associated with floating-point intensive applications.

(2) The Purdy algorithm [CACM, August 1974] VMS uses for one-way
    encryption of passwords makes extensive use of integer multiply
    operations.

(3) The multi-user license validation algorithm used in VMS V4.* also
    uses the Purdy algorithm in checking its license database.

Thus the bad FPA causes both user passwords and the multi-user licensing to
fail. (Actually, a bad FPA can cause all sorts of things to fail because of the
multiply implementation. Just another unfortunate consequence of an operating
system's assumption that the CPU works.)
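
As a toy illustration (deliberately simplified; this is not the Purdy
algorithm or any VMS code, and the hash is not cryptographically sound): the
stored hash was computed while the multiplier was healthy, so a hash
recomputed through a faulty multiplier can never match it, and every password
check fails.

/* Toy multiply-based one-way hash, for illustration only -- NOT the real
 * VMS Purdy algorithm and NOT a secure hash.  The point is only that the
 * result depends on many integer multiplies. */
#include <stdio.h>

static int fpa_is_broken = 0;

/* Simulate a broken multiplier: flip one bit of the true product. */
static unsigned long mul(unsigned long a, unsigned long b)
{
    unsigned long p = a * b;
    return fpa_is_broken ? (p ^ 0x4000UL) : p;
}

static unsigned long toy_hash(const char *pw)
{
    unsigned long h = 0;
    int i;
    for (i = 0; pw[i] != '\0'; i++)
        h = mul(h, 65599UL) + (unsigned char)pw[i];
    return h & 0xffffffffUL;
}

int main(void)
{
    const char *pw = "SWORDFISH";
    unsigned long stored, attempt;

    stored = toy_hash(pw);        /* hash recorded while the FPA was good */
    fpa_is_broken = 1;            /* FPA now returns wrong products       */
    attempt = toy_hash(pw);       /* same password, different hash        */

    printf("stored  %08lx\nattempt %08lx\nlogin %s\n",
           stored, attempt, stored == attempt ? "succeeds" : "fails");
    return 0;
}
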
                                   Andy Goldstein, VMS Development


Worms in a data stream

Rick Simkin <simkin@zah.samsung.com>
Thu, 26 Oct 89 14:35:00 EDT
In issue 9:35, Will Martin reported on an article which suggested that a data
stream could somehow corrupt the workings of a computer system.  He also
expressed skepticism about this claim.  From what I've heard recently, his
skepticism is correct, as the claim itself is the product of confusion.

At a recent seminar hosted by the Greater Boston Chapter of the ACM, Pamela
Kane and Andy Hopkins of Panda Software discussed this notion briefly.  They
cited a magazine article where someone had speculated that a spreadsheet
overlay might contain malicious instructions.  Since spreadsheet files are commonly
considered data, the Incautious Reader could jump to the conclusion that
malicious data could take over a computer's operations.

Pamela Kane was quick to point out that a file which contained executable
instructions--even if they were to be interpreted by another program, rather
than run directly by the hardware--shouldn't rightly be considered a
"data file."

Rick Simkin, Samsung Software America, Inc., 1 Corporate Drive, Andover MA 01810
Any opinions expressed are my own, not my employer's.

       [Don't forget the old squirreled-character mail message problem,
       where the normal reading of the mail message could trip up on a
       control character or escape sequence and cause a nasty effect.
       Just because it is data does not mean that it cannot somehow get
       executed...  PGN]
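
One common defence against that particular risk, sketched below (a minimal
illustration, not code from any particular mail reader), is to neutralize
control characters before message text reaches the terminal, so an embedded
escape sequence is displayed rather than obeyed:

/* Minimal filter that neutralizes control characters before message text
 * is written to the terminal, so ESC shows as ^[ instead of being obeyed.
 * Illustration only; not taken from any real mail reader. */
#include <stdio.h>
#include <ctype.h>

static void safe_putc(int c, FILE *out)
{
    if (c == '\n' || c == '\t' || isprint(c))
        putc(c, out);
    else {
        putc('^', out);                  /* e.g. ESC -> ^[ , BEL -> ^G */
        putc((c & 0x1f) | 0x40, out);
    }
}

int main(void)
{
    int c;
    while ((c = getchar()) != EOF)
        safe_putc(c, stdout);
    return 0;
}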


CERT Advisory on Sun RCP

CERT Advisory <cert@cert.sei.cmu.edu>
Thu, 26 Oct 89 21:26:06 EDT
                CERT Advisory
               October 26, 1989
            Sun RCP vulnerability

A problem has been discovered in the SunOS 4.0.x rcp.  If exploited, this
problem can allow users of other trusted machines to execute root-privilege
commands on a Sun via rcp.

This affects only SunOS 4.0.x systems; 3.5 systems are not affected.

A Sun running 4.0.x rcp can be exploited by any other trusted host listed in
/etc/hosts.equiv or /.rhosts.  Note that the other machine exploiting this hole
does not have to be running Unix; this vulnerability can be exploited by a PC
running PC/NFS, for example.

This bug will be fixed by Sun in version 4.1 (Sun Bug number 1017314), but for
now the following workaround is suggested by Sun:

Change the 'nobody' /etc/passwd file entry from

nobody:*:-2:-2::/:
to
nobody:*:32767:32767:Mismatched NFS ID's:/nonexistant:/nosuchshell
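
A quick way to see whether a machine still has the vulnerable entry is simply
to look at the 'nobody' line; the following small checker (a hypothetical aid,
not part of the advisory) does that:

/* Hypothetical checker, not part of the CERT advisory: report whether the
 * local /etc/passwd still carries the 'nobody' entry with UID -2. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[512];
    FILE *fp = fopen("/etc/passwd", "r");

    if (fp == NULL) {
        perror("/etc/passwd");
        return 2;
    }
    while (fgets(line, sizeof line, fp) != NULL) {
        if (strncmp(line, "nobody:", 7) == 0) {
            fclose(fp);
            if (strstr(line, ":-2:") != NULL) {
                printf("VULNERABLE entry: %s", line);
                return 1;
            }
            printf("entry looks changed: %s", line);
            return 0;
        }
    }
    fclose(fp);
    printf("no 'nobody' entry found\n");
    return 0;
}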

If you need further information about this problem, please contact CERT by
electronic mail or phone.

J. Paul Holbrook, Computer Emergency Response Team (CERT)
Carnegie Mellon University, Software Engineering Institute
Internet: <cert@SEI.CMU.EDU>   (412) 268-7090 (24 hour hotline)


Warning About CERT Warnings

<[Anonymous]>
Thu, 26 Oct 89 23:43:41 [x]DT
In RISKS 9.34, CERT once again passes along advice on checking for security
holes.  Once again CERT recommends use of the UNIX utility "sum".  Forgive me
for shouting, but THE UTILITIES ON A POTENTIALLY COMPROMISED SYSTEM SHOULD NOT
BE USED TO EVALUATE THAT SYSTEM'S SECURITY.  This is especially true when CERT
publishes the output the utility should produce.  Another possibility is that a
compromised file might be constructed in such a way as to produce the same
output as an uncompromised file, even when viewed with an uncompromised
utility.
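
Part of the weakness is the checksum itself.  The historical BSD "sum"
reduces an entire file to a 16-bit rotating checksum, roughly as sketched
below (my reading of the commonly described algorithm, not the actual utility
source), so constructing a doctored file that reproduces a published checksum
is well within an attacker's reach:

/* Sketch of the historical BSD "sum" 16-bit rotating checksum, as commonly
 * described -- not the actual utility source.  With only 65536 possible
 * values, collisions are easy to manufacture deliberately. */
#include <stdio.h>

int main(int argc, char **argv)
{
    unsigned int sum = 0;
    int c;
    FILE *fp = (argc > 1) ? fopen(argv[1], "rb") : stdin;

    if (fp == NULL) {
        perror(argv[1]);
        return 1;
    }
    while ((c = getc(fp)) != EOF) {
        sum = (sum >> 1) + ((sum & 1) << 15);   /* rotate right one bit   */
        sum = (sum + c) & 0xffff;               /* add byte, keep 16 bits */
    }
    printf("%05u\n", sum);
    return 0;
}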

A poor security procedure is worse than no procedure, because the poor
procedure may produce a false sense of security.  It's better not to know the
truth and remain paranoid, than to know a false truth and feel comfy.

Sometimes I wonder whether CERT has been penetrated by a hacker.


Licensed users exceeded

Tim Steele <tjfs@tadtec.uucp>
Thu, 26 Oct 89 09:51:39 BST
When I was using ULTRIX, we had the same problem after a system upgrade - the
system would only allow 2 users to log in before reporting that the number of
licensed users had been exceeded.  Calling DEC confirmed this was the case, as
they had just (in that release) installed a mechanism to check this.  They
promised they would ship us a tape to allow us to carry on using the system as
before, but said it would take "several weeks".

I wasn't happy with this as we needed the system back online that morning (!)
so I broke the protection system.  Essentially it used a file in / (called
/.license, I think) which contained 1K or so of "random" junk.  This is the
file you buy from DEC for a certain fee depending on how many users you want to
enable on your system.  The file gets opened by /etc/init and the number of
licensed users read out and checked for every time a new login process is
started.  This number is also printed at boot time.

GOTCHA: The number printed is NOT the same as the number used in the check!
For a "real" license shipped by DEC, these numbers will be the same.  The idea
is to stop crackers disassembling init and patching the obvious number printed
at startup.  The real number is stored in several places in a suitably tricky
way.  Even so, armed with a couple of genuine licence files, it's fairly easy
to crack.  I ended up with a 10-lines-of-C utility that would forge any licence
you liked...

If init reckoned the license was not valid, it would default to the
minimum, i.e. 2 users.  I suspect the VMS mechanism is similar, and is
defaulting to 2 users because the value computed via the (bad) FPA does not
match the proper encrypted value in the system license.
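
For what it's worth, the general shape of such a check (a hypothetical
reconstruction in a few lines of C, not the real /.license format or /etc/init
code) is a user count stored redundantly in obfuscated form, with any
disagreement among the copies dropping the system back to the 2-user minimum:

/* Hypothetical reconstruction of a licence check of this general shape;
 * not the real ULTRIX /.license format or init code.  The user limit is
 * stored redundantly in obfuscated form; any mismatch falls back to 2. */
#include <stdio.h>

#define MIN_USERS 2L

struct licence {
    long banner;    /* the (decoy) number printed at boot time */
    long masked;    /* real count XORed with a constant         */
    long negated;   /* real count stored as its negation        */
};

static long licensed_users(const struct licence *lic)
{
    long a = lic->masked ^ 0x5a5aL;
    long b = -lic->negated;

    return (a == b) ? a : MIN_USERS;    /* disagreement => minimum */
}

int main(void)
{
    struct licence good = { 64L, 32L ^ 0x5a5aL, -32L };  /* genuine: 32 users */
    struct licence junk = { 64L, 12345L, 678L };         /* corrupt or forged */

    printf("banner says %ld users, check allows %ld\n",
           good.banner, licensed_users(&good));
    printf("banner says %ld users, check allows %ld\n",
           junk.banner, licensed_users(&junk));
    return 0;
}

Forging a licence then amounts to writing fields that satisfy the consistency
check, which is why a couple of genuine files made the scheme easy to reverse.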


A lesson involving 'CRACKERS' (APPLE II)

Olivier Crepin-Leblond <ZDEE699@elm.cc.kcl.ac.uk>
Thu, 26 OCT 89 18:43:52 GMT
This message is being sent to both RISKS and VIRUS lists.  Apologies to those
who receive both digests.

    I was quite shocked to find out that there is actually a virus running
on the Apple II family of computers!  Where could the LODE RUNNER virus have
infected such a small machine, with no integrated hard disk and the ability
to be rebooted quickly with a simple sequence of control keys
(open-apple-ctrl-reset)?  In FRANCE, of course!

    The Apple II did very well in France. It is very widely used over
there. This success, like in the U.S.A., triggered a large market for pirated
copies of programs.

    I have been an Apple II owner since 1982.  It is absolutely amazing how
many copies of programs have gone around since then.  I guess that virtually
every program for this type of computer was available as a pirated copy in
France. This is because of the following:

1. There are laws against unlawful software copying, but they are very hard to
   enforce.  In addition, it is extremely difficult to find the originators of
   the pirated copies, i.e. the "top" pirates are well hidden, and if the
   police were to catch every person who copies a program, they would probably
   have to prosecute virtually *every* computer user!
2. Most software was copied and "exchanged" for other software, a bit like a
   one-to-one swap.  Commercial pirate factories were discovered in Lyons a
   few years ago.  There, the programs were deprotected, copied, protected
   again, and sold to customers for a fraction of the price.  The pirates were
   arrested, heavily fined, and given prison sentences.

SOME SORT OF COMPETITION

    There were many independent groups of pirates, whose members were mostly
between 16 and 22 years old.  All of them were experts in the Apple II's Disk
Operating System.  The most "advanced" of these "crackers" were the CCB, for
"Clean Crack Band".  From the number of programs they cracked, they seemed to
spend their days and nights cracking games and software.  Some French
magazines and newspapers published articles on and interviews with them.
They even appeared on national French TV.  Of course, they were in hiding; a
bit like drug dealers, really.  The quality of their "work" was unbelievable.
The program was as good as new, only it had their name on the presentation
page.  Often they added pretty graphics, and in some cases additional
options.  In fact, it looked as though they had re-written the program
entirely.  At the end of 1985, I think, they renamed themselves the SHC,
"Solex Hack Band".  (A Solex used to be a cheap moped at the time.)  They
hacked a few French computers over dial-up lines; they even did one hack live
on TV, showing journalists how vulnerable computers were.  I do not know what
has happened to them since.

OTHER GROUPS

    There were a lot of other groups of pirates around France.  The CCB
were based in Paris (according to the press), and the two most famous
members of this group called themselves Aldo Reset and Laurent Rueil.
Other groups include:

- Johnny Diskette: this name was used by many anonymous pirates who had
  formed some kind of club in Paris, where they had competitions (!)
  on who would be the fastest to unprotect a disk.
- BCG (Baby Crack Gang): funny name. They seemed to like Karateka games.
- CES (Cracking Elite Software): They added features to games from time
  to time.
- Chip Select and the Softman: These pirates went as far as including a
  digitised picture of themselves wearing dark glasses and saying:
  "I am Chip Select". A Certain Eric IRQ (Interrupt Request) was also
  part of this group.
- Mister Z (Geneva): These were Swiss pirates, but for some reason they
  were sending copies to French crackers, telling them to change the
  title page that they had made up.  It was some kind of competition:
  "We can protect this program; can you unprotect it?"
- MAC (Marseilles Association of Crackers): group based in Marseilles.
- P.Avenue Nice: and this one is in Nice...

These groups deprotect the software.  Once deprotected, it can be copied
very easily using a normal copy program.  Most copying goes on in large
computer centres, where machines can be used free of charge.  There is no
supervision there, and no control over what goes on.  Some places are popular
just because it is such an easy way to get hold of any program for no charge
(well... just the cost of a diskette).  Since 1987, though, the shops have
been more careful, since they could be held responsible for what happens on
their machines.

HIDDEN INFO

If you use a track/sector disassembler, you can see the information on the
tracks of the disk displayed as ASCII characters.  Crackers would often
converse among themselves in this way.  Software is copied through a string
of intermediaries, and messages can therefore be passed along the same route.
It is impossible to know whether there is hidden information on a disk unless
it is analysed with a track/sector disassembler.  It is therefore very easy
to hide other programs on the disk, whether they are games or even viruses!
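
A cruder alternative to a full track/sector disassembler (sketched below; it
assumes the disk has already been dumped to an ordinary image file, and it
strips the high bit that Apple II text often carries) is simply to scan the
raw image for runs of printable ASCII, much as the Unix "strings" utility
does:

/* Sketch of a "strings"-style scan over a raw disk-image file, to spot
 * hidden text.  Assumes the disk has been dumped to an ordinary file;
 * the high bit is stripped because Apple II text often has it set. */
#include <stdio.h>
#include <ctype.h>

#define MIN_RUN 6    /* report only runs of at least this many characters */

int main(int argc, char **argv)
{
    FILE *fp;
    int c, run = 0;
    char buf[256];

    if (argc != 2 || (fp = fopen(argv[1], "rb")) == NULL) {
        fprintf(stderr, "usage: diskstrings image-file\n");
        return 1;
    }
    while ((c = getc(fp)) != EOF) {
        c &= 0x7f;                         /* strip the Apple II high bit */
        if (isprint(c)) {
            if (run < (int)sizeof buf - 1)
                buf[run++] = (char)c;
        } else {
            if (run >= MIN_RUN) {
                buf[run] = '\0';
                printf("%s\n", buf);
            }
            run = 0;
        }
    }
    if (run >= MIN_RUN) {
        buf[run] = '\0';
        printf("%s\n", buf);
    }
    fclose(fp);
    return 0;
}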

IN CONCLUSION

So in fact, considering the level of expertise that these crackers have,
it would be very easy for them to hide a virus on a floppy disk, triggered
by the cracked program itself.  I am talking here about the APPLE II
computer, but I am sure that other computers (including PCs) have their
"expert" crackers who, no doubt, would be very happy to write
viruses/worms/trojan horses/time bombs etc.
Why do they do it?
My idea is that they do it for "fame", just to see other people talk
about "their" virus.  Any suggestions?

Olivier Crepin-Leblond, Computer Systems & Electronics,
Electrical & Electronic Eng., King's College London

Disclaimer: My own views. Any comments/flames/congratulations welcome !
