The RISKS Digest
Volume 6 Issue 56

Thursday, 7th April 1988

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

Guess what? A modified FLUSHOT!
James Ford
Scrambled FAT from hell (EDRAW)
Jay F. Rosenberg via Geoff Goodfellow
Re: Notifying users of security problems
Eric Postpischil
Another quarter heard from (re: viruses)
T.M.P. Lee
Virus distribution idea
Will Martin
Kerberos documentation — [Third-Party Authentication]
Jennifer Steiner
Terminals: Why the discussion was interesting
Jerry Leichter
Info on RISKS (comp.risks)

Guess what? A modified FLUSHOT!

James Ford <JFORD1%UA1VM.BITNET@CUNYVM.CUNY.EDU>
Wed, 06 Apr 88 10:17:12 CDT
As some know, I recently got a copy of THE DIRTY DOZEN from Eric's BBS.  The
description of the CHRISTMAS EXEC was (almost) non-existent, so I uploaded
some back issues of RISKS to the board.....leaving my BITNET address on the
text.

Well, I have received a reply from someone who is writing a paper/text on
trojans, and he had this warning about FLUSHOT:

> FLUSHOT.......... FLUSHOT3 is OK. Watch out for FLUSHOT4. There
> is a Trojan Horse version of it going around. Some unscrupulous
> person modified the "cure" so it became a disease. For any of the
> FLUSHOT programs, the valid programs have a separate ASCII text
> documentation of the program. The hacked version embeds its text
> file inside an executable file. Any version without
> Ross Greenberg's documentation in a text file should be avoided.

  Yet another bug on the loose.........(sigh)

                            James Ford


Scrambled FAT from hell — a brief report. (EDRAW)

Geoff Goodfellow <geoff@fernwood.mpk.ca.us>
Tue, 5 Apr 88 21:45:03 PST
From: unbent@ecsvax.UUCP (Jay F. Rosenberg)
Newsgroups: comp.sys.ibm.pc
Subject: Scrambled FAT from hell — a brief report.
Keywords: HD crash scrambled FAT EDRAW Quattro interaction
Date: 5 Apr 88 12:53:58 GMT
Organization: UNC Chapel Hill

     Having spent 2 hours last night recovering from a thoroughly 
scrambled FAT, I thought it appropriate to hold a small post-mortem.
     The culprit *appears* to have been a shareware program called EDRAW 
(Version 3.2), which I picked up as PCSIG Disk #828 from a local 
university's public bbs.  As near as I can diagnose the phenomenon from 
the rather incredible list of messages I received from CHKDSK, what 
happened was this:
     I had installed Borland's Quattro spreadsheet program.  As far as I 
can tell (by using assorted MACE tools), when Quattro installs, it marks 
various disk sectors as protected, probably in aid of finding its own
overlays.  (MACE had been respecting these and not moving them about 
during unfragmenting operations.)  EDRAW apparently did *not* recognize 
and/or respect this protection.  When I used the program to make some 
sketches and symbols and proceeded to save them to the disk, then, EDRAW 
evidently wrote good parts of them over these protected sectors.  The 
result was the most incredible mess of truncations and crosslinks I've 
ever seen.
     Whether and, if so, how the various memory resident utilities I had 
installed entered into the scenario of destruction, I do not know.
     Responses, reactions, comments, and alternative diagnoses will be 
most welcome.  I've learned a lot from the net over the years.  One
thing I learned:  Keep current backups!  I did.  Go ye, and do likewise!

JAY ROSENBERG  Dept. of Philosophy  CB# 3125  UNC  Chapel Hill, NC  27599
...{decvax,akgua}!mcnc!ecsvax!unbent         ...tucc!tuccvm!ecsvax!unbent  
   unbent@ecsvax.UUCP      unbent@ecsvax.BITNET      unbent@unc.BITNET      


Re: Notifying users of security problems

<postpischil%alien.DEC@decwrl.dec.com>
Wed, 6 Apr 88 08:40:03 PDT
In Risks Digest 6.54, Andy Goldstein says that knowledge of security bugs,
during the period in which they are being corrected, will not do VAX/VMS
system managers any good and so should not be distributed until a fix is
available.

I disagree for two reasons.  First, there are work-arounds to any problem.
Almost every site has a big red button (or equivalent) that will make any
computer system secure, and a very few sites might have information sensitive
enough to warrant the button's use.  For another few sites, the VMS software
might be only a portion of their computing resources, a portion they can do
without or with limited use for a period.  And probably a larger number of
sites can control network and physical access.  Many sites can restrict
accounts, temporarily removing general accounts.  (Unless the bug is so basic
it can be used to access the system without even the simplest account.)
Another work-around is to use captive accounts.  And other sites may be
satisfied with establishing some sort of auditing procedures so they can tell
who is being naughty and stop it if not prevent it beforehand.

The second reason is that the publisher is not entitled to make these
decisions.  How can one honestly sell a supposedly secure system knowing it is
not secure?  Are sales to be stopped while the bug is being corrected, or will
the salespeople lie when the potential customer asks about security?  Is one
going to renege on the customers who have paid money to be informed of bugs?
It is not entirely a matter of whether the publisher thinks the customer can
make use of the knowledge or not; the customer has a right to know they are not
secure regardless of what the publisher thinks about it.
                                                 edp (Eric Postpischil)

Another quarter heard from (re: viruses)

<TMPLee@DOCKMASTER.ARPA>
Wed, 6 Apr 88 00:41 EDT
The info-apple (Apple II series) interest group has been having sporadic
missives on viruses lately (of course).  One of them expressed thorough
disbelief.  It was answered quite handily; the response, which quotes
most of the original ostrichian comments, seemed worth sharing with the
RISKS population. — Ted


[3382] (130 lines) Network_Server.Daemon 04/05/88  1032.6 edt Tue info-apple
Subject:  viruses ARE possible
From: info-apple-request@BRL-SMOKE.ARPA

>Date:         Mon, 4 Apr 88 15:37:46 CDT
>From:         SCHUESSLER <GA.NES%ISUMVS.BITNET@CUNYVM.CUNY.EDU>
>Subject:      Viruses: Fact or Fiction?

>Well, folks, I am totally confused about this virus stuff.  In reading
>about them in a local paper (Today section DesMoines Register) about
>monitors exploding, and hard disks crashing, I don't see how anybody
>could possibly write a virus that would get by enough people to become
>dangerous.  Please examine my reasoning, and point out where I missed
>something.

They get by because they generally don't do anything damaging right away.

>Suppose I wish to write a virus.  I have read that the operating system
>is the place where they're supposed to be put. Here are some problems:
>
>  1. How do I add routines to prodos w/o changing the block length?
>     I don't know about anyone else, but I think I would
>     probably notice that Prodos would take longer to boot, or
>     that it was 32 blocks instead of 31.

You might, but you probably wouldn't notice it right away.  Anyway, ProDOS is
*not* the only place to hide it.  Could tack it into other applications, or
even in the boot blocks (there is some unused space there for booting SOS on
the Apple ///).  Heck, you could even hide some code in the DIRECTORY (which
gets read into RAM during the boot process anyway, while the boot blocks are
looking for the PRODOS file).  (This would cause a problem when the directory
started getting full.)

Also, there is most likely some space in PRODOS that isn't currently
used (I haven't looked lately).

>  2. Viruses are supposed to "spread" themselves. Spreading implies
>     (to me at least) saving themselves on other disks in other drives,
>     which would be extremely obvious if you did a catalog of drive1
>     and it went to drive2, or it would suddenly start working on the
>     disk w/o direct commands from the keyboard.  Equally suspicious
>     would be a slow catalog listing (with a virus 'spreading' itself
>     sometime during the execution of the command).

It wouldn't take very long to spread itself, and it would not do it
spontaneously.  For example, it could write itself into the boot blocks
one out of every 20 times you write to your main directory.  It wouldn't
take too long, since your drive would already be in the right area of
the disk anyway (main directory = blocks 2-5, boot blocks=0-1).  Writing
to disk already takes a variable amount of time depending on where the
free blocks happen to be on disk, so one or two more block writes with
no head movement would be hard to notice (ESPECIALLY on a 3.5 drive or
a hard drive.  Or a RAM drive [with or without a battery backup!].)
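[Moderator's illustration: the "one write in twenty" spreading scheme above can be sketched in modern pseudocode.  The original would of course be a few dozen bytes of 6502 machine code; this Python sketch, with a hypothetical disk object and hook name, only shows why the extra writes go unnoticed.]

```python
import random

BOOT_BLOCKS = (0, 1)  # ProDOS boot blocks, per the text (main directory = 2-5)

class FakeDisk:
    """Stand-in for a block device; records writes so we can inspect them."""
    def __init__(self):
        self.writes = []
    def write_block(self, number, data):
        self.writes.append((number, data))

def on_directory_write(disk, infected_boot_image, rng=random):
    """Hypothetical hook called whenever the host writes its main directory.

    One time in twenty, also rewrite the boot blocks with the infected
    image.  The head is already near blocks 0-5 for the directory write,
    so the two extra block writes cost almost no seek time.
    """
    if rng.randrange(20) == 0:
        for block, data in zip(BOOT_BLOCKS, infected_boot_image):
            disk.write_block(block, data)
```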

> 3.  The next thing in question is the delayed effect, which no doubt
>     is done by incrementing a counter each time it is executed.  In
>     order to retain this value, it must be stored back on the disk
>     which causes another timing problem as far as working with the
>     disk is concerned.

Counters could be in RAM as well as on disk, or it could skip counters
completely and trigger based on some semi-random number or some set of
conditions on disk.  — Even if you use counters, it might not have to
do any extra disk writes (for example, increment 2 unused bytes in the
root block of your directory whenever the block is being written ANYWAY).
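[Moderator's illustration: the parenthetical trick, a counter hidden in bytes that get written anyway, looks like this in sketch form.  The offset and threshold are made up; the point is that bumping the counter adds no disk I/O of its own.]

```python
COUNTER_OFFSET = 0x2A   # hypothetical "unused" offset in the root directory block
TRIGGER_COUNT = 50      # arbitrary threshold for the delayed effect

def on_root_block_write(block_bytes, payload):
    """Piggyback a 2-byte counter on a block write that happens anyway.

    The counter lives inside the root directory block itself, so
    incrementing it costs nothing extra: the block was about to be
    written regardless.  When the count passes the threshold, fire.
    """
    count = int.from_bytes(block_bytes[COUNTER_OFFSET:COUNTER_OFFSET + 2], "little")
    count += 1
    block_bytes[COUNTER_OFFSET:COUNTER_OFFSET + 2] = count.to_bytes(2, "little")
    if count >= TRIGGER_COUNT:
        payload()
    return block_bytes
```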

> 4.  To spread itself, it must know the volumes on line, which
>     have prodos copies that are not infected already (which will
>     take a bit of code to check for) and then probably set some
>     flags to point to the clean copies so that when executed next
>     it can spread itself.

Nope, it doesn't have to be that complicated.  Just infect disks as they are
accessed by the running application, and set it up so it doesn't matter if the
thing you're infecting is already infected or not.
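[Moderator's illustration: "it doesn't matter if the thing you're infecting is already infected" just means the infection routine is idempotent.  A sketch, with an invented marker signature:]

```python
MARKER = b"\xEB\xFE"   # hypothetical 2-byte signature the virus leaves behind

def infect_if_clean(boot_block: bytearray, viral_code: bytes) -> bytearray:
    """Infect the boot block of whatever disk the application just touched.

    The marker check makes infection idempotent: re-infecting an
    already-infected disk is a harmless no-op, so the virus needs no
    list of clean volumes and no flags pointing at them.
    """
    if boot_block[:2] == MARKER:
        return boot_block              # already ours, do nothing
    boot_block[:len(viral_code)] = viral_code
    return boot_block
```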

> 5.  Finally, there is the problem of doing all the things viruses
>     are famous for in 200 bytes or less.  I don't know about anyone
>     else....maybe it's just me, but I can't do all that fancy I/O
>     in 200 bytes or less ( which is supposed to be the optimum length).
>     That's w/o the fancy routine to time the spreading with save/bsave
>     load/bload's which would be a nightmare in itself.

You can do a *lot* in 200 bytes, although there's not much reason to
limit them to being that small.  It only takes 18 bytes to say
"WRITE_BLOCK number 0 on the last-accessed device" in machine.
(Doing file-level I/O rather than block-level I/O would take a few
more bytes, but not *that* many more.)

>With all that to worry about, why would anyone go through all the trouble?

I don't know, but it only takes *one* deranged person.  If your hard drive
has just fallen victim to someone's virus, you won't really care *why*
they went to the trouble.

>Maybe I could see it possible for someone who just uses the software, and
>doesn't do the programming/doodling around with operating systems to miss
>the differences, but I hardly think that it would result in a major crisis
>to society.

But people are so eager to give the latest nifty software to their favorite
bulletin boards that the viruses can potentially spread *very* quickly.
If we teach people to be careful the problem can be kept under control, but
it gets harder as operating systems get larger and more complex--there are
lots more interesting ways to "infect" IIgs's than IIe's, for example (desk
accessories, RAM vectors that survive an Apple-Ctrl-Reset, patching system
tool vectors, etc).

>  Also--Is it legal to create a 'harmless' virus to see if it works
>       and you supply an antidote?

I don't know if it's legal, but it's pretty stupid--everyone will hate you
when they find out about it.  (Someone [in Canada?] wrote a "harmless"
virus for the Mac that displayed a World Peace message on a certain date.
This pissed lots of people off & I think caused a few problems for people
even though it was supposed to be harmless.  This virus [or was it another
one?] has accidentally made its way into some factory-fresh copies of at
least one piece of commercial software for the Mac.)

>  | |    Niko Schuessler    | |
>  | |    GA.NES@ISUMVS      | |
>  | | Iowa State University | |

--David A. Lyons  a.k.a.  DAL Systems
  PO Box 287 | North Liberty, IA 52317
  BITNET: AWCTTYPA@UIAMVS
  CompuServe: 72177,3233
  GEnie mail: D.LYONS2

---[3382]---


Virus distribution idea

Will Martin — AMXAL-RI <wmartin@ALMSA-1.ARPA>
Wed, 6 Apr 88 15:20:39 CST
I just received a survey in the mail from a company in Boston called
The LEK Partnership, on the subject of spreadsheet software. It is a
form of survey different from other surveys I have previously
received from various sources, which were usually multiple-choice paper
forms. This one is a diskette with an accompanying letter and some
printed material. There is also a Business-Reply diskette mailer and
a dollar bill. The letter describes the survey as being "a computer
interview." The instructions are to boot any IBM PC or compatible with
this diskette and "the rest is automatic." You then put the diskette
back in the mailer and drop it in the mail to go back to the sender.

Now, what immediately occurred to me was, "What a beautiful way to
disseminate a virus!" Adding the dollar bill is a nice touch, but most
computer users would be intrigued enough by the concept to at least
stick the diskette in their machine and see what it was like, even if no
money or other incentive accompanied it. Just to put it in your machine
would be enough to spread a virus on that diskette, and the fact that
you send the diskette back to them not only eliminates the evidence, but
would also let hidden programs pull some amount of data from your files and
stash it on the diskette for the use of the sender. (I agree that the
latter is pretty farfetched, given the vagaries of naming PC files, and
the low likelihood that simple software could find anything of value or
interest on any random PC out there.)

A party interested in doing this could rent a mailing list from any of
several magazines, and get access to many corporations and government
agencies, bypassing network security and reaching areas isolated from
any networks. It wouldn't be cheap, but it would be effective (at least
the first time). It might be a reasonable method of economic sabotage.

Let me hasten to add that I have no reason to suspect this vendor of
doing such a thing, nor have I heard of anything like this happening.
The company seems legitimate; they provide an 800 number in their cover
letter for recipients to call. The survey is not anonymous, though --
the diskette has a serial number, which matches a number on the label on
the envelope, so they know who got which diskette, even if it does not
request a name and address as part of the on-line dialog.

The virus possibility just sprang to mind as I read the letter; I suppose
that's a reflection on my evil nature. :-) I have not yet put this diskette
into a PC, but I have run demo diskettes from other vendors without thinking
first about the RISKS I'm voluntarily accepting by doing so. (Since I
don't yet use a PC myself on a regular basis, it's been other people's
PCs that have run the RISKS, so that might explain my blithe attitude! :-)

This is all speculation, of course, but we've been thinking and talking
so much about viruses (viri?) lately that it seems natural to view such
things with suspicion. I don't know now if I will ever run this diskette!

Are there any organizations out there who have a codified policy for
dealing with this sort of thing? That is, some clearinghouse or
checkpoint for looking at software or checking diskettes received from
public-domain or random sources, where skilled personnel using isolated
hardware check it out and pronounce it "cleared" before it can be loaded
or used on any of that organization's other machines? It may be costly
and cause delays, but it may become necessary. 

Regards,
Will Martin


Kerberos documentation [Third-Party Authentication]

Jennifer Steiner <steiner@athena.mit.edu>
Thu, 07 Apr 88 16:57:39 EDT
Documentation on MIT Project Athena's authentication service, Kerberos, is
available for anonymous ftp on "athena-dist.mit.edu", in ~ftp/pub/kerberos.

Documents include the paper given at the Winter 1988 Usenix Conference (text or
postscript), a detailed design document (text or postscript), and manual pages.

If you can't ftp, and would like a hardcopy, send your request (and US/PTT mail
address) to info-kerberos@athena.mit.edu.

We are currently running a beta test of the software.  When the beta test has
been completed, we plan to put the code in the public domain (except for the
encryption library, which probably can't be exported out of the U.S.).  I'll
post a pointer when the code is available.

Please post any followup messages to comp.misc.

Jennifer Steiner, Project Leader, Kerberos Development, MIT Project Athena


Below is the abstract from the Usenix paper:

In an open network computing environment, a workstation cannot be trusted to
identify its users correctly to network services.  Kerberos provides an
alternative approach whereby a trusted third-party authentication service is
used to verify users' identities.  This paper gives an overview of the Kerberos
authentication model as implemented for MIT's Project Athena.  It describes the
protocols used by clients, servers, and Kerberos to achieve authentication.  It
also describes the management and replication of the database required.  The
views of Kerberos as seen by the user, programmer, and administrator are
described.  Finally, the role of Kerberos in the larger Athena picture is
given, along with a list of applications that presently use Kerberos for user
authentication.  We describe the addition of Kerberos authentication to the Sun
Network File System as a case study for integrating Kerberos with an existing
application.
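[Moderator's illustration: the "trusted third party" idea in the abstract can be sketched in a few lines.  This is NOT the Kerberos protocol (no timestamps, no TGT, and a toy XOR cipher standing in for DES); it only shows how a key-distribution center lets a client and server share a session key without the server ever seeing the client's password.]

```python
import os

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher for illustration only -- not real encryption."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

class KDC:
    """Trusted third party: shares a long-term key with every principal."""
    def __init__(self, keys):
        self.keys = keys               # principal name -> long-term key

    def issue_ticket(self, client, server):
        session_key = os.urandom(8)
        # Ticket: session key + client name, sealed under the SERVER's key,
        # so only the server can open it and learn who is calling.
        ticket = xor_crypt(self.keys[server], session_key + client.encode())
        # Credentials: the same session key, sealed under the CLIENT's key.
        creds = xor_crypt(self.keys[client], session_key)
        return creds, ticket
```

The client decrypts its credentials locally; the server decrypts the ticket; both now hold the same session key, and neither had to send a password over the network.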


Terminals: Why the discussion was interesting

"Jerry Leichter" <LEICHTER-JERRY@CS.YALE.EDU>
Mon, 4 Apr 88 18:26 EST
The last couple of RISK's have had articles from me and others on details of
block mode terminals and the risks they do or don't pose.  In all the rush of
detail, what I see as the important point, and the reason I got involved in
the discussion at all, was lost.

"Naive" users of computers have only the most limited idea about how they
work, what their limitations are, and what, if anything, can be done about
those limitations.  Since they have no basis for reaching a deeper
understanding - if they DID, they wouldn't be "naive" users! - they tend to treat
computer risks in one of two ways:  Either they assume the omnipotent computer
is safe and never goes wrong - an attitude encouraged by salesmen, providers
of computer services of all sorts, government - or they are battered by sad
experience to thinking that "computers always screw up and there's nothing
anyone can do about it" - an attitude paradoxically encouraged by all the same
people (except perhaps the salesmen).

As people learn more about computers, they continue to get pulled in both
directions.  On the one hand, they learn more and more about how to make
things work; on the other, they learn more and more about the extraordinary
ways in which things can fail.  There's a certain tendency to just throw one's
hands up in despair and claim things will never be right, so why bother to
try?

The only thing that comes out of such an attitude is poorly designed, risky
systems.  No, we can't eliminate all risks and vulnerabilities; but we can
damn well try to understand what they are and perhaps eliminate enough to
remove our systems from the "clear and present danger" category.  Perrow's
"Normal Accidents" can easily be read as one of those cries of despair, but
the book is valuable exactly because it sometimes rises above that level.
I've heard reports - it would be nice to see a reference - that the Colonel
Murphy of eponymic fame is VERY distressed by the universal invocation of his
"Law" as proof that systems can never work right.  That's missing the point
Murphy wanted to get across:  That if you design errors IN, you will get
errors OUT.  Good engineering means designing errors out - out of the whole
system (human users, with all their complexity, and all), to the greatest
extent you can.

Terminals, ALONG WITH THE SYSTEMS THEY CONNECT TO AND THE TRAINING OF THEIR
USERS, *can* be made secure.  It takes a significant, continuing effort, and
that effort can all too easily be undone by careless "extensions"; but it is
NOT impossible.  The same can be said of many other risks and vulnerabilities:
They can be eliminated, or reduced to acceptable levels, if we are willing to
make the appropriate investments.  We - the societal we, including all those
"naive" users out there - will not be willing to make those investments until
we are convinced that (a) the risks and vulnerabilities are there - something
that recent events have probably gotten across; (b) something can be done
about them.  We "computer sophisticates" are the ones with the responsibility
for making (b) true - and of convincing society at large that it can be true.
We will have a great deal of trouble doing that if we all sit here moaning
about how "impossible" it is!
                            — Jerry
