The RISKS Digest
Volume 9 Issue 63

Wednesday, 31st January 1990

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Vive la difference?
Peter G. Neumann
Airbus crash of June 88
Olivier Crepin-Leblond
AT&T Crash Statement: The Official Report
Don H Kemp via Geoff Goodfellow
Important Lesson from AT&T Tragedy
Bill Murray
Potential Lesson From AT&T
Bill Murray
Sun Sendmail Vulnerability
Kenneth R. van Wyk
GPO Library disk infection (PC)
Kenneth R. van Wyk
Re: Password Sharing
Al Arsenault
Annual Computer Security Applications Conference
Marshall D. Abrams
Virology
Gene Spafford
Info on RISKS (comp.risks)

Vive la difference?

"Peter G. Neumann" <neumann@csl.sri.com>
Sat, 27 Jan 1990 12:45:57 PST
WOMEN ENJOY COMPUTERS MORE THAN MEN, SURVEY SAYS
    — Rockford (Ill.) Register Star.

       Yes, but can computers take out the garbage?
           — The New Yorker, 29 Jan 90, p.89.

              No, but they can generate it faster.
              — PGN, 27 Jan 90.


Airbus crash of June 88

Olivier Crepin-Leblond <zdee699@elm.cc.kcl.ac.uk>
Tue, 30 JAN 90 09:35:42 GMT
    On 26 June 1988 at 12:45:39, flight ACF 296Q, an Airbus owned by the
German company Durus Verwaltungsgesellschaft MBH Co Mobilien KG and operated
by AIR FRANCE, crashed into a forest in eastern France whilst flying at very
low altitude during an air show.
    Today, the captain, Michel Asseline, has lost his French pilot's license
for 8 years. He is now flying in Australia.
    One and a half years after the accident it is still not known why the
crash happened. The report written by the commission of enquiry into the crash
has not yet been released to the public, but "Le Temps de la Finance", a
French financial newspaper, is now able to cite at least SEVEN major
contradictions in the report. Please note that the report they had in hand was
the final draft.

Le Temps de la Finance. Nr. 59 Thursday, 18 January 1990.

" The first part of the conclusions implicates the organisers of the air
show and the company AIR FRANCE. The planning of the flight was insufficient
because the whole file was submitted too late. The commission doing the
enquiry revealed that the technical flight plan given to the crew of the
aircraft by the regional operations department of AIR FRANCE did not mention
anything about the axis of the runway, or the altitude of the flight.
   One of our informers pointed-out that the different flight plans given to
the commission, including messages to/from the crew were bad quality
photocopies.
   The airport on which the tragedy happened had 2 parallel runways which were
not fit to receive an A320 should an incident occur."

There then follows a reminder that this was a demonstration flight and that
the passengers were in a very joyful mood. The article argues that this mood
could have been transmitted to the pilots. The aircraft was of a new type,
which might have bred overconfidence in the pilot. In addition, the airport
was so small that the pilot might have thought he was at a higher altitude.

" The commision has implied NO FAULT concerning the functioning of the aircraft,
or its design. However, certain affirmations of the commission leading the
enquiry can be discussed...

1:  On the day of the accident, the altimeter reference value had been changed
to a value which was not sensible. The commission emphasizes that since that
time, it has never happened again. But an aeronautical specialist has confirmed
to us that it is impossible  to affirm that this anomaly has never happened
since.

2:  When the DFDR (Digital Flight Data Recorder) (a Fairchild 800) was decoded,
some of the data was badly recovered. For 8 seconds the readings were
incorrect and corrupted, and errors in the software controlling the read-out
of the A320's parameters are thought to be the cause of the anomaly, the
report says. The tape was therefore cleaned, and then most of the data was
read without problems. But the examining magistrate is now talking about
verifying the authenticity and integrity of the flight recorder data.

3:  The commission noted that the DFDR data is incoherent for 3 seconds
starting at 12:45:39, which is the exact time of the crash, and that data
relating to an earlier flight is then found, since the DFDR records in a
closed loop. But a specialist told us that this is technically impossible!
This view agrees with information published in [another French newspaper], the
"Canard Enchaine", on Wednesday 17 January 89, which spoke of irregularities
and interference concerning the DFDR decoding. That paper disclosed that the
tape was cut and the information transferred to another tape for reading and
decoding...

4:  The emergency lighting did not work because of a programming error (this
information is repeated twice in the report, on pages 28 and 31). In
aeronautical circles, it is said that this was 'due to a bad power supply
which took many months to correct because of problems concerning the official
approval' [ !? ]. The organisation which certified the circuit, the DGAC
(Direction Generale de l'Aviation Civile [ French equivalent of FAA - ocl]),
is a branch of the same organisation which controls the IGAC, the commission
which led the enquiry.
The signal requesting evacuation of the cabin apparently did not work, since
nobody heard it, and the head of the cabin crew did not manage to find the
microphone of the public address system. It had apparently broken off from
its stand, which was very fragile.

5:  Did the equipment announcing the altitude work properly? (This is a
hypothesis.) It was heard at 100 feet. But then the dialogue between the
pilots was interrupted, and from that point nothing is really certain.
The pilot told the control tower that he was at 100 feet, and the flight plans
indicate a minimum of 100 feet, but it was observed from the ground that the
aircraft was in fact at 30 feet. On page 55 of the report, it is said that
if the airplane drops below the required altitude, this implies that the
captain is piloting by sight and has not registered the data given by the
voice equipment. But Michel Asseline cut out the automatic protection for
thrust (Alpha-floor). This is needed when there is a need to fly above
100 feet and the aircraft is steeply inclined. But why would he cut it out
when flying under 100 feet, since it is automatically de-activated below that
altitude?
And then there is also the surprise of the pilots in the cockpit when the
Airbus chopped the first trees of the forest: 'Shit, the trees!'
This tends to prove that the pilots were not aware of their low altitude, and
that they were not aware of the presence of this patch of forest, since it was
not marked on the plan, although electric pylons are noted 1.5 km away from
that zone.
[...]
... all of these deficiencies seem to accumulate on an aircraft which,
according to the commission, had NO MECHANICAL OR INSTRUMENTAL DEFICIENCY
WHICH COULD ENDANGER THE AIRCRAFT'S SAFETY.

6:  Did the engines work properly? As soon as he escaped from the aircraft,
captain Michel Asseline cast doubt on the response of the engines. These are
built by CFM, a 50/50 joint venture between General Electric (USA) and Snecma
(France)."

There then follow a number of contradictory statements from Snecma and
Asseline, and the observation that under these circumstances a pilot can feel
that an engine is not giving enough power to 'leap out' of such a situation,
since the engines cannot deliver maximum thrust instantaneously.

" Asseline answers that the constructors of the engine CFM56 were aware of the
problems of acceleration of the engine at low altitude but told AIR FRANCE
about that only after the crash."

Along with the article is a photocopy of a bulletin released by Airbus
Industrie Flight Division, entitled:

"Operations Engineering Bulleting - A320.
Validity A320 - CFM only
Bulletin Nr. 19/1  DATE: MAY 88

Reasons for issue:

   It has been observed (recently), so far on test aircraft only, that a
VSV position control deficiency could prevent the engines from accelerating
up to full power under certain flight conditions:

- low altitude
- high aircraft speed
- engine acceleration initiated from N1 between 40% and 70%

Two production aircraft have been checked and no engine presented any sign
of this problem.

Explanation:

[...]
The most likely reason for the problem is a lack of muscle pressure to
control the VSV's to their intended position when aerodynamic loads are
high. Therefore the problem is unlikely to occur at low aircraft speed.

Actions:

A full investigation... [...]
As a precautionary measure, the following procedure is to be applied should
the problem be encountered.

Procedures:

If the "compressor vane fault" warning come-on ECAM at low altitude (below
10000 feet), associated with a lack of N1 response (typical values for throttle
position: 100% N2, 80% N1), the fault can be cleared by decelerating the engine
to idle, and then performing a rapid acceleration from idle speed to full
power. "

The newspaper says that 2 of the 3 conditions were met, and that the engine
throttle positions were the same during the crash. N1 is the low-power
throttle position (40% - max 80%).  One second before the crash, N1 was 67%.
It then increased to 83% and 84%.  A failure was possible. Since the aircraft
serial number was 009, it could be considered almost a prototype.

"7:  The report is incoherent on a few points. It doesn't get alarmed that
although the stick was pulled back fully (written on pages 22 and 43), the
inclination of the Airbus did not change.
[...]
Last problem: Can the DGAC, through the commission leading the enquiry,
admit that an aircraft that it certified earlier is defficient ?
All parties involved, Air France, the French Civil Aviation Authority,
the Snecma, are all owned or controlled by the French State.
Would the French State have any advantage in taking itself to court ?"

There then follows a discussion of the political and economic factors at
stake: enormous export revenues from the Airbus and from CFM56 engines, which
also equip some Boeing aircraft.

The article concludes that if nothing was wrong with the aircraft, it is in
everybody's interest to help the inquiry so that all remaining questions can
be answered and the public can, one hopes, be reassured about why flight
ACF 296 Q ended in the woods of Habsheim, in the east of France.

disclaimer: I know, I am a hopeless translator. I am merely relaying to this
list what I read in the paper. I have no opinion on the matter, nor does
KCL or in fact anyone I know... So please: no flames, thanks.

Olivier Crepin-Leblond, Computer Systems & Electronics Eng.
Electrical & Electronic Eng., King's College London, UK.


Re: AT&T Crash Statement: The Official Report

Geoff Goodfellow <geoff@fernwood.mpk.ca.us>
Mon, 29 Jan 90 19:47:42 -0800
>Date: 28 Jan 90 17:24:48 GMT
>From: dhk@teletech.uucp (Don H Kemp)
>Newsgroups: comp.dcom.telecom
>Subject: AT&T Crash Statement: The Official Report
>Organization: TELECOM Digest

Here's AT&T's _official_ report on the Martin Luther King Day network
problems, courtesy of the AT&T Consultant Liaison Program.

Don
        =========================================================

Technical background on AT&T's network slowdown, January 15, 1990

                             *  *  *

At approximately 2:30 p.m. EST on Monday, January 15, one of AT&T's 4ESS toll
switching systems in New York City experienced a minor hardware problem which
activated normal fault recovery routines within the switch.  This required the
switch to briefly suspend new call processing until it completed its fault
recovery action — a four-to-six second procedure.  Such a suspension is a
typical maintenance procedure, and is normally invisible to the calling public.

As part of our network management procedures, messages were automatically sent
to connecting 4ESS switches requesting that no new calls be sent to this New
York switch during this routine recovery interval.  The switches receiving this
message made a notation in their programs to show that the New York switch was
temporarily out of service.

When the New York switch in question was ready to resume call processing a few
seconds later, it sent out call attempts (known as IAMs - Initial Address
Messages) to its connecting switches.  When these switches started seeing call
attempts from New York, they started making adjustments to their programs to
recognize that New York was once again up-and-running, and therefore able to
receive new calls.

A processor in the 4ESS switch which links that switch to the CCS7 network
holds the status information mentioned above.  When this processor (called a
Direct Link Node, or DLN) in a connecting switch received the first call
attempt (IAM) from the previously out-of-service New York switch, it initiated
a process to update its status map.  As the result of a software flaw, this DLN
processor was left vulnerable to disruption for several seconds.  During this
vulnerable time, the receipt of two call attempts from the New York switch --
within an interval of 1/100th of a second — caused some data to become
damaged.  The DLN processor was then taken out of service to be reinitialized.

Since the DLN processor is duplicated, its mate took over the traffic load.
However, a second couplet of closely spaced new call messages from the New York
4ESS switch hit the mate processor during the vulnerable period, causing it to
be removed from service and temporarily isolating the switch from the CCS7
signaling network.  The effect cascaded through the network as DLN processors
in other switches similarly went out of service.  The unstable condition
continued because of the random nature of the failures and the constant
pressure of the traffic load in the network providing the call-message
triggers.
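
  [Purely as illustration, and not AT&T's actual 4ESS code: the following C
  sketch shows the kind of vulnerable-window failure described above, where
  a status-map update can be disrupted if a second message arrives before
  the first has been fully handled.  The names, the manual "caught
  mid-update" step, and the timing comment are all invented.]

  #include <stdio.h>

  #define NUM_SWITCHES 8
  enum status { OUT_OF_SERVICE, IN_SERVICE };

  static enum status status_map[NUM_SWITCHES];
  static int update_in_progress = 0;   /* set while an entry is rewritten  */
  static int data_damaged = 0;

  /* Called when a call attempt (IAM) arrives from switch `id`. */
  void handle_iam(int id)
  {
      if (update_in_progress) {
          /* A second IAM lands inside the vulnerable window: the           */
          /* half-written entry is read back and becomes damaged.           */
          data_damaged = 1;
          return;
      }
      update_in_progress = 1;
      /* ... several interruptible steps to rewrite the entry ...           */
      status_map[id] = IN_SERVICE;
      update_in_progress = 0;
  }

  int main(void)
  {
      handle_iam(3);           /* first IAM: mark switch 3 back in service  */
      update_in_progress = 1;  /* pretend we are caught mid-update when...  */
      handle_iam(3);           /* ...a second IAM arrives moments later     */
      printf("data damaged: %d -> processor taken out of service\n",
             data_damaged);
      return 0;
  }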

The software flaw was inadvertently introduced into all the 4ESS switches in
the AT&T network as part of a mid-December software update.  This update was
intended to significantly improve the network's performance by making it
possible for switching systems to access a backup signaling network more
quickly in case of problems with the main CCS7 signaling network.  While the
software had been rigorously tested in laboratory environments before it was
introduced, the unique combination of events that led to this problem couldn't
be predicted.

To troubleshoot the problem, AT&T engineers first tried an array of standard
procedures to reestablish the integrity of the signaling network.  In the past,
these have been more than adequate to regain call processing.  In this case,
they proved inadequate.  So we knew very early on we had a problem we'd never
seen before.

At the same time, we were looking at the pattern of error messages and trying
to understand what they were telling us about this condition.  We have a
technical support facility that deals with network problems, and they became
involved immediately.  Bell Labs people in Illinois, Ohio and New Jersey joined
in moments later.  Since we didn't understand the mechanism we were dealing
with, we had to infer what was happening by looking at the signaling messages
that were being passed, as well as looking at individual switches.  We were
able to stabilize the network by temporarily suspending signaling traffic on
our backup links, which helped cut the load of messages to the affected DLN
processors.  At 11:30 p.m. EST on Monday, we had the last link in the network
cleared.

On Tuesday, we took the faulty program update out of the switches and
temporarily switched back to the previous program.  We then started examining
the faulty program with a fine-toothed comb, found the suspicious software,
took it into the laboratory, and were able to reproduce the problem.  We have
since corrected the flaw, tested the change and restored the backup signaling
links.

We believe the software design, development and testing processes we use are
based on solid, quality foundations.  All future releases of software will
continue to be rigorously tested.  We will use the experience we've gained
through this problem to further improve our procedures.

It is important to note that Monday's calling volume was not unusual; in fact,
it was less than a normal Monday, and the network handled normal loads on
previous weekdays.  Although nothing can be guaranteed 100% of the time, what
happened Monday was a series of events that had never occurred before.  With
ongoing improvements to our design and delivery processes, we will continue to
drive the probability of this type of incident occurring towards zero.

Don H Kemp, B B & K Associates, Inc., Rutland, VT   uunet!uvm-gen!teletech!dhk


Important Lesson from AT&T Tragedy

<WHMurray.Catwalk@DOCKMASTER.NCSC.MIL>
Sun, 28 Jan 90 10:00 EST
[A tragedy is when you learn the wrong thing from a fiasco.]

After AT&T's early and reassuring statements about the nature of their recent
failure, I complained about the potential of those statements to permanently
mislead their employees about the nature of the failure.  Now it turns out
that the statements were premature and misleading.

I comment now at the risk of still being too soon.  However, I will limit my
comments to the community and to what is now already all too clear.

THE FAILURE WAS NOT IN THE FAILURE OF A SINGLE PIECE OF SOFTWARE.
NEITHER IS IT LIKELY TO BE AN ISOLATED CONDITION.

The failure was not, as was first suggested, a propagating alarm condition.
Rather, IT WAS THE INABILITY TO PROPERLY HANDLE THE RETURN TO THE LINE OF A
FAILING COMPONENT.  It was not a COMPONENT problem; it was, as I feared and
suggested, a SYSTEM problem.  This is a growing class of complex system
failure.

It was the major cause of the NASDAQ outage and it contributed to Hinsdale.
NASDAQ's system behaved as expected when the service from the local utility
dropped.  It failed when the external power came back on line.  The ability of
the operator in down-state Illinois to properly read the situation in Hinsdale
was complicated by the fact that the power system and the fire alarms were
coupled, in a subtle way, so as to confuse the readings, and by the fact that
the attempts of automatic systems to compensate for the fire not only
perpetuated the fire but confused the alarm information even further.

We are testing to see what happens when a component fails.  WE ARE NOT PROPERLY
TESTING TO SEE WHAT HAPPENS WHEN THE COMPONENT SUCCEEDS IN CORRECTING ITSELF
AND COMES BACK ON LINE.
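
  [A purely illustrative sketch of the point, not drawn from any vendor's
  test suite: a fault-injection test in C that exercises both halves of the
  scenario.  The component and health-check names are invented.]

  #include <assert.h>
  #include <stdio.h>

  enum state { DOWN, UP };
  static enum state component = UP;

  /* Stand-in for whatever end-to-end check the real system would run. */
  static int system_still_processing_work(void) { return 1; }

  static void fail_component(void)    { component = DOWN; }
  static void restore_component(void) { component = UP; }

  int main(void)
  {
      fail_component();
      assert(system_still_processing_work()); /* usual test: survive failure */

      restore_component();                    /* neglected test: the component  */
      assert(component == UP);                /* corrects itself and returns... */
      assert(system_still_processing_work()); /* ...and the system must survive */

      puts("both the failure and the return-to-service paths were exercised");
      return 0;
  }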

William Hugh Murray, Fellow, Information System Security, Ernst & Young
2000 National City Center Cleveland, Ohio 44114
21 Locust Avenue, Suite 2D, New Canaan, Connecticut 06840


Potential Lesson From AT&T

<WHMurray.Catwalk@DOCKMASTER.NCSC.MIL>
Sun, 28 Jan 90 10:17 EST
Potentially, there is a second lesson to be learned from AT&T.  It is that it
is possible to over-automate.  As I noted earlier, the real problem was not
that a component failed, but that the system could not tolerate it coming back
on-line.

Note that AT&T had automated the procedure for restoring a failing component.
This decision contributed to the failure.  Now, if this kind of component
failure was so common that manual intervention would be ineffective and
inefficient, then the decision to automate it would be appropriate.  Otherwise,
it is an example of OVER AUTOMATION.  Such automation adds so much complexity
that it causes failures that would not have taken place in its absence.

AUTOMATE ONLY THOSE THINGS THAT HAPPEN WITH SUFFICIENT FREQUENCY THAT
AUTOMATION IS JUSTIFIED.  AVOID GRATUITOUS AUTOMATION.

Notice that the procedure was inadequately tested, and that this contributed to
the failure.  In fairness to AT&T and others who have had similar problems,
such conditions are difficult to simulate and the procedures difficult to test.

IF YOU CANNOT TEST IT, DO NOT DO IT.

William Hugh Murray, Fellow, Information System Security, Ernst & Young
2000 National City Center Cleveland, Ohio 44114
21 Locust Avenue, Suite 2D, New Canaan, Connecticut 06840


Sun Sendmail Vulnerability

Kenneth R. van Wyk <krvw@SEI.CMU.EDU>
Mon, 29 Jan 90 16:47:21 EST
                CERT Advisory
               29 January 1990
              Sun Sendmail Vulnerability

The Computer Emergency Response Team Coordination Center (CERT/CC) has learned
of, and has verified, break-ins on several Internet systems in which the
intruders have exploited a vulnerability in the Sun sendmail program.  This
vulnerability exists in all versions of SunOS up to and including the current
version, 4.0.3 on Sun 3, Sun 4, and Sun 386i systems (note that 4.0.2 is the
most current version of SunOS on the 386i machines). That is, all current Sun
systems.

The vulnerability has previously been reported to Sun and a solution to this
problem (Sun bug # 1028173) is available via a new version of sendmail supplied
by Sun.  The new sendmail is available directly from the Sun Answer Center
(1-800-USA-4SUN).  Sun 3 and Sun 4 sendmail binaries are also available via
anonymous FTP from uunet.uu.net in the /sun-fixes directory.

This incident underscores the need for system administrators to maintain an
awareness of the steps their vendors are taking to improve the security aspects
of their products, and to seriously consider upgrading system configurations
when solutions to security problems are made available.

Administrators of Sun systems are urged to contact Sun for the new version of
the sendmail program.  Administrators of machines other than Suns are urged to
contact their vendors to verify that they are running the latest version of
sendmail, since there may have been security related fixes to it in the past
year.

If you need further information on this problem, contact your Sun
representative or CERT/CC.  CERT/CC can be contacted by telephone at (412)
268-7090 (24 hours) or email to cert@cert.sei.cmu.edu (monitored daily).

Our thanks to Matt Bishop and Wayne Cripps for their efforts in analyzing and
investigating this problem and its solution.

Kenneth R. van Wyk, Technical Coordinator, Computer Emergency Response Team
Software Engineering Institute, Carnegie Mellon University
cert@CERT.SEI.CMU.EDU                      (412) 268-7090  (24 hour hotline)

  [Perhaps the "secure" version would fix the RISKS sendmail timeout problem
  that occasionally gives some of you multiple copies of RISKS, but then there
  apparently are other things that we need that it does not do!  For this
  reason there are various versions of sendmail floating around, none of which
  is quite what is needed.  Oh, well, what's new?  PGN]


GPO Library disk infection (PC)

Kenneth R. van Wyk <krvw@SEI.CMU.EDU>
30 Jan 90 19:29:04 GMT
I phoned the folks at the GPO and confirmed that the above report is indeed
true.  They faxed me a copy of a letter which they're sending out to the people
that they know have received the disks.  Below is a (transcribed - sorry if
there are typos) copy of that fax.
                                                  Ken

===== Cut Here =====

Dear Depository Librarian:

GPO has just been notified by the Census Bureau that one of the floppy
disks just distributed by GPO with the _County and City Data Book_
CD-ROM is infected with a computer virus AND SHOULD NOT BE USED UNDER
ANY CIRCUMSTANCES.  The floppy disk was listed on shipping list
90-0057-P as C 3.134/2:C 83/2/988/floppy-2.  The title on the floppy
disk reads as follows:

Bureau of the Census
Elec. County & City Data Bk., 1988
U.S. Stats., Inc., 1101 King St.,
Suite 601, Alexandria, VA 22314
(703) 979-9699

PLEASE DESTROY THE FLOPPY DISK AS SOON AS IT IS RECEIVED.  (Do NOT
reformat and reuse the floppy disk.)

The virus has been identified as the Jerusalem-B virus (also referred to as the
Israeli virus).  It infects any .COM or .EXE program on MS-DOS personal
computers and increases program size by approximately 1,800 bytes.  Other
programs are infected when they are executed in an infected system.

The Jerusalem virus can cause significant damage on an infected personal
computer.  It generally slows down the system and some versions destroy all
data on the hard disk.  .EXE files continue to grow in size until they are too
large to execute.
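
  [A minimal sketch, not an official GPO or Census Bureau tool, of the kind
  of check the letter implies: since the infection adds roughly 1,800 bytes
  to a .COM or .EXE file, unexplained growth over a previously recorded size
  is a warning sign.  The C program below assumes a hypothetical file name
  and a made-up baseline size.]

  #include <stdio.h>
  #include <sys/stat.h>

  int main(void)
  {
      const char *program = "REPORT.EXE"; /* hypothetical program to check     */
      long baseline_size  = 24576;        /* size recorded when known clean    */
      struct stat st;

      if (stat(program, &st) != 0) {
          perror(program);
          return 1;
      }
      if ((long)st.st_size > baseline_size)
          printf("%s grew by %ld bytes since the baseline: possible infection\n",
                 program, (long)st.st_size - baseline_size);
      else
          printf("%s matches its recorded size\n", program);
      return 0;
  }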

If your computer has already been infected, we recommend that, if possible, you
seek assistance from a computer specialist at your institution immediately.
There are special programs available for detecting and eradicating computer
viruses.  One may be available in your institution or from someone you know.
DO NOT USE YOUR PC TO ACCESS A NETWORK OR PRODUCE FLOPPY DISKS CONTAINING .EXE
OR .COM PROGRAMS FOR USE BY OTHER PCS.

The _County and City Data Book_ CD-ROM can be used safely with the
software on the other floppy disk distributed in that shipment
(C 3.134/2:C 83/2/988/floppy).

If you have any questions, please call Jan Erickson at GPO (202
275-1003) or the Census Bureau Customer Service at (301 763-4100).

The Census Bureau and GPO regret any problems that this may have
caused.  Appropriate measures will be taken to ensure that it does not
happen again.


Re: Password Sharing [Arsenault, RISKS-9.57]

Al Arsenault <AArsenault@DOCKMASTER.NCSC.MIL>
Tue, 30 Jan 90 12:31 EST
In response to the many comments regarding my previous submission, about the
student who answered a test question by saying that passwords are better than
biometrics because one can give out a password to a friend and thus does not
need to be physically present when the friend logs in to one's account:

I gave the student no credit for his answer.  (It made no difference
whatsoever in his semester grade.)  True, I could have worded the question
differently:   "If one is concerned about maintaining individual
accountability, what is one advantage...".  However, I did not believe that
the lack of the qualifier made his answer acceptable:  the student should
have understood the context of the question.

In discussing the question/answer with the student afterward, I described
a situation that happened a couple of years ago in another class I
was teaching.  (The story is true; the names have been changed to protect
the guilty.)

I was teaching a course in Pascal programming to a class of Information
Systems Management majors.   "Joe", a student in the class, was engaged
to "Sue", who had taken the course the previous semester.  Joe told Sue
his password, so that she could help him with his class programming
assignments.  However, in the middle of the semester, Joe dumped Sue
for Sally, another student in the current class.

For Sue, revenge was sweet.  Since she knew Joe's password, she logged in to
his account and 'fixed' his programs.  She didn't delete his files; that would
have been obvious, and fixable from backup tapes.  Instead, she subtly changed
lines of code in the programs (e.g., 'while (x < MaxSize)...'  became 'while
(x<= MaxSize)...').  The net result was that the programs ran, but gave
incorrect answers.
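
  [For illustration only, and not Joe's actual assignment: a tiny C analogue
  of the one-character sabotage described above.  Changing "<" to "<=" makes
  the loop read one element past the intended range, so the program still
  runs but gives a wrong answer.]

  #include <stdio.h>

  #define MAXSIZE 5

  int main(void)
  {
      /* data[5] holds a stray value lying just past the intended range */
      int data[MAXSIZE + 1] = {10, 20, 30, 40, 50, 999};
      int sum, x;

      sum = 0;
      for (x = 0; x < MAXSIZE; x++)    /* the original loop: sum = 150        */
          sum += data[x];
      printf("with  < : %d\n", sum);

      sum = 0;
      for (x = 0; x <= MAXSIZE; x++)   /* the sabotaged loop: adds data[5]    */
          sum += data[x];
      printf("with <= : %d\n", sum);   /* runs without error, but prints 1149 */

      return 0;
  }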

Joe never did find out what happened.  I found out about it because Joe wound
up failing the class (NOT just because the programs didn't work, but that
contributed).  Sue felt guilty; she came to see me after the semester and
explained what she had done.  I made some comment to her about 'he should have
changed his password after changing his personal life'.  She said that he
eventually did.  But, Joe told his new password to three or four of his other
friends; it was not difficult for her to find out his new password from one of
them.

As I explained to the Computer Security student, this is exactly the problem
with sharing your password.  Once you have given it to someone else, you have
lost control of it.  There is no real way of being sure exactly who knows your
password and what is being done with/to your account, unless you can completely
trust everyone you give your password to.

Now, I do understand that sometimes individual accountability is not important.
If the security policy for the system says that individual accountability is
not an issue, and everyone clearly understands what that means, then I have no
problems with sharing passwords.  I just want everyone to understand the
potential consequences before they make that decision.

I also understand that, no matter what security policies/rules say, some people
are going to do what they want - including sharing passwords, etc.  I cannot
stop that, any more than I can stop people from breaking other rules/laws, etc.
As a teacher, what I want to do is make people aware of the possible
consequences of breaking those rules.  That applies to the consequences from
the Academic Computing Center ("we're terminating this account, because you
didn't abide by the rules") and to the consequences from those who get access
to the accounts.  Once users understand the rules, the reasons, and the
consequences, they will make their own decisions.  The only thing I can do is
report anything I personally know about to appropriate authorities, and hope
for the best with the rest.

(To further someone's analogy about giving someone your ATM card and PIN when
you're ill, so they can get you some money: understand before you do it that,
if this person wants to take some extra money from your account and pocket it,
(s)he can.  If this person loses your card and the slip of paper (s)he has
written the PIN on, then someone else can get your money.  Once you understand
that, then you have to make up your own mind about whether the risk is worth
the benefit.)

Al Arsenault


Call for papers: Annual Computer Security Applications Conference

(Marshall D. Abrams) <abrams@soldier.mitre.org>
Wed, 31 Jan 90 09:05:43 -0500
                         CALL FOR PAPERS AND PARTICIPATION
                          Sixth Annual Computer Security
                              Applications Conference
                                December 3-7, 1990
                                 Tucson, Arizona
The Conference

    Operational requirements for civil, military, and commercial systems
increasingly stress the necessity for information to be readily accessible.
The Computer Security Act of 1987 requires that all Federal agencies take
certain actions to improve the security and privacy provided by federal
computer systems. Accomplishing both operational and security requirements
requires the application of the maturing technology of integrated information
security to new and existing systems throughout their life cycle.
    This conference will explore technology applications for both civil and
military systems; the hardware and software tools and techniques being
developed to satisfy system requirements; and specific examples of systems
applications and implementations.  Security policy issues and standards will
also be covered during this five day conference.

Papers, Tutorials, and Vendor Exhibits

    Technical papers and tutorials that address the application of integrated
information security technologies in the civil, defense, and commercial
environments are solicited.  Original research, analyses and approaches for
defining the computer security issues and problems identified in the
Conference's interest areas; secure systems in use or development;
methodological approaches for analyzing the scope and nature of integrated
information security issues; and potential solutions are of particular
interest.  We are also interested in vendor presentations of state-of-the-art
information security products.

INSTRUCTIONS TO AUTHORS:
    Send five copies of your paper or panel proposal to Dr. Ronald Gove,
Program Chairman, at the address given below.  Tutorial proposals should be
sent to Dr. Dixie Baker at the address given below.  We provide "blind"
refereeing; put names and affiliations of authors on a separate cover page
only.  It is a condition of acceptance that manuscripts submitted have not been
published.  Papers that have been accepted for presentation at other
conferences should not be submitted.

    Papers and tutorial proposals must be received by May 18, 1990.  Authors
will be required to certify prior to June 20, 1990, that any and all necessary
clearances for publication have been obtained, that they will attend the
conference to deliver the paper, and that the paper has not been accepted
elsewhere.  Authors will be notified of acceptance by July 30, 1990.  Camera
ready copies are due not later than September 19, 1990.  Material should be
sent to:

Dr. Ronald A. Gove            Dr. Dixie B. Baker
Technical Program Chair       Tutorial Program Chair
Booz-Allen & Hamilton Inc.    The Aerospace Corporation
4330 East-West Highway        P.O. Box 92957, MI/005
Bethesda, MD 20814            Los Angeles, CA 90009
(301) 951-2395                (213) 336-7998
Gove@dockmaster.ncsc.mil      baker@aerospace.aero.org

Areas of Interest Include:

GOSIP
ISO/OSI Security Architecture
Advanced Architectures
Trusted DBMSs and Operating Systems
Public Law 100-235
Current and Future Trusted System Technology
Space Station Requirements
C3I Systems
Policy and Management Issues
SDNS
Risk/Threat Assessments
Network Security
Medical Records Security
State-of-the-Art Trusted Products
Certification, Evaluation, and Accreditation

Reviewers and Prospective Conference Committee Members

    Anyone interested in participating as a reviewer of the submitted papers,
please contact Dr. Ron Gove at the address given above.  Those interested in
becoming members of the conference committee should contact Dr. Marshall Abrams
at the address below.

Additional Information

    For more information or to receive future mailings, please contact the
following at:

Marshall Abrams, Conference Chairman
The MITRE Corporation
7525 Colshire Drive
McLean, VA 22102
(703) 883-6938
abrams@mitre.org

Diana Akers or Victoria Ashby, Publicity and Publication Chairs
(703) 883-5907 or (703) 883-6368
akers%smiley@gateway.mitre.org
ashby%smiley@gateway.mitre.org


Virology

Gene Spafford <spaf@cs.purdue.edu>
Mon, 29 Jan 90 11:01:17 EST
"Computer Viruses: Dealing with Electronic Vandalism and Programmed
Threats" by Eugene Spafford, Kathleen Heaphy, and David Ferbrache.
1989, 109 pages.  Published by ADAPSO.

The book has been written to be an accessible resource guide for computer users
and managers (PC and mainframe).  It presents a high-level discussion of
computer viruses, explaining how they work, who writes them, and what they do.
It is not intended to serve as a technical reference on viruses, both because
the audience for such a work would be limited, and because such a reference
might serve to aid potential virus authors.

The goal of the book is to dispel some common myths about viruses (and worms,
trojan horses, et al.), and to provide simple, effective suggestions for how to
protect computer systems against these threats.  It furthermore stresses that
most systems face greater threats from other areas, so the proper attitude to
take is to strengthen overall security; concrete suggestions for enhancing
overall security are also presented.

The appendices provide extensive references to other publications, security
organizations, anti-viral software sources, applicable (U.S.)  state and
Federal laws against computer crime, and detailed descriptions of all IBM and
Apple Macintosh viruses known as of 1 October 1989.

Although written for ADAPSO members, almost any computer user should find it
instructive.  The appendices are an excellent source of further information,
addresses and phone numbers, and pointers to software.  At least one university
professor has indicated he will use the book in a security course, and some law
enforcement agencies are also considering using the book for instructional
purposes.

The authors are interested in comments and feedback about the book, especially
in areas where information might be added.  You can contact them by sending
mail to "virus-book@cs.purdue.edu"

Table of Contents: Preface, Executive Summary, Introduction,
  Programmed Threats, What is a Computer Virus?, Names,
  Dealing with Viruses, Prevention, Security, Legal Issues, Attitudes,
  Further Information on Viruses,  Information on Anti-Viral Software,
  Further Information on Legal Aspects of Viruses,
  Further Reading and Resources

A copy can be ordered from ADAPSO, 1300 North Seventeenth St., Suite 300,
Arlington, VA 22209 USA, Attn: Mr. John Gracza.  Single copies are $30.  Copies
ordered on university stationery or on the stationery of ADAPSO member
companies are only $20, and $16 for the second and subsequent copies.  Requests
for review copies or special considerations should be addressed directly to
John Gracza.  Copies have been given away to ADAPSO member companies and to
various state and Federal law enforcement agencies, so check with others in
your organization to see whether a copy is already available for review.
Overseas orders will be shipped by surface mail; overseas air mail is $10
extra.
