The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 21 Issue 85

Monday 7 January 2002

Contents

Yohkoh Satellite loses control
Paul Saffo
More medical risks
Clay Jackson
Bogus dates for McAfee virus alerts
William Colburn
Re: Harvard admissions e-mail bounced by AOL's spam filters
Simon Waters
Danny Burstein
Gordon Zaft
Re: "Buffer Overflow" security problems
Nicholas C. Weaver
Dan Franklin
Kent Borg
Jerrold Leichter
Henry Baker
Re: Software glitch grounds new Nikon camera
Dave Gillett
REVIEW: "Incident Response", Kenneth R. van Wyk/Richard Forno
Rob Slade
Info on RISKS (comp.risks)

Yohkoh Satellite loses control

<Paul Saffo <psaffo@iftf.org>>
Sat, 05 Jan 2002 07:08:39 -0800

An unharmonic convergence of solar eclipse and satellite's "invisible
orbit"...

SKY & TELESCOPE'S NEWS BULLETIN - JANUARY 4, 2002
For images and Web links for these items, visit <http://www.skypub.com>

YOHKOH LOSES CONTROL

On December 14, 2001, the Japanese solar observatory Yohkoh began spinning
out of control. Since then, all scientific operations have stopped, and it
remains unclear when the craft will be operational again.

The problem began during last month's annular eclipse of the Sun.  Yohkoh
uses a Sun-centering system to determine its position at any given
time. During the eclipse, the craft lost contact with the Sun, put itself
into a "safe mode," and slowly began to drift off track and rotate. Normally
this wouldn't have been a problem -- during its decade in orbit, Yohkoh has
seen its share of eclipses. However, this event occurred during a rare
period of the craft's orbit (known as an invisible orbit) when the craft was
out of communication with Earth.  Thus controllers on the ground couldn't
detect (or compensate for) the craft's sudden roll.

Problems only got worse from there. Because of its slow roll, Yohkoh's solar
panels no longer received direct sunlight. By the time ground controllers at
the Kagoshima Space Center regained contact with the observatory, its
batteries were very low and the craft had lost attitude control.

To fix the problem, scientists first established contact and turned off all
the craft's science instruments in order to conserve power.  Currently the
craft is rotating slowly, about one rotation per minute.  According to Loren
Acton (Montana State University), head scientist of Yohkoh's solar X-ray
telescope, in the spacecraft's current state, its solar panels only receive
sunlight in spurts. "During flashes of illumination, electricity is
produced," says Acton. Thus the first step toward recovery is for scientists
to wait until the craft can charge up.

It's currently unclear when, and even if, scientists will regain control of
the craft. But astronomers are hopeful. "It will take clever work to stop
the roll and re-acquire the Sun," says Acton.


More medical risks

<"Clay Jackson" <clayj@nwlink.com>>
Sat, 5 Jan 2002 19:13:40 -0800

In RISKS-21.84,  "Laura S. Tinnel" <ltinnel@teknowledge.com> wrote about
risks of unattended, unlocked computers in patient and examination rooms.

Reminds me of a time a few years back when I visited my local HMO (they have
their own facilities) and discovered a username, password, and IP address on
a PERMANENT sticker on the side of a system (the monitor, actually) being
used for Patient Registration.  Needless to say, the first thing I did when
I got
home was 'ping' that address from my PC.  Of course, it responded.  When I
tried 'telnet', it came back with 'Login:'.  I didn't have the heart to try
the credentials I'd seen. The NEXT thing I did was drop an e-mail to a friend
who worked in IS at the HMO.  I returned to the clinic two weeks later, and
the sticker had been pasted over.  I don't know if they've yet secured their
network (we've since switched providers).

Clay Jackson <clayj@nwlink.com>


Bogus dates for McAfee virus alerts

<"Schlake ( William Colburn )" <schlake@nmt.edu>>
Fri, 4 Jan 2002 11:48:23 -0700

http://www.mcafeeb2b.com/avert/virus-alerts/default.asp

When I go to the McAfee virus-alerts Web page, I read the somewhat
disconcerting line "This page current as of" (and it just ends there,
without even a period).  What am I to assume about the currency of the page?

Turning on javascript gives me a slightly different answer that reads "This
page current as of Monday, January 4th, 1971."  So now I know that as of
early January 1971 there are NO virus alerts for any 1980's era DOS boxes
and 1990's/2000's era Windows boxes.  That sure makes me feel a whole lot
better.  Not only are there no virus alerts, but the machines those alerts
would be for haven't even been invented yet!

Oh wait, the clock on my machine is just wrong, and the Web page merely
printed out the local concept of the day and year.

How much can I trust the page now?  The concept of "current" is local to
me, the reader, via javascript.  I don't need to go out onto the Internet to
download a fresh copy of the page from the McAfee Web site to get an "up to
date" version; I just have to reload my locally cached copy and, presto, it
has today's date on it.  I will never again have to worry about virus
alerts, because there won't be any.

The risk here is that someone could look at this Web page and see an invalid
date because either their machine has the wrong time or because the Web page
was cached somewhere and not re-downloaded.  The result would be that
someone might not find out about an important (high risk) virus that could
potentially do a lot of damage.

PS: I complained about this to McAfee using their online form about a
month ago, and never heard anything back.


Re: Harvard admissions e-mail bounced by AOL's spam filters (R-21.84)

<Simon Waters <Simon@wretched.demon.co.uk>>
Sat, 05 Jan 2002 22:17:51 +0000

  '"AOL officials could not explain" - why their servers identified these
  e-mail messages as spam.'

Funny, because I can explain this, and have in previous submissions to
comp.risks; it happens to many mailing lists.

AOL mail servers delete e-mails after accepting them if they think they are
spam, without notifying the intended recipient or the sender. They do this
when they receive bulk mailings, although the exact circumstances that
trigger it remain known only to the AOL mail administrators.

Anybody who has set up a large e-mail list knows this: you have to get bulk
mailing servers white-listed by AOL.  Presumably Harvard didn't do this.

AOL doesn't publicise this information, but I've had the basics confirmed by
a former employee of AOL.

The only way to ensure you get the e-mail from lists you subscribe to is not
to use AOL for your e-mail.

Hopefully someone at Harvard will explain the business consequences of such
idiotic behaviour to AOL.  In the meantime, just use an ISP that knows what
it is doing; not for nothing did AOL earn the nickname "America Offline".

  [Similar comments from Jenny Holmberg.  PGN]


Re: Harvard admissions e-mail bounced by AOL's spam filters (R-21.84)

<danny burstein <dannyb@panix.com>>
Sat, 5 Jan 2002 18:31:22 -0500 (EST)

First, the stories claimed that these e-mails "bounced" back to Harvard.
However, in my own experience with AOL's spam filters, most of the time that
mail is simply sent to /dev/null. So the sender generally does not even get
any notice that their e-mail was undelivered.  Since I haven't seen any
actual direct quote from a Harvard spokesrep, I doubt any sort of ack was
sent back.

Also note that a "bounce" message would take this whole saga out of the
"risks" venue (or at least move it to the margin).  Once the sender is
advised of the problem, different steps can be taken.  And in this sort of
situation, an e-mail bounce ten minutes after sending is far preferable to a
similar USPS bounceback, which would take days or weeks.

The second point is that the stories, again, claim that Harvard was putting
information about this up on their Web page. I've been checking every few
hours since the reports first appeared. Nothing about this has appeared on
their main page nor on any obvious links.


Re: Harvard admissions e-mail bounced by AOL's spam filters (R-21.84)

<Gordon Zaft <zaft@newmonics.com>>
Mon, 07 Jan 2002 13:13:36 -0700

Daniel Smith notes, "Let us hope that organizations do not begin to use
e-mail for communications more important than university admissions letters,
in the name of 'security' (and cost reduction)."

Alas, it's too late for that.  My alma mater, The University of Arizona(tm)
(really!) now requires all students to have and use an e-mail account for
official correspondence.  While it's true that problems with
university-hosted e-mail are unlikely (or are likely to be caught if they do
occur), many students are likely to forward these accounts to other
accounts, where they might run into this problem.  It's disturbing.

UA's e-mail policy is online at
  http://www.registrar.arizona.edu/emailpolicy.htm .


Re: "Buffer Overflow" security problems (Baker, RISKS-21.84)

<"Nicholas C. Weaver" <nweaver@CS.Berkeley.EDU>>
Sat, 5 Jan 2002 13:15:52 -0800 (PST)

I agree with Henry Baker's basic assessment that buffer overflows,
especially in code that listens to the outside world (and is therefore
vulnerable to remote attack), should be classed as legally negligent.
However, it seems to be nigh-impossible to get programmers to write in more
semantically solid languages.

There is another solution: software fault isolation [1].  If the C/C++
compilers included the sandboxing techniques as part of the compilation
process, this would eliminate the most deleterious effects of stack and heap
buffer overflows: the ability to run an attacker's arbitrary code, with a
relatively minor hit in performance (under 10% in execution time).
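
To make the idea concrete, here is a minimal sketch of the address-masking
technique from [1] (illustrative only: the 64-KB segment is invented for
the example, and this is not Colusa's code).  Every store the compiler
emits is preceded by a mask that forces the target address into the
module's own data segment, so even a wild pointer can corrupt only the
sandbox's own data, never code or another module's state.

  /* SFI-style address masking, after Wahbe et al. [1] -- a sketch. */
  #include <stdint.h>
  #include <stdio.h>

  #define SEG_BITS 16
  #define SEG_SIZE (1u << SEG_BITS)   /* 64-KB sandbox data segment */

  static uint8_t segment[SEG_SIZE];   /* the module's entire data space */

  /* The check a sandboxing compiler would insert before every store:
     mask the high bits so any address, however wild, lands inside
     segment[].  The cost is one AND per store -- the source of the
     modest slowdown cited above. */
  static void sandboxed_store(uint32_t addr, uint8_t value)
  {
      segment[addr & (SEG_SIZE - 1u)] = value;
  }

  int main(void)
  {
      sandboxed_store(42u, 0xAB);          /* in-bounds store */
      sandboxed_store(0xDEADBEEFu, 0xCD);  /* wild store is contained */
      printf("%02x\n", segment[42]);
      return 0;
  }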

An interesting question, and one for the lawyers to settle, is why these
techniques haven't been widely deployed.  The techniques were being
commercialized by Colusa Software as part of its mobile-code substrate [2]
in the mid-1990s.  In March 1996, Colusa Software was purchased by Microsoft
and, it seems, effectively digested, thereby eliminating another potential
mobile-code competitor, something Microsoft seemed to fear at the time.

The interesting RISK, and one which is probably best left to the lawyers, is
that as a result, for over half a decade, Microsoft has owned the patent
rights and the developments required to eliminate two of their biggest
security headaches: unchecked buffer overflows and Active-X's basic
"compiled C/C++" nature, yet seems to have done nothing with them.

What is the liability involved when a company owns the rights to a
technology which could greatly increase safety, at an acceptable (sub 10%)
performance penalty, but does nothing to use it in their own products?
Especially when the result is serious, widespread security problems which
could otherwise be prevented?

[1] "Efficient Software-Based Fault Isolation", Robert Wahbe, Steven Lucco,
Thomas E. Anderson, Susan L. Graham, in *ACM SIGOPS Operating Systems
Review*, volume 27, number 5, December 1993, pp 203--216,

[2] "Omniware: A universal substrate for mobile code"

Nicholas C. Weaver  nweaver@cs.berkeley.edu


Re: "Buffer Overflow" security problems (PGN, RISKS-21.84)

<Dan Franklin <dan@dan-franklin.com>>
Sun, 6 Jan 2002 11:40:50 -0500

> Perhaps in defense of Ken Thompson and Dennis Ritchie, C (and Unix, for
> that matter) was created not for masses of incompetent programmers, but
> for Ken and Dennis and a few immediate colleagues.

Which only serves to emphasize Henry's point.  The code that those "few
immediate colleagues" wrote also suffered from buffer-overflow problems.
Not only did many ordinary commands written at Bell Labs fail when given
long enough input lines, but in one early version of UNIX the login command
(written in C) had a buffer overflow that permitted anyone to log in by
providing sufficiently long input.
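
To illustrate the bug class (a hypothetical sketch, not the historical
login code): gets() has no way of knowing how large its buffer is, so a
sufficiently long line simply runs past the end of it, while the bounded
fgets() cannot.

  /* The overflow pattern in miniature -- hypothetical, not UNIX login. */
  #include <stdio.h>
  #include <string.h>

  static int check_password(void)
  {
      char buf[16];
      int authenticated = 0;   /* may sit just past buf in memory */

      /* UNSAFE version: gets(buf) would let a long line run past buf
         and clobber 'authenticated' (or the return address).  The
         bounded call below reads at most sizeof buf - 1 characters. */
      if (fgets(buf, sizeof buf, stdin) == NULL)
          return 0;
      buf[strcspn(buf, "\n")] = '\0';   /* strip the newline, if any */

      if (strcmp(buf, "secret") == 0)   /* placeholder credential */
          authenticated = 1;
      return authenticated;
  }

  int main(void)
  {
      return check_password() ? 0 : 1;
  }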

In other words, C buffer overflows have caused security problems ever since
the language was created; and even the earliest users of C have been caught
by it.  If software were really an engineering field, we would learn as
engineers do to avoid tools and methods that persistently lead to serious
problems.

Note that gcc, the very popular GNU C Compiler, has experimental extensions
to support bounds checking; see http://gcc.gnu.org/extensions.html.  Let us
hope that one of these extensions makes its way out of the laboratory soon.
If it became a standard gcc option, the current sorry situation might begin
to improve.


Re: "Buffer Overflow" security problems (Baker, RISKS-21.84)

<Kent Borg <kentborg@borg.org>>
Mon, 7 Jan 2002 11:47:16 -0500

A good start, but as we have heard before, be careful what you wish for.

First, take a quick look at our current patent office (full of experts who
still approve the silliest software patents) to judge whether our legal
system (full of someone's peers) will be able to handle the concept "buffer
overflow".

Second, even though most open source software is written in C, it is
still easier to assemble a reasonably secure Internet server out of
common open source software than it is from the dominant proprietary
options.  And open source software is getting better in this regard.

On its face, however, it seems your proposal would have the effect of
outlawing open source software.

Rather, we might consider putting the liability on those who use the
software -- for even good software can be misapplied -- encouraging users to
choose well-written products and creating a market for them.

Put another way: instead of banning the "book of spells", we might sanction
the "sorcerer's apprentice" who plugs an unprotected computer into the
Internet and so lets the broomsticks fly.  I don't think you'll get the
results you desire.

Sure, maybe let the user pass on some blame to negligent companies, but
let's not have the blame *start* at the individual programmer's keyboard.

Or, put in Roman terms: have the managers (including those who commissioned
the work) stand under the new bridge, not the stone carvers.

-kb, the programmer Kent who admits he has a conflict of interest here.


Re: "Buffer Overflow" security problems (Baker, RISKS-21.84)

<Jerrold Leichter <jerrold.leichter@smarts.com>>
Mon, 7 Jan 2002 12:00:29 -0500 (EST)

Henry Baker complains about the continuing stream of problems due to buffer
overflows, and blames the C language.  PGN repeats a number of common
defenses for C:

- It's perfectly possible to write bad, buggy code in the best languages;
- It's perfectly possible to write good code in the worst languages;
- It's wrong to blame Ken Thompson and Dennis Ritchie (whom, BTW, Mr. Baker
  did not blame) because they never intended for C and Unix to be used the
  way they are today;
- Expanding on this, spreading the blame for the use of inappropriate
  Microsoft systems in life- and mission-critical applications to just about
  everyone who's ever touched a computer.

I've been a C programmer for some 20 years, a C++ programmer for 6.  I know
well the advantages of the languages.  But I'm really tired of the excuses.

No, Thompson and Ritchie are not to blame.  Anyone who actually reads what
they've written over the years - papers or code - will know that they
understand the tradeoffs and make them very carefully.  I wish my code could
be as good as theirs!

Unfortunately, I can't say the same about much of the C and C++ culture that
grew up around their inventions over the years.  A programming community
develops its own standards and styles, its own notions of what is important
and what isn't.  These standards, styles, and notions are extraordinarily
influential.  Some of the influence is transmitted through teaching; much is
transmitted through the code the community shares.  The most pernicious
influences in the C/C++ community include:

- An emphasis on performance as the highest goal.  For the most recent
  manifestation of this, you need only look to the C++ Standard Template
  Library (STL).  It has many brilliant ideas in it, but among the stated
  goals, from the first experiments, was to produce code "as efficient as
  the best hand-tuned code".  "As *safe*" or "as *reliable*" were simply not
  on the table.  The STL has attained its stated goals.

  Yes, there are debugging versions with things like bounds checking, but
  "everyone knows" that these are for testing; no real C++ programmer would
  think of shipping with them.

- A large body of code that provides bad examples.  Why are there so many
  buffer overflows in C code?  The C libraries are, to this day, full of
  routines that take a pointer to a buffer "that must be large enough to
  contain the result".  No explicit size is passed.  I'm told that the guys
  at AT&T long ago removed gets(), one such input-reading routine, from
  their own library.  It persists in the outside world - an accident waiting
  to happen.  Some routines, like sprintf() and its relatives, have only
  very recently appeared in alternative versions that take a buffer-length
  argument.  Until snprintf() became widespread (only within the last five
  years), it was extremely difficult to write code that
  safely wrote arbitrary data to an in-memory buffer.  (If you think it's
  easy, here's a quick question: How large must a buffer be to hold the
  result of formatting an IEEE double in f format with externally-specified
  precision?  Hint: The answer is *much* larger than the "about 16" that
  most people will initially guess.)
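
  For illustration, here is how C99's snprintf() finally makes this safe (a
  sketch, assuming a C99 library; the helper is invented for the example):
  passing a NULL buffer and size zero makes snprintf() report how many
  characters it *would* have written, so the buffer can be measured rather
  than guessed.

    /* Safe formatting of a double with caller-specified precision --
       a sketch using C99 snprintf(), not code from any system above. */
    #include <float.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    char *format_double(double x, int precision)
    {
        /* With a NULL buffer and size 0, C99 snprintf() returns the
           number of characters it would have written. */
        int needed = snprintf(NULL, 0, "%.*f", precision, x);
        if (needed < 0)
            return NULL;
        char *buf = malloc((size_t)needed + 1);
        if (buf != NULL)
            snprintf(buf, (size_t)needed + 1, "%.*f", precision, x);
        return buf;   /* caller frees */
    }

    int main(void)
    {
        char *s = format_double(-DBL_MAX, 17);   /* a worst-ish case */
        if (s != NULL) {
            /* Over 300 characters before the decimal point alone. */
            printf("buffer needed: %zu bytes\n", strlen(s) + 1);
            free(s);
        }
        return 0;
    }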

  As part of a C++ system I work on, I have a vector-like data structure.
  The index operation using [] notation is range-checked.  For special
  purposes, there's an UnsafeAt() index operation which is not.  Compare
  this to the analogous data structure in the C++ library, where [] is *not*
  range checked and at() is.  When the choice is between a[10] and a.at(10),
  which operation will the majority of programmers think they are supposed
  to use?  Which data structure would you rather see taught to the
  programmers who will develop a system your life will depend on?  (BTW,
  extensive profiling has yet to point to []'s range checking as a
  bottleneck, with the possible exception of the implementation of a hash
  table, where UnsafeAt() could be used in a provably correct way.)
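
  A sketch of that safe-by-default design (the class and its names are
  invented for illustration; this is not the author's actual code):

    // Plain [] checks; the unchecked variant must be asked for by its
    // conspicuous name.
    #include <cstddef>
    #include <stdexcept>

    template <typename T>
    class CheckedVec {
    public:
        explicit CheckedVec(std::size_t n) : data_(new T[n]()), size_(n) {}
        ~CheckedVec() { delete[] data_; }
        CheckedVec(const CheckedVec &) = delete;             // keep the
        CheckedVec &operator=(const CheckedVec &) = delete;  // sketch simple

        // The default spelling is the safe one.
        T &operator[](std::size_t i) {
            if (i >= size_) throw std::out_of_range("CheckedVec index");
            return data_[i];
        }
        // The unchecked path exists for measured hot spots only.
        T &UnsafeAt(std::size_t i) { return data_[i]; }

        std::size_t size() const { return size_; }

    private:
        T *data_;
        std::size_t size_;
    };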

- A vicious circle between programmers and compiler developers.  C and C++
  programmers are taught to write code that uses pointers, not indices, to
  walk through arrays.  (The C++ STL actually builds its data structures on
  the pointer style.)  So why should C/C++ compiler developers put a lot of
  effort into generating good code for index-based loops?  C/C++ programmers
  are taught not to expect the compiler to do much in the way of common
  sub-expression elimination, code hoisting, and so on - the earliest C
  compilers ran on small machines and couldn't afford to.  Instead, C/C++
  programmers are taught to do it themselves - and the C language allows
  them to.  So why should C/C++ compiler developers bother to put much
  effort here?

  Put this together and you can see that checking your array accesses for
  out-of-range indices can be a really bad idea: your check code could run
  every time around the loop, instead of being moved out to the beginning as
  a FORTRAN programmer would expect.  I'm sure there are some - perhaps many
  - C/C++ compilers today that would provide such optimizations.  Given the
  generality of C and C++, it can be a challenge, but the techniques exist.
  However, it's an ingrained belief of C/C++ programmers - and a
  well-founded one - that they can't *rely* on the availability of such
  optimizations.  (A FORTRAN programmer can't point to a standard in his
  reliance on such optimizations, but no one today would accept a FORTRAN
  compiler that didn't do them.)
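
  To make the vicious circle concrete, here are the two loop styles side by
  side (a neutral sketch, not taken from any particular compiler's
  documentation).  In the indexed form the bound is explicit in the loop
  header, so a compiler can prove every access in range and hoist any check
  out of the loop; the pointer walk makes that proof much harder.

    #include <stddef.h>

    /* FORTRAN-style indexed loop: the bound i < n is visible, so a
       range check on a[i] can be proven redundant and hoisted out. */
    long sum_indexed(const long a[], size_t n)
    {
        long s = 0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* Idiomatic C pointer walk: equally correct, but the relationship
       between p and the array's bounds is harder for a checker to
       recover. */
    long sum_pointer(const long *a, size_t n)
    {
        long s = 0;
        for (const long *p = a; p != a + n; p++)
            s += *p;
        return s;
    }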

I haven't even touched on the closely related issue of the dangers of manual
memory management, and the continuing refusal of the C/C++ community to
accept that most programs, and certainly most programmers, would be better
off along every significant dimension with even a second-rate modern memory
allocator and garbage collector -- especially in the multi-threaded code that's
so common today.

Is it *possible* to write reliable, safe code in C or C++?  Absolutely --
just as it's *possible* to drive cross-country safely in a 1962 Chevy.  Does
that mean the seat belts, break-away steering columns, disk brakes, air
bags, and many other safety features we've added since then are unnecessary
frills?

Programming languages matter, but even more to the point, programming
*culture* matters.  It's the latter, even more than the former, that's given
us, and will continue to give us, so much dangerous code.  Until something
makes it much more expensive than it is now to ship bad code -- and I
believe that Mr. Baker is right, and the only thing that will do it is a few
big liability judgments -- nothing is likely to change.  Unfortunately,
liability judgments will bring other changes to the programming world that
may not be nearly so beneficial.


Re: "Buffer Overflow" security problems (Baker, RISKS-21.84)

<Henry Baker <hbaker1@pipeline.com>>
Sun, 06 Jan 2002 09:08:42 -0800

Ari Ollikainen wrote [to HB]:

> And hardware with separate instruction and data space would not
> necessarily solve the buffer overflow problem but at the very least would
> avoid the inevitable compromise of reliability...  and the possibility of
> corrupting running code.

> There was a time when ANY flavor of unix was considered an oxymoronic
> concept in regard to reliability and security.

> Ari Ollikainen, OLTECO, Networking Architecture and Technology,
> P.O. BOX 20088, Stanford, CA 94309-0088 1-415 517 3519  Ari@OLTECO.com

Good point re separate I & D spaces ("Harvard" architecture).
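
For illustration, the same separation can be approximated in software on a
conventional machine by mapping data pages without execute permission -- a
sketch assuming a POSIX system that provides MAP_ANONYMOUS (Linux and the
BSDs do):

  /* Data that can never be executed: a software approximation of
     separate I & D spaces. */
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);
      unsigned char *buf = mmap(NULL, pagesz,
                                PROT_READ | PROT_WRITE,  /* no PROT_EXEC */
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (buf == MAP_FAILED) {
          perror("mmap");
          return 1;
      }
      buf[0] = 0xC3;   /* an x86 'ret' -- "code" smuggled in as data */

      /* Jumping to buf would fault: the page lacks execute permission,
         so the smuggled instruction can be stored but never run. */
      printf("page at %p: writable, not executable\n", (void *)buf);
      munmap(buf, pagesz);
      return 0;
  }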

One of the reasons people like network "appliances" (i.e., non-programmable
devices, except for firmware) is that they think such devices are secure
from viruses.  But if an appliance has _any_ capability of executing code
out of data space, then it is just as vulnerable to "buffer overflow"
attacks.  In fact, because of their supposed invulnerability, such devices
are probably _more_ susceptible, because no one bothers to run
virus-checking software on them.  Henry


Re: Software glitch grounds new Nikon camera (Mautner, RISKS-21.83)

<Dave Gillett <dgillett@deepforest.org>>
Wed, 26 Dec 2001 16:14:32 -0800

About 18 months ago, I managed to get my Kodak digital camera into a state
where it would not properly complete the power-up cycle, nor power down.  I
was able to clear this state by removing the batteries; on re-insertion, a
normal power-down state presented itself, and power-up proceeded normally.

The jammed condition was the result of timing of some user interactions
while other actions were in progress; I have never been able to reproduce it.


REVIEW: "Incident Response", Kenneth R. van Wyk/Richard Forno

<Rob Slade <rslade@sprint.ca>>
Mon, 7 Jan 2002 10:01:47 -0800

BKINCRES.RVW   20011001

"Incident Response", Kenneth R. van Wyk/Richard Forna, 2001,
0-59600-130-4, U$34.95/C$52.95
%A   Kenneth R. van Wyk ken@incidentresponse.com
%A   Richard Forna rick@incidentresponse.com
%C   103 Morris Street, Suite A, Sebastopol, CA   95472
%D   2001
%G   0-596-00130-4
%I   O'Reilly & Associates, Inc.
%O   U$34.95/C$52.95 800-998-9938 fax: 707-829-0104 nuts@ora.com
%P   214 p.
%T   "Incident Response"

Incident response has, in the past, received short shrift in security
literature.  It is also a rather vague term: What type of incident are we
talking about?  How big?  What type of response are we considering?
Protective?  Defensive?  Offensive?  The authors have provided us with a
starting point for consideration and the benefit of some years of
experience, but this work is, unfortunately, less detailed than it might
have been.

Chapter one does not do a good job of defining incident response: the
examples are instructive, but the material wanders through a number of
topics without developing any central focus.  Chapter two examines the
strengths and shortcomings of various types of response teams, such as those
internal to companies, related to vendors, or established by security
management companies.  Planning, in chapter three, has some good points to
consider, but doesn't offer a lot of guidance.  Chapter four,
entitled "Mission and Capabilities," seems to be the core of the book,
touching on staff, positions, training, legal considerations, procedures,
and other issues.  A wide-ranging list of attack types, albeit with very
terse descriptions, is given in chapter five.  The incident handling model
presented in chapter six is vague but reasonable.  Chapter seven contains
quick overviews of a number of detection tools, mostly software.  A few
resources, generally Web sites, are given in chapter eight.

This book is the result of considerable background and practice.  While
there are no obvious errors and the material presents good advice, it is
hard to be excited about the result.  Overall, the book seems to lack
direction, and fails to present a structured and clear guide to the
preparations necessary for dealing with computer incidents.  However, in the
absence of other material it is better than nothing, and does raise the
issues to be addressed.

In response to the first draft of this review, one of the authors has
responded that the intent of the book was not to address the techniques of
incident response, but to provide management with an understanding of the
subject.  That statement fits with the text, but is in some opposition to
the assertion in the preface that the book is aimed at all who would need to
respond to incidents, including systems administrators and other technical
people.

copyright Robert M. Slade, 2001   BKINCRES.RVW   20011001
rslade@vcn.bc.ca  rslade@sprint.ca  slade@victoria.tc.ca p1@canada.com
http://victoria.tc.ca/techrev    or    http://sun.soci.niu.edu/~rslade
