The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 22 Issue 75

Friday 30 May 2003


Algeria earthquake cuts Internet connectivity of major Greek ISP
Diomidis Spinellis
Diving computer flaw allegedly covered up
Craig S. Bell
"Computer glitch" causes false dam failure warning
Rich Mintz
ISP resets password to an easily guessed one
Dawn Cohen
Ballot scanning problems in New York City
Doug Kellner
Sensitive data on Web sites reflects lack of security awareness
Rick Weiss
Re: OpenBSD ... protects against buffer-overflow
Paul Karger
Re: Modern Computers, Unsafe at any speed?
Bill Stewart
Re: BMW/MSFT failure reported
Geoff Kuenning
No call list preventing 911 notifications
Robert Franchi
University of Calgary going to teach virus writing
Klaus Brunnstein
REVIEW: "Hack Attacks Testing", John Chirillo
Rob Slade
Info on RISKS (comp.risks)

Algeria earthquake cuts Internet connectivity of major Greek ISP

<Diomidis Spinellis <>>
Fri, 23 May 2003 12:45:32 +0300

OTEnet, the Greek ISP associated with the incumbent telephone operator,
appears to have been cut off from the (non-Greek) Internet for almost two
consecutive days, due to damage from the Algerian earthquake.  According to
a note on their home page ( - May 23rd - in Greek), the
May 21st Algerian earthquake damaged a number of international cables (Flag,
SMW2, SMW3, Columbus2) that passed through the area, cutting off OTEnet's
international IP connectivity.  Dial-up customers are advised to use their
Web proxy server as a temporary measure.

A different page <> boasts that
OTEnet's network is the largest and most important [sic] Greek network
featuring 65 points of presence, a 155Mbps backbone, a 100Mbps peering
connection, and 310Mbps international connections.  Apparently physically
redundant international connections or peering agreements that would cover
such an emergency were not foreseen.

A number of Greek companies serviced by this particular ISP also appear
to be cut off.  Other ISPs and the Greek academic network were not
affected, probably because they depend on different cables.  The risk?
Although IP technology supports the rerouting of packets around failed
links, short-sighted network deployment architectures and peering
agreement practices often fail to exploit this capability.
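The rerouting capability mentioned above can be sketched in a few lines (the topology and node names here are hypothetical, not OTEnet's actual network): a breadth-first search finds an alternate route after a link failure, but only if the physical graph actually contains one.

```python
from collections import deque

def shortest_path(links, src, dst):
    """Breadth-first search over an undirected link set; returns a path or None."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical topology: the ISP reaches the outside Internet via two cables.
links = {("isp", "cable1"), ("cable1", "internet"),
         ("isp", "cable2"), ("cable2", "internet")}

print(shortest_path(links, "isp", "internet"))        # a route exists
one_cut = links - {("cable1", "internet")}
print(shortest_path(one_cut, "isp", "internet"))      # rerouted via cable2
both_cut = links - {("cable1", "internet"), ("cable2", "internet")}
print(shortest_path(both_cut, "isp", "internet"))     # None: no redundancy left
```

With one cable cut, IP routing happily finds the surviving path; with every cable through the same earthquake zone cut, no routing protocol can help, which is exactly the deployment risk described.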

Diomidis Spinellis -

Diving computer flaw allegedly covered up

<"Craig S. Bell" <>>
Tue, 27 May 2003 04:14:06 GMT

I am not a diver, myself; however, I found this story somewhat alarming.
This report discusses a lawsuit surrounding the alleged coverup of serious
problems with a certain Aladin diving computer, beginning in 1995.  The
safety of Aladin diving computers was discussed in RISKS 7.60, several years
before this particular product debuted.

Summary: If you make several short dives in relatively quick succession, the
Aladin Air X Nitrox may dangerously overestimate how much dive time you have.

The primary risk is conventional: Despite reports of serious injury, the
Swiss company's two founders successfully covered up knowledge of this flaw
for seven years.  This subterfuge overcame the efforts of some within the
company to recall the defective computer, or otherwise make the public aware
of the problem.
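The failure mode alleged here (treating each short dive as if it started from scratch) can be illustrated with a deliberately simplified one-compartment Haldane-style gas-loading model. All constants below are invented for illustration; they are not real decompression parameters, and this is certainly not the Aladin's actual algorithm.

```python
import math

# Illustrative constants only -- NOT real decompression parameters.
HALF_TIME_MIN = 30.0   # tissue compartment half-time, minutes
SURFACE_PP = 0.79      # inert-gas partial pressure at the surface (atm)
DEPTH_PP = 2.37        # inert-gas partial pressure at depth (atm)
M_VALUE = 1.6          # maximum tolerated tissue loading (atm)
K = math.log(2) / HALF_TIME_MIN

def load(tissue, ambient, minutes):
    """Haldane exponential approach of tissue tension toward ambient pressure."""
    return ambient + (tissue - ambient) * math.exp(-K * minutes)

def no_stop_minutes(tissue):
    """Minutes at depth before this tissue loading reaches M_VALUE."""
    if tissue >= M_VALUE:
        return 0.0
    return math.log((DEPTH_PP - tissue) / (DEPTH_PP - M_VALUE)) / K

tissue = SURFACE_PP
fresh = no_stop_minutes(tissue)         # allowance for a first dive
tissue = load(tissue, DEPTH_PP, 15)     # 15 minutes at depth
tissue = load(tissue, SURFACE_PP, 10)   # short surface interval
honest = no_stop_minutes(tissue)        # accounts for residual gas loading
buggy = no_stop_minutes(SURFACE_PP)     # flaw modeled: residual loading ignored
print(round(fresh), round(honest), round(buggy))
```

Even in this toy model, ignoring the residual gas from the first dive hands the diver roughly the full fresh-dive allowance again, a substantial overestimate of the safe remaining time.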

"Computer glitch" causes false dam failure warning

<Rich Mintz <>>
Fri, 23 May 2003 12:39:29 -0400

A "computer glitch" at Santee Cooper, the quasi-public agency that operates
dams and power generation facilities in the South Carolina lowcountry,
resulted in the broadcast of a false public warning that the Santee Dam on
Lake Marion had failed.  Apparently, a flood watch went out electronically
at 3:16 AM, and then the sirens and loudspeakers began broadcasting a verbal
warning of dam failure throughout the area about 8:30 AM.  A Santee Cooper
spokesman said, "A computer program kicked in gear that wasn't supposed to
kick in gear.  We're trying to get our arms around what caused
this so it doesn't happen again."  Santee Cooper also noted that not all the
sirens that should have gone off in such a scenario actually went off, which
is a separate problem they are investigating.

The Santee Dam, which is part of the Santee Cooper lake system comprising
Lakes Marion and Moultrie, is just upriver from the city of St. Stephen,
about 50 miles NW of Charleston, in a region supported by fishing and lake
tourism but also increasingly suburbanized. U.S. 52 connects the area to
Moncks Corner and Charleston.  From the article:

>The threat of a dam break is no laughing matter on the Santee. Engineers
>estimate that the wave from a collapse would hit the U.S. Highway 52
>bridge in four hours. After eight hours, that bridge would be submerged by
>a river level 25 feet above normal, and 14-foot floodwaters would have
>reached St. Stephen. The flood would reach the sea within 48 hours.

From the Charleston (South Carolina) Post & Courier:
Detailed lake maps:
Santee Cooper press release:

Thanks to Wes Singletary for the referral.

ISP resets password to an easily guessed one

<"Dawn Cohen" <>>
Fri, 23 May 2003 10:14:58 -0400

Here's one from the I-can't-believe-they-would-do-such-a-thing department.

Our local broadband provider was RCN, but due to various mismanagement or
economic issues, was essentially ousted by our community.  The service was
picked up by Patriot Media, which doesn't seem to be doing much better.

We were notified around the same day as the switchover took place that
e-mail accounts would be changing to the <old-user-name>,
though we would have a grace period of a couple of months to transition.

My husband tried to log in to his e-mail this morning, and couldn't manage
to get in.  After I turned the switch on for him (:-) he still couldn't get
on, and after a lengthy wait for access to a Patriot Media customer service
person, he found out that his user name had been changed to his old user
name with a "1" appended, and his password had been reset to "rcnrcn".

Need I say more?
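For contrast, a safer migration practice is easy to sketch, assuming only Python's standard `secrets` module: issue each account a distinct random temporary password rather than a shared, guessable default like "rcnrcn".

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def temp_password(length=12):
    """One random, per-account temporary password, from a CSPRNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Every migrated account gets a different, unguessable value.
pw1, pw2 = temp_password(), temp_password()
print(pw1 != pw2, len(pw1))   # distinct values, overwhelmingly likely
```

The password should still be forced to change on first login, but at least no customer's mailbox is one shared dictionary word away from anyone who reads the announcement.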

Ballot scanning problems in New York City

<Doug Kellner <>>
Thu, 22 May 2003 14:49:42 -0400

The NYC Board of Elections' system for scanning absentee ballots miscounted
at least 19 ballots in a recent closely contested special election for a
city council seat apparently because the scanner improperly sensed the blank
voting oval.  Because the voters had properly marked another oval, the
computer voided the ballots as overvotes.
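A plausible mechanism, sketched here with hypothetical darkness values and thresholds (not the NYC scanner's actual parameters), is simple per-oval thresholding: set the mark threshold low enough and a smudge or fold shadow on a blank oval reads as a second mark, voiding an otherwise valid ballot as an overvote.

```python
MARK_THRESHOLD = 0.25   # too low: smudges and shadows cross it

def read_contest(oval_darkness, threshold=MARK_THRESHOLD):
    """Return the vote, or 'overvote'/'undervote' based on ovals read as marked."""
    marked = [cand for cand, d in oval_darkness.items() if d >= threshold]
    if len(marked) == 1:
        return marked[0]
    return "overvote" if marked else "undervote"

# Hypothetical ballot: smith's oval properly filled, jones's blank but smudged.
ballot = {"smith": 0.90, "jones": 0.30}
print(read_contest(ballot))                 # 'overvote': the valid vote is discarded
print(read_contest(ballot, threshold=0.5))  # 'smith': a saner threshold recovers it
```

The choice of threshold (and whether borderline ballots are kicked out for human review rather than silently voided) is exactly the kind of parameter that deserves public scrutiny in election systems.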

Douglas A. Kellner, Commissioner, Board of Elections in the City of New York
200 Varick Street, New York, New York 10013  Tel. (212) 889-2121

  [This is not the first time I have heard of a blank oval being detected
  as over threshold for a marked oval.  Just one more risk!  PGN]

Sensitive data on Web sites reflects lack of security awareness

<Rick Weiss <>>
Thu, 29 May 2003 11:06:34 -0700 (PDT)

My health insurance contracts with for lower-cost
prescription drugs.  I recently wanted to order refills, and found that they
had changed their Web site, and that I needed to register to be able to
order refills.  The registration process was mostly just me authenticating
myself.
The next day I received 2 e-mail messages from (sent 5
seconds apart).  One said "your username is blah".  The other: "your
password is blah".  Each said that they were sent separately for security
reasons.

So I logged in.  The login process ONLY required the username and password
--- no other authentication was required for that first login!  And I wasn't
forced to change my password.  In fact, there was no ability to change my
password at all.

Once logged in, I (or anyone who intercepted the e-mail) could view my full
name, address, birth-date, social security number, doctors' names,
prescriptions etc.  Not only are they putting my identity at risk of theft,
but they are violating HIPAA too (by not taking reasonable steps to keep
others from viewing my private information).

After complaining a lot, they put me in touch with their head of IT, David.
David told me that their system (e-mailing both username & password without
any protection, not requiring further authentication to login, not requiring
immediate password change) was indeed secure, and accepted practice too!
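The now-standard alternative never puts the password in e-mail at all: mail a single-use, time-limited reset token instead. A minimal sketch (function names and the in-memory store are illustrative, not any real site's API):

```python
import secrets
import time

TOKEN_TTL_SECONDS = 3600
_tokens = {}   # token -> (username, expiry); a real system would persist this

def issue_reset_token(username):
    """E-mail the user a link containing this token -- never the password itself."""
    token = secrets.token_urlsafe(32)
    _tokens[token] = (username, time.time() + TOKEN_TTL_SECONDS)
    return token

def redeem(token):
    """Single use and time limited; all it unlocks is 'set a new password now'."""
    entry = _tokens.pop(token, None)
    if entry is None or time.time() > entry[1]:
        return None
    return entry[0]

t = issue_reset_token("alice")
print(redeem(t))    # 'alice': first use succeeds and forces a password choice
print(redeem(t))    # None: a second (possibly intercepted) use fails
```

An intercepted token expires quickly and dies on first use, whereas an intercepted password in this story unlocked a lifetime of medical and identity data.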

Re: OpenBSD ... protects against buffer-overflow (Cowan, RISKS-22.74)

Thu, 29 May 2003 12:46:53 -0400

This is a long comment, but I think it is very important to correct
some myths about segmentation hardware.  I have removed some of
Crispin Cowan's text to make this shorter, and I hope that in doing
so, I didn't change the intent of his remarks.  If I did, please
accept my apology in advance!

>>What is not so apparent is why technology that was developed and operating
>>over 30 years ago is just being re-invented in software.

>Because what was developed in operating systems over 30 years ago was use of
>heavily segmented architectures. Over 20 years ago (the Intel 432) it was
>discovered (the hard way) that such architectures run horribly slowly
>compared to RISC architectures. Since the debacle of the 432, even CISC
>processors such as the x86 have migrated towards RISC style instruction
>sets.

Crispin Cowan's remarks that the Intel 432 had horrible performance are
absolutely correct.  Unfortunately, the horrible performance had absolutely
NOTHING to do with the segmentation architecture, and his conclusion to
avoid segmentation is incorrect.

The performance lessons of the Intel 432 are EXTREMELY important for anyone
in the OS or security field to learn, both to see what was an utter failure
on that machine as well as what was NOT an utter failure!  The following two
references cover these lessons extremely well.  Please pardon the BibTeX
format:

@Article{colwell85,
	author = "Colwell, Robert P. and {Hitchcock III}, Charles Y. and
		  Jensen, E. Douglas and Brinkley Sprunt, H.M. and
		  Kollar, Charles P.",
	title = "Computers, Complexity, and Controversy",
	journal = "Computer",
	volume = 18, number = 9, month = sep, year = 1985, pages = "8--19"}

@PhDThesis{colwell85thesis,
	author = "Colwell, Robert P.",
	title = "The Performance Effects of Functional Migration and
		 Architectural Complexity in Object-Oriented Systems",
	type = "Ph.D. thesis",
	school = "Department of Computer Science, CMU-CS-85-159,
		  Carnegie-Mellon University",
	address = "Pittsburgh, PA, USA", month = aug, year = 1985}

It is indeed true that the x86 architecture was designed to be used with
segmentation, and almost all operating systems that have been written for it
do NOT use the segmentation.  It is also true that on the x86, execute
protection without segmentation is a royal pain.  But this does not mean
that using execute protection hardware is bad or that segmentation is bad.
It only means that the x86 is badly designed for operating systems that do
not use segmentation.  The GEMSOS operating system (the only system that I
know of to use x86 segmentation) uses that kind of execute protection with
very good performance!  Operating systems that don't use segmentation need
to have read, write, and execute permission bits on every page of memory.
If you don't have that, you will have trouble!
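What segmentation hardware checks on every access can be modeled in a few lines. This is a toy model, not any real descriptor format: a limit and permission bits per segment, with out-of-bounds or wrong-kind accesses raising a fault instead of silently overwriting a neighboring object, which is how such hardware stops buffer overflows.

```python
class SegmentFault(Exception):
    pass

class Segment:
    """Toy model of a segment descriptor: a limit and permission bits,
    checked on every single access (as B6700/Multics-style hardware does)."""
    def __init__(self, limit, read=True, write=False, execute=False):
        self.data = bytearray(limit)
        self.perms = {"r": read, "w": write, "x": execute}

    def access(self, offset, kind):
        if not self.perms.get(kind, False):
            raise SegmentFault(f"no '{kind}' permission")
        if not 0 <= offset < len(self.data):
            raise SegmentFault(f"offset {offset} outside limit {len(self.data)}")
        return offset

    def write(self, offset, value):
        self.data[self.access(offset, "w")] = value

buf = Segment(limit=16, write=True)
buf.write(15, 0xFF)          # last valid byte: fine
try:
    buf.write(16, 0xFF)      # one past the end: the "hardware" faults instead
except SegmentFault as e:    # of corrupting whatever object is adjacent
    print("fault:", e)
```

On a flat-memory machine the out-of-bounds store would land in some other variable or a return address; with per-segment limits it cannot even be expressed.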

By the way - I think Crispin's work on solving buffer overflows on x86 to be
very good.  I am NOT criticizing that work at all - only the generalization
that segmentation is bad in all cases.

>>The Burroughs 6700 implemented a hardware solution to the problem by
>>assigning 3 bits of every 51 bit memory location to the type of data

The Burroughs B6700 (still sold by Unisys as the A-series today) is a good
example of how to use segmentation in an operating system and get very good
performance as well as very good buffer overflow protection.  Multics did
exactly the same thing to great success.  See my paper:

Karger, P.A. and R.R. Schell. Thirty Years Later: Lessons from the Multics
Security Evaluation. in Proceedings of the 18th Annual Computer Security
Applications Conference. 2002, Las Vegas, NV IEEE Computer
Society. p. 119-126. URL:

>The 432 did something similar, and the performance penalty was severe.

Colwell's papers make very clear that the horrible performance penalties on
the 432 did NOT come from segmentation.  They came from several other bad
ideas including allowing instructions to start on arbitrary bit boundaries
making instruction decode extremely hard, using full cross-domain calls for
EVERY subroutine call, and numerous other performance atrocities.

In addition to the Burroughs 6700 and Multics, the IBM System 38, AS/400,
and iSeries servers all use segmentation and capabilities to great success.
For segmentation to be useful, you have to have LOTS of segments in the
architecture and each segment must be large enough.  The 80286 chip didn't
have either enough segments or big enough segments, and that started the
myth that segments were bad. The 80386 pretty much fixed those problems.
Multics segments were only 1 megabyte in size, and that was known to be too
small all the way back in 1970!  My personal opinion is that for a segmented
architecture to succeed, you would need at a minimum 64K segments, with
each segment allowed to grow to at least 4 GB.  Of course bigger numbers
would be preferable, and with 64-bit processors today, that is not a
problem.  The IBM AS/400 and iSeries have (I think - I might have this
wrong) 128-bit addresses!

So in conclusion - segments are most definitely not evil!  Only badly
designed segments are evil!  It is important to design operating systems
that work the same way that the hardware designers intended.  If you don't
do that, and on the x86, almost no one has done that, then you will have
problems.  That is part of why we suffer from buffer overflows today.  There
are lots of other reasons as well, of course.  The C programming language is
certainly another very big culprit.  Almost any other language is better
than C when it comes to buffer overflows - even FORTRAN!

One final comment - segmentation can give a lot of security benefits, but
segmentation is not the one and only true way - just as I believe that
segmentation is not inherently bad, it is also the case that you can get
most of the same benefits with a non-segmented machine IF you have the right
protection bits on every page.  My point is that it is crucial to have a
match between the CPU architecture and the OS architecture, and we don't
have that on the x86 for almost all operating systems available today.

Paul A. Karger, Ph.D., Cantab., IBM, T. J. Watson Research Center

  [A subsequent response from Crispin and Paul's response to that
are not included herein.  That discussion was very interesting to
  me personally (as an old Multician), but probably of less interest
  to RISKS readers generally.  PGN]

Re: Modern Computers, Unsafe at any speed?

<Bill Stewart <>>
Thu, 29 May 2003 02:31:35 -0700

I was startled by "Len Spyker" <>'s assertion in
RISKS-22.74 that "all that software now wasting CPU time checking for
overflows is no longer needed" because hardware can protect us against
them.

Hardware can't protect you against wrong answers, and while it can detect
some kinds of overflows and halt a program rather than let it dangerously
stomp on other space, that isn't always the right way to respond to a
problem - you might want to do other things like giving the user or
administrator an error message rather than stopping.

Also, hardware protection against stack overflows is easier than protection
against overflows of individual arrays that don't go outside the segment,
and setting up protection for arrays, at least on most hardware, is a lot
more work.  Yes, this will generally stop many kinds of potential security
problems.

But back in the mid-70s, when I was learning to program well in college (as
opposed to learning to program haphazardly in high school), one of the first
and most critical lessons was to always check your program's input and
*never* trust it.  It might be bad input by accident, or malicious input on
purpose, and the input data we had to run our class programs on was always
malicious, particularly designed to catch off-by-one errors, which are a
common problem with arrays.  Empty-input errors are fun too, and are often
caused by input data that's out of sync, or by input data that's the wrong
type (e.g. letters when you need numbers.)
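That lesson translates directly into code. A minimal sketch of defensive input parsing (the function, bounds, and test inputs are invented for illustration): reject empty input, the wrong type, and out-of-range values before anything downstream trusts them.

```python
def parse_count(raw, maximum=100):
    """Validate before use: never trust input, by accident or on purpose."""
    text = raw.strip()
    if not text:
        raise ValueError("empty input")
    if not text.isdigit():
        raise ValueError(f"not a number: {text!r}")
    n = int(text)
    if not 0 <= n <= maximum:     # inclusive bound: the classic off-by-one spot
        raise ValueError(f"out of range: {n}")
    return n

# The boundary values (100, 101) are exactly where off-by-one bugs hide.
for raw in ["42", "100", "101", "", "abc"]:
    try:
        print(raw, "->", parse_count(raw))
    except ValueError as e:
        print(raw, "-> rejected:", e)
```

Note that the malicious-by-design test set above probes the boundary itself, the empty case, and the wrong-type case, just as the classroom input data did.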

Some computer languages will help a lot with bounds checking, while others,
like C, will let you shoot yourself in the foot, though they make it hard to
shoot somebody else in the foot.  Cornell's PL/C compiler (for their dialect
of PL/I) not only detected syntax errors, it tried to correct them.
Sometimes it did it right, sometimes it did it wrong, but it at least let
you try to run the program so you could find as many bugs per keypunch
exercise as possible.

Re: BMW/MSFT failure reported (Opie, RISKS-22.73)

Wed, 28 May 2003 22:40:06 -0700 (PDT)

Perhaps I'm clueless for only owning a cheap Toyota, but on my car, I'm not
stuck depending on electronics (and their associated power) to lock and
unlock the doors.  The power locks are only an assist.  No matter what a
terrorist does, I can hand-operate the mechanical locks in either direction.

The true RISK is falling so in love with computerization and power
assists that one forgets simple, reliable design.  Doors that open
themselves?  Unless you're severely disabled, give me a break.

Geoff Kuenning

  [Przemek Klosowski recalled the old chestnut about the thief drop-kicking
  the collision/deceleration sensor, which deploys the airbags and opens
  the car doors.  Just a reminder.  PGN]

No call list preventing 911 notifications

<"Franchi, Robert" <>>
Fri, 30 May 2003 11:21:33 -0400

People who are on the Massachusetts "Do Not Call" list were also not included
on the 911-emergency notification list (for emergency evacuations, etc.).
Apparently, the company that provides the list to the Massachusetts 911
system, Reverse 911, uses commercially available lists that have already had
"Do Not Call" list people removed.
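The failure is a simple list-derivation error, sketched here with hypothetical phone numbers: deriving an emergency-notification list from a commercial list that has already had Do Not Call numbers removed silently drops exactly those households.

```python
# Hypothetical phone lists illustrating the failure mode described above.
all_households = {"555-0001", "555-0002", "555-0003", "555-0004"}
do_not_call    = {"555-0002", "555-0004"}

# The vendor started from a commercial list with DNC numbers already removed...
commercial_list = all_households - do_not_call
# ...so an emergency system built on it skips exactly the DNC households.
missed = all_households - commercial_list
print(sorted(missed))
```

The fix is equally simple in principle: build the notification list from an authoritative source of all households, not from a marketing product whose exclusions serve an entirely different purpose.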

Bob Franchi, FB&RS-Tech FTPS Accounts - Merrimack  (603) 791-5833

University of Calgary going to teach virus writing

<"Klaus Brunnstein" <>>
Fri, 30 May 2003 15:18:01 +0200

RISKS readers are well aware of many cases where malicious code (a.k.a.
viruses, worms, Trojans) has adversely influenced proper work in enterprises
and organisations.  It may therefore come as a surprise that a renowned
Canadian university - the University of Calgary - is going to teach how to
write viruses:

This Web page is the result of rather controversial disputes in several
fora, which led the department to rephrase their earlier announcement that
had revealed the reason for the course in full naivety.  I quote (without
permission) from UoC's earlier Web site (after a paragraph quoting experts
on the damage of malware):

  "Dr. John Aycock, professor for this course, convinced the Department to
  support his idea for offering a course in this area. He says that in order
  to develop more secure software, and countermeasures for malicious
  software, you first need to know how malicious software works and the
  mindset of its creators. By looking through the eyes of the people who
  develop these viruses, our students will learn what their targets actually
  are and what needs to be protected. It's a case of being proactive rather
  than reactive. This attitude is similar to what medical researchers do to
  combat the latest biological viruses such as SARS. Before you can develop
  a cure, you have to understand what the virus is and how it spreads - why
  should combating computer viruses be any different?"

As far as I understand biologists, they DO NOT generate new viruses but try
to extract parts of existing virus code to develop counter-measures.  In
Informatics, the equivalent technique is called "Reverse Engineering".
Indeed, you can analyse malicious code WITHOUT THINKING LIKE A VIRUS AUTHOR,
contrary to what Dr. Aycock and his faculty seem to think.

Of course, there have been biologists, chemists, and physicists who
developed - "proactively" - new specimens of related kinds, but their goal
was NOT to protect humankind; on the contrary, it was to develop new
weapons.  In the Information Society, malware are weapons against those
institutions that depend upon the proper working of IT.  In order to defend
against such weapons, one must NOT THINK as the attacker but in terms of
the attacked.

  In this sense, it is UNETHICAL (if not yet illegal in some
  parts of the world) to write viruses.

Now, UoC includes discussions of ethics in their course, as if that made
the dubious goal more honorable: it is inherently "unethical" to write
malicious code.  The inclusion of some paragraphs on ethics in the UoC
course is no more than an alibi.

After 15 years of teaching how to detect, cure, and possibly prevent
malicious code (in my courses at Hamburg University since 1988), I have
NEVER seen any need for writing harmful code, nor have any of my students
wished to do so.  Instead, we teach Reverse Engineering (which is legal for
purposes of defense in Germany).

My hope is that the University of Calgary experts instruct their students
in methods to detect, cure, and prevent contemporary malware rather than
wasting time teaching methods that can potentially generate harm.

Klaus Brunnstein (Faculty for Informatics, University of Hamburg)

REVIEW: "Hack Attacks Testing", John Chirillo

<Rob Slade <>>
Thu, 29 May 2003 14:19:02 -0800

BKHKATTS.RVW   20030330

"Hack Attacks Testing", John Chirillo, 2003, 0-471-22946-6,
%A   John Chirillo
%C   5353 Dundas Street West, 4th Floor, Etobicoke, ON   M9B 6H8
%D   2003
%G   0-471-22946-6
%I   John Wiley & Sons, Inc.
%O   U$50.00/C$77.50/UK#34.95 416-236-4433 fax: 416-236-4448
%P   540 p. + CD-ROM
%T   "Hack Attacks Testing"

The description in the introduction seems to indicate that this text
might be similar to SATAN (Security Administrator's Tool for Analyzing
Networks), in that it explains how to build a set of utilities in
order to identify vulnerabilities.  As such, there is the possibility
that the work is open to a charge of being more useful to attackers
than to defenders.  Fortunately, the book does not provide a great
deal of information that could be used to break into systems.
Unfortunately, it doesn't help much with defence, either.

Part one is supposed to describe how to build a multisystem "Tiger
Box," similar to SATAN, and the overview outlines the components of a
penetration test.  Chapters one to four, however, simply narrate the
installations for Microsoft Windows NT and 2000, Red Hat Linux,
Solaris, and Mac OS X, using the installation programs provided.  The
material is heavy on screen shots, and light on explanations of what
is going on and why.  There is no provision for specific security
testing requirements, or even multiboot systems.

Part two lists penetration analysis tools for Microsoft Windows, and
the introduction tabulates common vulnerability classes.  Chapter five
explains how to install the Cerberus Internet scanner, enumerates the
possible reports, and gives one (eight page) sample report.  Much the
same is true for the Cybercop Scanner, Internet Scanner, Security
Threat Avoidance Technology (STAT), and TigerSuite products in
chapters six through nine.  All of these systems do multiple probes
and analysis.

The description of UNIX and OS X tools, in part three, starts with a
twenty page list of UNIX commands.  UNIX utilities tend to be more
single purpose: hping/2 is for IP spoofing and nmap is for port
scanning, but Nessus, SAINT (Security Administrator's Integrated
Network Tool), and SARA (Security Auditor Research Assistant) are
broader, multi-purpose scanners.

Part four is entitled "Vulnerability Assessment," but contains only
chapter fifteen, which offers checklists for securing various
systems, relying primarily on outside sources.

Despite the introduction, this book does *not* describe how to set up
a "Tiger Box."  It lists a few vulnerability scanners and utilities.
There is little in the way of help or explanations, and the material
seems to be based primarily on product documentation and commonly
available guides.  The content actually by Chirillo often seems so
oddly written that it is difficult to parse any meaning from the text.

The book does provide you with a list of vulnerability scanners.  But
then, so would any decent Web search.

copyright Robert M. Slade, 2003   BKHKATTS.RVW   20030330
