The RISKS Digest
Volume 21 Issue 98

Friday, 29th March 2002

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Friendly Fire deaths traced to dead battery
Jamie McCarthy
KNHaw
British Air Traffic Control system outage
Alistair McDonald
Clinton cartoon carries virus
NewsScan
Low-tech election risks: mice
Mike Martin
Black box or Pandora's box?
Monty Solomon
eBay identity theft
Scott Nicol
Software "glitch" changes the colour of the universe
Pete Mellor
Bioinformatics state-of-the-art
Richard A. O'Keefe
Windows XP disables own firewall
Scott Miller
Re: LED lights can reveal computer data
Anthony DeRobertis
Re: Disclaimers
Malcolm Cohen
Re: PayPal's tenuous situation
Ray Todd Stevens
Alun Jones
Re: The RISK of ignoring permission letters
Gene Spafford
Ray Blaak
Pearl Harbor Dot Com, by Winn Schwartau
PGN
REVIEW: "Authentication: From Passwords to Public Keys", R.E. Smith
Rob Slade
Info on RISKS (comp.risks)

Friendly Fire deaths traced to dead battery

<Jamie McCarthy <jamie@mccarthy.vg>>
Tue, 26 Mar 2002 10:47:52 -0500

In one of the more horrifying incidents I've read about, U.S. soldiers and
allies were killed in December 2001 because of a stunningly poor design of a
GPS receiver, plus "human error."
  http://www.washingtonpost.com/wp-dyn/articles/A8853-2002Mar23.html

A U.S. Special Forces air controller was calling in GPS positioning from
some sort of battery-powered device.  He "had used the GPS receiver to
calculate the latitude and longitude of the Taliban position in minutes and
seconds for an airstrike by a Navy F/A-18."

According to the *Post* story, the bomber crew "required" a "second
calculation in 'degree decimals'" — why the crew did not have equipment to
perform the minutes-seconds conversion themselves is not explained.
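
The conversion the crew asked for is trivial arithmetic, which makes the
missing equipment all the more puzzling.  A minimal sketch in C (the
coordinate values below are invented for illustration, not the actual
position):

    #include <stdio.h>

    /* Convert degrees/minutes/seconds to decimal degrees:
       1 degree = 60 minutes = 3600 seconds. */
    double dms_to_decimal(double deg, double min, double sec)
    {
        return deg + min / 60.0 + sec / 3600.0;
    }

    int main(void)
    {
        /* e.g., 31 deg 12' 36" -> 31.2100 (southern/western positions
           would additionally need a sign convention) */
        printf("%.4f\n", dms_to_decimal(31.0, 12.0, 36.0));
        return 0;
    }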

The air controller had recorded the correct value in the GPS receiver when
the battery died.  Upon replacing the battery, he called in the
degree-decimal position the unit was showing — without realizing that the
unit is set up to reset to its *own* position when the battery is replaced.

The 2,000-pound bomb landed on his position, killing three Special Forces
soldiers and injuring 20 others.

If the information in this story is accurate, the RISKS involve replacing
memory settings with an apparently-valid default value instead of blinking 0
or some other obviously-wrong display; not having a backup battery to hold
values in memory during battery replacement; not equipping users to
translate one coordinate system to another (reminiscent of the Mars Climate
Orbiter slamming into the planet when ground crews confused English with
metric); and using a device with such flaws in a combat situation.
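
To make the first of those risks concrete, here is a minimal sketch in C of
the reported failure mode beside the safer blinking-invalid alternative;
nothing below is based on the receiver's actual firmware:

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        double lat, lon;
        bool   valid;   /* false until the operator re-enters a target */
    } target_t;

    static target_t target;

    /* Reported behaviour: after a battery change the stored target is
       silently replaced by the unit's own position -- a value that
       looks perfectly valid but is deadly wrong. */
    void on_battery_replaced_as_reported(double own_lat, double own_lon)
    {
        target.lat = own_lat;
        target.lon = own_lon;
        target.valid = true;
    }

    /* Safer design: mark the value obviously invalid (blinking zeros)
       until the operator re-enters it. */
    void on_battery_replaced_safer(void)
    {
        target.valid = false;
    }

    int main(void)
    {
        on_battery_replaced_as_reported(31.21, 65.85);  /* invented */
        printf("target %.2f %.2f (%s)\n", target.lat, target.lon,
               target.valid ? "VALID" : "INVALID");
        return 0;
    }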


Friendly Fire deaths traced to dead battery

<KNHaw <knhaw@rockwellcollins.com>>
Tue, 26 Mar 2002 14:35:01 -0800

  [...]

The article states: "Nonetheless, the [anonymous, senior defense department]
official said the incident shows that the Air Force and Army have a serious
training problem that needs to be corrected.  "We need to know how our
equipment works; when the battery is changed, it defaults to his own
location," the official said.  "We've got to make sure our people understand
this."

It also states: "...it is not a flagrant error, a violation of a procedure,"
the official said. "Stuff like that, truth be known, happens to all of us
every day — it's just that the stakes in battle are so enormously high."

  [Full article submitted by several others.  TNX.  PGN]


British Air Traffic Control system outage

<Alistair McDonald <alistair@inrevo.com>>
Thu, 28 Mar 2002 06:52:50 +0000

One of the British air-traffic control systems crashed on 27 Mar 2002, and
affected airports across Britain.  Hundreds of flights were canceled or
delayed.  A spokesman said that this computer was not connected with the
computers at the new Swanwick ATC centre in Hampshire (which opened six
years late and millions of pounds over budget).  ["connected with" is of
course ambiguous in this context.  PGN-ed]
  http://uk.news.yahoo.com/020327/80/cvck5.html

    [Simon Waters reported this case also at
      http://news.bbc.co.uk/hi/english/uk/newsid_1897000/1897885.stm
    PGN]


Clinton cartoon carries virus

<"NewsScan" <newsscan@newsscan.com>>
Wed, 27 Mar 2002 08:17:36 -0700

McAfee, the anti-virus software company, says a new virus called MyLife.B
is being circulated as an e-mail attachment featuring a cartoon about former
president Bill Clinton. A McAfee executive says, "If this one does reach
large proportions, it will be a very costly virus because most consumers
don't have good backup methods for their operating system or important files
on the C drive." The virus e-mails itself to everyone in a user's Microsoft
Outlook address book or MSN Messenger contact list. The virus will cause
damage only if you open the attachment — so don't open it! (*USA Today*,
26 Mar 2002; NewsScan Daily, 27 March 2002)
  http://www.usatoday.com/life/cyber/tech/2002/03/26/viruses.htm


Low-tech election risks: mice

<"Mike Martin" <mike_martin@altavista.net>>
Wed, 27 Mar 2002 12:58:46 +0900

Those concerned about the risks of high technology voting methods should
remind themselves that low-tech methods (ballot papers marked with a pencil,
transported and counted under the watchful eye of scrutineers) present their
own risks.  *The Bangkok Post* reports
  http://www.bangkokpost.com/270302_News/27Mar2002_news06.html
that, following voting in a by-election on March 3, mice managed to climb
into one of the ballot boxes and chew up ballots.

The winning candidate had a 65-vote lead when the undamaged votes were
counted, and it was estimated that scraps left by the mice represented
another 40 papers. "This result still has to be endorsed by the Election
Commission," reports *The Post*.

  [Revised URL fixed in archive.  PGN]


Black box or Pandora's box?

<Monty Solomon <monty@roscom.com>>
Sun, 24 Mar 2002 17:19:07 -0500

Black box or Pandora's box?

Most new vehicles come equipped with data recording technology that
can help accident investigators. But the computer device has its
critics, who fear the overstepping of "Big Brother."  [...]
  http://www.phillyburbs.com/intelligencerrecord/article1.asp?F_num=1484073


eBay identity theft

<Scott Nicol <sbnicol@mindspring.com>>
Wed, 27 Mar 2002 13:13:49 -0500

Interesting article at <http://zdnet.com.com/2100-1106-868306.html>.

Summary: You can run a dictionary attack on an eBay account, because eBay
doesn't lock an account for invalid logins, no matter how many invalid
login attempts are made.

eBay doesn't lock accounts for invalid login attempts because "unscrupulous
bidders might try to sabotage their competitors by locking out their
accounts or that legitimate users may find themselves unable to log in after
an attempted dictionary attack".  So I guess identity theft is not as bad as
these other possibilities?

Then there's this quote: "We're trying to figure out a way that we can adopt
it without disclosing how the process works".  It's a pretty simple
process - 3 strikes (or whatever) and you're out.  I assume from this
quote that they mean they want to implement something that is not so simple,
i.e. locking only if it appears to be a dictionary attack.  This is security
through obscurity - it won't take long before somebody figures out what
constitutes a dictionary-attack pattern, then modifies their dictionary
attack to avoid it.
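
For what it's worth, the simple version really is simple.  A minimal sketch
in C (threshold and names invented here; this is not eBay's mechanism),
which also makes the lockout denial-of-service worry concrete:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_FAILURES 3

    typedef struct {
        int  failures;
        bool locked;
    } account_t;

    /* Returns true on a successful login.  After MAX_FAILURES
       consecutive bad passwords the account locks -- which is exactly
       the handle eBay fears: anyone who knows your user name can lock
       you out on purpose. */
    bool try_login(account_t *acct, const char *given, const char *correct)
    {
        if (acct->locked)
            return false;
        if (strcmp(given, correct) == 0) {
            acct->failures = 0;      /* success resets the counter */
            return true;
        }
        if (++acct->failures >= MAX_FAILURES)
            acct->locked = true;
        return false;
    }

    int main(void)
    {
        account_t acct = { 0, false };
        for (int i = 0; i < 4; i++)
            printf("attempt %d: %s\n", i + 1,
                   try_login(&acct, "guess", "secret") ? "ok"
                   : acct.locked ? "locked" : "failed");
        return 0;
    }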

Scott Nicol <sbnicol@mindspring.com>


Software "glitch" changes the colour of the universe

<Pete Mellor <pm@csr.city.ac.uk>>
Wed, 13 Mar 2002 00:35:43 +0000 (GMT)

As reported on the "Broadcasting House" programme on BBC Radio 4,
Sunday 10th March:-

Scientists at Johns Hopkins University have spent several years calculating
the weighted average of the electromagnetic frequency of emissions from all
galaxies in the observable universe.  They concluded their research by
announcing last month that, on average, the universe is turquoise.

Last week, they announced that, due to a software "glitch", they had
miscalculated, and that the universe is, in fact, beige.

Broadcasting House are threatening legal action, claiming that they have
just had their studio painted turquoise in order to be in harmony with the
rest of the universe.

Peter Mellor, Centre for Software Reliability, City University,
Northampton Square, London EC1V 0HB UK  NEW Tel.: +44 (0)20 7040 8422


Bioinformatics state-of-the-art

<"Dr Richard A. O'Keefe" <ok@cs.otago.ac.nz>>
Wed, 13 Mar 2002 18:26:28 +1300

Bioinformatics is a hot topic at this university, and the computer science
department is just starting to get involved.  As part of trying to learn
about this field, I thought I'd read a couple of the better-known programs.
To be honest, I thought I'd run splint (formerly known as lclint) over them
and find a minor bug or two.  I'm not going to name either of these
programs, but one of them was particularly interesting because we were
thinking of having a student make a parallel version to try out a parallel
architecture one of our people is interested in; normal runs of this
program on recent PCs can take about 3 weeks.

I don't know what art these programs are state-of; possibly macrame.
They certainly aren't even 1970's state of the programming art.

* indentation inconsistent, crazy, or both, with lines up to 147 columns
  wide (fix with indent)
* lots of dead variables (fix with quick edit)
* array subscripts that could go negative (use unsigned char rather than
  char in a couple of dozen places, phew)
* failure to comprehend that C++ prototypes and C prototypes are
  different (fix by changing () to (void) in too many places)
* #define lint ... so that lint falls over (rename lint to Lint in a
  dozen files)
* assumption that long int = 32 bits (one program) or that int = 32 bits
  (the other) (not yet done, but use inttypes.h with a local backup)
* string->integer code that gets INT_MIN wrong (rip out, plug in code
  known since 60s)
* using %ld format with int arguments (*printf and *scanf), a real
  problem because the machines I have access to are 64-bitters and it'd
  be nice if the programs ran in LP64 mode
* gcc, lint, splint find variable used before initialised (see next
  item)
* technically legal syntax with no semantics: double matrix[][] as
  function argument.  (Scream, bang head on wall, write this message;
  see the sketch after this list.)
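
Two of these bug classes, reconstructed as a minimal C example.  Since the
programs are unnamed, the code below is illustrative rather than quoted:

    #include <stdio.h>
    #include <inttypes.h>

    static int score[256];      /* lookup table indexed by character */

    int lookup_bad(char c)
    {
        /* If plain char is signed and a byte >= 128 arrives (say,
           from binary input), this subscript goes negative:
           undefined behaviour. */
        return score[c];                     /* WRONG */
    }

    int lookup_good(unsigned char c)
    {
        return score[c];                     /* always 0..255 */
    }

    int main(void)
    {
        int n = 42;
        /* printf("%ld\n", n);  -- WRONG: %ld with an int argument is
           undefined on LP64 machines, where long is 64 bits. */
        printf("%d\n", n);                   /* format matches type */

        /* <inttypes.h> makes a 32-bit assumption explicit. */
        int32_t m = 123456789;
        printf("%" PRId32 "\n", m);
        return 0;
    }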

Is it reasonable to expect people with a biochemistry or mathematics
background to write clean well-engineered code?  No.  For the importance of
the topic, and the sums of money involved, is it reasonable to expect that
they'll have their programs cleaned by someone else before release?  I think
it is.  With the pervasive lack of quality I'm seeing, I don't trust _any_
of the results of these programs.  I have to wonder how many published
results obtained using these programs (and fed back into databases that are
used to derive more results which are ...) are actually valid.


Windows XP disables own firewall

<Scott Miller <scottamiller@usa.net>>
Thu, 14 Mar 2002 05:43:05 -0500

Caveat: this is third-hand; I've no way to test.  However, the original
reporter seems to have done sound observation and corroboration, and this
could be important.  Extract from a report from Tim Loeb posted on Jerry
Pournelle's mail page (www.jerrypournelle.com/mail/currentmail.html):

...What happens is this: either during the initial setup of Earthlink as a
network connection via the Network Connection Wizard, or later as an
explicit modification, the user attempts to activate XP's Internet
Connection Firewall.  All seems to go well.  If the user is on-line with the
target connection active at the time, he/she will be advised that not all
features can be implemented until sign-off and a fresh log-on.

Reviewing the status of that active connection will show that the check box
for Enable Internet Connection Firewall IS checked, and the user would
naturally think the protection is in place.  Running the tests on Steve
Gibson's site right then, with the active connection unbroken since enabling
the firewall, will show that the machine is indeed in full "stealth" mode,
and naturally most people would now assume the issue has been successfully
addressed.  WRONG!

A fresh log-on (via Earthlink using their dialer at least - I have no way to
test other ISP connections and/or associated software) DISABLES the
firewall, and the machine is completely open to probes and hacks!  I've
spent hours testing this scenario, and the result is always the same: while
I can enable the Internet Connection Firewall and have it work ONCE, as soon
as I log off the network and back on again the protection disappears, and
the "enable Internet Connection Firewall" box reverts to being unchecked.
Frankly I don't know what's happening here, but it is happening on two
separate machines that have never been on a network together...


Re: LED lights can reveal computer data (Simicich, RISKS-21.95)

<Anthony DeRobertis <asd@suespammers.org>>
Wed, 13 Mar 2002 22:17:53 -0500

My Lego Mindstorms set communicates with both infrared and visible LEDs. So
does your television remote control.  And many other things.

If you want to communicate with LEDs, you can. However, I doubt very much
that it is easy to read data passing over a modem from the activity light,
unless it only lights for, e.g., 1s.  Even then, at reasonable data rates
--- especially since you have no ECC coding or even clock sync --- it seems
nearly impossible.


Re: Disclaimers (Bacon, RISKS-21.95)

<Malcolm Cohen <malcolm@nag.co.uk>>
Wed, 13 Mar 2002 11:15:21 +0000 (GMT)

>...  "Was the original e-mail monitored - WITHOUT the recipient's consent?"

The recipient has nothing to do with it.  It is the sender who has copyright
on the email (the recipient is not entitled to publish it).

Just like letters and other "ordinary" correspondence, the sender can show
it to anyone else.  Obviously, by choosing to use a monitored email system,
he has chosen to let it be monitored.

>How long before the 'thought police' in the BBC extend their monitoring...

Well, actually, it is normal company policy at most places to open all
incoming correspondence (e.g. letters) as a matter of course!  And it's not
unheard of for certain companies (e.g. mail-order ones) to record all
telephone calls as a matter of course.

And how you imagine that postcards and faxes are "not examined" by anyone
involved with the delivery, I don't know!  They have to start reading the
thing to see who to give it to; it's human nature to look at the rest.

Use of company phones (some places run a "no private calls" policy), company
fax machines, and the company mail service is obviously all down to company
policy.  Why one would imagine that these things are provided for the personal
benefit of employees rather than to conduct company business, I don't know.

As long as the employees know what the policy is I see no grounds for
complaint (other than to grumble about the policy being strict).

Malcolm Cohen, NAG Ltd., Oxford, U.K.  (malcolm@nag.co.uk)


Re: PayPal's tenuous situation (Max, RISKS-21.94)

<"Ray Todd Stevens" <raytodd@kiva.net>>
Wed, 13 Mar 2002 11:04:32 -5

I use PayPal from the vendor side, and I can assure you that you did not
quite understand the system.  Actually, most people I know of who make
extensive use of PayPal end up with a "fraud investigation hold" on their
accounts from time to time.  PayPal seems to have a system that monitors
transactions for weird activity and automatically puts such a hold on
accounts.  Then it appears that a human reviews the activity and
investigates.  If you receive a drastic increase in the number of
transactions, you get flagged.  What got me was having money arrive and then
immediately be sent somewhere else.  So a fraud hold does not mean that there
is fraud, but that there appears there may be.

A fraud hold does mean that you don't have access to the funds coming in.
You can not issue bills to people.  (That is, you can't use the PayPal
system to ask people to pay you.)  More important, a person with a fraud
hold can't access the funds.  They can issue refunds, but may not send the
money to other people.  They also may not withdraw funds.  This means that
PayPal probably has your money and you will get it back.  It also means
that, if their automatic system flags your account, you can continue to do
business for the period of time the investigation takes.  In my case one
hold was about 2 hours and another was about 8 hours.

I hope this helps you and the group understand this system better.

Ray Todd Stevens, Senior Consultant, Stevens Services, Suite 21
3754 Old State Rd 37 N,  Bedford, IN 47421  1-812-279-9394  Raytodd@kiva.net


Re: PayPal's tenuous situation (Bayley, RISKS-21.94)

<Alun Jones <alun@texis.com>>
Tue, 12 Mar 2002 20:26:35 -0600

As a merchant myself, accepting credit cards for some time, I can state
quite categorically (and with some rancour) that the approval from the
credit-card company is by no means whatsoever a guarantee of payment.  It
provides a merchant with pretty close to no protection at all.  I've been
provided with chargebacks (which are automatically deducted from my
business' accounts) on transactions where I have meticulously verified that
the credit-card company gave me authorisation.

As far as I can make out, the only "guarantee" is that the checksum
matches, the card hasn't expired through old age, and probably hasn't been
reported as stolen any time longer than a week ago.

Oh, and chargebacks may get submitted to you many months after the original
purchase.  I had one bank try to process a chargeback about two years after
the original purchase.  The number of chargebacks submitted works against
you, as well, as the credit-card company will increase your "discount rate"
(the percentage of the transaction that they take from you) if you have too
many.  Is this to cover their expenses in handling those chargebacks?  Why,
no, of course not.  After all, every chargeback is not only charged at that
same discount rate both coming and going, but also has the helpful addition
of a "service charge" of $25 added on for your convenience.

For as much as credit-card holders may feel concerned about whether their
money is safe in an online transaction (in the USA, law requires it to be
so), the merchants are _always_ the ones left holding the bag.

Texas Imperial Software, 1602 Harvest Moon Place, Cedar Park TX 78613-1419
Fax/Voice +1(512)258-9858


Re: The RISK of ignoring permission letters (Knox, RISKS 21.94)

<Gene Spafford <spaf@cerias.purdue.edu>>
Tue, 12 Mar 2002 21:57:30 -0500

I have a fairly simple response to spam e-mail that claims I requested it,
or can only opt out, or whatever.

I determine the actual sending address, and the domain of any associated
URLs in the message, and I add them to my "black hole" list.  Any future
mail from that address is bounced.  Domains that offer repeated abuses are
added, too.  E-mail in languages other than English with embedded prices,
porno, or URLs to commerce sites automatically go into the list. I also have
added addresses collected in like manner by several friends; I have not used
any of the major anti-spam sites (yet).
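
A minimal sketch in C of the domain check such a black-hole list implies;
the sample entries and function name are invented for illustration:

    #include <stdio.h>
    #include <string.h>

    static const char *blocked[] =
        { "msn.com", "yahoo.com", "hotmail.com", NULL };

    /* Return 1 if the sender's domain, or a subdomain of it, is listed. */
    int is_blocked(const char *sender)
    {
        const char *at = strrchr(sender, '@');
        if (at == NULL)
            return 1;                       /* malformed: bounce it */
        const char *domain = at + 1;
        size_t dlen = strlen(domain);
        for (int i = 0; blocked[i] != NULL; i++) {
            size_t blen = strlen(blocked[i]);
            if (dlen == blen && strcmp(domain, blocked[i]) == 0)
                return 1;                   /* exact match */
            if (dlen > blen && domain[dlen - blen - 1] == '.'
                && strcmp(domain + dlen - blen, blocked[i]) == 0)
                return 1;                   /* "any version of" it */
        }
        return 0;
    }

    int main(void)
    {
        printf("%d\n", is_blocked("spammer@mail.hotmail.com"));  /* 1 */
        printf("%d\n", is_blocked("friend@example.edu"));        /* 0 */
        return 0;
    }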

It doesn't matter what they claim — I no longer see their spam.

Based on a multi-year history of e-mail, I now completely block any e-mail
from msn.com, any version of yahoo.com, and hotmail.com --- I have had
(literally) thousands of spam messages from there, but only 6 legit
correspondents.  I am also blocking 3200 separate addresses and over 6400
other domains.

My spam load is down to only about 10-15 new pieces per day. :-(

And by the way, if this is being read by the pinheads who keep sending out
ads for reconditioned printer cartridges, please know that we will *never*
do business with your firms.  We're keeping a list.


Re: The RISK of ignoring permission letters (Slade, RISKS-21.95)

<Ray Blaak <blaak@telus.net>>
Thu, 14 Mar 2002 04:39:16 GMT

> Does a failure to respond to this type of message constitute a legitimate
> "acceptance" on my part?  (Particularly for those of us from outside the US?)

This can't be right. So, if the e-mail is lost in cyberspace and you never
even receive it, is that the same as implicitly consenting?

Does this not have a direct precedent in snail mail? I am imagining CD
clubs here. You can't be legally obligated by anything that you receive in
the mail and just throw away.


Pearl Harbor Dot Com, by Winn Schwartau

<"Peter G. Neumann" <neumann@csl.sri.com>>
Sun, 24 Mar 2002 16:36:15 PST

Pearl Harbor Dot Com
A novel by Winn Schwartau
Interpact Press
Seminole, Florida, 1-727-393-6600
2002
ISBN 0-9628700-6-4
512 pages

We do not normally review or analyze RISKS-relevant fiction, but this book
seems to make a rather compelling novel out of a surprisingly large number
of security and reliability risk threats that we have discussed here over
the years.  The story echoes one of the fundamental problems confronting
Cassandra-like risks-avoidance protagonists and agonists alike, namely,
that, because we have not yet had the electronic Pearl Harbor, people in
power perceive that there is little need to fix the infrastructural
problems, so why bother to listen to the doom-sayers who hype up the risks?
Well, in this novel, one man's massive craving for vengeance reaches major
proportions, with significant effects on critical infrastructures.  In
the end, the good hackers contribute notably to the outcome.

The book is somewhere within the genre of technothrillers, with a typical
mix of murder, mayhem, intrigue, computer-communication surveillance, and
non-explicit s*x.  I enjoyed it.  It is entertaining, and the convoluted
plot is quite consistent, fairly tight, and to RISKS readers, each incident
is technologically quite plausible — because many of the attacks seem
almost reminiscent of past RISKS cases, sometimes just scaled up a little.

If you read the book, try not to let the sloppy proof-reading bother you;
there are too-frequent typos and grammar glitches, and lots of mispelingz --
for example, Naugahyde is subjected to two different versions, each with at
least two letters wrong, and Walter Reade is mispelt twice, differently, on
the same page!  Incidentally, the author and his previous writings make
several self-referential appearances throughout the story, which might seem
rather self-serving, but does draw attention to the author's long-standing
role in trying to combat what has now become known as cyberterrorism.


REVIEW: "Authentication: From Passwords to Public Keys", R.E. Smith

<Rob Slade <rslade@sprint.ca>>
Mon, 18 Mar 2002 11:57:50 -0800

BKAUTHNT.RVW   20020220

"Authentication: From Passwords to Public Keys", Richard E. Smith,
2002, 0-201-61599-1, U$44.99/C$67.50
%A   Richard E. Smith
%C   P.O. Box 520, 26 Prince Andrew Place, Don Mills, Ontario M3C 2T8
%D   2002
%G   0-201-61599-1
%I   Addison-Wesley Publishing Co.
%O   U$44.99/C$67.50 416-447-5101 fax: 416-443-0948 bkexpress@aw.com
%P   549 p.
%T   "Authentication: From Passwords to Public Keys"

Chapter one looks at the history and evolution of password technology,
and introduces a system of discussing attacks and defences that
provides an easy structure for an end-of-chapter summary.  A more
detailed history appears in chapter two, while chapter three discusses
the enrolling of users.

Chapter four is rather odd: it brings up the concept of "patterns" as
defined in the study of architecture, but doesn't really explain what
this has to do with authentication or the book itself.  The closest
relation seems to be the idea of determining a security perimeter.
The material poses a number of authentication problems and touches on
lots of different technologies, but the various difficulties are not
fully analyzed.

Chapter five is supposed to be about local authentication, but mostly
examines encryption.

Strangely, chapter six inveighs against the complex rules for password
choice and management that are commonly recommended--and then adds to
the list of canons the requirement to assess the security of a system
when choosing a password.  Ultimately the text falls back on the
traditional advice, with a few good suggestions for password
generation.  This place in the text also marks a change in the volume:
the content moves from a vague collection of trivia to a much more
practical and useful guide.

Chapter seven is a decent overview of biometrics, although there is an
odd treatment of false acceptance and rejection rates, and some
strange opinions.  Authentication by address, emphasizing IP spoofing,
is covered in chapter eight, while hardware tokens are discussed in
chapter nine.  Challenge/response systems are reviewed in chapter ten,
as well as software tokens.  Indirect or remote authentication,
concentrating on the RADIUS (Remote Authentication Dial In User
Services) system, is examined in chapter eleven.  Chapter twelve
outlines Kerberos, and has a discussion of the Windows 2000 version,
albeit with limited analysis.  The study of public key (asymmetric)
cryptography in chapter thirteen would be more convincing with just a
few more sentences of explanation about how keys are established.
Chapter fourteen talks about certificates and signing, while fifteen
finishes with some vague thoughts on password storage.

After a slow (but interesting) start, the book does have a good deal
of useful material in the later chapters.  Long on verbiage and a bit
short on focus, this text does have enough to recommend it to security
practitioners serious about the authentication problem.

copyright Robert M. Slade, 2002   BKAUTHNT.RVW   20020220
rslade@vcn.bc.ca  rslade@sprint.ca  slade@victoria.tc.ca p1@canada.com
http://victoria.tc.ca/techrev    or    http://sun.soci.niu.edu/~rslade
