The RISKS Digest

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Volume 23 Issue 80

Wednesday 23 March 2005

Contents

Procurement risks and nonverifiable code
Tim Panton
DEA agent shoots self while demonstrating gun safety
Arthur T.
Boston College loses thousands of SSNs
Geoff Kuenning
Yes, we know what that means!
Tim Connors
Risks of long and short URLs
Arthur T.
GPS
Martyn Thomas
Snowplow fraud and GPS devices
David Tarabar
Re: More uses of satnav/GPS
Roland Giersig
Re: Website hijackings, 302 redirects, and security issues
Drew Dean
Re: Remote physical device fingerprinting
Markus Roth
REVIEW: "The Information Security Dictionary", Urs E. Gattiker
Rob Slade
Info on RISKS (comp.risks)

Procurement risks and nonverifiable code

<Tim Panton <thp@westhawk.co.uk>>
Fri, 18 Mar 2005 13:54:35 +0000

In today's *Independent* newspaper:
  [http://news.independent.co.uk/business/news/story.jsp?story=621293]

> Edward Leigh, the Conservative chairman of the committee, said: "It is
> simply disgraceful that the MoD has spent a quarter of a billion of
> taxpayers' money on the botched procurement of eight Chinook
> helicopters that cannot be flown because the MoD can't determine if
> they're safe."
>
> The problem has arisen because the MoD cannot validate the software
> codes used by Boeing in the helicopters' avionics system and flight
> controls. The US company is not prepared to release these for security
> reasons.

Apparently the contract doesn't specify that the MoD has the right to
see the code.

Open-source avionics, anyone?


DEA agent shoots self while demonstrating gun safety

<"Arthur T." <myspamtrap01@yahoo.com>>
Wed, 23 Mar 2005 10:34:18 -0500

The subject may not sound like a computer-related risk.  The follow-up may
be.

A year after the incident, home video of the incident is on various web
sites.  From Snopes:

  Experts in the field said that the undercover agent should never have been
  videotaped because it could put the agent's life at risk.

  "It puts a lot of undercover agents in jeopardy if their faces are
  videotaped," a masked agent told Local 6 News.  "His identity is
  burned. His identity is known as a police officer and its a potential
  personal safety hazard to himself as well as his family members."

As is often the case with "computer error", this is actually human error.
Even setting the videotaping aside, why was an undercover agent appearing in
public as himself?  Admittedly, computers may have exacerbated the problem.

[Source: http://www.snopes.com/photos/accident/gunsafety.asp]


Boston College loses thousands of SSNs

<Geoff Kuenning <geoff@cs.hmc.edu>>
19 Mar 2005 00:37:16 +0100

  Hackers have invaded a Boston College database of alumni, compromising
  data on up to 100,000 people.  The data includes Social Security Numbers.
  In a choice quote, Jack Dunn of BC ``noted that Boston College will
  hereafter delete Social Security numbers from its records, despite their
  usefulness in maintaining accurate records.''

Question: If every organization that currently stores SSNs waits until
*after* they are hacked before they decide that maybe it's not smart to
expose sensitive data, how many Americans will be left with uncompromised
SSNs?

Liability laws are desperately needed.

More information at:
http://news.zdnet.com/2100-1009_22-5623084.html

Geoff Kuenning   geoff@cs.hmc.edu   http://www.cs.hmc.edu/~geoff/


Yes, we know what that means!

<Tim Connors <tconnors+risks@astro.swin.edu.au>>
Wed, 16 Mar 2005 17:44:46 +1100 (EST)

Whilst booking a ticket for an event in the Food and Wine Festival, I came
across this: http://www.ticketmaster7.com/help/privacy.asp

  4: Data Security Ticketmaster7 will endeavour to take all reasonable steps
  to keep secure any information that we hold about you.  Ticketmaster7 has
  security measures, proprietary secure algorithms, in place to protect the
  loss, misuse and alteration of the information under our control. Our
  secure server software is the industry standard and among the best
  software available today for secure commerce transactions.

Am I too cynical?  Why do people always think "proprietary" means "best"?
Am I just meant to say "I don't think that word means what you think it
means"?  Oh, and "industry standard".  You mean, run by Microsoft, insecure,
buggy, and plainly failing to do its job, but at least it's the normal state
of affairs!

TimC -- http://astronomy.swin.edu.au/staff/tconnors/


Risks of long and short URLs

<"Arthur T." <myspamtrap01@yahoo.com>>
Sun, 13 Mar 2005 10:32:23 -0500

We all know the Risks of long URLs.  They include line-wrap problems and
trying to find an "@" about 100 characters in.

To combat the line-wrap problem, some sites are providing short URLs for any
arbitrary page.  One such is http://tinyurl.com .  The problem here, though,
is that you can't know where you're going until you get there.  This hampers
the anti-phishing advice to type in a URL sent in e-mail.  It could be used
for a range of nefarious or hoax uses.

I looked at the tinyurl site and didn't find any way to expand a compressed
URL.  Since they specifically suggest using their service to hide affiliate
URLs, this is probably deliberate.
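Nothing in HTTP forces a client to follow a redirect, so a cautious reader
can ask the shortening service where a link points without ever visiting the
destination.  A minimal sketch, assuming the service answers a HEAD request
with an ordinary 3xx redirect carrying a Location header (as tinyurl.com
does); the header-inspection helper is separated out so the logic is clear:

```python
import http.client

def redirect_target(status, headers):
    """Return the redirect destination if (status, headers) describe
    an HTTP 3xx response; otherwise None."""
    if 300 <= status < 400:
        for name, value in headers:
            if name.lower() == "location":
                return value
    return None

def expand_short_url(host, path):
    """HEAD-request a short URL and report where it would send us,
    without fetching the destination page itself."""
    conn = http.client.HTTPConnection(host, timeout=10)
    try:
        conn.request("HEAD", path)
        resp = conn.getresponse()
        return redirect_target(resp.status, resp.getheaders())
    finally:
        conn.close()

# Usage (hypothetical short code):
#   expand_short_url("tinyurl.com", "/abc123")
```

Of course, this only tells you the first hop; a determined affiliate-link
hider can chain redirects.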


GPS

<"Martyn Thomas" <martyn@thomas-associates.co.uk>>
Sun, 13 Mar 2005 18:39:44 -0000

Proposals to use GPS for functions that are essential to critical
infrastructure (rail, for example) seem to appear every week.

GPS is a system that relies on weak signals detected by aerials that are not
focused on the position of the satellites. It is therefore extremely
vulnerable to in-channel jamming.

How can we get some reality into these proposals? Or are we destined to be
at the mercy of denial of service attacks by anyone who knows enough
electronics to be able to generate a few watts of RF noise?


Snowplow fraud and GPS devices (Re: RISKS-23.67)

<David Tarabar <dtarabar@acm.org>>
Sat, 12 Mar 2005 09:19:15 -0500

As I reported in RISKS-23.67, for the past several years, the Massachusetts
Highway Department (MassHighway) has required that private snow-removal
contractors carry a GPS equipped mobile phone.

On 11 Mar, the Mass. Attorney General indicted a contractor and an employee
on charges that they did privately-paid work during state time and used
state road salt on this private work.

The contractor was responsible for a section of state highway. During a
heavy snowstorm, the contractor got road salt from a state depot, left his
GPS device in a snow bank on the side of the highway, and drove off to do
his non-state plowing. Later he gave his GPS device to his employee who was
plowing the state highway - so it appeared that two trucks were plowing in
tandem. The contractor confessed to double dipping during this winter and
the 2003-2004 season.

It should be noted that this scheme was NOT discovered via the GPS device.
The State Police got a tip and followed the contractor during a recent
storm. When GPS records were examined, they found that the contractor's GPS
device showed periods of excessive inactivity during most storms this year.
Officials at MassHighway said that it was human error that this fraud was
not detected by examining GPS records.
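The automated check that officials failed to run is easy to sketch.  A toy
version, assuming the GPS fixes have already been reduced to timestamps and
planar coordinates in metres; the thresholds are illustrative, not
MassHighway's:

```python
def inactive_periods(fixes, min_gap_s=3600, max_move_m=50.0):
    """fixes: list of (timestamp_s, x_m, y_m), sorted by time.
    Return (start, end) timestamp pairs during which the device stayed
    within max_move_m of its starting point for at least min_gap_s --
    e.g. a GPS phone left in a snow bank during a storm."""
    periods = []
    i = 0
    while i < len(fixes):
        t0, x0, y0 = fixes[i]
        j = i
        # Extend the window while the device has not moved appreciably.
        while j + 1 < len(fixes):
            _, x, y = fixes[j + 1]
            if ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 > max_move_m:
                break
            j += 1
        if fixes[j][0] - t0 >= min_gap_s:
            periods.append((t0, fixes[j][0]))
        i = j + 1
    return periods
```

Flagged periods during declared snow emergencies would then be a starting
point for a human investigation, not proof of fraud by themselves.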

[Source: *The Boston Globe*,  12 Mar 2005, John Element,
Mass. charges 2 in alleged snowplow scam]


Re: More uses of satnav/GPS (Bacon, RISKS-23.78)

<Roland Giersig <roland+risks23.78@giersig.org>>
Tue, 15 Mar 2005 15:19:41 +0100

In RISKS-23.78, Michael Bacon gave a good discussion of the risks of using
GPS in commercial safety-critical applications, namely railroads.  I want to
address some of the open points and discuss them in more depth:

Determining the position of a train is only one part of the problem; the
second crucial part is communicating this position to other trains.
Currently this works via track-side sensors and track-side signalling, where
data is exchanged between the on-board unit, the track-side systems, and the
track operators.  A critical failure of some part of this system triggers a
set of actions that brings the whole system (i.e., all trains) into a safe
state: signals go to red (luckily trains, unlike airplanes, do have a
simple, safe state :-) ).  If the failure is not too severe, e.g. track-side
communication to the on-board units fails, then the system falls back to a
state where the train driver has to navigate by sight instead of relying on
the electronic systems.  And this is perfectly safe, unlike in air-traffic
control.

The same fall-back would apply to GPS-based positioning: the danger does
not lie in GPS failure (outage), which is easily detected and permits the
same fall-backs as the current systems.

>>> The danger lies in the unknown accuracy of the GPS signal!! <<<

This is where EGNOS and GALILEO come into play.  EGNOS is an additional
information system that will provide up-to-date accuracy information about
the GPS signal, enabling safety-critical systems to react to increased error
in the GPS signal.  GALILEO, on the other hand, being a commercial,
multi-national system, is supposed to be completely independent of the GPS
system (and of foreign influence, hostile actions notwithstanding).
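The reaction described above can be stated as a few lines of logic.  A
sketch with invented names and an illustrative alert limit (a real rail
system would take both from its safety case), assuming the integrity
service reports a protection level bounding the current position error:

```python
def position_mode(protection_level_m, alert_limit_m=20.0, fix_available=True):
    """Decide whether a GPS fix may be used for train positioning.
    protection_level_m: error bound reported by an EGNOS-style
    integrity service; alert_limit_m: maximum tolerable error."""
    if not fix_available:
        # Outage: easily detected, same fall-back as a track-side failure.
        return "fallback: line-of-sight"
    if protection_level_m > alert_limit_m:
        # The fix exists but its accuracy cannot be trusted.
        return "fallback: line-of-sight"
    return "use GPS fix"
```

The point is that degraded accuracy, once it is *reported*, is handled the
same way as any other detected failure: the system drops to its safe state.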

My conclusion: from a technological point of view, the use of GPS-based
positioning systems does not produce an additional risk that cannot be
handled.

Still, most criticism from the former article holds: increasing traffic
density via new technology is a trade-off: if the new high-tech system
(GPS-based) has to fall back to a technologically lower state (line-of-sight
navigation), all of a sudden the whole infrastructure is overloaded, which
will result in a severe break-down or even deadlock.

Whether this trade-off is worthwhile should be closely examined.  But I
have a hunch that the decision to install such a system will not be based on
such considerations...


Re: Website hijackings, 302 redirects, and security issues

<Drew Dean <ddean@csl.sri.com>>
Mon, 14 Mar 2005 12:09:04 -0800
  (Chmielewski, RISKS-23.78)

I co-wrote an early paper related to Tim's RISKS posting: see
  http://www.cs.princeton.edu/sip/pub/spoofing.php3


Re: Remote physical device fingerprinting (Kohno, RISKS-23.77)

<Markus Roth <atempest@bigfoot.com>>
Sun, 13 Mar 2005 00:32:05 -0500

> Together with Andre Broido and kc claffy from CAIDA, I have been working
> on methods for remote physical device fingerprinting, or remotely
> fingerprinting a physical device without any modification to or known
> cooperation from the fingerprintee.

I'd like to summarize the meat of the paper and point out that this is not
as big a privacy RISK as one might initially assume. Remember, this is just
the gist of the paper; I have simplified many things.

First, a definition of "clock skew": A clock with skew is gaining or losing
time. For example, a wall clock with a 2-minute skew that correctly shows
12:00 at noon, will show 1:02 when it is one o'clock, then 2:04 when it is
two o'clock, next 3:06 at three, and so on.  Similarly, a clock with a -2
minute skew loses 2 minutes every hour.

This is different from a clock running fast or slow. A clock running 2
minutes fast would show 12:02 at noon, 1:02 at one o'clock, 2:02, 3:02, etc.

The authors' experiments demonstrate that the various clocks found on a
computer have tiny skews. The skews range from roughly -50 to 50
microseconds every second, and they stay constant for a particular
computer. The authors say that there is enough statistical variation among
skews to tell apart one computer from another if you can somehow watch a
targeted computer's system clock.

How do you watch the clock on a remote computer? It turns out that most
implementations of TCP/IP put a 32-bit timestamp into each TCP packet. The
authors' trick is to monitor thousands of packets from a targeted computer
over the course of minutes or hours; then, using some linear algebra, they
determine the targeted system's skew.
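A toy version of that estimate, substituting ordinary least squares for the
paper's more careful linear-programming fit (the function name and the test
skew are illustrative): pair each locally observed arrival time with the
remote TCP timestamp and take the slope of the fitted line.

```python
def estimate_skew_ppm(local_times, remote_times):
    """Least-squares slope of the remote clock against the local clock,
    returned as parts per million (i.e. microseconds per second).
    local_times: observer's packet-arrival times, in seconds;
    remote_times: the remote TCP timestamps, converted to seconds."""
    n = len(local_times)
    mx = sum(local_times) / n
    my = sum(remote_times) / n
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(local_times, remote_times))
    sxx = sum((x - mx) ** 2 for x in local_times)
    slope = sxy / sxx          # remote seconds per local second
    return (slope - 1.0) * 1e6  # deviation from 1.0, in ppm
```

On simulated data with a 45 ppm skew, the estimate recovers 45
microseconds per second; real packet traces are noisy, which is why the
authors need thousands of packets over minutes or hours.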

For example, a laptop accessing the Internet from New York may have its skew
measured as 45 microseconds per second. Later, the same laptop connecting to
the 'net from Berlin would again show a skew of 45 microseconds per second.

The authors claim that their method will allow you to learn 6 bits of
information about a device.  Well, 2^6 is only 64 distinct values.  If there
are 200 million computers on the Internet, their method would divide the
world into 64 groups of roughly 3 million computers each.  Your computer
would look identical to 3 million other computers!

This technique would be useful to show negative but not positive results. If
a laptop in Berlin gives a skew value of 26 microseconds per second, you can conclude
that it is a different laptop than the one in New York.  But if an arbitrary
laptop in Berlin shows a 45 microsecond skew, you can only say that there
are 3 million other computers like it. You cannot conclude that it is the
same laptop that was once in New York.


REVIEW: "The Information Security Dictionary", Urs E. Gattiker

<Rob Slade <rslade@sprint.ca>>
Mon, 14 Mar 2005 08:08:57 -0800

BKINSCDI.RVW   20041222

"The Information Security Dictionary", Urs E. Gattiker, 2004,
1-4020-7889-7, U$145.00/C$203.50
%A   Urs E. Gattiker dictionary@weburb.com
%C   233 Spring St., New York, NY   10013
%D   2004
%G   1-4020-7889-7
%I   Springer-Verlag/Kluwer
%O   U$145.00/C$203.50 212-460-1500 800-777-4643
%O  http://www.amazon.com/exec/obidos/ASIN/1402078897/robsladesinterne
  http://www.amazon.co.uk/exec/obidos/ASIN/1402078897/robsladesinte-21
%O   http://www.amazon.ca/exec/obidos/ASIN/1402078897/robsladesin03-20
%O   tl n rl 1 tc 0 ta 2 tv 1 wq 0
%P   411 p.
%T   "The Information Security Dictionary"

A good dictionary of information security terms is seriously needed by
the security community, and by the computer and communications
industry as a whole.  The "Internet Security Dictionary" (cf.
BKINSCDC.RVW), by Phoha, was a good start, but needs to be expanded
and updated.

I have been working on a security glossary myself, so this might be
yet another case of bias or conflict of interest.  I should also note
that, although it is widely believed that I enjoy trashing books, I am
actively looking for works that I can recommend.  Oh, it's easier to
point out flaws in a work than it is to say why someone writes well.
However, I take no particular pleasure in having to savage a work as
thoroughly as this one requires.

Far too many of the definitions contain misleading, incomplete, or
outright false information.  Anomaly-Based Intrusion Detection Systems
are said to discover known attacks, which might be true, but
signature-based systems would normally be considered better for that
purpose: you want anomaly-based detection to discover previously
unknown attacks.  The entry for Authentication does not list the
standard factors of something you know, have, or are.  The definition
for the Bell-La Padula security model doesn't provide any details of
the pattern itself, does not mention confidentiality (a central
concept), and does not refer to the Trusted Computer System Evaluation
Criteria and other outcomes of the paradigm.  The Biba integrity model
is listed as "Bibra."

Patent mentions the ability of the patent holder to restrict use, but
doesn't mention that patent is only applicable to devices and that the
device must be novel, useful, and non-obvious.  Reference is made to
copyright (the definition of which is equally flawed) and to Tables
16A and B, neither of which alludes to intellectual property laws.  No
listing is given for trade secrets or trade marks.  Both the entry for
patent and the account of copyright state that patents protect ideas,
which is specifically untrue.

There is a listing for Illegal Software (software used without a
licence), although there isn't one for piracy.  There is one for
Software Piracy, but neither of the two cross-references points to
Illegal Software.  There is an entry for Cable, as in cable TV, but
nothing for cabling as in network media, which has much greater
importance in terms of information security.  Challenge Handshake
points to Handshake (there is no listing for challenge/response) and,
for some completely inexplicable reason, also to Circuit-Level
Gateway.

The sub-listing for Content Filtering (which comes under filtering,
rather than content) makes no mention of the origin of the practice in
restricting access to objectionable material.

"DoS on the 13 Internet Root Servers" is not the title of a famous
Cultural Revolution artwork, but a reference to the October, 2002
attack against the top-level DNS servers.  Almost no details of the
event are provided (and this was actually a *distributed* denial of
service attack).

Digital Versatile Disk (generally used as an update to Digital Video
Disk, the original expansion of the DVD acronym) is defined as using
both sides of the disk (almost unknown in commercial DVDs) and also
notes a capacity of 17 gigabytes, which would actually require both
sides and both depths.

One of the sub-entries under Disinfection is Generic Scan String,
which has nothing to do with disinfection of computer viruses.

"Activity monitor" is defined solely in terms of employee
surveillance, and ignores the specialized use in malware detection.

The entry for Cookies states (incorrectly) that they can only be used
by the originating site.  However, there is a cross-reference to table
18A (a mere 140 pages from the entry).  Table 18A has no mention of
the term.  Table 18B does have a listing for Java Cookies--which
contradicts the earlier assertion, and says that other parties can
read cookies.  Defence-In-Depth has a reference to Table 6A.  There is
no 6A, although there is a 6.  Table 6 contains no reference to
defence-in-depth.

Urs isn't always certain of his definitions: an Application Level
Gateway "could" be a type of firewall.  However, in that case, he is
certain that it re-addresses traffic--which is actually the function
of network address translation (NAT), generally considered a type of
circuit-level proxy firewall.  Phishing is equated with "carding"
(obtaining or trading in credit card numbers for fraudulent use) while
the more definitive practice of obtaining banking information is
ignored.  (We are told that avoiding the running of attachments
prevents phishing.  Phishing scams seldom make use of attachments or
executable code.)

Cross references are not always accurate.  On page 12 the listing for
"Anti-Virus Researcher" points to the entry for "Research."  There is
no material for Anti-Virus Researcher in that entry, but there is in
the later entry for "Researcher."  Ethics points to Justice, which
doesn't say anything about ethics.

Some of the terms included are rather odd.  "Binders" are supposed to
be utilities that bind multiple code modules together.  Most people
refer to these utilities as linkers.  "Derf" was used as a term for
hijacking sessions on logged-in terminals, but in a limited setting
and quite a while back: the term is pretty much unknown today.

The definitions given for some entries don't seem to have any real
meaning.  For example, "Virus Algorithm means a set of operations or a
procedure designed to create a virus problem."  Many long definitions
appear to have been patched together from disparate and unrelated
sources, not listing additional meanings, just appending disjointed
verbiage.

Some of the definitions given are correct.  Heck, some are copied
straight out of government documents.  But Gattiker has included a
number of terms which are either generic, or have only the most
tenuous of connections to security.  There is an entry for Computer
Mouse.  There is a listing for the fictional cyberpunks, but no
mention of the real-world cypherpunk community.  The definition for
Virology deals only with biology.  The entry for Virus is only
relevant to (pretty much obsolete) file infectors.

As could be expected with a work of this calibre, a number of terms
are simply missing.  There are entries for false positive and false
negative, but none for false acceptance or false rejection (the more
widely known terms for similar concepts).

It is difficult to give a complete picture of the unreliability of
this text.  It would be easy for me to simply do an exhaustive search
of every minor error, and in a few pages collect all that might be
wrong with an otherwise great work.  But in this volume we have
spurious listings, missing entries, definitions that make no sense to
the reader, explanations that are erroneous, and even opinion stated
as fact.  (The man, or manual, pages of the UNIX system, incorrectly
identified as "main" pages, are said to be technobabble, presumably
because Urs doesn't understand their cryptic nature.)  Slang is
included and technical terms are left out.

Probably the best way to give a flavour of the quality of this work is
to reproduce some listings.  (I have tried to be as careful as
possible in copying the exact writing and punctuation of the entries
as they appear in the book.)

A listing that sounds good but makes no sense (as well as being a non-
sequitur) provides a good feel for the quality of language and logic
representative of the work as a whole:

    Homomorphic Encryption is a cryptographic technique in which
    the sum of two encrypted values is equal to the encrypted sum
    of the values.  The signature operation in public key
    cryptography is an exponentiation operation using the private
    key as the exponent.

According to "Algebraic Aspects of Cryptography" by Neal Koblitz (cf.
BKALASCR.RVW), and a number of other references, homomorphism refers
to groups or sets rather than express algorithms or techniques.
Homomorphic encryption can be useful for signature or authentication
systems where anonymity is important (such as in voting procedures)
but it probably isn't necessary to specify exponentiation.

The sub-entry for "Anti-Virus Researcher or Security Assurance
Researcher" on page 270 is lengthier, and requires a bit more
dissection:

    Anti-Virus Researcher or Security Assurance Researcher may
    conduct his or her research in many ways.  An example might be
    a lawyer searching among old court cases for legal precedents
    regarding Privacy and Hacking.

    An epidemiologist studying age groups or cohorts and hip-
    fracture incidents to an Anti-Virus Researcher studying
    malicious code to discover programming patterns and
    characteristics (see Theory).

    Often Anti-Virus Researcher is used synonymously with "product
    development."  Sometimes, a "bonafide antivirus researcher's"
    role within his or her organization might be documented by
    independent examination (see also Appendix 3 and badguys
    website).

It should be reasonably obvious that the specialized activity of
antivirus research and the more general undertaking of security
assurance research are not exactly synonymous.  In addition, very
little antivirus research involves case law.  If you are confused by
the meaning of the sentence about an epidemiologist, you are not
alone.  Again, very little antivirus research involves hip-fractures.
Some AV researchers are also product developers, but the two
activities are hardly identical.  The reference to "badguys website"
is to the "Bad Guys" Website (www.badguys.org) run by Sarah Gordon,
which does have some information about legitimate virus research, in
opposition to the blackhats who write viruses and call themselves
researchers.

If, following the cross reference to Theory, we flip to page 324, we
find a sub-entry for "Anti-Virus Theory":

    Anti-Virus Theory if it would exist would be based on
    Inductive or Deductive Research outline phenomena and their
    relationship to other issues.  Hence, investigation of the
    subject aimed at uncovering new information in a systematic
    way, while permitting a group of statements about how some
    part of the world works, in this case Computer Viruses.  A
    good Anti-Virus Theory would allow us to generalize from one
    virus to the next (see Tables 19A and 19B).

The wording here would seem to imply that Anti-Virus Theory does not
exist, which raises the immediate question of why you would include an
entry for a non-existent entity.  Induction and deduction are fairly
broad tools: the first sentence doesn't really appear to say anything
useful about the type of theory or research.  Tables 19A and B are
nowhere near that entry.  In fact, you will find them on pages 207 and
209-11.  Neither do the tables have anything to do with viruses: they
talk about the costs and prevalence of various forms of Internet
access.  In any case, that entry doesn't appear to say anything about
any theory to do with computer viruses, beyond the definition of a
theory in general.

(If we follow the further cross-reference to "Methodology," we find no
allusion to antivirus research at all.)

Errors in formatting (particularly indenting) are rife, and make it
difficult to follow the structure of entries, or the book as a whole.
Bold text sometimes means that the term is another entry, but
sometimes it doesn't seem to mean anything.  Sometimes the formatting
problem might explain entries that appear to be out of place, but I'm
not sure that they explain the sequential listings of Autopsy,
Authorization, and Auto Dial-Back.

There are numerous typographical errors, mistakes in spelling and
grammar, and tremendous inconsistencies in capitalization.  Even the
most cursory copy and style edit would have improved things
enormously.

The security community and industry deserve better than this.
Students of security need more accurate information than is provided
in this work.  Society as a whole is relying on information security
and requires more credible content than this book contains.

copyright Robert M. Slade, 2004   BKINSCDI.RVW   20041222
rslade@vcn.bc.ca      slade@victoria.tc.ca      rslade@sun.soci.niu.edu
http://victoria.tc.ca/techrev    or    http://sun.soci.niu.edu/~rslade
