Forum on Risks to the Public in Computers and Related Systems
ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator
Volume 22: Issue 23
Friday 6 September 2002
Contents
Appeals court overturns own Web site ruling (Monty Solomon)
Citibank e-mailing raises privacy concern (Monty Solomon)
Greek government bans electronic games (Phil Pareas via Max)
Background checks are more important than education (Adam Shostack)
EDIS bulletin on power outages (Dave Stringer-Calvert)
Infrastructure risks and Cyberterrorism (Fred Cohen)
Re: Homeland Insecurity (Stephen Fairfax)
Excellent quote about wireless security (Al Rizutto)
Re: Warchalking the Networks (Michael Cook)
MS02-050: Certificate validation flaw could enable identity spoofing (Monty Solomon)
Info on RISKS (comp.risks)
Appeals court overturns own Web site ruling
<Monty Solomon <monty@roscom.com>>
Wed, 28 Aug 2002 22:24:49 -0400

Heeding prosecutors' pleas, the federal appeals court in San Francisco has
overturned its own ruling that would have made it much harder to peek at
private Web sites.  The unusual reversal by the Ninth U.S. Circuit Court of
Appeals came after federal and state prosecutors warned that the ruling would
hamper investigations of child molesters who recruit victims online.  In its
earlier ruling, the court said an airline's furtive entry into a pilot's
personal Web site, where criticism of the company was collected, was a
possible violation of the federal wiretap law.  The 1986 version of that law
prohibits any unauthorized interception of an electronic communication.  A
lawyer for online privacy-rights group the Electronic Frontier Foundation
said a certain amount of inconvenience for police is often the price of
protecting privacy.  [Bob Egelko, 28 Aug 2002,
http://www.newsfactor.com/perl/story/19210.html]
Citibank e-mailing raises privacy concern
<Monty Solomon <monty@roscom.com>>
Tue, 3 Sep 2002 19:43:58 -0400

In a move that has raised privacy concerns, Citibank used an outside company
to gather e-mail addresses of its credit-card customers and then sent e-mails
offering recipients access to sensitive financial data without verifying each
address actually belonged to the customer.  Citibank is reviewing the program
and said there is a roadblock in place to prevent sensitive information from
reaching the wrong people.  Still, the matter, which grew out of a pilot
Citibank initiative seeking more effective electronic communications with its
customers, may raise questions about whether federal regulation is needed to
ensure consumers' online privacy is protected.  ...
[Source: Messages sent to customers without address verification, Yochi J.
Dreazen, The Wall Street Journal, 3 Sep 2002]
http://www.msnbc.com/news/802701.asp
Greek government bans electronic games
<Max <max7531@earthlink.net>>
Wed, 04 Sep 2002 17:28:39 -0700

Damn.  I didn't believe this message from Phil Pareas at first.  Should be an
interesting test of taking DMCA to the extreme.  I just don't think that
making everyone a criminal is a good way to reduce crime.  :)   Max

> In Greece, use a Game Boy, go to jail
> By Matt Loney and Rupert Goodwins
> Staff Writers, CNET News.com
> September 3, 2002, 11:18 AM PT
>
> In Greece, playing a shoot-'em-up video game could land you in jail.  The
> Greek government has banned all electronic games across the country,
> including those that run on home computers, on Game Boy-style portable
> consoles, and on mobile phones.  Thousands of tourists in Greece are
> unknowingly facing heavy fines or long terms in prison for owning mobile
> phones or portable video games.  Greek Law Number 3037, enacted at the end
> of July, explicitly forbids electronic games with "electronic mechanisms
> and software" from public and private places, and people have already been
> fined tens of thousands of dollars for playing or owning games.  The law
> applies equally to visitors from abroad: "If you know these things are
> banned, you should not bring them in," said a commercial attaché at the
> Greek Embassy in London, who declined to give her name.  Internet cafes
> will be allowed to continue to operate, providing no games-playing takes
> place.  If a customer is found to be running any sort of game, including
> online chess, the cafe owner will be fined and the place closed.  The
> Greek government introduced the law in an attempt to prevent illegal
> gambling.  According to a report in the Greek newspaper Kathimerini, Greek
> police will be responsible for catching offenders, who will face fines of
> 5,000 to 75,000 euros (about $4,980 to $74,650) and imprisonment of one to
> 12 months.  "The blanket ban was decided in February after the government
> admitted it was incapable of distinguishing innocuous video games from
> illegal gambling machines."  ...
> http://news.com.com/2100-1040-956357.html?tag=dd.ne.dht.nl-hed.0
Background checks are more important than education
<Adam Shostack <adam@homeport.org>>
Sun, 1 Sep 2002 18:25:47 -0400
> Thousands of teachers will not be able to take classes at the start of the
> new term because character checks on them will not have been completed,
> the government has admitted. [...] Leicestershire was one of the first
> areas of the country to be affected by the vetting backlog as pupils
> returned to school last Thursday, with schools being told to turn away
> teachers who had not yet been checked.
http://news.bbc.co.uk/2/hi/uk_news/education/2229196.stm
The mind boggles.  Perhaps there's some reason to believe that Britain's
teachers have suddenly become a particularly questionable lot; that it is
both worth spending money on checking into their backgrounds and keeping them
out of classrooms until that's done; and that keeping what I'd guess is
around 140,000 students away from class for a few days is a good trade-off.
Can someone enlighten me as to the particular threat?  (Also, I'm curious how
much the government is spending to keep teachers out of classrooms.)
(And there are proposals to do this for all 'critical infrastructure
workers' in the US: "I'm sorry, Mike can't remove the squirrel from the
transformer until his background check finishes.")
EDIS bulletin on power outages
<Dave Stringer-Calvert <dave_sc@csl.sri.com>>
Tue, 03 Sep 2002 16:26:21 -0700

It doesn't take a hacker to shut down the power grid; Mother Nature is quite
capable.  D_SC

  Date: Tue, 3 Sep 2002 16:18:46 -0700
  From: EDIS E-mail Service <edismail@incident.com>
  Subject: [EDIS] Law Enforcement Bulletin [Urgent: Statewide]

Emergency services personnel (law enforcement, fire, EMS and local OES)
throughout Southern California should be advised that the California
Independent System Operator, the entity that coordinates statewide flow of
electrical supply, has notified state OES that there will be rotating
blackouts in Southern California within the next hour due to damage to major
power lines from fires.  [...]

OES Sacramento/Director Dallas Jones/SM

[EDIS is operated by the Governor's Office of Emergency Services, State of
California.  This e-mail relay service is offered by incident.com on a
non-commercial, subscription-only basis.  Because of the complexity of this
system and its dependence on other systems, we cannot be responsible for
delays or failures in forwarding or transmission.]

  [Upper-case only message lowered to avoid antispam tools.  PGN]
Infrastructure risks and Cyberterrorism (Re: Norloff, RISKS-22.22)
<Fred Cohen <fc@all.net>>
Sun, 1 Sep 2002 21:22:55 -0700 (PDT)

This is a rather complex issue, but one that can be understood in reasonably
straightforward terms.  At the risk of excessive length, I will proceed as
simply as I can...

1) My background: I did some of the initial risk assessments that led to the
PCCIP work, some studies as part of the PCCIP study, and some of the
subsequent studies - as well as doing work for many of the critical
infrastructure providers - some Y2K-related roll-up work on reporting out on
these issues, and on and on.

2) A reasonable view: My most reasoned view of the true situation is
primarily based on the work related to Y2K, in which we considered the
potential for all IT failing in the worst possible ways.  The goal of much of
my effort was to assure that if IT went really bad, national and large-volume
human survival would not be impaired.  In essence, this has more to do with
what happens when it fails than whether it will fail.  The disaster planning
associated with critical infrastructures is, by all that I have seen,
ADEQUATE TO PREVENT severe loss of life, serious national security losses,
loss of overall military and governmental capability, and unrecoverable
economic collapse.  This is not to say that large-scale events cannot happen
and that they cannot have large effect.  They can.  But the severity is not
as horrific as many would have others believe.  There are some pretty scary
scenarios that can be cooked up, and some of them can probably even be made
to happen IF WE POSTULATE a large enough and sophisticated enough attacker.
But from all I can tell, there is no such attacker, and certainly there are
none in the ranks of terrorists I know about or in the ranks of nation states
I am aware of - the one exception being the US government itself.

3) SCADA systems in particular: Most SCADA systems are not directly connected
to the Internet, and many are not even indirectly connected to it, but this
does not mean that there is no risk associated with information attack
against these systems.  The real question to ask, however, is not whether
some SCADA systems can be defeated and induced to cause serious consequences
- they can.  The more important question is how complex it would be to
coordinate these events across enough of these systems to induce dire
consequences that could not be mitigated without severe consequences.  This
is a much harder thing to accomplish; it takes far more effort, better
intelligence, better coordination, and greater and more focussed resources
than typical terrorist groups have - even those with a few hundred million
dollars to focus on the effort.

Fred Cohen  1-925-454-0171  http://all.net/  Sandia Natl Labs  1-925-294-2087
The University of New Haven  http://www.unhca.com/  fc@all.net

  [RISKS's default disclaimer specifically invoked here.]
Re: Homeland Insecurity (RISKS-22.20 to 22.22)
<Stephen Fairfax <fairfax@mtechnology.net>>
Sat, 31 Aug 2002 13:18:55 -0400

Peter Ladkin charges that I made "bogus" claims (Homeland Insecurity, RISKS
22.20, 22.21, 22.22) in a classic example of rejecting formal, quantitative
analysis because the sparse data make such work messy and inconvenient.  I
have encountered objections of this nature throughout my career and feel
compelled to respond.

The thrust of my comments is that formal thinking and quantitative
evaluations appear to be very rare in the introduction of new air transport
security measures.  Ladkin makes no mention of this point, but seizes on my
statement that "Guns in the cockpit represent an independent layer that does
not automatically fail when screens fail.  While there is heated debate about
the possibilities of negative consequences, a dispassionate analysis of the
probabilities of both success and failure offers rather overwhelming evidence
that on balance, armed pilots will reduce both the likelihood and
consequences of hijacking attempts."  Without acknowledging the basic logic
that adding an additional layer of defense holds at least the possibility of
improving the odds of success, Ladkin characterizes this statement as "bogus"
and "sound-bite rhetoric."

(Aside: I used guns in the cockpit because of the current debate on the
issue, but the principle holds true for any defensive measure.  The tone and
ad hominem nature of Ladkin's arguments suggest that I may have offended some
deep-rooted beliefs about firearms in general.  I apologize for any
inadvertent offense but stand by my argument.)

After a brief explanation of PRA and fault trees, Ladkin goes on to report
that the techniques get much more difficult when applied to software or human
negotiations.  I most heartily agree.  As one progresses from hardware to
software to "wetware," the complexity of the analysis increases, data grows
sparser and more difficult to collect, and the sensitivity of the results to
changes in input data increases.

Ladkin then makes the classic mistake of assuming that a large number of
events without failures constitutes "no data."  He writes, "On hijackings in
the US, there is no data, none, for the last, oh, thirty years until
September 11 last year."  First, even if the implied statement that there
have been no US hijackings in 30 years is true, that does not constitute "no
data."  For example, one can ask, "Is this record more likely to be
attributable to the effectiveness of the present security measures, or does
it indicate that the rate of attempted hijacking is very low?"  If there had
been many attempted hijackings, but they had all been prevented, there would
still be no hijackings, but that hardly constitutes a lack of data.  Careful
study of the records would most likely reveal that there were very few
attempts, and one would therefore conclude that it was difficult to assess
the efficacy of the current security measures based solely on historic
records.  (I am generally familiar with aviation safety statistics, but I do
not claim to have performed a full study of this issue.)  That does not
prevent one from doing the assessment in other ways.  Ladkin does not justify
excluding data from the rest of the planet.  This week's Aviation Week and
Space Technology includes a graph from a Boeing Airplane Upset Recovery
Training aid that shows 8 fatal accidents, with several hundred fatalities,
attributed to hijacking, in the period 1987-96.  This hardly constitutes "no
data!"
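
  [As a back-of-the-envelope illustration of the independent-layer argument
  earlier in this message, the indented Python sketch below multiplies two
  invented failure probabilities.  The figures are hypothetical, chosen only
  for exposition, and are not Fairfax's estimates; with independent layers,
  the chance that every layer fails is the product of the individual failure
  probabilities, which is never larger than any single layer's.]

    # Hypothetical numbers, for illustration only -- not estimates from the
    # post or from any study.
    p_screening_fails = 0.10  # screening misses an armed hijacker
    p_cockpit_fails = 0.50    # armed crew fails to defend the cockpit

    # With independent layers, all defenses fail only if each layer fails.
    p_all_fail = p_screening_fails * p_cockpit_fails

    print("screening alone fails with probability", p_screening_fails)
    print("screening plus cockpit layer fails with probability", p_all_fail)
    # Any cockpit-layer failure probability below 1.0 strictly lowers the
    # combined failure probability; the debate is about side effects, not
    # about this arithmetic.
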
Furthermore, as Ladkin surely knows, when data regarding failures is sparse,
one searches for "close calls" and other instances where failure was imminent
but averted by corrective action or even sheer luck.  Indeed, as systems
become more and more reliable, failure data gets more and more sparse and
therefore more valuable.  Lastly, as Ladkin should know, PRA techniques
include methods to derive estimates of failure and success probabilities from
expert opinion.  Experts often have relevant experience with the precursors
to failure even if they have not personally experienced the consequences of a
failure.  Experts also can assist in applying lessons from other fields to
the problem at hand.  For example, law enforcement and military personnel
should be able to produce credible, defensible estimates of the ability of
pilots, with some defined level of training, to defend the cockpit, with or
without a firearm, from hijackers for a given period of time.  The experts
need not be commercial airline pilots in order to apply the lessons of one or
two defenders against attackers approaching via a narrow corridor.  What's
more, the estimates derived from expert opinion can be formally compared to
historical data, however sparse, to determine the most likely distribution of
outcomes.  (One point I have ignored for the sake of brevity is that PRA
generally deals with probability distributions rather than simple failure
rates.  Understanding what shapes the distribution is just as important as
estimating the magnitude.)

One of the tragedies of September 11 is that there was ample, publicly
available data to predict not only the method of attack, but the likely
targets.  Algerian terrorists hijacked Air France Flight 8969 during the
Christmas holidays in 1994.  (See http://www.msnbc.com/news/635213.asp for a
recent summary of those events.)  Their plan was to crash a fully fueled
airliner into the Eiffel Tower.  The plan failed, in large part due to the
courage and resourcefulness of the crew, but also because the terrorists were
not trained to fly the aircraft.  The terrorists (who may have been
associated with Osama Bin Laden) learned from their mistakes; in the US the
FAA did not.  There were other warning signs, but I already RISK being
PGN'ed.  The point is that there was a well-documented but unsuccessful
attempt by Islamic terrorists to attack a symbolic structure using a
commercial aircraft as a flying bomb, nearly 7 years before September 11,
2001.  Years of inaction by the FAA in the face of a new, very serious threat
enabled the success of the later attacks.

Ladkin goes on to demonstrate another pitfall of this admittedly difficult
type of analysis.  The unchallenged assumption is one of the most dangerous
mistakes in any field, as RISKS readers will surely appreciate.  Ladkin
asserts that "Dealing with hijacking is an almost pure negotiation
situation."  It was precisely this incorrect assumption that resulted in the
FAA mandating that flight crews be trained to cooperate with the terrorists.
In an era where well-publicized accounts of suicide bombers successfully
attacking civilians appeared on a weekly, sometimes daily basis, the idea
that one can negotiate with all hijackers is ludicrous.  The unchallenged
assumption prevented the people in charge of security from asking obvious
"what if" questions that should be an integral part of any high-stakes policy
decision.
Ladkin concludes with an assertion that my actions bring the entire field of
PRA into disrepute, citing scientific societies that no longer endorse the
exclusive use of these techniques in certain areas.  A careful reading of my
original comments will show no instance where I suggest that PRA techniques
are the ONLY ones that should be used.  On the contrary, I lament the utter
disregard for formal thinking, quantitative analysis, and public review and
discussion of the incredibly expensive measures being forced on the American
public in the name of security.  PRA is one of many tools that could be used
to improve this situation, but I would never suggest that it is the only one,
and I reject the charge that I have somehow sullied the field with my
observations.

Issues of security and safety in complex man/machine systems are certainly
difficult to analyze.  Controlled Flight Into Terrain (CFIT) continues to be
the number one cause of commercial airline fatalities despite decades of
effort.  This problem, like security, involves hardware, software, human
actions and errors, all in a complex, dynamic environment.  The fact that it
is difficult to analyze the problem does not obviate the need to do so, nor
does it relieve those who make policies from the responsibility to use all
available tools and techniques in arriving at decisions that literally mean
life and death for thousands.  PRA is one such technique, and I stand by my
recommendation that it be used in the analysis of airline security systems
design and operations.

Stephen Fairfax, President, MTechnology, Inc., 2 Central Street, Saxonville,
MA 01701  1-508.788.6260  fairfax@mtechnology.net  www.mtechnology.net

  [Mispelingz of Ladkin corrected in archive copy.  PGN]
Excellent quote about wireless security
<AlRizutto@cs.com>
Wed, 28 Aug 2002 08:38:36 -0400

I found the following line about a wireless security book to be quite
interesting:

  Writing a book on wireless security is like writing a book on safe
  skydiving -- if you want the safety and security, just don't do it.

See http://www.unixreview.com/documents/s=7459/uni1030461766479/
Re: Warchalking the Networks (Leeson, RISKS-22.18)
<Michael Cook <MLCook@aJile.com>>
Mon, 29 Jul 2002 10:20:02 -0500

Here's more than you probably want to know about "warchalking" wireless
networks, including links to articles and a variety of symbols and their
meanings:

  http://www.warchalking.org/

Michael L. Cook, Technical Staff, aJile Systems, Inc.
http://www.aJile.com/  MLCook@aJile.com  319-378-3946
MS02-050: Certificate validation flaw could enable identity spoofing
<Monty Solomon <monty@roscom.com>>
Thu, 5 Sep 2002 02:15:29 -0400
Title: Certificate Validation Flaw Could Enable Identity Spoofing (Q328145)
Date: September 04, 2002
Software: Microsoft Windows, Microsoft Office for Mac, Microsoft
Internet Explorer for Mac, or Microsoft Outlook Express for Mac
Impact: Identity spoofing.
Max Risk: Critical
Bulletin: MS02-050
Microsoft encourages customers to review the Security Bulletin at:
http://www.microsoft.com/technet/security/bulletin/MS02-050.asp .
Issue:
The IETF Profile of the X.509 certificate standard defines several optional
fields that can be included in a digital certificate. One of these is the
Basic Constraints field, which indicates the maximum allowable length of the
certificate's chain and whether the certificate is a Certificate Authority
or an end-entity certificate. However, the APIs within CryptoAPI that
construct and validate certificate chains (CertGetCertificateChain(),
CertVerifyCertificateChainPolicy(), and WinVerifyTrust()) do not check the
Basic Constraints field. The same flaw, unrelated to CryptoAPI, is also
present in several Microsoft products for Macintosh.
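
  [To make concrete what the omitted check amounts to, the indented Python
  sketch below uses the third-party "cryptography" package; it is not
  CryptoAPI and not Microsoft's patch, and the function name and the
  leaf-first ordering of the chain are assumptions of this example.  It
  rejects any issuing certificate that does not assert CA authority in its
  Basic Constraints extension.]

    # Minimal sketch of the missing check: every certificate that acts as an
    # issuer in the chain must assert CA=TRUE in Basic Constraints, and any
    # pathLenConstraint must be honored.  Chain order: leaf first, root last.
    from cryptography import x509

    def check_basic_constraints(chain):
        """Raise ValueError if any issuing certificate lacks CA authority."""
        for position, cert in enumerate(chain[1:], start=1):
            try:
                bc = cert.extensions.get_extension_for_class(
                    x509.BasicConstraints).value
            except x509.ExtensionNotFound:
                raise ValueError("issuer %d has no Basic Constraints"
                                 % position)
            if not bc.ca:
                raise ValueError("issuer %d is an end-entity certificate"
                                 % position)
            # pathLenConstraint limits how many intermediate CAs may sit
            # between this certificate and the leaf.
            if bc.path_length is not None and position - 1 > bc.path_length:
                raise ValueError("path length constraint violated at "
                                 "issuer %d" % position)
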
The vulnerability could enable an attacker who had a valid end-entity
certificate to issue a subordinate certificate that, although bogus, would
nevertheless pass validation. Because CryptoAPI is used by a wide range of
applications, this could enable a variety of identity spoofing
attacks. These are discussed in detail in the bulletin FAQ, but could
include:
- Setting up a web site that poses as a different web site, and
"proving" its identity by establishing an SSL session as the
legitimate web site.
- Sending e-mails signed using a digital certificate that
purportedly belongs to a different user.
- Spoofing certificate-based authentication systems to gain
entry as a highly privileged user.
- Digitally signing malware using an Authenticode certificate
that claims to have been issued to a company users might trust.
Mitigating Factors:
Overall:
- The user could always manually check a certificate chain, and
might notice in the case of a spoofed chain that there was an
unfamiliar intermediate CA.
- Unless the attacker's digital certificate were issued by a CA
in the user's trust list, the certificate would generate a
warning when validated.
- The attacker could only spoof certificates of the same type as
the one he or she possessed. In the case where the attacker
attempted an attack using a high-value certificate such as an
Authenticode certificate, this would necessitate obtaining
a legitimate certificate of the same type - which could
require the attacker to prove his or her identity or
entitlement to the issuing CA.
Web Site Spoofing:
- The vulnerability provides no way for the attacker to cause the
user to visit the attacker's web site. The attacker would need
to redirect the user to a site under the attacker's control
using a method such as DNS poisoning. As discussed in the
bulletin FAQ, this is extremely difficult to carry out in
practice.
- The vulnerability could not be used to extract information from
the user's computer. The vulnerability could only be used by an
attacker as a means of convincing a user that he or she has
reached a trusted site, in the hope of persuading the user to
voluntarily provide sensitive data.
E-mail Signing:
- The "from" address on the spoofed mail would need to match the
one specified in the certificate, giving rise to either of two
scenarios if a recipient replied to the mail. In the case where
the "from" and "reply-to" fields matched, replies would be sent
to the victim of the attack rather than the attacker. In the case
where the fields didn't match, replies would obviously be
addressed to someone other than the ostensible sender. Either case
could be a tip-off that an attack was underway.
Certificate-based Authentication:
- In most cases where certificates are used for user
authentication, additional information contained within the
certificate is necessary to complete the authentication. The
type and format of such data typically varies with every
installation, and as a result significant insider information
would likely be required for a successful attack.
Authenticode Spoofing:
- To the best of Microsoft's knowledge, such an attack could not
be carried out using any commercial CA's Authenticode
certificates. These certificates contain policy information
that causes the Basic Constraints field to be correctly
evaluated, and none allow end-entity certificates to act as CAs.
- Even if an attack were successfully carried out using an
Authenticode certificate that had been issued by a corporate
PKI, it wouldn't be possible to avoid warning messages, as trust
in Authenticode is brokered on a per-certificate, not per-name,
basis.
Risk Rating:
Microsoft Windows platforms:
- Internet systems: Critical
- Intranet systems: Critical
- Client systems: Critical
Microsoft programs for Mac:
- Internet systems: None
- Intranet systems: None
- Client systems: Moderate
Patch Availability:
- A patch is available to fix this vulnerability for Windows NT 4.0;
Windows NT 4.0, Terminal Server Edition; Windows XP; and
Windows XP 64-bit Edition.
Please read the Security Bulletin at
http://www.microsoft.com/technet/security/bulletin/ms02-050.asp
for information on obtaining this patch.
[The information provided in the Microsoft knowledge base is provided "as
is" without warranty of any kind. Microsoft disclaims all warranties, either
express or implied, including the warranties of merchantability and fitness
for a particular purpose. In no event shall Microsoft Corporation or its
suppliers be liable for any damages whatsoever including direct, indirect,
incidental, consequential, loss of business profits or special damages, even
if Microsoft Corporation or its suppliers have been advised of the
possibility of such damages. Some states do not allow the exclusion or
limitation of liability for consequential or incidental damages so the
foregoing limitation may not apply.] [ALL-CAPS in this paragraph knocked
down to avoid antispam tools and annoyed readers. PGN]
