Forum on Risks to the Public in Computers and Related Systems
ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator
Volume 24: Issue 70
Tuesday 19 June 2007
Contents
Gripen: Risks of safety measures in military jet aircraft (Tony Lima)
EFF: Court Protects Email from Secret Government Searches (Kevin Bankston via David Farber)
Blogger unmasked, court case upended (Jonathan Saltzman via Monty Solomon)
"Deleted" children in Japan (Rodney Van Meter via Dave Farber)
More on the Space Station problem (PGN)
Improving reliability of health critical software (Marc Auslander)
Search Engine Dispute Notifications: Request For Comments (Lauren Weinstein)
Extending Google Blacklists for Dispute Resolutions (Lauren Weinstein)
Re: USAF F-22 jets grounded by software glitch (Gregory Chapelle)
Info on RISKS (comp.risks)
Gripen: Risks of safety measures in military jet aircraft
<Tony Lima <tony.lima@csueastbay.edu>>
Tue, 29 May 2007 17:25:12 -0700

Swede's Perfect Spontaneous Ejection, 29 May 2007
http://www.strategypage.com/htmw/htmurph/articles/20070529.aspx

Last month, a Swedish Gripen fighter crashed when the pilot suddenly
ejected.  The pilot insisted that he had not activated the ejection system.
After an intense investigation, and lots of computer simulation of the
flight systems, investigators concluded that the pilot's account of events
was accurate.  It turns out that if enough g-force is applied to the
aircraft, the pilot ejection system automatically activates.  This leaves
the aircraft without a pilot, right after it has performed a stressful
maneuver (the one that produced the high g-force).

This sort of thing is increasingly common with modern weapons systems,
because they are increasingly complex systems of systems, in which it has
become impossible to forecast all of the possible unpleasant, and unwanted,
events that could occur in certain situations.

Tony Lima, Professor of Economics, California State University, East Bay,
Hayward, CA 94542  1-510-885-3889  http://www.cbe.csueastbay.edu/~alima

  [Gripen grep'n grabs gripin': RISKS-8.32,49, 14.81,82,85, 15.04,19,25,26.  PGN]
EFF: Court Protects Email from Secret Government Searches
<David Farber <dave@farber.net>>
Mon, 18 Jun 2007 17:33:04 -0400

Electronic Frontier Foundation Media Release, Monday, June 18, 2007
Contact: Kevin Bankston, Staff Attorney, Electronic Frontier Foundation
bankston@eff.org, +1 415 436-9333 x126

Court Protects Email from Secret Government Searches
Landmark Ruling Gives Email Same Constitutional Protections as Phone Calls

San Francisco - The government must have a search warrant before it can
secretly seize and search emails stored by email service providers,
according to a landmark ruling Monday in the 6th U.S. Circuit Court of
Appeals.  The court found that email users have the same reasonable
expectation of privacy in their stored email as they do in their telephone
calls -- the first circuit court ever to make that finding.

Over the last 20 years, the government has routinely used the federal
Stored Communications Act (SCA) to secretly obtain stored email from email
service providers without a warrant.  But today's ruling -- closely
following the reasoning in an amicus brief filed by the Electronic Frontier
Foundation (EFF) and other civil liberties groups -- found that the SCA
violates the Fourth Amendment.

"Email users expect that their Hotmail and Gmail inboxes are just as
private as their postal mail and their telephone calls," said EFF Staff
Attorney Kevin Bankston.  "The government tried to get around this
common-sense conclusion, but the Constitution applies online as well as
offline, as the court correctly found.  That means that the government
can't secretly seize your emails without a warrant."

Warshak v. United States was brought in the Southern District of Ohio
federal court by Steven Warshak to stop the government's repeated secret
searches and seizures of his stored email using the SCA.  The district
court ruled that the government cannot use the SCA to obtain stored email
without a warrant or prior notice to the email account holder, but the
government appealed that ruling to the 6th Circuit.  EFF served as an
amicus in the case, joined by the American Civil Liberties Union and the
Center for Democracy & Technology.  Law professors Susan Freiwald and
Patricia Bellia also submitted an amicus brief, and the case was
successfully argued at the 6th Circuit by Warshak's counsel Martin
Weinberg.

For the full ruling in Warshak v. United States:
http://eff.org/legal/cases/warshak_v_usa/6th_circuit_decision_upholding_injunction.pdf

For EFF's resources on the case, including its amicus brief:
http://www.eff.org/legal/cases/warshak_v_usa/

For this release: http://www.eff.org/news/archives/2007_06.php#005321

About EFF: The Electronic Frontier Foundation is the leading civil
liberties organization working to protect rights in the digital world.
Founded in 1990, EFF actively encourages and challenges industry and
government to support free expression and privacy online.  EFF is a
member-supported organization and maintains one of the most linked-to
websites in the world at http://www.eff.org/

  [Whereas this is an appealing ruling to privacy advocates, it seems
  likely to be appealed.  PGN]
Blogger unmasked, court case upended (Jonathan Saltzman)
<Monty Solomon <monty@roscom.com>>
Sat, 2 Jun 2007 00:17:19 -0400
Pediatrician Robert P. Lindeman was defending himself in a malpractice suit
involving the death of a 12-year-old patient, in Massachusetts Suffolk
Superior Court. The opposing counsel asked him whether he was the blogger
("Flea") who had been writing about a trial that sounded very similar,
ridiculing the plaintiff's case and the lawyer, revealing the defense's
strategy, and accusing members of the jury of "dozing". Lindeman admitted
he was indeed Flea. He then wound up paying a "substantial settlement" and
the case was closed. [Source: Jonathan Saltzman <jsaltzman@globe.com>, *The
Boston Globe*, 31 May 2007; PGN-ed. No one seems to have noticed various
opportunities for puns: Lindeman was copping a flea and fleaing the coup.]
http://www.boston.com/news/local/articles/2007/05/31/blogger_unmasked_court_case_upended/
"Deleted" children in Japan (via Dave Farber's IP)
<Rodney Van Meter <rdv@sfc.wide.ad.jp>>
May 30, 2007 10:44:22 PM EDT

This tidbit bothers me because it speaks to the entire future of our
history in a world in which "If Google can't find it, it doesn't exist."

A little background: in Japan, you don't have a birth certificate.  Each
family has a family registry, and children who are born are entered into
the registry.  I think the same holds for proof of marriage.  Generally,
the registry has a family name on it (just one -- making it difficult for
women to keep their maiden names, but that's not the point here) and a head
of household.  Then underneath that are the members of the household --
wife and kids.  Normally, kids stay on their parents' registry until they
marry or the parents die.  When you marry, you move off your parents'
registry and start your own.  You do your registration at the city office,
but it's a national registry run by the Justice Ministry.

In the normal progress of things, of course, the last entry for each child
is a notation that they moved off of this registry and onto another one.
But if a child dies, then a notation is made of that fact.

An article in yesterday's Daily Yomiuri
http://www.yomiuri.co.jp/dy/features/culture/20070530TDY02009.htm
says that they are still in the process of digitizing the registry, and
that some deceased children are being "deleted" in the process, simply to
keep down the amount of data input work (which undoubtedly has to be done
by hand).  While it certainly makes sense to prioritize the digitization of
currently-active families, as opposed to the historical records of deceased
grandparents whose registers consist of no one alive, this choice has the
effect of creating an apparently complete registry of an active family that
portrays an inaccurate picture of the family history.  From the article:

  According to the [Justice] ministry, the names of family members who died
  before the digitization have been included on the original hard copy of
  the family registry as one who has been "removed."  But the names, the
  ministry said, have been stricken from the data files.  The reasoning
  behind this was an attempt to reduce the data input into the system -- by
  even only a bit -- during the digitization process.  Family members who
  died following the move of data files are still represented in the
  electronic registers.  "You've got it backward if you think digitizing
  family registers will result in more work," a ministry official said.
  "Even if the name of the deceased disappears from the data, you can still
  see it on the original, so it isn't a problem."

"Isn't a problem"!  A hundred years from now no one will know that the
families in question ever *had* children.  Looking at a particular digital
record, you wouldn't even know to *ask* to see the original hard copy.
Statistics on births and deaths from various causes will no doubt be
skewed, let alone the impact on genealogy.

While it seems likely that eventually they will get around to digitizing
historical records, this particular gap in the data seems unlikely to be
fixed -- or even fixABLE, without a second by-hand check of every registry,
comparing the original hard copy with the digitized version.

There are gaps in my family history where, e.g., the courthouse holding
birth certificates burned down.  But at least we *know* that there are
gaps.  Is this a bigger loss than, oh, say, the burning of the library at
Alexandria, or the one at Bukhara (~650 and again 1920), or the burning of
Mayan texts by the Spanish?  Nah.  But I mourn the loss of every bit(!) of
our collective history.
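To make the preservation point concrete, here is a minimal, purely
illustrative sketch (hypothetical field names, not the actual register data
format) of the difference between keeping a deceased member as a flagged
"removed" entry and silently omitting it:

  # Illustrative sketch only: hypothetical record layout, not the real
  # family-register schema used by the Justice Ministry.
  from dataclasses import dataclass, field
  from typing import List, Optional

  @dataclass
  class Member:
      name: str
      born: str                              # e.g. "1962-08-15"
      removed_reason: Optional[str] = None   # "married", "died", or None
      removed_date: Optional[str] = None

  @dataclass
  class FamilyRegister:
      family_name: str
      head_of_household: str
      members: List[Member] = field(default_factory=list)

  # Keeping the deceased child costs one extra flagged record...
  reg = FamilyRegister("Sato", "Sato Taro", [
      Member("Sato Hanako", "1960-05-01"),
      Member("Sato Jiro", "1962-08-15",
             removed_reason="died", removed_date="1970-03-03"),
  ])

  # ...whereas dropping it produces a register that looks complete but
  # erases the fact that the child ever existed.
  children_ever = [m for m in reg.members]
  children_now = [m for m in reg.members if m.removed_reason is None]
  print(len(children_ever), len(children_now))   # 2 1

The extra data-entry cost is one record per deceased member; the historical
cost of omitting it is, as noted above, effectively unrecoverable.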
IP Archives: http://v2.listbox.com/member/archive/
More on the Space Station problem (Re: RISKS-24.69)
<"Peter G. Neumann" <neumann@csl.sri.com>>
Tue, 19 Jun 2007 10:10:53 PDT

Re: Glitch Blamed for Fire Alarm on Orbiter (RISKS-24.69)

The problem on the Space Station turned out to be a faulty switch.  Each of
two sets of computers has three redundant channels ("lanes"), at least one
of which must work for each system.  All six lanes crashed and could not be
restarted.  The patch involved hooking up jumper cables and managing to get
at least two pairs of lanes working again.  This is considered a temporary
fix, with astronauts working externally to "isolate the computers from
connections with newly deployed solar panels that may have set off the
problem."  [Source: John Schwartz, *The New York Times*, 16 Jun 2007,
National Edition A8; PGN-ed]

Beginning after the installation of a 17.2-ton truss a week ago, crashes
disrupted the Russian computers that control environmental systems and the
thrusters that regulate the Space Station's orientation.  Over the weekend,
Russian astronauts isolated the problem to the surge-protector circuitry,
which they were able to bypass.  Systems are once again working -- although
the original cause is still unknown and being sought.  [Source: Kenneth
Chang, *The New York Times*, 19 Jun 2007, National Edition A14; PGN-ed]
Improving reliability of health critical software
<"Marc Auslander" <marcausl@optonline.net>>
Thu, 14 Jun 2007 19:08:26 -0400

(Re: Cook, FDA recall, RISKS-24.68)

The article about a faulty algorithm in a laser eye surgery unit (Alcon
Refractive Horizons and the FDA notified healthcare professionals and
patients of a Class I Recall...) got me wondering about how to reduce the
chances of such disasters.  It seems to me that the technique of redundant
independent implementations might be useful.  We all know the idea: give
the specs to two (or more) groups and get software from each.  In
operation, run all the versions, compare the results, and do something
special if they mismatch.  The space shuttle software has used this
technique for quite a while.  It led to one famous mission scrub because of
a problem with the comparison logic, but that's OK.  And in cases like the
above, you should be able to do the check before you commit to the
procedure, so the special thing you do is to stop.
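As a rough illustration of the comparison step, here is a minimal sketch of
the idea, with hypothetical function names and a toy calculation (not the
shuttle's or any real device's code): run the independently developed
versions side by side and refuse to proceed on disagreement.

  # Minimal sketch of redundant independent implementations ("N-version
  # programming").  The two planner functions stand in for software written
  # by independent teams from the same specification; names and the toy
  # arithmetic are purely hypothetical.

  TOLERANCE = 1e-9  # acceptable numerical disagreement between versions

  def plan_version_a(depth_microns: float) -> float:
      # Team A's implementation of some treatment parameter.
      return depth_microns * 0.25

  def plan_version_b(depth_microns: float) -> float:
      # Team B's independently written implementation of the same spec.
      return depth_microns / 4.0

  def checked_plan(depth_microns: float) -> float:
      a = plan_version_a(depth_microns)
      b = plan_version_b(depth_microns)
      if abs(a - b) > TOLERANCE:
          # The "something special": refuse to proceed before anything
          # irreversible happens.
          raise RuntimeError(f"version mismatch: {a} vs {b}; aborting")
      return a

  if __name__ == "__main__":
      print(checked_plan(100.0))   # versions agree -> 25.0

The key design point is the one made above: the comparison happens before
the irreversible step, so a mismatch produces a scrub rather than a
mis-treatment.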
Search Engine Dispute Notifications: Request For Comments
<Lauren Weinstein <lauren@vortex.com>>
Fri, 15 Jun 2007 13:33:46 -0700
Search Engine Dispute Notifications: Request For Comments
http://lauren.vortex.com/archive/000253.html
Greetings. I'd appreciate feedback from the Internet community regarding
the following issue.
Search engines have of course become the primary means by which vast numbers
of people find all manner of information. For many firms, if you don't have
a high rank with Google, it's as if you don't exist (or at least, many
companies appear to feel that way).
Increasingly, cases are appearing of individuals and organizations being
defamed or otherwise personally damaged -- lives sometimes utterly disrupted
-- by purpose-built, falsified Web pages, frequently located in distant
jurisdictions. Search engine results are typically the primary means by
which such attacks are promulgated and sustained by providing a continuing
stream of viewers to those Web pages. Due to ranking algorithms, attempts
to counter such attacks with other Web pages may not be widely seen since
they are not directly associated with the attacking pages.
Courts appear generally reluctant to order offending Web page take downs in
such cases, except where intellectual property issues (e.g., DMCA orders)
are involved, and take downs do not necessarily inform viewers of the ongoing
controversy in a logically connected manner. Additionally, "remedies" that
result in suppression of information, rather than providing additional
information, are generally ineffective and counter to the "open information
whenever possible" philosophy that many of us share.
Question: Would it make sense for search engines, only in carefully limited,
delineated, and serious situations, to provide on some search results a
"Disputed Page" link to information explaining the dispute in detail, as an
available middle ground between complete non-action and total page take
downs?
Search engine firms have generally taken the view that they are akin to
telephone directories, and bear no responsibility for the content of the
pages that they reference. Similarly, when ostensibly aggrieved parties
approach these firms with concerns about "offending" pages, the usual
response is that the search firms can do nothing about those pages, and that
any complaints need to be taken to the Web page owner or associated ISP.
From a practical and jurisdictional standpoint, this turns out to be
impossible in many cases.
We clearly do not want to hold search engines responsible for other sites'
content, even when locally cached. To do so would likely obliterate the
entire search engine model and industry under a storm of litigation, to
everyone's detriment. It must be noted, however, that increasing calls for
holding search engines responsible in just such a manner are being heard in
some political and judicial circles, likely out of frustration with the
status quo, which currently tends not to offer reasonable dispute resolution
paths in most situations. This is a serious warning sign, suggesting that
we should consider some new approaches on our own, or risk draconian and
damaging legislation.
The telephone directory argument also has some problems. Unlike typical
phone books, search engines are not passive publishers of information. In
addition to third-party ads tied to the core listings, a key facet of search
engines is intensive ranking and decision-based ordering of content
listings, usually via highly proprietary algorithms. Such ranking provides
a high percentage of the value-added represented by search engine results.
So while search engines are not responsible and should not be held
responsible for the content of the outside pages and data they index, they
are very much directly involved as decision-making gatekeepers (albeit,
usually through fully automated algorithms) that determine to a major extent
which individual Web pages are likely -- or unlikely -- to be discovered by
Internet users.
More questions: Given the power that search engines possess in these
regards, do they bear any responsibility for helping to untangle serious
disputes regarding the pages they reference and often profit from? If
search engines do not voluntarily move in this direction, do they risk
damaging legislation written without a genuine understanding of the complex
technical and business issues involved?
In my view, an evolution by search engines to deal with these
situations should be predicated on that key concept of maximizing the
availability of information. Page take downs -- which are likely to
be ineffective in the long run as noted -- should be a last resort.
Similarly, a total laissez faire approach is also unlikely to be
tolerated indefinitely by the political and judicial establishments.
So returning to where we started... Could some sort of "dispute link" --
tied directly to information regarding particularly serious page disputes --
provide a reasonable means to help ameliorate these situations without
risking the more destructive alternatives? If so, how would such a system
be effectively implemented in a practical fashion? How could such a system
be structured to avoid being swamped by relatively trivial complaints?
Would providing related dispute links only to persons with court orders make
sense to limit potential abuse of the mechanism, or would requiring the use
of the expensive and delay-prone courts be far too restrictive a
qualification? Could such a dispute system operate purely on a voluntary
basis? (Voluntary would be very much preferred in my opinion.) What are
the cost factors involved in such a system and how could they be reasonably
addressed?
Overall then, is it possible to structure such a system along these lines so
that it is practical, workable, and also palatable to the major search
engine firms, as an alternative to barreling along toward an onerous and
likely politically motivated crackdown down the line?
Or would this concept just never work -- and that crackdown is inevitable?
Your thoughts would be appreciated. Thanks very much.
Lauren Weinstein http://www.pfir.org/lauren +1(818) 225-2800 lauren@pfir.org
Co-Founder, PFIR: People For Internet Responsibility - http://www.pfir.org
PRIVACY Forum - http://www.vortex.com Lauren's Blog: http://lauren.vortex.com
Extending Google Blacklists for Dispute Resolutions
<Lauren Weinstein <lauren@vortex.com>>
Sun, 17 Jun 2007 16:56:45 -0700
Extending Google Blacklists for Dispute Resolutions
http://lauren.vortex.com/archive/000254.html
Greetings.  In an earlier posting
( http://lauren.vortex.com/archive/000253.html ) I discussed
some issues regarding search engine dispute resolution, and posed some
questions about the possibility of "dispute links" being displayed with
search results to indicate serious disputes regarding the accuracy of
particular pages, especially in cases of court-determined defamation and the
like.
While many people appear to support this concept in principle, the potential
operational logistics are of significant concern. As I originally
acknowledged, it's a complex and tough area, but that doesn't make it
impossible to deal with successfully either.
Some other respondents have taken the view that search engines should never
make "value judgments" about the content of sites, other than that done
(which is substantial) for result ranking purposes.
What many folks may not realize is that in the case of Google at least, such
more in-depth judgments are already being made, and it would not necessarily
be a large leap to extend them toward addressing the dispute resolution
issues I've been discussing.
Google already puts a special tag on sites in their results which Google
believes contain damaging code ("malware") that could disrupt user
computers. Such sites are tagged with a notice that "This website may
damage your computer." -- and the associated link is not made active (that
is, you must enter it manually or copy/paste to access that site -- you
cannot just click).
Also, in conjunction with Google Toolbar and Firefox 2, Google collects user
feedback about suspected phishing sites, and can display warnings to users
when they are about to access potentially dangerous sites on these lists.
In both of these cases, Google is making a complex value judgment concerning
the veracity of the sites and listings in question, so it appears that this
horse has already left the barn -- Google apparently does not assert that it
is merely a neutral organizer of information in these respects.
So, a site can be tagged by Google as potentially dangerous because it
contains suspected malware, or because it has been reported by the community
to be an apparent phishing site.  It seems reasonable, then, for a site that
has been determined (by a court or other agreed-upon means) to contain
defamatory or otherwise seriously disputed information to also be
potentially subject to similar tagging (e.g., with a "dispute link").
Pages that contain significant, purposely false information, designed to
ruin people's reputations or cause other major harm, can be just as
dangerous as phishing or malware sites. They may not be directly damaging
to people's computers, but they can certainly be damaging to people's lives.
And presumably we care about people at least as much as computers, right?
So I would assert that the jump to a Google "dispute links" mechanism is
nowhere near as big a leap from existing search engine results as it may
first appear to be.
In future discussion on this topic, I'll get into more details of specific
methodologies that could be applicable to the implementation of such a
dispute handling system, based both within the traditional legal structure
and through more of a "Web 2.0" community-based topology.
But I wanted to note now that while such a search engine dispute resolution
environment could have dramatic positive effects, it is fundamentally an
evolutionary concept, not so much a revolutionary one.
More later. Thanks as always.
Lauren Weinstein http://www.pfir.org/lauren +1(818) 225-2800 lauren@pfir.org
Co-Founder, PFIR: People For Internet Responsibility - http://www.pfir.org
PRIVACY Forum - http://www.vortex.com Lauren's Blog: http://lauren.vortex.com
Re: USAF F-22 jets grounded by software glitch (RISKS-24.58)
<"Dr. Gregory Chapelle" <chapelle@ieee.org>>
Wed, 30 May 2007 09:04:39 -0700 (PDT)

This letter is to comment on a Risks to the Public article published in the
May 2007 ACM SIGSOFT Software Engineering Notes, in particular the article
"USAF F-22 jets grounded by software glitch" (RISKS-24.58).  I believe your
comment at the end of the article -- "However, the F-22 Raptor was
presumably unwrapped without the benefit of raptor simulation, testing, and
other preflight analyses.  Perhaps the quality control is going downhill"
-- was out of line.  I personally worked on the Raptor Integrated CNI
(Communications, Navigation, and Identification) system and can attest that
more than four years of extensive testing and analysis went into that
system.

I think the fundamental take-away from this is not "what a bunch of stupid
idiots", but rather: what was the basic development/testing process problem
that allowed this issue to slip through?  I think I can shed some light on
this.  Basically, with a complex system like this, the government (the Air
Force in this particular instance) has detailed specific performance
requirements that the system must meet.  A great deal of design and testing
goes into verifying that the system meets these functional requirements.
Even failure modes are addressed: where resources are shared, detailed
studies and testing are performed to keep classified operational computer
data from being accidentally released into the unclassified processors.
Extensive operational scenarios were developed, and detailed Rate Monotonic
Analyses were performed for each of these.

The hardest part in trying to meet this large number of requirements is to
step back and ask "what have I forgotten to test for?"  It's easy to test
and identify the written requirements in front of you, but much more
difficult to identify less obvious failure modes.  The real kicker for most
people when learning of this reported error is how obvious it is in
hindsight.  Why didn't we test for this obvious operational mode of
crossing the International Date Line?

This is probably the second take-away from this error.  For navigation, an
accurate time reference is the key that unlocks everything.  We tested for
timing errors in GPS, reacquisition of time if it was lost, and numerous
other "time" types of errors.  We were confident that we had addressed any
time-reference errors, but we never specifically addressed the
International Date Line.  In the rush to verify functional requirements, we
did not look carefully at our testing coverage, and because we danced
around similar failure modes, we were confident that we had "covered all
our bases".  Again, I would say that if we had stepped back and taken a
careful look at the "completeness" of our testing, we might have identified
this hole.
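To make the missed edge case concrete, here is a minimal, purely
hypothetical sketch of longitude handling (it bears no relation to the
actual F-22 software, whose details are not public): the antimeridian is
exactly the sort of discontinuity that routine, requirements-driven test
scenarios tend to miss.

  # Hypothetical sketch: a naive eastward position update that never
  # anticipates crossing the 180-degree meridian, plus the check that
  # would have exposed it.  Illustrative only.

  def advance_longitude_naive(lon_deg: float, delta_deg: float) -> float:
      # Works fine for every "normal" scenario tested near home base...
      return lon_deg + delta_deg

  def advance_longitude_wrapped(lon_deg: float, delta_deg: float) -> float:
      # ...but the result must be wrapped back into [-180, 180) at the
      # International Date Line.
      lon = lon_deg + delta_deg
      return (lon + 180.0) % 360.0 - 180.0

  if __name__ == "__main__":
      start, step = 179.5, 1.0                       # heading east across the date line
      print(advance_longitude_naive(start, step))    # 180.5 -- out of range
      print(advance_longitude_wrapped(start, step))  # -179.5 -- correct wrap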
So where does that put us today?  I think today the pressures on software
development to produce and test faster prevent a "stand down" moment.  The
fast pace does not allow reflective contemplation for an overall view of a
project's objectives and confirmation of adequate design and testing.  As
systems become more complex, there comes a need for a group or person
separate from the design group to cast a dispassionate eye over things and
independently verify that there are no "unresolved operational issues".
While I no longer work for the Integrated CNI group, I continue to work on
government programs and, even with 20+ years of experience, strive to
improve and do better.

I always find your "Risks to the Public" engaging and unnerving at the same
time.  I share pertinent articles with my design teams and will be
discussing the F-22 one with them too!  Thank you for helping to make
software systems better.

Dr. Gregory Chapelle <chapelle@ieee.org>  858.676.7361 (work)