It seems to me (and I may be wrong) that recent editions of Risks have contained much more about the everyday organisational effects of our societies becoming more dependent on fallible digital systems, and less than Risks used to carry about the engineering anomalies so engendered. I would like to note that this is not because there are fewer such anomalies! Fifteen years ago, there were occasional reports in Aviation Week and Space Technology of anomalies attributable to software or programmable-electronic hardware design. Now, there are many more. I have just read p.14 (of 58), the first page of the technical "news briefs", of the September 6, 2010 edition. It contains two such reports. This seems to me to be typical, and this time I am motivated to note the situation.

The date for the launch of the Space-Based Infrared System (Sbirs) early missile-warning satellite has slipped a little, to Spring 2011. "Flight software for Lockheed Martin-built spacecraft is still being tested following an anomaly that cropped up in a classified satellite with a similar architecture."

An unmanned, remotely piloted rotary-wing aircraft called the Fire Scout, flying out of NAS Patuxent River, apparently lost communications and headed out of its planned airspace and into restricted airspace used primarily for military helicopters transporting government officials around Washington and to Andrews AFB. The Fire Scout detected the loss of communications and "squawked" the appropriate code on its transponder, but "due to a software flaw, ignored preprogrammed commands that directed it to return to base under those conditions".

This is not the first time that such things have happened with remotely piloted vehicles. There was a well-known incident near Nogales with one of the Predator vehicles used by the Border Patrol. (Only a brief report in Risks, but still a report, in Risks-24.29 in 2006 by Mark Newton: http://catless.ncl.ac.uk/Risks/24.29.html#subj4.1 . The full report is available through the synopsis at: http://www.ntsb.gov/ntsb/brief.asp?ev_id=20060509X00531&key=1 )

My own experience is that such incidents in aviation, put down to anomalies with programmable-electronic systems, are increasing. I regret that there aren't enough Risks contributors to keep up with the news!

Peter Bernard Ladkin, Causalis Limited and University of Bielefeld www.causalis.com www.rvs.uni-bielefeld.de
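  [The reported behaviour (detect the lost link, squawk the appropriate
  transponder code, then execute the preprogrammed return-to-base commands)
  is essentially a small contingency state machine. A minimal, entirely
  notional sketch of the intended logic follows; all names are hypothetical,
  since the actual Fire Scout software, and the nature of its flaw, has not
  been published.

    # Notional sketch of lost-link contingency logic (hypothetical names;
    # not the actual Fire Scout flight software).

    LOST_COMM_SQUAWK = "7600"  # standard lost-communications transponder code

    class LostLinkHandler:
        def __init__(self, link, transponder, autopilot):
            self.link = link
            self.transponder = transponder
            self.autopilot = autopilot
            self.contingency_active = False

        def tick(self):
            """Called once per control cycle."""
            if not self.link.is_up() and not self.contingency_active:
                self.contingency_active = True
                # Both actions are supposed to fire together; per the
                # report, the squawk happened but the return-to-base
                # command was ignored.
                self.transponder.squawk(LOST_COMM_SQUAWK)
                self.autopilot.execute_return_to_base()

  One plausible reading of the incident is that the squawk path and the
  return-to-base path were validated separately, so whatever condition gated
  the second action was never exercised in test.]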
On the evening of Monday 13 Sep 2010, the Chase Bank online banking service (which serves almost 17 million customers, including me) stopped working; service wasn't restored until two days later. From a report in *Computerworld*: <http://www.computerworld.com/s/article/9186238/JPMorgan_Chase_deposits_blame_sort_of_for_outage_>

The bank, the nation's second largest, said in a separate statement that a "third party database company's software caused a corruption of systems information disabling our ability to process customer log-ins to chase.com." It added that the problem "resulted in a long recovery process."

Despite that statement, being unable to log in was far from the only problem. Chase was also unable to process any online payments during the outage; payments scheduled for Monday and Tuesday weren't made until Wednesday, likely resulting in lots of customers seeing late fees from their creditors. (Chase has promised to reimburse customers for any late fees.)

One thing that struck me as very poorly worded in the *Computerworld* article was this sentence: "During the outage, customers couldn't use the online banking site, but the bank's ATMs, branches and call centers were unaffected." Call centers most definitely were affected: they were overwhelmed by the huge number of online users who reached for their phones when they couldn't log in to the Chase website.

I'm not a risks expert, and I admit I'm finding it difficult to assess the risks given how little information I have about the failure. But I wouldn't be surprised to learn the "third party database" had been upgraded just before the outage.

There is at least one risk they anticipated and mitigated: a student at my university tried to register the domain name f*ckchase.com, but found out it was already registered by Chase!
"The Society of Broadcast Engineers has alerted members that a radio ad for an oil company [ARCO, a BP subsidiary] is tripping EAS units." (EAS is the modern incarnation of what used to be the "Emergency Broadcast System".) [EAS header tones at the beginning of the ad apparently sometimes trigger the EAS, at the proper frequencies and approximately correct data rate. PGN] http://www.televisionbroadcast.com/article/106294
http://www.freedom-to-tinker.com/blog/felten/why-did-anybody-believe-haystack

Haystack, a hyped technology that claimed to help political dissidents hide their Internet traffic from their governments, has been pulled by its promoters after independent researchers got a chance to study it and found severe problems. This should come as a surprise to nobody. Haystack exhibited the warning signs of security snake oil: the flamboyant, self-promoting front man; the extravagant security claims; the super-sophisticated secret formula that cannot be disclosed; the avoidance of independent evaluation.

What's most interesting to me is that many in the media, and some in Washington, believed the Haystack hype, despite the apparent lack of evidence that Haystack would actually protect dissidents. [...]

Jeremy Epstein, Senior Computer Scientist, SRI International, 1100 Wilson Blvd, Suite 2800, Arlington VA 22209 703-247-8708, firstname.lastname@example.org
At 1300 EDT, 9 Sep 10, the US Computer Emergency Readiness Team (US CERT) received a report from open source media (via the Department of Homeland Security Office of Public Affairs) regarding fast-spreading malicious e-mail activity, resulting in distributed denial-of-service type impacts against e-mail enterprise servers. US CERT has since received multiple reports from both Federal agencies and private sector entities experiencing similar impacts and is now in the process of collecting and analyzing samples of the malware and developing mitigation strategies. These attacks have the potential to prevent, at a minimum, the efficient operations of US Government e-mail systems.

The e-mail titled "Here you have" or "Just for you" has a PDF attachment which, when clicked/opened, e-mails the malware (from the worldwide web domain name "members.multimania.co.uk") to the addresses contained in the user's local/Global Address List.

At 1830 EDT 9 Sep 10, the Department of Homeland Security (DHS) National Cyber Security Center (NCSC) and the National Cybersecurity and Communications Integration Center (NCCIC) conducted a conference call to report 6 Federal agencies and an unknown number of private sector companies have been affected by a computer virus that exploits e-mail address books/Global Address Lists. The NCSC reports a major media outlet is reporting on this story, and Public Affairs guidance is being put together by the DHS National Protection and Programs Directorate (NPPD).

The DHS NOC conducted a component and interagency blast call at 1858 EDT 9 Sep 10 with no amplifnue [sic] to monitor this event and will publish further reports as warranted.
This is a lovely tale of confusion in junkmail software. A parametric salutation (such as "Resident" or "to our neighbor at ...") was supposed to be supplied by the junkmail sender, but in its absence the field defaulted to "The Slug", resulting in mass mailings going out "To THE SLUG at" the designated address. [PGN-ed] http://thedailywtf.com/Articles/Similar-to-Snail-Mail.aspx
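  [The underlying pattern deserves a sketch: a template field whose default
  is a developer's placeholder, instead of a loud failure when the caller
  forgets to supply a value. A minimal illustration in Python; the names are
  hypothetical, not the actual vendor's code.

    # Minimal sketch of the placeholder-default bug (hypothetical names).

    DEFAULT_SALUTATION = "THE SLUG"  # a developer's placeholder, never meant to ship

    def render_label(address, salutation=None):
        """Render one mailing label for a mass mailing."""
        # BUG: a missing salutation silently falls back to the placeholder.
        # Safer: raise ValueError("salutation is required") instead.
        return "To %s at %s" % (salutation or DEFAULT_SALUTATION, address)

    print(render_label("123 Main St"))  # -> To THE SLUG at 123 Main St

  Failing fast, i.e. refusing to render a label without a real salutation,
  would have turned a mass-mailing embarrassment into a caught configuration
  error.]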
Recently, I was trying to diagnose a problem that prevented my Windows XP computer from running Java applications. (It turned out that the problem was an interaction between the Java runtime and the ClearType gamma setting, but that's a story for another time...) Anyway, I was searching for clues, and tried a Google search on "javaw.exe" (the Java runtime executable, which in my case was starting up and then immediately exiting without any indication of why).

The search turned up a number of websites where it appears that people "vote" to rate the safety and security of various system files. For example: http://www.neuber.com/taskmanager/process/javaw.exe.html

I found it intriguing that, much like the news these days, computer security was coming down to little more than popular opinion. On this site in particular, we find about javaw.exe that:

  156 users rated it as not dangerous.
   17 users rated it as not so dangerous.
   63 users rated it as neutral.
   33 users rated it as a little bit dangerous.
   25 users rated it as dangerous.
   21 users didn't rate it ("don't know").

Along with the votes were comments like, "It's a memory hog," "It will eat memory like crazy on a powerful computer if left alone" and "It is a gateway for pop-up ads," in addition to ones like, "Part of the SUN Java runtime. Needed for Java programs."

Another more amusing example is dwwin.exe. This is the executable for Dr. Watson, Microsoft's post-mortem bug logger. Since Dr. Watson launches automatically after an application crashes with an unhandled exception, people conclude that it's Dr. Watson itself that is crashing or causing the crash.

Steve Schafer
Noam Cohen, *The New York Times*, 17 Sep 2010 At the end of each performance of the Agatha Christie play "The Mousetrap," the person revealed to be the murderer steps forward and tells the audience to "keep the secret of whodunit locked in your heart." Even after 58 continuous years of performances in the West End of London, the play's twist ending has been largely preserved by reviewers, guidebook writers and the great bulk of the estimated 10 million people who have seen the play. There is one notable exception: Wikipedia. The encyclopedia's article about the play succinctly summarizes its two acts and then, in a single sentence, even more succinctly explains who the killer is. That shocked Christie's grandson, Matthew Prichard, who told the British newspapers late last month that he was dismayed to learn that Wikipedia could not keep a secret. ... http://www.nytimes.com/2010/09/18/business/media/18spoiler.html
I guess when everything published on the web is visible to all, there is a risk of someone obtaining your financial details if they are released into such a widespread public domain. Consider this file: http://www.cpanelskindepot.com/scripts/paypal/Download.csv

I found this when searching Yahoo for someone with an unusual Dutch name. Yahoo Search listed this link in the results. I then downloaded the CSV, imported it into Excel, did a Select All and parsed the data records. Then I did a Find for the person's name. He/she was there, listed quite a few times.

But against the Yahoo listing was the link "Cached", which when clicked upon listed the CSV file's contents again, all nicely parsed. Even less privacy was discovered when I went to Archive.org to search the Wayback db: http://web.archive.org/web/*/http://www.cpanelskindepot.com/scripts/paypal/Download.csv

And there was the file again, actually two versions: http://web.archive.org/web/20080306014309/http://www.cpanelskindepot.com/scripts/paypal/Download.csv http://web.archive.org/web/20080416011242/http://www.cpanelskindepot.com/scripts/paypal/Download.csv

When private financial details and transactions are uploaded to the Internet, it doesn't take much for them to be captured and archived for all to access, for all time. This places a HUGE burden of trust on companies such as CPanelSkinDepot.com NOT to publish such details. Sadly, that trust is betrayed all too frequently.
"Intel Threatens to Sue Anyone Who Uses HDCP Crack" http://www.wired.com/threatlevel/2010/09/intel-threatens-consumers/ [If your technology is flawed or has been compromised, try legal remedies, e.g., under the DMCA.] Paul Kocher, chief scientist at Cryptography Research in San Francisco, said in a recent interview that somebody in the business of making HDCP-compatible devices, who had access to at least 50 individual device keys, would have been able to reconstruct the master key by analyzing "mathematical similarities" in the individual device keys. He said that was a vulnerability in the technology that was bound to be exploited. See also http://www.tomshardware.com/news/hdcp-master-key-copy-protection,11311.html
So, we say that pilots are mis-trained if they are allowed to use overly vigorous rudder actions that could lead to structural failure in real life. Should we require the simulators to include finite-element models of each screw and strut to predict structural failure? I think that's unrealistic. Should we include a disclaimer at the start of every simulator training session? "Warning: simulations may be inaccurate." That would have as little positive effect as my GPS does when it warns me not to use it for navigation every time I turn it on.

I think that simulators could include simple and practical tests to detect when the operating state goes outside the realm of proven validity. When the tests fail, I think that the simulation should halt. Figuratively speaking, a deliberately placed blue screen of death is the appropriate response.

Under the current culture, it is considered unacceptable for the simulator to stop or crash under any circumstances. That policy should be reversed. An abrupt and unexpected blue screen of death would provide a memorable lesson that "this is not reality."
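  [Such a guard need not be elaborate. A minimal sketch of the idea in
  Python; the limits and state names are invented for illustration, since
  a real simulator's validated envelope would come from its flight-test
  data.

    # Sketch of a validity-envelope guard: halt rather than extrapolate
    # beyond the region where the flight model has been validated.

    class OutsideValidatedEnvelope(Exception):
        """The deliberate 'blue screen of death': this is no longer reality."""

    VALIDATED_LIMITS = {           # hypothetical flight-test validation limits
        "angle_of_attack_deg": 25.0,
        "sideslip_deg": 15.0,
        "rudder_rate_deg_per_s": 40.0,
    }

    def check_envelope(state):
        """Call once per simulation frame; raise when validity is exceeded."""
        for name, limit in VALIDATED_LIMITS.items():
            if abs(state[name]) > limit:
                raise OutsideValidatedEnvelope(
                    "%s = %.1f exceeds validated limit %.1f"
                    % (name, state[name], limit))

    try:
        check_envelope({"angle_of_attack_deg": 12.0,
                        "sideslip_deg": 22.5,            # outside proven validity
                        "rudder_rate_deg_per_s": 10.0})
    except OutsideValidatedEnvelope as e:
        print("SIMULATION HALTED:", e)   # memorable: "this is not reality"

  The interesting design choice is that the check is cheap and per-frame, so
  it can ride along with any flight model without the finite-element
  apparatus dismissed above.]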
> The obvious RISK of this design is that your privacy is dependent on > John Smith's decisions, not yours. If you send an e-mail to a third party, they generally acquire your e-mail address. After that you have no control over what they do with it. If you have privacy concerns in this respect you should not send such third parties e-mail without obscuring your address. That is your decision, not theirs. Your privacy is always dependent on your own decisions (except where legislation applies). Understanding complex systems well enough to make those decisions correctly in daily life is another matter. Undoubtedly there are Risks there.
Curt Sampson's comment on the Randy Stross column in *The New York Times* (Sep 4, 2010) contains several dangerous statements, in my not-so-humble opinion. He states, "clearly I'm preaching to the converted in this forum." Well, sorry. I am not one of his converted. In fact, his views are based on false premises, namely, that complex passwords are essential for security and that the security problem is due to a lack of proper education: maybe we should have a "Passwords for Dummies" book, he suggests. Sampson claims to be a security expert. It is security experts who need to change, not the general public. Would anyone be surprised to learn that I resent being called a "dummy"? Security administrators need to understand human behavior, not engineering technology.

Worse, Sampson relies on the article alone without doing the simple homework of reading the articles by the people Randy Stross was quoting and relying upon. Rule: Don't criticize unless you do your homework.

Disclaimer: I am one of the sources. The *NY Times* column was a necessary simplification, but I think it was well done. As part of the disclaimer, I also had lunch with Stross (about a mile from our esteemed moderator's place of work) where we discussed these issues among others (after the article had been written, after he had done a phone interview with me for the article, but before publication). The other major contributor to Stross's opinion piece was Cormac Herley from Microsoft Research. Both of us also cited many other research studies that buttress our points. Both of us have papers that are really easy to find:

http://research.microsoft.com/pubs/132623/WhereDoSecurityPoliciesComeFrom.pdf
http://www.jnd.org/dn.mss/when_security_gets_in_the_way.html
http://www.jnd.org/dn.mss/with_safety_and_secu.html

Bruce Schneier: Only amateurs attack machines; professionals target people.

Sampson seems to think that faulty education is why we have bad security. So he himself seems to use a unique password for every system he uses, intermixing case, digits and letters, etc. How does he remember them? He writes them down. In the safety business, this is called a "single point of failure."

In my articles and books, I show that overly secure passwords (and locked doors) get in the way of the work, so people bypass them. But many other people (e.g., Herley and Schneier) show that password weakness is NOT the major cause of break-ins. Who cares how secure the password is if there is a key logger in place? Or if the password is written down and pasted on the front of the monitor? Or if people will help you log in when you have "forgotten" your password? (The solution, by the way, is NOT to make people less helpful.) Most security breaches are inside jobs, USB drives (see RISKS-26.16), key loggers, etc. Although the RISKS digest is not a reviewed publication and its coverage is clearly biased toward the spectacular, it is still a useful guide: I don't recall many instances of guessed passwords or brute-force attacks. (Brute-force attacks are the easiest to circumvent through simple technology, which is why you don't see them. The onerous password requirements are mainly directed against brute-force attacks. Duh.)

The correct solution requires several things:

1. Do not think of people who use computer systems as "dummies." Some of the biggest breaches have happened to senior managers who, trying to get their work done, circumvented the onerous security rules (that they themselves probably authorized).

2. Recognize that extra-strong security gets in the way of legitimate users of the system, so it is apt to be compromised by the most dedicated workers (as well as by many not so dedicated).

3. Use secondary and tertiary checks. Two independent weak methods are likely to be stronger than any single strong method, once people's natural behavior is taken into account. (For example, if each weak check is circumvented 10% of the time, and the circumventions are independent, both fail together only 1% of the time.)

4. Recognize that this is a difficult problem and that no simple solution will be adequate.

5. Read the National Research Council's report "Toward Better Usability, Security, and Privacy of Information Technology" (URL below). (Yes, I was on that committee.) And note the three topics: Usability, Security, Privacy. They are tightly linked. And attend (and listen to) the yearly SOUPS, the Symposium On Usable Privacy and Security (URL below).

http://www.nap.edu/catalog.php?record_id=12998
http://cups.cs.cmu.edu/soups/2010/

Each field alone has its problems. Security experts think primarily about more onerous security requirements. Usability experts want everything to be really easy. Privacy experts want privacy and visibility of how personal information is used, while some (e.g., medical researchers) need access to everyone's records (anonymity is OK, as long as individuals can be tracked). These views seem mutually contradictory. So any solution requires members of all three communities to work in tandem. You can't solve the problem in isolation. It is not just a technical problem. It is technical, social, psychological, political.

Don Norman, Nielsen Norman Group email@example.com http://www.jnd.org
The discussion about overly complex password rules reminds me of sage advice that Digital once published in a VAX security manual. I'll paraphrase: The definition of security must be broad. Security aims to see that authorized users, and only authorized users, succeed in doing their jobs.

The modern definition of computer security seems much narrower. It focuses on preventing unauthorized use and malware. If security procedures hinder authorized users from doing their jobs, security still succeeds under the narrow definition, but fails under Digital's broader one. An onerous password policy is a form of denial-of-service attack.

Might things improve if we made security people responsible for the productivity of the good guys as well as the denial of the bad guys?
An additional irony of keyloggers is that the bad guys can typically see your password better than you can, since they don't see every character replaced by a black blob. Only a very few programs (7-Zip, when asking for the password on a protected archive, springs to mind) let you check a box that says, in effect, "I do not fear Tempest scanning, and there is nobody else in the room. Please let me see this password as I type it."

To impose passwords like fH%JK43-oe9 and then prevent people from seeing what they're typing is just sadism. It must cost millions per year in password-reset costs, even with automated delivery of new passwords to e-mail addresses.

I've added this functionality to the web applications that I maintain. I suggested its addition to a site that I use frequently, where I have contact with the development team, and which has no major, banking-style security issues. Their reply was, "We've decided not to do this, because it's not an industry-standard practice."
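  [The author's change was to his web applications; as a command-line
  analogue of the same idea (illustrative code, not his implementation),
  letting the user opt in to seeing what they type takes only a few lines:

    # CLI analogue of a "show password" checkbox: let the user choose
    # whether their typing is echoed, instead of always suppressing it.
    import getpass

    show = input("Nobody looking over your shoulder? Show password? [y/N] ")
    if show.strip().lower() == "y":
        password = input("Password (visible): ")
    else:
        password = getpass.getpass("Password (hidden): ")

  The point is that visibility is a user decision about their surroundings,
  which the software cannot judge on their behalf.]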