In the light of the Buncefield Oil Depot explosion, we should all consider what local events beyond our control may do to our precious computer and control systems. A computer system at a Cambridge hospital used for patient information such as admissions and discharges experienced some problems because of a fire at the Buncefield oil depot in Hertfordshire. A company providing some IT services to Addenbrooke's Hospital was based at the industrial park near the depot and was destroyed in the fire. It was expected to take a week to get the computer system up again, although reportedly no medical services were affected. [BBC report; PGN-ed]

Paul E. Bennett, Systems Engineer, UKAEA-JET, Culham Science Centre, Abingdon, Oxon OX14 3DB Tel: 01235-464884
The explosion and fire at the fuel depot near Hemel Hempstead, Hertfordshire: http://images.thetimes.co.uk/TGD/picture/0,,250768,00.jpg

Connection with computers? Well, several nearby installations were wrecked (amazingly, no-one was seriously injured), one of which contained the electronic patient records of Addenbrooke's Hospital, Cambridge. The hospital reported that it would have to rely on paper records for several days until the computer files could be restored. On the positive side, at least they had back-up. On the other hand, their disaster recovery planning seems to be a bit slack.

Peter Mellor, Centre for Software Reliability, City University, London EC1V 0HB +44 (0)20 7040 8422 Pete Mellor <firstname.lastname@example.org>
Fascinating... Per the attached clip from the *Wall Street Journal*, Florida (FLORIDA!) courts have been agreeing that defendants in "driving while under the influence" cases have a right to full disclosure of the software used in the equipment doing the measuring. Imagine if this logic were followed through to the equipment being slid into election vote counting!

"A court fight in Florida over the software used in the instruments that detect alcohol in breath could threaten the ability of states and localities to prosecute drunk drivers.

"The battle is over the source code of breath analyzers made by CMI Group, a closely held maker of breath-alcohol instruments. Defense lawyers have challenged the use of the device and asked to see the original source code that serves as its computer brain, saying their clients have the right to examine the machine that brings evidence against them.

"Last February, a state appeals court in Daytona Beach ruled that Florida had to produce 'full information' about the test that establishes the blood-alcohol level of people accused of driving under the influence, or DUI. Otherwise, the court said, the evidence is inadmissible..."

Rest at: http://online.wsj.com/article_print/SB113470249958424310.html
Ivars Peterson, The Risky Business of Spreadsheet Errors
*Science News*, Week of 17 Dec 2005
http://www.sciencenews.org/articles/20051217/mathtrek.asp

Spreadsheets create an illusion of orderliness, accuracy, and integrity. The tidy rows and columns of data, instant calculations, eerily invisible updating, and other features of these ubiquitous instruments contribute to this soothing impression. At the same time, faulty spreadsheets and poor spreadsheet practices have been implicated in a wide variety of business and financial problems.

[PGN-excerpted from a nice article with a bunch of references, including Ivars' 1996 book, Fatal Defect: Chasing Killer Computer Bugs, which itself cited some earlier RISKS reports. The last two references are particularly relevant: The European Spreadsheet Risks Interest Group (EuSpRIG) has a Web site at http://www.eusprig.org/. Spreadsheet Research, maintained by Ray Panko of the University of Hawaii, is a repository for research on spreadsheet development, testing, use, and technology: http://panko.cba.hawaii.edu/ssr/.]
Here is an interview that is very suitable for passing on to your non-technical friends who don't understand why you are so morbidly fascinated with risks. The interviewee is James Reason, Emeritus Professor of Psychology, University of Manchester in the U.K. Professor Reason has appeared in RISKS a few times (4.52, 10.31, 21.48, 23.24) and is well known for the "Swiss Cheese Model".

The interview was released by the Australian Broadcasting Corporation (ABC) this morning, a repeat from 16 May 2005, and covers:
* absentmindedness,
* the Tenerife disaster (1977, two Boeing 747s collide),
* no remedial benefit from blame,
* root cause analysis,
* the Gimli Glider.

It's available as an MP3 file: http://abc.net.au/rn/podcast/feeds/health_20051219.mp3
A transcript: http://www.abc.net.au/rn/talks/8.30/helthrpt/stories/s1529677.htm

James Cameron http://ftp.hp.com.au/sigs/jc/
Last month my wife got a CAN$0.03 credit on her Toronto Dominion Visa bill, labelled "Leap year — interest credit." The note says "The leap year interest credit on your statement is a correction for an overcharge in the 2004 leap year." I don't remember seeing one of these for 2000. Interesting that they would get that right and 2004 wrong. Incidentally, the bill has our US ZIP code printed with Canadian spacing: "940 25".

email@example.com Tel: +1 650 485 2818 Fax: +1 650 485 1103
Agilent Technologies MS 24M-A, 3500 Deer Creek Road, Palo Alto CA 94303
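One plausible mechanism for such a bug (pure speculation; the bank hasn't said what went wrong) is a per-diem interest calculation that hard-codes a 365-day year. A minimal sketch, with a made-up balance and rate, of how that yields a few cents per account in a leap year:

    def days_in_year(year):
        """Gregorian rule: every 4th year, except centuries not divisible by 400."""
        return 366 if year % 4 == 0 and (year % 100 != 0 or year % 400 == 0) else 365

    balance, apr = 60.00, 0.19              # hypothetical balance and annual rate
    wrong = balance * (apr / 365) * 366     # 366 days billed at a 365-day per-diem
    right = balance * (apr / days_in_year(2004)) * 366
    print(f"over-charge for 2004: ${wrong - right:.4f}")   # about $0.03

Note that 2000 *is* a leap year (divisible by 400), the case that century-rule bugs usually get wrong; mishandling 2004, a plain every-fourth-year case, while getting 2000 right is the more surprising failure, which is the author's point.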
The same three numbers (5-0-9) came up in the same order on 16, 17, and 18 Dec 2005 in the Kansas Lottery Pick Three. On the third night, many people apparently chose 5-0-9, costing the lottery nearly twice what was paid in. Lottery security officials insist that the system was working normally. (Perhaps the random-number generator had gone to seed?) [PGN-ed. Thanks to Lauren Weinstein for spotting this one.]
http://abcnews.go.com/US/story?id=1425383
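For what it's worth, the arithmetic under the officials' fair-draw hypothesis is straightforward, assuming independent, uniform draws:

    # Chance that the next two nights both repeat a given night's ordered triple:
    p_repeat_twice = (1 / 1000) ** 2      # each draw picks one of 000..999
    print(p_repeat_twice)                 # 1e-06: one in a million

    # With ~365 first-night draws a year, such a streak should be roughly a
    # once-in-millennia event for any single state's Pick 3:
    print(1 / (365 * p_repeat_twice))     # ~2740 years between streaks

Rare is not impossible, of course, but the suspicion about the generator writes itself.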
Re: Trading Error Leads to $225 Million Loss for Japanese Firm

As per the information in the Reuters item http://asia.news.yahoo.com/051211/3/2c7vk.html the actual loss may be lower or higher than the $225 million, as the amount of the premium that will need to be paid to buy back shares is still to be determined. Incidentally, the sell order was for about 41 times the number of shares actually outstanding.

It turns out that the Tokyo Stock Exchange's own software was responsible for part of the problem, as it prevented the cancellation of the order from being processed! See http://www.yomiuri.co.jp/dy/business/20051210TDY08010.htm which says:

  Observers also said the TSE held some responsibility for the incident
  because it accepted the unusual sell order. The TSE does not have a
  system to automatically detect an unusual order, and the bourse will
  come under pressure to remedy this situation.

Also note that this is NOT the first time this has happened at the TSE, and they have yet to fix their system! RSH

[From the same article...]

  Incident not without precedent

  Thursday's incident was not the first time a large-scale errant order
  was placed in the nation. In November 2001, UBS Warburg (Japan) Ltd.
  (now UBS Securities Japan Ltd.) issued an order to "sell 610,000 shares
  at 16 yen each," instead of "sell 16 shares at 610,000 yen each," for
  newly listed Dentsu Inc. stock on the First Section of the TSE. It is
  believed that UBS Warburg incurred significant losses due to this
  erroneous order. In December of the same year, Deutsche Securities Ltd.
  made a massive sell order for Isuzu Motors Ltd. stock, but the order
  was not processed because it was made just before the market closed.
  In both cases, the price and number of shares were inadvertently mixed up.

Note added Mon, 26 Dec 2005: [More fallout from the error... with a better number on the loss actually suffered.]

Exchange chief resigns over 'fat finger' error, From Leo Lewis in Tokyo
*The Times*, 21 Dec 2005
http://business.timesonline.co.uk/article/0,,13133-1948579,00.html

The president of the Tokyo Stock Exchange resigned yesterday to take responsibility for the "fat-finger" trading error that sparked a day of mayhem on Tokyo markets earlier this month. Takuo Tsurushima resigned along with Sadao Yoshino, the bourse's managing director, and Yasuo Tobiyama, its head of computer systems. The incident has left considerable turmoil in its wake: Mizuho Securities lost 40 billion yen (£195 million) on the botched trade and two Japanese day traders made Y2.5 billion in a few minutes. Western investment houses who made money from the error have been publicly criticised by the Japanese Government and agreed to pay the profits they made into an investors' protection fund. Losses from the trade were sufficient to force Mizuho to cancel all end-of-year bonuses from the securities arm. The trader, believed to be a 24-year-old woman relatively inexperienced on the dealing floor, had wanted to sell one share in J Com, a new telecoms firm, for Y600,000. She mistyped the order and sold 600,000 shares at Y1 each.
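The Yomiuri point that the TSE has no "system to automatically detect an unusual order" suggests an obvious safeguard. A minimal sketch of such a pre-trade sanity check; the 5% and 50% thresholds and the share figures are illustrative assumptions, not actual TSE or Mizuho parameters:

    def check_order(qty, price, shares_outstanding, last_price):
        """Pre-trade sanity check; thresholds are illustrative, not the TSE's."""
        problems = []
        if qty > 0.05 * shares_outstanding:
            problems.append("quantity exceeds 5% of shares outstanding")
        if not (0.5 * last_price <= price <= 1.5 * last_price):
            problems.append("price more than 50% away from last trade")
        return problems

    # The J-Com mistype: 600,000 shares at 1 yen, against roughly
    # 600,000 / 41 shares outstanding and a quote near 600,000 yen.
    print(check_order(600_000, 1, 600_000 // 41, 600_000))
    # -> both checks fire; either one alone should have held the order for review

Either check catches both the J-Com and Dentsu transpositions, since swapping price and quantity violates the price band and the float limit simultaneously.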
The claim that "he thought Wikipedia was a gag site" (RISKS-24.12) seems unlikely; I see it on a par with those who say "no, I was just doing research" when caught hacking or visiting dubious web sites. Yet this seems to have caught the attention of some parts of the media who don't usually see visiting those sites as plausible research. The suggestion is that it is reasonable for somebody to be so mistaken as to think Wikipedia is a "gag" site. While some of the information there may not be 100% accurate, it's hard to see how this apparently mistaken view can be seen as a genuine defence.

Ian W Halliday, BA Hons, SA Fin, ATMG, CL
Despite the fact that there are many different map atlas programs for the US (although this entry concerns the UK), they all use the same map database. Why? Because there is *only one available*, unless you care to compile your own. But this presents problems.

For example: using *any* map atlas program for the US, tell it to show you the intersection of Amboy Road and Wilson Road in Twentynine Palms, CA. This is a remote desert area, and only Amboy Road is paved ... in a manner of speaking. Any of these programs will show a Mercedes-Benz-logo-shaped triad of three roads running south from this intersection. Take it from one who lived in that area for many years: none of those roads exist. The database was compiled from USGS topo maps, and the one for that area is dated (ISTR) 1953; if you are told to "turn right" at that point and blindly do, you will piss off a lot of local residents, because you will take out their mailboxes.

And The Moral Is: such programs should ALWAYS be taken with a grain of salt, even in urban areas. And: the farther from urban areas you get, the less reliable these programs are likely to be.
> if a sat nav system told you to jump off a cliff

Pals armed with the coordinates of my cliff-top estate ended up at the bottom of the cliff, and had to pay a local boozer to guide them back up the wasted 15 km to the top.

Moral: X and Y may be 100% right, but without considering Z, your sat nav system just gets you into more trouble.
Accepting instructions that are fairly obviously wrong (e.g., one-way streets tend to have signs that indicate the restriction) can be a small problem. Pinpoint lane accuracy can be a problem in specific locations where divergent destinations depend on this accuracy. A harder-to-address problem with GPS navigation can be the reliance upon simple geography...

When I was consulting to IBM in Los Angeles some years ago, one of the team was given a Hertz car with the NeverLost system. Traveling together, we had to ignore its turning suggestion after encountering roadwork, and were impressed: a route recalculation showed us the new way to reach the same destination. We experimented with how clever this system was, and discovered a limitation we had not considered. Heading west back towards LA very late at night, we turned off the freeway and asked the system whether it could find us a new route to central LA. Alas, the route it chose took us through what could most charitably be called a 'rundown' area. In fact, we were horrified to discover we seemed to have found a route through what we later found out was one of the most dangerous areas in Los Angeles.

GPS systems can hardly be programmed to avoid seedy neighborhoods without political uproar. On the other hand, there are roads that shouldn't be traveled at some times of the day...
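Time-of-day awareness is easy to express in a router once you admit it into the cost model; the hard part is the politics of choosing the penalties, not the algorithm. A minimal sketch (the graph, the penalties, and the night window are illustrative assumptions, not how NeverLost actually works):

    import heapq

    def shortest_path(graph, start, goal, hour):
        """Dijkstra over edges whose cost can depend on the hour of day.

        graph: {node: [(neighbor, base_cost, night_penalty), ...]}
        The night_penalty models 'roads best avoided late at night'.
        """
        dist = {start: 0.0}
        heap = [(0.0, start)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == goal:
                return d
            if d > dist.get(node, float("inf")):
                continue
            for nbr, cost, night_penalty in graph.get(node, []):
                w = cost + (night_penalty if hour >= 22 or hour < 6 else 0.0)
                if d + w < dist.get(nbr, float("inf")):
                    dist[nbr] = d + w
                    heapq.heappush(heap, (d + w, nbr))
        return float("inf")

    # Daytime, the short cut wins; late at night, the penalty reroutes us.
    g = {"off_ramp": [("shortcut", 5, 30), ("boulevard", 12, 0)],
         "shortcut": [("downtown", 5, 30)],
         "boulevard": [("downtown", 8, 0)]}
    print(shortest_path(g, "off_ramp", "downtown", hour=14))  # 10, via shortcut
    print(shortest_path(g, "off_ramp", "downtown", hour=23))  # 20, via boulevard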
Note that road coordinates are not known with perfect accuracy either. Unless someone with a GPS has surveyed the road recently, the coordinates may have been lifted from a paper map and translated through several datums. For that matter, driving directions may use outdated one-way and turn-restriction information. This used to be especially obvious in Boston during the Big Dig, when the airport exit changed every few weeks. In the end, it's a lot of fuzz.
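To get a feel for the size of the datum error alone, here is a sketch using the pyproj package (an assumption; any PROJ binding would do). It deliberately treats OSGB36 coordinates from an old paper map as if they were WGS84, the kind of mismatch digitization can introduce; the point is an arbitrary London-area example, and the exact figures depend on which transformation PROJ selects:

    from math import cos, radians
    from pyproj import Transformer

    # Interpret OSGB36 (EPSG:4277) geographic coordinates as WGS84 (EPSG:4326):
    to_wgs84 = Transformer.from_crs("EPSG:4277", "EPSG:4326", always_xy=True)
    lon, lat = -0.10, 51.50
    new_lon, new_lat = to_wgs84.transform(lon, lat)

    # ~111 km per degree of latitude; scale longitude by cos(latitude).
    m_ns = abs(new_lat - lat) * 111_000
    m_ew = abs(new_lon - lon) * 111_000 * cos(radians(lat))
    print(f"datum shift: {m_ns:.0f} m N/S, {m_ew:.0f} m E/W")

Shifts on the order of tens of meters are routine, easily enough to put a fix on the wrong side of a road.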
> ... This provides anonymity for spammers, scammers, phishers, and other
> illegal activities, and untraceability for malware-containing sites.

It also provides relative anonymity for people like paralegal Pamela Jones, who operates groklaw.net, an award-winning web site dedicated to reporting on and analyzing "legal events important to the [Free and Open Source Software] community". Her relentless digging into the SCO lawsuits has made her the target of harassment and defamation by SCO and its supporters, such as journalist Maureen O'Gara - ask Google for the sordid details.

Dag-Erling Smørgrav - firstname.lastname@example.org
I just hope that the GAO knows the difference between "unknown" and "withheld". My domain name is registered in the UK, and because UK and European data protection laws apply to personal data, a WHOIS query doesn't return certain information.
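The withholding happens server-side: WHOIS itself is just a line-oriented TCP protocol (RFC 3912), and the registry simply omits or redacts fields for opted-out registrants. A minimal sketch of a raw query against Nominet's server, using a placeholder domain:

    import socket

    def whois(domain, server="whois.nic.uk"):
        """Minimal RFC 3912 client: one query line out, read until EOF."""
        with socket.create_connection((server, 43), timeout=10) as s:
            s.sendall(domain.encode("ascii") + b"\r\n")
            chunks = []
            while chunk := s.recv(4096):
                chunks.append(chunk)
        return b"".join(chunks).decode("utf-8", errors="replace")

    # For a .uk registrant who has opted out under data protection law,
    # the reply comes back with the personal address fields omitted.
    print(whois("example.co.uk"))  # placeholder domain

Nothing in the reply distinguishes "the registry never knew this" from "the registry knows but won't say", which is exactly the unknown-versus-withheld distinction at issue.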
[Rick Jones submitted this comment on the item in RISKS-24.12. Many thanks! PGN]

<http://www.kron.com/Global/story.asp?S=4226663>

While the RISKS text talks as if the Golfland was on a terrorist watch list, as in a list of presumed potential terrorists, a close look at the text on kron.com shows:

  "The moment we realized it was on the list, it was taken off," said San
  Jose police officer Rubens Dalaison, who handles "critical
  infrastructure assessment" for the department. "I myself took it off."

Now, the rest of the text says things like "watch list", which sounds like the Mr. Smith Will Go to Guantanamo list; however, IIRC, the "critical infrastructure assessment" bit suggests that someone listed the Golfland as a piece of critical infrastructure, that is, as a potential terrorist target, not as a potential terrorist. Is the risk in what KRON published, how it was read by different people, that a Golfland was on a list in the first place, or all of the above?-)

What is left open is whether any of the _other_ Golflands are considered critical infrastructure, and perhaps how many people feel that a Golfland is more critical infrastructure than, say, Fortress US Capitol...

[I think PGN was in Goofland, not Golfland, when he PGN-ed the item. PGN]
Here is a clear, relatively concise (13 pages), and detailed description and demonstration of a solution to a particular RISK that we're probably all familiar with.

---------- Forwarded message ----------
Date: Mon, 12 Dec 2005 17:03:54 -0500
From: David A. Wheeler <email@example.com>
To: firstname.lastname@example.org
Subject: Countering Trusting Trust through Diverse Double-Compiling

Everyone here should be familiar with Ken Thompson's famous "Reflections on Trusting Trust." If not, see: http://www.acm.org/classics/sep95/

The "trusting trust" attack subverts the compiler binary; if the attacker succeeds, you're doomed. Well, till now. I've written a paper on an approach to counter this attack. See: "Countering Trusting Trust through Diverse Double-Compiling" http://www.acsa-admin.org/2005/abstracts/47.html

Here's the abstract:

  "An Air Force evaluation of Multics, and Ken Thompson's famous Turing
  award lecture "Reflections on Trusting Trust," showed that compilers
  can be subverted to insert malicious Trojan horses into critical
  software, including themselves. If this attack goes undetected, even
  complete analysis of a system's source code will not find the
  malicious code that is running, and methods for detecting this
  particular attack are not widely known. This paper describes a
  practical technique, termed diverse double-compiling (DDC), that
  detects this attack and some unintended compiler defects as well.
  Simply recompile the purported source code twice: once with a second
  (trusted) compiler, and again using the result of the first
  compilation. If the result is bit-for-bit identical with the untrusted
  binary, then the source code accurately represents the binary. This
  technique has been mentioned informally, but its issues and
  ramifications have not been identified or discussed in a peer-reviewed
  work, nor has a public demonstration been made. This paper describes
  the technique, justifies it, describes how to overcome practical
  challenges, and demonstrates it."

I think you'll find this interesting. --- David A. Wheeler
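The recipe in the abstract reduces to two compiles and a byte comparison. A minimal sketch of the DDC check, assuming deterministic compilation (the paper addresses the non-deterministic cases); the compiler paths and flags are placeholders, not taken from Wheeler's paper:

    import filecmp
    import subprocess

    def ddc(compiler_source, untrusted_binary, trusted_cc):
        """Diverse double-compiling, in miniature.

        Stage 1: build the compiler from its purported source with an
        independent, trusted compiler. Stage 2: rebuild the same source
        with the stage-1 output. If stage 2 is bit-for-bit identical to
        the untrusted binary, the source faithfully represents it.
        """
        subprocess.run([trusted_cc, compiler_source, "-o", "stage1"], check=True)
        subprocess.run(["./stage1", compiler_source, "-o", "stage2"], check=True)
        return filecmp.cmp("stage2", untrusted_binary, shallow=False)

    # Hypothetical usage (paths are placeholders):
    # ok = ddc("cc-source.c", "/usr/bin/cc", "/opt/trusted-cc/bin/cc")

The diversity does the work: a Trojan in the untrusted binary would have to anticipate and subvert the second compiler too for the bits to come out identical.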
BKACVRAD.RVW 20050731

"The Art of Computer Virus Research and Defense", Peter Szor, 2005, 0-321-30454-3, U$49.99/C$69.99
%A Peter Szor email@example.com
%C P.O. Box 520, 26 Prince Andrew Place, Don Mills, Ontario M3C 2T8
%D 2005
%G 0-321-30454-3
%I Addison-Wesley Publishing Co.
%O U$49.99/C$69.99 416-447-5101 800-822-6339 firstname.lastname@example.org
%O http://www.amazon.com/exec/obidos/ASIN/0321304543/robsladesinterne
   http://www.amazon.co.uk/exec/obidos/ASIN/0321304543/robsladesinte-21
%O http://www.amazon.ca/exec/obidos/ASIN/0321304543/robsladesin03-20
%O Audience s+ Tech 3 Writing 2 (see revfaq.htm for explanation)
%P 713 p.
%T "The Art of Computer Virus Research and Defense"

The preface states that the book is a compilation of research over a fifteen-year period. While it is not explicitly stated, Szor seems to indicate that the primary audience for the work consists of those professionally engaged in the field of malware research and protection. (He also admits that his writing might be a little rough, which is true. While his text is generally clear enough, it is frequently disjointed, and often appears incomplete or jumpy. Illustrations are habitually less than helpful, although this can't be attributed to a lack of command of English.) Given the stature of the people he lists in the acknowledgments, one can hope for good quality in the technical information.

Part one deals with the strategies of the attacker. Chapter one describes games and studies of natural ecologies relevant to computer viruses, as well as the early history (and even pre-history) of these programs. I could cavil that he misses some points (such as the 1980-81 Apple virus programs at two universities in Texas), or glosses over some important events (such as Shoch and Hupp's worm experiments at Xerox PARC), but the background is much better and broader than that found in most chronicles. The beginnings of malicious code analysis are provided in chapter two, although it concentrates on a glossary of malware types (albeit incomplete and not universally agreed upon) and the CARO (Computer Antivirus Research Organization) naming convention.

The environment in which viruses operate, particularly hardware and operating system platform dependencies, is reviewed in chapter three. This material is much more detailed than that given in any other virus-related text. (Dependencies missing from the list seem to be those that utilize protective software itself, such as the old virus that used a function of the Thunderbyte antivirus to spread, or the more recent Witty worm, targeted at the BlackIce firewall. Companion viruses utilizing precedence priorities would seem to be related to operating system functions, but are not included in that section.) Unfortunately, the content will not be of direct and immediate use, since it primarily points out issues and relies on the reader's background to understand how to deal with the problems, but nonetheless the material is fascinating and the inventory impressive.

Chapter four outlines infection strategies and is likewise comprehensive. Memory use and infection strategies are described in chapter five. The issue of viral self-protection, tactics to avoid detection and elimination, is covered in chapter six. Chapter seven reviews variations on the theme of polymorphism, and also catalogues some of the virus generation kits. Payload types are enumerated in chapter eight. Oddly, botnets are mentioned neither here nor in the material on worms in chapter nine.
(Szor's use of a modified Cohenesque definition of a virus as infecting files means that some of the items listed in this section are what would otherwise be called email viruses. His usage is not always consistent, as in the earlier mention of script viruses on page 81.) "Exploits," in chapter ten, covers a multitude of software vulnerabilities that might be used by a variety of malware categories for diverse purposes. This content is also some of the best that I've seen dealing with the matter of software vulnerabilities, and can be well recommended to those interested in building secure applications.

Part two moves into the area of defence. Chapter eleven describes the basic types of antiviral or antimalware programs, concentrating primarily on various forms of scanning, although change detection and activity monitoring/restriction are mentioned. It is often desirable to find and disable malware in memory. The means of doing so, particularly in the hiding-place-riddled Win32 system, are described in chapter twelve. Means of blocking worm attacks are discussed in chapter thirteen, although most appear to be either forms of application proxy firewalling or (somewhat ironically) activity monitoring. Chapter fourteen lists generic network protection mechanisms, such as firewalls and intrusion detection systems, although the section on the use of network sniffers to capture memory-only worms is intriguing to the researcher. Software analysis, and the tools for it, is covered in chapter fifteen, emphasizing functional aspects of the malware. Chapter sixteen concludes with a register of Websites for further study and reference.

For those involved in malware research, Szor's book is easily the best since Ferbrache's "A Pathology of Computer Viruses" (cf. BKPTHVIR.RVW). It contains a wealth of information found nowhere else in book form. On the other hand, it is demanding of the reader, both in terms of the often uneven writing style and the background knowledge of computer internals and programming that is required. The text does not provide material suitable for general protection of computer systems and networks, but intelligent amateur students of malicious software will find much to reward their investigation of this book.

copyright Robert M. Slade, 2005 BKACVRAD.RVW 20050731
email@example.com firstname.lastname@example.org email@example.com http://victoria.tc.ca/techrev or http://sun.soci.niu.edu/~rslade