Please try the URL privacy information feature, enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of the Terms of Service for the site - however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know whether you find this useful. As a RISKS reader, you will probably not be surprised by what is revealed…
A SecurityFocus post <http://www.securityfocus.com/news/10618?ref=rss> discusses the nuclear industry's reaction to a proposed voluntary standard <http://horning.blogspot.com/2005/01/us-to-tighten-nuclear-cyber-security.html> for security of digital systems controlling nuclear power plants. "Two companies that make digital systems for nuclear power plants have come out against a government proposal that would attach cyber security standards to plant safety systems. The 15-page proposal, introduced last December by the U.S. Nuclear Regulatory Commission (NRC), would rewrite the commission's 'Criteria for Use of Computers in Safety Systems of Nuclear Power Plants.' The current version, written in 1996, is three pages long and makes no mention of security. The plan expands existing reliability requirements for digital safety systems, and infuses security standards into every stage of a system's lifecycle, from drawing board to retirement. Last month the NRC extended a public comment period on the proposal until March 14th to give plant operators and vendors more time to respond. So far, industry reaction has been less than glowing." "The NRC tries to promote the use of digital technology in the nuclear power industry on the one hand, but then over-prescribes what is needed when a digital safety system is proposed," wrote one company president. "The entire cyber security section should be deleted and only a passing reference to the subject retained," another company wrote. More information at http://www.securityfocus.com/news/10618?ref=rss and http://horning.blogspot.com/2005/03/security-nuclear-plants-dont-need-no.html
Hospital computer systems widely touted as the best way to eliminate dangerous medication mix-ups can actually introduce many errors, according to the most comprehensive study of the hazards of the new technology. The researchers, who shadowed doctors and nurses in the University of Pennsylvania hospital for four months, found that some patients were put at risk of getting double doses of their medicine while others got none at all. Twenty-two types of mistakes were identified, such as failing to stop old medications when adding new ones, or forgetting that the computer automatically suspended medications after surgery. The findings underscore the complexity of improving safety in US hospitals, where the Institute of Medicine estimates that errors of all kinds kill 44,000 to 98,000 patients a year. [PGN-ed] Scott Allen, *The Boston Globe*, 9 Mar 2005 http://www.boston.com/yourlife/health/other/articles/2005/03/09/drug_error_risk_at_hospitals_tied_to_computers/
Reports over the past few years of increasing numbers of patient injuries and deaths due to medical errors sent hospital administrators scrambling for computerized solutions. But two new studies suggest that, in many cases, these high-tech systems have left doctors and nurses increasingly frustrated while providing little evidence of real benefit to patients. In fact, one widely used system actually helped foster medication errors, researchers found. See the 9 Mar 2005 issue of the Journal of the American Medical Association. Sympatico News, Hospital Computers Fail to Deliver: study finds they facilitated errors http://healthandfitness.sympatico.msn.ca/News/ContentPosting.aspx?contentid=cd4d283138844d228f2e7a99ff326350&show=True&number=5&showbyline=False&subtitle=&detect=&abc=abc Richard Akerman rakerman@chebucto.ns.ca http://www.akerman.ca/
Richard A. Clarke, The Security Adviser, Real ID's, Real Dangers, *The New York Times*, 6 Mar 2005 http://www.nytimes.com/2005/03/06/magazine/06ADVISER.html Have you ever wondered what good it does when they look at your driver's license at the airport? Let me assure you, as a former bureaucrat partly responsible for the 1996 decision to create a photo-ID requirement, it no longer does any good whatsoever. The ID check is not done by federal officers but by the same kind of minimum-wage rent-a-cops who were doing the inspection of carry-on luggage before 9/11. They do nothing to verify that your license is real. For $48 you can buy a phony license on the Internet (ask any 18-year-old) and fool most airport ID checkers. Airport personnel could be equipped with scanners to look for the hidden security features incorporated into most states' driver's licenses, but although some bars use this technology to spot under-age drinkers, airports do not. The photo-ID requirement provides only a false sense of security. [Excellent article abstracted for RISKS. PGN]
MIT's Sloan School of Management has joined Carnegie-Mellon and Harvard in rejecting applications from prospective students who hacked into a website to learn whether they had been admitted before they were formally notified. 32 MIT applicants reportedly took a peek, along with 1 at CMU, 119 at Harvard, and 41 at Stanford. The Web site is run by ApplyYourself, and is also used by other business schools. Its access was compromised by a posting on a BusinessWeek Online forum. [PGN-ed from Robert Weisman, *The Boston Globe*, 8 and 9 Mar 2005] http://www.boston.com/business/articles/2005/03/08/harvard_rejects_119_accused_of_hacking_1110274403/ http://www.boston.com/business/articles/2005/03/09/mit_says_it_wont_admit_hackers/ [Dave Farber's IP list had several responses. Rejected applicants considered their treatment excessive. One candidate saw only a blank page at ApplyYourself, but was rejected for having accessed the site. Dave Lesher wrote: What's the B-schools' culpability in contracting out a process to a company with inadequate security? [Presumably] the schools demanded SSNs and other financial data from the applicants. Was there informed consent by the applicants to have their data shared with, in effect, a data broker? Could they apply WITHOUT so agreeing? Joe Hall wrote: What strikes me is how constructing a URL that is available to students without any further authentication or protection is considered "hacking". That's inevitably diluting any geek cred held by any of us who are even crappy hackers! Joe also noted Ed Felten's post on this subject at http://www.freedom-to-tinker.com/archives/000780.html PGN wonders: what if a competing candidate had masqueraded as other candidates to see if they had been accepted, and thereby wound up getting them all rejected? Could that be a suitable defense for the rejected students? PGN]
I have been reading about the problems with the 1bu.com site on the Webmaster World forum and decided to try it myself. Basically, if you type in any site in the format http://www.sitename.com.1bu.com, you will get redirected to another site (actually a proxy server in China) that looks exactly like your site, but none of your pages that use scripting will work. Using the same technique, other sites could hijack banking or online shopping sites and redirect input so that they collect your credit card and other information. While this has been a popular topic of discussion in the webmaster forums, Google itself is silent on the issue. Tim Chmielewski, Webmaster, Human Edge Software http://www.humanedge.biz
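The redirect works because of how host names nest: everything to the left of 1bu.com is just a subdomain label, which a wildcard DNS record (*.1bu.com) can match and hand to the proxy no matter what "site" you name. A minimal sketch of the parsing (the path in the URL is made up for illustration):

```python
from urllib.parse import urlparse

# A URL like this *looks* as if it belongs to sitename.com, but the
# registrable domain is 1bu.com -- the labels to its left are just
# subdomains that a wildcard DNS record (*.1bu.com) can match.
url = "http://www.sitename.com.1bu.com/login"  # hypothetical example

host = urlparse(url).hostname
print(host)                             # www.sitename.com.1bu.com
print(".".join(host.split(".")[-2:]))   # 1bu.com -- who actually answers
```

The browser shows a familiar-looking host name, but every request goes to whoever controls 1bu.com.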
AP, 8 Mar 2005 Credit card information from customers of more than 100 DSW Shoe Warehouse stores was stolen from a company computer's database over the last three months, a lawyer for the national chain said Tuesday. The company discovered the theft of credit card and personal shopping information on Friday and reported it to federal authorities, said Julie Davis, general counsel for the chain's parent, Retail Ventures Inc. The Secret Service is investigating, she said. DSW was alerted by a credit card company that noticed suspicious activity, she said. http://finance.lycos.com/home/news/story.asp?story=47512557
An article in *The Guardian*, http://www.guardian.co.uk/online/story/0,3605,1410921,00.html, discusses a plan to implant chips in garbage bins and covers some of the risks: "If, for example, computer hackers broke in to the system, they could see sudden reductions in waste in specific households, suggesting the owners were on holiday and the house vacant." But the tendency to believe anything written on a computer screen continues unabated: "He said the microchips would help the council fend off unwarranted criticism. 'We will have a confident response to customers who claim their bin may not have been emptied,' he added."
My recent encounters with BofA include attempting to set up an "out of branch" transfer. The thought seems wonderful - and then I try to do it. Navigating the web site (https, thankfully) gets me to a page that asks me to enter a "confirmation number" that was sent to my e-mail address. Unfortunately, something fails to complete the sending of this message, and I never get it. Of course no error message appears, and I'm left without the ability to transfer. I call the bank (or send internal [secure] messages) and the response is "get another e-mail address" or some such. The risks: I'm told some tale that it is really "my problem" (it isn't!), and the bank's web service is sending messages into weird places. I guess I really can't trust them until they "get it right". On the other hand, my other bank is perfectly able to send e-mail to the address in question. You'd think they would read their logs and wonder (do they?).
In RISKS-23.71, David Magda pointed to the suggested use of GPS on trains in the UK. In RISKS-23.52 (Shutting the train door before the commuter has bolted?) I drew attention to existing problems with one such system already operational on some UK trains. DM highlighted the potential issue of the GPS system being '"shut off" by the US government during emergencies' and suggested Galileo (or inertial guidance) as a back-up. The threat of the existing GPS system being 'switched off' has been touted on many occasions, but, given its incorporation into so many systems (public use as well as military) in the USA and world-wide, it is doubtful that anything more extreme than a 'detuning' of the system would ever be contemplated. This could well be effected via an existing option which would allow the military to continue using GPS at the necessary high degree of accuracy while still providing private users with a service - albeit with lesser accuracy. Whilst the effects of such lesser accuracy are 'undefined' and could lead to severe consequences - including death - GPS-based systems would continue to operate. Further, using Galileo as a back-up may be contentious, because the US government has previously expressed grave concerns about its upcoming loss of exclusivity over the provision of such positioning technology and about the nationality of some of the participants in the Galileo scheme. This gives rise to a dilemma. Either the existence of Galileo provides a reason for not 'turning off' (or 'detuning') the existing GPS system; or Galileo itself must be similarly 'turned off' (or 'detuned') in parallel. If we assume the latter, this would have to be effected by one of three means: US governmental pressure on the Galileo operators, external (and unauthorised?) override of the satellites' software, or physical interference with the satellites (up to and including destruction). Any ground-based 'satellites' would (probably) have to be similarly affected.
Thus, if we assume that GPS might be 'turned off' (or 'detuned'), we must assume the same of Galileo. This would leave inertial guidance as the only viable back-up system - were such accuracy actually required. But let us examine what benefits are sought from using a precise positioning technology. The articles quoted skillfully (and typically) interweave the 'safety' issue - trading on terms such as "A number of devastating crashes over the last 10 years have pushed rail safety to the top of the national agenda." - with that of operational convenience. Trains are currently located somewhere in (say) one-kilometre 'blocks' protected to the rear by a red signal. On the UK's railways there have been a number of recent incidents of SPADs - Signals Passed At Danger - some of which have led to fatalities. However, other recent fatalities have had causes that would not be well mitigated by exact positioning technology (e.g., derailment caused by a broken rail, and collision with a car at a crossing). The lumping together of accidents - regardless of cause - to justify spending on a system which might address only one of them is all too common. Whilst it is possible that a GPS-type system could enable signallers to locate trains more accurately and to know their speed too, and it is possible that this could enhance safety, I fear that the prime driver for the installation of positioning technology is actually to enable an increase in train density on the existing rail infrastructure. Such a move would likely increase risk rather than reduce it. The rationale for my fear is that the pressures to increase density exist, are increasing and will continue to increase. Building additional track capacity is not a viable option, and lengthening trains themselves is highly problematic - because of restrictions on platform length at termini and intermediate stations.
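The density argument can be made concrete with some back-of-envelope arithmetic (the figures below are assumed for illustration, not taken from the articles): under fixed-block working, the minimum headway is set by block length, so only shorter blocks - which means finer knowledge of where each train is - can raise throughput.

```python
# Back-of-envelope fixed-block line capacity. All figures are assumed
# for illustration: 1 km blocks, trains at 100 km/h, and a rule that a
# following train must keep one clear block behind the train ahead.
block_km = 1.0
speed_kmh = 100.0
blocks_per_headway = 2       # the occupied block plus one clear block

headway_h = blocks_per_headway * block_km / speed_kmh
trains_per_hour = 1.0 / headway_h
print(trains_per_hour)       # 50.0 -- halving the block length doubles it
```

Which is exactly why precise positioning is commercially attractive: it lets the 'blocks' shrink toward the trains themselves.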
The only option is to increase track usage (more trains per mile of track) - and this cannot be done with the existing signalling systems and block working. But is the technology up to it? Railways in the UK provide special challenges for the designers of equipment. The recent history of equipment on new trains such as remotely released doors, automatic announcements, new-style pantograph and third-rail pickups, tilting carriages, etc. has been fraught. Seemingly, designers had no real concept of the challenges posed by the environment. Reliability in an environment that is subject to continual and continually variable vibration and oscillation; intermittent but frequent harsh shocks of variable intensity; rough handling; infrequent, sometimes poor and occasionally non-existent maintenance; and the necessary 'building down to a price' demanded by a cash-strapped industry is demonstrably hard to achieve. Therefore a primary question must be: "What happens when an on-board positioning system fails (either the computer or the transmitter)?" The train might simply vanish off the display and possibly out of the system. Signallers and drivers of following trains would not know whether there was a train ahead (or behind), nor exactly where that train was. So, if train separations were reduced through the introduction of such a system, catastrophe could easily and quickly result. Even if the system 'fails safe', problems would still result from any reduction in train separation. Then there is the human factor. Common issues here include the operator's implicit trust in the automated system, the inability to recognise, assimilate and react to emergency situations, poor MMI (man-machine interface), and clashes between the system's and the user's actions (viz. the Australian naval ship referred to in RISKS 23.71). We have existing examples in air traffic control and aircraft flight systems.
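The "vanishing train" failure mode is precisely what a fail-safe design must guard against: a missing position report should widen, not shrink, the protected area. A toy sketch of such a rule (the block numbering, timeout, and function are hypothetical, not drawn from any real signalling system):

```python
# Hypothetical fail-safe rule: if a train's position report is stale,
# protect both its last known block and the next one, rather than
# letting the train silently vanish from the display.
def blocks_to_protect(last_block, last_report_s, now_s, timeout_s=30):
    if now_s - last_report_s <= timeout_s:
        return {last_block}                  # fresh report: protect its block
    return {last_block, last_block + 1}      # stale: assume it may have moved on

print(sorted(blocks_to_protect(7, last_report_s=0, now_s=10)))   # [7]
print(sorted(blocks_to_protect(7, last_report_s=0, now_s=60)))   # [7, 8]
```

Note that even this conservative behaviour erodes the capacity gain the system was bought for - which is the tension the article describes.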
These are built on far longer-established protocols; but we still have (brand-new) systems that fail, either inherently or at the man-machine interface, and clashes between pilots and 'George'. When it comes to implementing remote or automated control in mass-transport systems, the record is not good. When we couple commercial greed - especially when argued on the grounds of 'safety' - with the continuing failure of designers to construct reliable and reliably fail-safe systems, the RISKS are manifest and manifold.
BKWNFOIR.RVW 20041224 "Windows Forensics and Incident Recovery", Harlan Carvey, 2005, 0-321-20098-5, U$49.99/C$71.99 %A Harlan Carvey %C P.O. Box 520, 26 Prince Andrew Place, Don Mills, Ontario M3C 2T8 %D 2005 %G 0-321-20098-5 %I Addison-Wesley Publishing Co. %O U$49.99/C$71.99 416-447-5101 fax: 416-443-0948 bkexpress@aw.com %O http://www.amazon.com/exec/obidos/ASIN/0321200985/robsladesinterne http://www.amazon.co.uk/exec/obidos/ASIN/0321200985/robsladesinte-21 %O http://www.amazon.ca/exec/obidos/ASIN/0321200985/robsladesin03-20 %O tl a rl 1 tc 2 ta 2 tv 1 wq 2 %P 460 p. + CD-ROM %T "Windows Forensics and Incident Recovery" Chapter one is an introduction, both to the book and to the ideas behind it. For once, the author does, indeed, try to define what an incident is. The definition is broad, but so are the possibilities. The intended audience is stated to be anyone interested in the security of Microsoft Windows, but it is instructive that, in listing specific groups, forensic specialists and security professionals are *not* mentioned. Carvey notes that a great many people would like to know the information that Windows forensics can provide, since the platform is nearly ubiquitous, but few have the knowledge of system internals that is necessary to find the relevant bits. Based on the definition of an incident as an event that violates security policy, chapter two demonstrates some of the ways that policy failures, and therefore attacks, can occur. (The rationale behind the inclusion of eleven pages of Perl source for a program to detect null sessions escapes me.) Chapter three reviews a number of places to hide data, but all of these are at the user interface level, such as setting hidden file attributes, placing data in unused keys in the Registry, NTFS (NT File System) alternate data streams (ADS), and the extra information stored in data files by applications like Microsoft Word. 
There is no mention of the lower level caches: slack space (whether in terms of zero padding, extra space in sectors, or the timing margins on hard disks) or page files. In addition, for those locations that are mentioned, specific programs for extracting particular data are listed, but no details of structural internals (for example formats for NTFS, OLE/COM, or Word) are provided for analysis with more general utilities. This is not to say that Carvey does not do a good job of explaining what he does cover: the tutorial on NTFS ADS is clear and complete. The material in chapter four addresses the issue of preparation by suggesting various means of hardening systems and networks against attack. The content is unusual, and deals with functions and activities that are frequently left out of security texts. At the same time, it does not touch on some common suggestions for system security: this should be seen as a complement to, rather than a replacement for, other Windows security works. A wealth of utilities for deriving all manner of information from Windows systems is listed and described in chapter five. Chapter six presents suggestions for the methods and procedures to be used in responding to a potential incident, but it does so in the form of a number of fictional examples. The stories can be instructive, but it does take a long time to sort through the material to find the relevant points to use. Various indications that can be evidence of the existence of malware (particularly network-based remote access trojans) are examined in chapter seven. The author's Forensic Server Project, a tool for managing forensic data collection, is presented in chapter eight. Chapter nine describes an assortment of network scanning and data capture tools. Although a number of areas are addressed, the text will be of greatest use to those who are concerned about network malware, especially of the remote access type.
The intended audience, of experienced but non-specialist Windows administrators and law enforcement professionals with some technical background, will find a number of valuable indicators that will point out whether a system will reward further scrutiny. The professional, and particularly one with experience in forensic analysis, will find some very useful information on newer operations of Windows, but may be frustrated at the lack of detail. (I'm still not sure who is going to get a lot out of all the Perl source code ...) copyright Robert M. Slade, 2004 BKWNFOIR.RVW 20041224 rslade@vcn.bc.ca slade@victoria.tc.ca rslade@sun.soci.niu.edu http://victoria.tc.ca/techrev or http://sun.soci.niu.edu/~rslade
Please report problems with the web pages to the maintainer