Please try the URL privacy information feature, enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of the Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know whether you find this useful. As a RISKS reader, you will probably not be surprised by what is revealed…
German federal police enrolled 200 commuters to test whether face recognition software could pick out suspects from a CCTV feed at a train station under real-world circumstances. The three systems tested (produced by Cognitec, Bosch, and Cross Match) failed to recognize 8 out of 10 people they should have, even when they were fed images of people standing still on an escalator, one of the favourite settings for this kind of biometrics. The key factor was the bad lighting in the morning and afternoon, when most of the test suspects passed the cameras. (The test suspects were also fitted with RFID tags so they could be reliably identified by the test setup.) Under the right conditions, the systems still failed to recognize 4 out of 10 people, at a false-alarm rate of 0.1 per cent, which the researchers thought acceptable for practical police work. The final report [German, link below] recommends against using the systems for identification purposes. They would be useful only under constant lighting conditions, and by either openly seeking the cooperation of the persons being checked by the biometrics software or making them cooperate involuntarily, using what the report calls "eye-catchers", such as changing billboards or marquees. The report states that three-dimensional face recognition, currently being developed, could probably do better. Although the report points out that the systems tested are basically not usable yet, there is still a major flaw in the design: the researchers thought 23 false alarms per day would be acceptable. If you have 23 false alarms a day, and only one or two real suspects (probably hiding their faces behind a newspaper) passing the cameras per week, I think you would stop trusting the system very soon.
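The base-rate arithmetic behind that last point can be sketched directly from the figures in the item; the suspect frequency is an illustrative assumption, not a number from the report:

```python
# Base-rate illustration using the trial's reported figures:
# ~23,000 travelers/day and a 0.1% false-alarm rate.
travelers_per_day = 23_000
false_alarm_rate = 0.001   # 0.1 per cent
detection_rate = 0.6       # best case: 6 out of 10 recognized
suspects_per_week = 2      # illustrative assumption, not from the report

false_alarms_per_day = travelers_per_day * false_alarm_rate
false_alarms_per_week = false_alarms_per_day * 7
true_hits_per_week = suspects_per_week * detection_rate

# Probability that any given alarm is a real suspect (precision):
precision = true_hits_per_week / (true_hits_per_week + false_alarms_per_week)

print(f"False alarms per day: {false_alarms_per_day:.0f}")
print(f"Chance an alarm is genuine: {precision:.2%}")
```

Under these assumptions, well under one alarm in a hundred is genuine, which is exactly why operators would soon stop trusting the system.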
The final report (28 pages, German) is available here: http://www.bka.de/kriminalwissenschaften/fotofahndung/pdf/fotofahndung_abschlussbericht.pdf Martin Virtel, Editor, Research & Development (Redakteur Forschen & Entwickeln) Phone: +49/40/319 90 469 Financial Times Deutschland GmbH & Co KG, Stubbenhuk 3, 20459 Hamburg; Amtsgericht Hamburg HRA 92810 http://www.ftd.de/forschung email@example.com
[Note: This item comes from reader Randall. DLH] From: Randall <firstname.lastname@example.org> Date: July 11, 2007 2:02:15 PM PDT To: David Farber <email@example.com>, Dewayne Hendricks <firstname.lastname@example.org> Subject: Oops. Detailed schematics of a military detainee holding facility in southern Iraq. Geographical surveys and aerial photographs of two military airfields outside Baghdad. Plans for a new fuel farm at Bagram Air Base in Afghanistan. The military calls it "need-to-know" information that would pose a direct threat to U.S. troops if it were to fall into the hands of terrorists. It's material so sensitive that officials refused to release the documents when asked. But it's already out there, posted carelessly to file servers by government agencies and contractors, accessible to anyone with an Internet connection. In a survey of servers run by agencies or companies involved with the military and the wars in Iraq and Afghanistan, The Associated Press found dozens of documents that officials refused to release when asked directly, citing troop security. [Source: Mike Baker, Military files left unprotected online, AP item, 11 Jul 2007; PGN-truncated good long item, not surprising] http://news.yahoo.com/s/ap/military_online_insecurity;_ylt=Aixup_YEMhxbq7rTtPYTDaNhr7sF
Apparently the BKA (the German equivalent of the FBI) tested face recognition, spending 200K euros to trial the system in a rail terminal in the city of Mainz, and basically declared it worthless as an investigative tool. Apparently (per the article) this is the first public trial under normal, everyday conditions (rather than having the conditions manipulated for a good showing), and the system matched only 30%. Even when the lighting was modified to be ideal, it reached only 60%. The BKA considers the system useful only if the success rate is very near 100%. The sample size was approximately 23,000 travelers per day over a period of roughly 3 months. The targets were 200 commuters who had volunteered for the trial and travel through this rail terminal at least once per day. The BKA concluded that this is not a suitable system for surveillance and facial recognition to try to match suspects in a manhunt, etc. The article is in German; try your favorite mechanized translator. If there's enough demand, I happen to be bilingual and may be convinced into doing a translation in my spare time. ;-) http://www.spiegel.de/panorama/justiz/0,1518,493911,00.html
After the latest monthly automatic updates for Windows XP, I got the following message on my screen:

  Data Execution Prevention: Microsoft Windows
  To help protect your computer, Windows has closed this program.
  Name: Windows Explorer
  Publisher: Microsoft Corporation
  Data Execution Prevention helps protect against damage from viruses
  and other security threats. What should I do?

Here is the screen picture: http://fohs.bgu.ac.il/bgu-med/pub/windowserror.jpg I will leave it to the Risks readers to find a creative explanation. David de Leeuw, Medical Computing Unit, Ben Gurion University of the Negev, Beer Sheva, Israel [Actually, a Beer sounds like a good idea, after which you could Sit Shiva for your PC. PGN]
A Canadian jogger happened to be carrying an iPod at the wrong place at the wrong time. Lightning struck his body during a thunderstorm, and the current ran along the path of the earphones and into his head, causing injuries to his jaw and eardrums. The patient's physicians say the combination of sweat and the metal earphones directed the current to his head. http://www.technewsworld.com/rsstory/58292.html
Jeremy Kirk, Greek spying case uncovers first phone switch rootkit, 12 Jul 2007 http://news.yahoo.com/s/infoworld/20070712/tc_infoworld/90154 A highly sophisticated spying operation that tapped into the mobile phones of Greece's prime minister and other top government officials has highlighted weaknesses in telecommunications systems that still use decades-old computer code, according to a report by two computer scientists. The spying case, in which the calls of around 100 people were secretly tapped, remains unsolved and is still being investigated. Also complicating the case is the questionable suicide in March 2005 of a top engineer at Vodafone Group in Greece in charge of network planning. A look into how the hack was accomplished has revealed an operation of breathtaking depth and success, according to an analysis on IEEE Spectrum Online, the Web site of the Institute of Electrical and Electronics Engineers. The case includes the "first known rootkit that has been installed in a [phone] exchange," said Diomidis Spinellis, an associate professor at the Athens University of Economics and Business, who authored the report with Vassilis Prevelakis, an assistant professor of computer science at Drexel University in Philadelphia. A rootkit is a special program that buries itself deep in an OS for some malicious activity and is extremely difficult to detect. The rootkit disabled a transaction log and allowed call monitoring on four switches made by Telefonaktiebolaget LM Ericsson within Vodafone's equipment. The software enabled the hackers to monitor phone calls in the same way law enforcement would, minus the required court order. The software allowed a second, parallel voice stream to be sent to another phone for monitoring. The intruders covered their tracks by installing patches on the system to route around logging mechanisms that would alert administrators that calls were being monitored.
"It took guile and some serious programming chops to manipulate the lawful call-intercept functions in Vodafone's mobile switching centers," the authors wrote. The secret operation was finally discovered around January 2005 when the hackers tried to update their software and interfered with how text messages were forwarded, which generated an alert. Investigators found hackers had installed 6,500 lines of code, an extremely complex coding feat. "The size of the code is not something that somebody could hack in a weekend," Spinellis said. "It takes a lot of expertise and time to do that." The investigation, which included a Greek parliamentary inquiry, netted no suspects, due in part to key data that was lost or destroyed by Vodafone, the authors wrote. It's not known if the hack was an inside job. Vodafone may have been able to discover the scheme sooner through statistical call analysis that could have linked the calls of those being monitored to calls to phones used to monitor the conversations, they wrote. Carriers already do that sort of analysis, but more for marketing than security. But the defense against rogue code, viruses and rootkits is complicated due to how telecom infrastructure has developed. "Complex interactions between subsystems and baroque coding styles (some of them remnants of programs written 20 or 30 years ago) confound developers and auditors alike," the report said.
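The statistical call analysis the authors suggest could look something like the following sketch. The data, function, and thresholds are all illustrative assumptions, not anything from the actual investigation: the idea is simply that a shadow phone receiving a parallel voice stream will have call start times that correlate suspiciously with the targets' calls.

```python
# Sketch of correlating call start times: a phone used to receive the
# parallel monitoring stream would "ring" whenever a target's call begins.
def correlated_numbers(target_calls, all_calls, window=2.0, threshold=5):
    """target_calls: list of start times (seconds) of monitored targets' calls.
    all_calls: dict mapping phone number -> list of that phone's call start times.
    Returns numbers whose calls begin within `window` seconds of a target's
    call at least `threshold` times."""
    suspects = {}
    for number, starts in all_calls.items():
        hits = sum(
            1 for s in starts
            if any(abs(s - t) <= window for t in target_calls)
        )
        if hits >= threshold:
            suspects[number] = hits
    return suspects

# Hypothetical records: the shadow phone's calls begin ~1s after each target call.
targets = [100.0, 500.0, 900.0, 1300.0, 1700.0]
records = {
    "shadow-1": [101.0, 501.2, 900.8, 1301.1, 1700.9],
    "innocent": [250.0, 760.0, 1234.0],
}
print(correlated_numbers(targets, records))  # flags only "shadow-1"
```

As the authors note, carriers already run this sort of correlation analysis, but for marketing rather than security.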
> The Space Shuttle does *not* use N-version programming - it uses identical > instances of the same software, and uses redundancy to account for hardware > failures. Again, a good explanation of the methodology used is at > http://en.wikipedia.org/wiki/Space_shuttle. I wonder if Jeremy read the Wikipedia article he linked to... currently it reads: "The Backup Flight System (BFS) is separately developed software running on the fifth computer, used only if the entire four-computer primary system fails. The BFS was created because although the four primary computers are hardware redundant, they all run the same software, so a generic software problem could crash all of them." http://en.wikipedia.org/w/index.php?title=Space_Shuttle&oldid=141962184
As I understand it, the following is true: the fifth computer is not fully functional; it is intended to have just enough programming to land the shuttle in the event that the four main computers all fail. Testing it under live conditions, with the four main computers inoperable, would be undesirable at best, if not practically impossible, to do safely. The fifth system has never been invoked. Worse yet, it has most likely not been maintained for compatibility with the other four. That is not what is generally thought of as N-version programming for N=2 in the realistic sense of the word, although it might be considered so for the stark subset of the functionality. It is more like a hot-standby fail-safe mechanism.
Regarding the thread in RISKS-24.71 and 72, the results of Knight and Leveson's famous N-version experiment show that, if any three of the replicates from among those they had written were combined in a two-out-of-three voting configuration, the resulting fault-tolerant system would have a probability of failure 19 times smaller than one of the replicates on its own. This is not as much as fully independent failure would yield, but it is a significant improvement. Peter Mellor +44 (0)20 8459 7669 Mobile: 07914 045072 MellorPeter@aol.com
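For comparison, a short calculation shows what fully independent failures would yield in the same two-out-of-three voting configuration. The per-replicate failure probability below is illustrative, not Knight and Leveson's figure:

```python
# Failure probability of 2-out-of-3 voting under the *assumption* of
# independent replicate failures, for comparison with the measured
# factor-of-19 improvement.
def two_of_three_failure(p):
    """System fails when at least 2 of 3 independent replicates fail."""
    return 3 * p**2 * (1 - p) + p**3

p = 0.0007  # illustrative per-replicate failure probability
p_sys = two_of_three_failure(p)
print(f"Improvement factor under independence: {p / p_sys:.0f}x")
# For small p this factor is roughly 1/(3p), vastly more than 19x,
# which is the point: the replicates' failures were correlated.
```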
This delay-caused pilot-induced-oscillation reminds me of trying to drive some of the simulated vehicles in current video-game environments. The video (and other) effects are stunning, but the experience is marred by the delays between the controls and the perceived video. Unlike driving a real car at >100mph, for example, where the effects of control inputs are felt immediately, the control inputs in videogame-simulated vehicles have a noticeable delay. These delays can cause uncontrollable oscillations if not consciously damped by the gamer. Analogously, a gamer who gets into a real car and attempts to go >100mph will find the opposite situation — he is expecting a delay, but instead gets instant (and potentially disastrous) results, compounded by the real inertia of his arms & legs interfering with any recovery effort.
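The oscillation mechanism can be demonstrated with a toy feedback loop; this is purely illustrative and not a model of any particular game or aircraft:

```python
# Toy simulation of delay-induced oscillation: a proportional controller
# steers position toward zero, but acts on a stale (delayed) measurement.
def simulate(delay_steps, gain=0.8, steps=60):
    history = [1.0] * (delay_steps + 1)  # start offset from the target
    pos = 1.0
    for _ in range(steps):
        observed = history[-(delay_steps + 1)]  # stale measurement
        pos = pos - gain * observed             # corrective input
        history.append(pos)
    return history

undelayed = simulate(0)
delayed = simulate(3)
# With no delay the position settles toward 0; with a 3-step delay the
# same gain over-corrects, and the position oscillates with growing swings.
print("no delay, final position:", round(undelayed[-1], 6))
print("3-step delay, final position:", round(delayed[-1], 2))
```

The same gain that converges smoothly with instant feedback becomes unstable once the correction arrives a few steps late, which is exactly the damping task the gamer (or pilot) must perform consciously.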
In RISKS-24.71 Paul E. Black quotes "maddogone" as saying, "The tests show it was the G-suit which activated the ejection. ... when it filled with air it pressed against the release handle". In RISKS-24.72 Matt Jaffe <email@example.com> quotes him and writes: > I am unfamiliar with the Gripen but back in my day, more decades ago than > I care to think about, US ejection seats were activated by handles of one > sort or another and none of the handles in the aircraft I am familiar with > could be activated by simple pressure (of an inflating G-suit). In the early 1990s I spoke to a manufacturer of ejector seats. Ejection was initiated by an upward pull on a handle positioned between the pilot's legs. The procedure was for the pilot to pull on the handle with the right hand, with the left hand gripping the right wrist. My contact explained that this was not because the handle was particularly stiff to operate (although it was not "hair-trigger"), but in order to ensure that the pilot took his left arm with him when he left. Little chance of the inflation of a G-suit, or G-force alone, causing unintentional operation in that case. (I don't know whether this applied specifically to the Gripen.) With aerodynamically unstable aircraft, the situation is different. If the FCS goes down, the aircraft might break up within half a second or so, depending on the airspeed and attitude, and I was given to understand that ejection would be automatic, i.e., initiated without manual input from the pilot. Perhaps someone familiar with the Eurofighter could supply some authoritative information. Peter Mellor +44 (0)20 8459 7669 Mobile: 07914 045072 MellorPeter@aol.com
Jeremy Epstein wrote "...the RISKS of relying on systems that may not have been fully tested are pretty obvious." This comes up far too often. How would you know a system had been fully tested? How long would it take? Can you think of a better way to avoid system failures than test-and-fix over a period of decades or more? Testing is important for two main reasons: to try to validate the assumptions you have made about the system's environment, and to detect systems that are egregiously bad, so that you can scrap them and start again. Computer scientists and programmers were saying all this 25 years ago. We won't improve much on the current failure rates of projects until we accept it, and act on it.
* Immediate irresponsible editing, hugely magnified by Google, drives Wikipedia. * Wikipedia survives through advanced blame-shifting. (Credit Seth Finkelstein for that insight.) Changing either would destroy Wikipedia. Why that won't happen: * "One character who's laughing all the way to the bank, though, is Wales himself."¹ * "Almost all of Wikipedia's 1,000-odd "administrators" receive no pay for their hard work other than the pleasure of power tripping - seeing nothing of the $14m of VC money Wikipedia co-founder Jimmy Wales has banked."² 1. "Wikipedia defends Reality", from The Register. <http://www.theregister.co.uk/2007/02/02/colbert_wikipedia_reality/> 2. "Farewell, Wikipedia?", The Register <http://www.theregister.co.uk/2007/03/06/wikipedia_crisis/page3.html>
I have several counter-risks here: * writing applications that ignore the known (perhaps sometimes non-trivial) best practice, which is to detect the capabilities required by the application (and which, as I discover, has been supported by the Gestalt() API since classic Mac OS 6.0.4) http://developer.apple.com/documentation/Darwin/Reference/Manpages/man3/Mac::Gestalt.3pm.html * if not using that best practice, writing applications that depend on the third ("point release") component of the OS version; * if detecting a minor OS version, writing applications that refuse to run instead of displaying a warning dialogue. Having said which, the definition of MAC_OS_X_VERSION_ACTUAL does seem incredibly short-sighted.
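The version-handling points above can be sketched generically. This is not Apple's Gestalt() API, just an illustration of comparing versions numerically as tuples and warning, rather than refusing, when only the point release is newer than what the application was tested against:

```python
# Sketch: numeric version comparison with a "warn, don't refuse" policy
# for versions newer than the tested one. Function names are illustrative.
def parse_version(s):
    """'10.4.10' -> (10, 4, 10); tuples compare correctly component-wise,
    unlike string comparison, where '10.4.10' < '10.4.9'."""
    return tuple(int(part) for part in s.split("."))

def check_version(actual, minimum, tested_up_to):
    actual, minimum, tested = map(parse_version, (actual, minimum, tested_up_to))
    if actual < minimum:
        return "refuse"   # genuinely too old to run
    if actual > tested:
        return "warn"     # newer than tested: warn the user, don't refuse
    return "ok"

print(check_version("10.4.10", "10.3.9", "10.4.9"))  # newer point release -> "warn"
print(check_version("10.2.8", "10.3.9", "10.4.9"))   # too old -> "refuse"
```

Detecting the actual capabilities the application needs remains preferable; version comparison is only the fallback, and even then it should never hard-fail on a point release.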
To see if I understand Lauren Weinstein's premise correctly, let me give an example: my company has a web site [A] on which we advertise a particular product that we have created and sell. A competitor sets up a web site [B], hosted in some odd place, which gets either more hits, or roughly the same number, on a search engine when someone searches on distinctive keywords to do with my site [A]. This competitor's web site [B] contains derogatory, possibly misleading, and certainly unflattering information about my company and its product. It may even pretend to be my company and might sell a similar product or a clone of mine. Search engines will do as search engines do, and [A] and [B] are likely to come close in keyword searches, because second-guessing the algorithms is precisely the skill of [B]. Getting court orders against [B] will take ages, and may not be effective. Lauren proposes that a 'dispute register' be set up in which [A] can record that [B] and [A] are in dispute about content. The entry in the register can't afford to make veracity claims or to take sides. It can only note that there is a dispute between [A] and [B], which dispute has been notified to the register by the owners of [A], [B], or both. If the register attempts to make veracity claims, then a clever faker of site [B] could tie up the process indefinitely with specious arguments (and oh boy, have we all heard some lulus!). The best way for the register to work is that if a searcher finds either [A] or [B], they will also be given a link to the entry in the register. However, if the searcher has to go to a special list at, say, disputes.org, then if I were [B] I would certainly want to draw the searcher's attention to the entry in this list and rely on my ability to scam.
If I were [A] it would not matter: someone has already found my site [A], and I would either warn the searcher of counterfeit sites or present my information in such a way that it would be convincing. Either way, this makes the job of [B] even easier. All [B] has to do now is set up a bogus site, never mind the keywords or any expertise in getting noticed by a search engine. Having set up his misleading site, he then notifies the register that [A] and [B] are in dispute, as if he were the aggrieved party. And so, even if it works for only a small percentage of searchers, [B] has made his hit. The only real cure is web savvy and siting oneself within web communities. It may take a while for this to sink in (how many people STILL get caught by 'Lotto' scams?), but on the web, where there is a lot of free information, the seeker should understand that the rule CAVEAT EMPTOR applies: let the buyer beware. My best defence as [A] is as follows: I contact sites that reference mine, [C], [D], ..., and ask them to put a note next to their listing of [A] saying, in effect, that the reader should be aware that bogus sites have appeared (not giving their URLs!). A person browsing for the distinctive keywords of my site will likely find mention of my site on other sites indexed by the same keywords, [C], [D], ..., and will find this information. This is not a route available to the bogus site owner [B], who does not have the same peer network as I do. It will be in the best interests of [C], [D], ... to assist me in this, as they themselves may one day come under attack in this way. If someone browses using the distinctive keywords, they will get [A], [B], [C], [D], ... and will see that there is a problem between [A] and [B]. I offer this more in the spirit of a 'straw man', since there must be an obvious rejoinder which, unfortunately, this morning I just can't see.
Greg Hoglund and Gary McGraw Exploiting Online Games: Cheating Massively Distributed Systems (with a foreword by Ed Felten) http://exploitingonlinegames.com, http://www.cigital.com/silverbullet/ provides some background on the book. Gary McGraw wrote: The most interesting thing to me about EOG is that I believe the kinds of time and state errors found in MMORPGs [massively multiplayer online role-playing games] like World of Warcraft are indicators of what we can expect over the next decade as SOA actually catches on. You see, moving around state between gazillions of clients and a central server in real time is a huge security challenge. Most software people screw it up. Darkreading wrote a little story about this: http://www.darkreading.com/document.asp?doc_id=128961&WT.svl=news1_1 The book is packed with real code, hard-core examples, and things you can try yourself. Give it a spin! For multiplayer game developers, the book is a goldmine on virtual-world security — particularly what needs to be learned from the RISKS Experience. For RISKS readers not really interested in games per se, there is still much grist for the mill in this book. The subtitle of the book is perhaps the real hook, exploring what developers of large complex distributed systems need to learn and mistakes not to make. A quote from Avi Rubin is pithy: "Every White Hat should read it. It's their only hope of staying only one step behind the bad guys." PGN
Please report problems with the web pages to the maintainer