According to current news sources, there is still no clear, definitive understanding of what really happened. Some stock prices went to almost zero, some went wildly higher, and some wild attempted profit-taking ensued. My guess is that there is no one cause; it is a combination of the fat-fingered billion-vs-million transaction, the incredibly short-fuse automated program microtrading that is inherently unstable with self-destructive feedback, the absence of effective audit trails, a lack of regulation of derivatives, greed, uncertainty about the Greek situation, and probably many other factors.

"In one of the most dizzying half-hours in stock market history, the Dow plunged nearly 1,000 points before paring those losses in what possibly could have been a trader error." [CNBC]

"According to multiple sources, a trader entered a `b' for billion instead of an `m' for million in a trade possibly involving Procter & Gamble, a component in the Dow. (CNBC's Jim Cramer noted suspicious price movement in P&G stock on air during the height of the market selloff.)" [CNBC]

"We have a market that responds in milliseconds, but the humans monitoring respond in minutes, and unfortunately billions of dollars of damage can occur in the meantime," said James Angel, a professor of finance at the McDonough School of Business at Georgetown University. [*The New York Times*]

"Nasdaq Operations said it will cancel all trades executed between 2:40 p.m. and 3 p.m. showing a rise or fall of more than 60 percent from the last trade in that security at 2:40 p.m. or immediately prior.
Nasdaq said the stocks affected and break points will be disseminated soon." [Reuters]

Sources:
Dow Falls 1000, Then Rebounds, Shaking Market, *The New York Times*, 7 May 2010
http://www.nytimes.com/2010/05/07/business/economy/07trade.html?hpw
http://finance.yahoo.com/news/Stock-Selloff-May-Have-Been-cnbc-1746103756.html
http://online.wsj.com/article/BT-CO-20100506-722769.html?mod=WSJ_latestheadlines
http://www.reuters.com/article/idUSTRE6456QB20100506

[Thanks to Gunnar Peterson, Jeremy Epstein, and Gabriel Goldberg for contributing items and their own personal comments. PGN]
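The 60-percent break rule Nasdaq described can be sketched mechanically. This is a minimal sketch with hypothetical prices; the reference price and trade price below are invented for illustration:

```shell
# Hypothetical 2:40 p.m. reference price and an executed trade price.
# Per the quoted rule, a trade is busted if it rose or fell more than
# 60 percent from the reference.
ref=60.00
price=0.01
busted=$(awk -v r="$ref" -v p="$price" 'BEGIN {
  dev = (p > r ? p - r : r - p) / r * 100   # percent deviation from reference
  print (dev > 60 ? "yes" : "no")
}')
echo "trade busted: $busted"
```

A P&G-style print near zero deviates by nearly 100 percent and would be cancelled, while an ordinary down-tick of a few percent would stand.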
[Just a reminder that sometimes safety features can, indeed, do what they're supposed to. DB] Unfortunately, it was well named... the train operator did, indeed, die. But the control worked as designed, the train stopped, and no one else was hurt. http://www.nytimes.com/2010/04/29/nyregion/29motorman.html?ref=nyregion
"Modern Railways" magazine reports in its May 2010 issue that someone at SNCF, the French national railway, made a little mistake in March. During a training exercise for staff to practice responding to a major disaster, the disaster scenario was inadvertently posted to the SNCF's website and the public began reading about an "explosion of unknown origin" on a TGV at Macon, with over 100 deaths. Only when large numbers of people started phoning for more information was it realized what had happened.
The *Wall Street Journal* reports on the high rate of complications at Wentworth-Douglass Hospital when doctors used a new surgical robot:

The da Vinci has been billed as a breakthrough in the quest to make surgery less invasive. With its four remote-controlled arms and sophisticated camera, it enables surgeons to operate through small incisions with greater precision and visibility. At Wentworth-Douglass, however, the robot has been used in several surgeries where injuries occurred. One patient... required four more procedures to repair the damage. In earlier robotic surgeries, two patients suffered lacerated bladders.

The article seems to place blame on insufficient training and practice:

Surgeons who use the da Vinci regularly say the robot is technologically sound and an asset in the hands of well-trained doctors. But they caution that it requires considerable practice. As a small regional hospital, Wentworth-Douglass has used the da Vinci about 300 times in four years. That's a fraction of the usage rate of some big medical centers and, some surgeons say, too little for the doctors at the hospital to master it.

Source: http://online.wsj.com/article/SB10001424052702304703104575173952145907526.html

Steven Klein, Computer Service, 1-248-YOUR-MAC (1-248-968-7622)
Apple recently filed a patent application for a "Seamlessly Embedded Heart Rate Monitor" to be used in a mobile phone. From the patent application:

For example, the durations of particular portions of a user's heart rhythm, or the relative size of peaks of a user's electrocardiogram (EKG) can be processed and compared to a stored profile to authenticate a user of the device.

According to an article about the patent:

The system could then allow only the owner of the phone to use it, and block out those who do not match the unique biometric data.

I wonder if a user having a heart attack would be able to call for an ambulance? Or might the abnormal heart rhythm prevent the phone from recognizing them?

Source: http://www.appleinsider.com/articles/10/05/06/apples_future_iphones_could_recognize_a_user_by_their_heartbeat.html

Steven Klein, 1-248-YOUR-MAC or (248) 968-7622
[Source: Marnie Hunter, Anatomical ridicule raises body-scanning concerns, CNN, 7 May 2010]
http://www.cnn.com/2010/TRAVEL/05/06/tsa.scanner.assault/index.html

Full-body scanning machines may reveal a little too much, if an incident of workplace violence this week among Transportation Security Administration screeners is any indication. A TSA worker at Miami International Airport in Florida was arrested for allegedly assaulting a co-worker who had repeatedly teased him about the size of his genitals. The insults stemmed from an X-ray of the accused captured during a training exercise with the airport's full-body scanning machines, the report said. Rolando Negrin "stated he could not take the jokes anymore and lost his mind," allegedly striking the victim with a police baton. According to the report, a witness heard Negrin say in Spanish, "get on your knees or I will kill you and you better apoligise [sic]."

In response to the incident, TSA said it has a zero-tolerance policy for workplace violence. "At the same time, we are investigating to determine whether other officers may have violated procedures in a training session with coworkers and committed professional misconduct," the agency said in a statement.

The incident puts the spotlight back on technology some privacy advocates liken to a virtual strip search. "As far as I'm concerned, this really demonstrates exactly how detailed the images are, exactly how invasive the search is," said John Verdi, senior counsel with the Electronic Privacy Information Center, a Washington-based research center specializing in civil liberties and privacy issues. It receives much of its funding from private foundations. Verdi said the Miami incident "... also demonstrates that this technology, and the way it's being implemented by TSA, is ripe for abuse."

The TSA screener scuffle is not the only recent case of workplace tension involving the technology. A security worker at London's Heathrow Airport allegedly made lewd comments about a female colleague who mistakenly entered a scanner, according to the UK's Press Association. The accused worker was given a police warning for harassment.

TSA officials stressed that the incident in Miami was internal and did not involve any member of the traveling public. When the technology is used in airports, one screener views the scan in a remote location and does not come into contact with passengers being screened. The images are permanently deleted and never stored, according to the TSA.

EPIC has filed a lawsuit against the Department of Homeland Security under the Freedom of Information Act seeking details about the government's use of advanced imaging technology. In April 2010, DHS revealed in a letter to EPIC that it has 2,000 full-body scanning test images, "using TSA models, not members of the public," stored at its test facility. The agency is withholding the images, citing exemptions to the Freedom of Information Act for information pertaining only to internal personnel rules and practices and records that might "benefit those attempting to violate the law."

Verdi finds the idea that the images might be used to evade security "highly problematic." "Because if merely publishing examples of the images that the TSA has generated during testing would harm security, that really calls into question the effectiveness of the machines," he said. Examples the TSA says are consistent with what screening officers see in airports are available on the agency's website.

Aviation security expert Douglas Laird said it is "perfectly logical" for the TSA to withhold the 2,000 test images. "If they were available to the public, then if you were trying to defeat the machine you would study the images to find the weak link, so to speak. I would think they would be crazy to release them," said Laird, who is president of aviation security consulting firm Laird & Associates.

There are shortcomings for any technology, Laird said. Still, Laird said he believes body-scanning technology would have given officials at Amsterdam's Schiphol Airport a much better chance of catching a Nigerian man who boarded a Detroit, Michigan-bound flight on Christmas Day with explosives concealed in his groin area. The alternative pat-down, which U.S. passengers may opt for instead of body scanning, has to be very intrusive to be effective, and studies show people are less tolerant of physical intrusion than of intrusive technology, Laird said.

While advanced imaging technology doesn't involve direct physical contact, the screener training incident in Miami highlights some travelers' reservations about full-body scans. "I really think it would give a lot of folks pause if they thought that TSA employees were mocking naked body scans of American air travelers," Verdi said.
Facebook has gone rogue, drunk on founder Mark Zuckerberg's dreams of world domination. It's time the rest of the Web ecosystem recognizes this and works to replace it with something open and distributed.

[Source: Ryan Singel, Facebook's Gone Rogue; It's Time for an Open Alternative, Wired.com, 7 May 2010]
http://www.wired.com/epicenter/2010/05/facebook-rogue/
http://bit.ly/bc7JLk

Lauren Weinstein, firstname.lastname@example.org, +1 818 225-2800
http://www.pfir.org/lauren
NNSquad: Network Neutrality Squad - http://www.nnsquad.org
Lauren's Blog: http://lauren.vortex.com
http://bit.ly/c7Vy8f (TechCrunch)

And the Facebook screw-ups just keep on comin'!

"Hey Moe .. uh, Mark! What should I do with this wacky privacy code?"
"I don't know, numbskull, don't bother me with that privacy garbage, especially when I'm counting my money. Just stuff anything in there, the suckers will never know the difference!"
"Nyuk! Nyuk! Nyuk!"

Lauren Weinstein, email@example.com, +1 818 225-2800
http://www.pfir.org/lauren
People For Internet Responsibility - http://www.pfir.org
Actually, this is not exactly an old technology. It is nano-scale imaging; "fingerprinting" is only used to make it simpler to explain. And I should note that you need to do the "fingerprint" before EVENT SEQUENCE OF INTEREST to be able to detect that it is the same thing after EVENT SEQUENCE OF INTEREST. While it may be useful in some cases, it does not relate one part of an item to another part of the same item (i.e., you can't take a shard and compare it to the rest of the sheet to tell that one came from the other) unless the whole has been previously analyzed. You might be able to identify systematic patterns in some things, like chemical compounds run as part of one batch compared to another batch (thus taggants may be emulated). The technique is really useful predominantly for high-valued items. For example, you might want to fingerprint pieces of paper containing trade secrets so that the original sheet can be traced back to the original holder (but not copies made on a copier).

Fred Cohen & Associates, tel/fax: 925-454-0171, http://all.net/
572 Leona Drive, Livermore, CA 94550
> Certificates. What possible reason do I have to trust that one of the
> commercial certificate providers will not sell my private key to an
> outsider? Or, one of their employees, for that matter.

Really? This old chestnut again? Ummm. Maybe the fact that they never have your private key... Surely there are trust issues with CAs, but the issues don't generally arise from the trust that their clients put in them. Rather, the problem is the trust that relying third parties (their clients' customers and visitors) put in them: trust that their clients' visitors put in the CA transitively, by trusting browser manufacturers who in turn trust CAs. I talked about this a bit at http://www.kobly.com/drupal/node/6 and maybe it's time to write some more on the subject...
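To make the "they never have your private key" point concrete, here is a minimal sketch of the enrollment step using the OpenSSL command-line tool (the file names and the CN below are hypothetical): the subscriber generates the key pair locally and sends the CA only a certificate signing request, so the CA never possesses the private key.

```shell
# Generate an RSA private key locally; this file never leaves the machine.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out key.pem

# Build a certificate signing request (CSR): it carries the public key and
# the requested identity, signed with the private key as proof of possession.
openssl req -new -key key.pem -subj "/CN=example.com" -out csr.pem

# The CSR is what goes to the CA; it contains no private key material.
openssl req -in csr.pem -noout -subject
```

The CA signs the CSR to produce the certificate. The residual trust problem is the one described above: browsers ship with a list of CAs that the relying end user never chose.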
In RISKS-26.05, Roy Smith mentioned the aggravation of his wife's Prius losing personal data such as contact lists after an anti-lock brake fix was applied. While that aggravation is understandable, I'm more upset that there is not more isolation between safety-critical functions and personal data. As cars get more and more user-supplied data, whether it's phone contact lists or entertainment media, the chances for propagation of malware presumably sharply increase. The claim that the anti-lock brake update erased the phone contact list makes me highly suspicious that the isolation is less than it should be. It's not hard to imagine a lot of really bad results from that, both accidental and malicious. While there are techniques that could help reduce both risks, I'd be a lot more comfortable if the first of those was physical isolation between the safety-critical pieces and any user-controlled data.
Literate people of a certain age will notice my reference to "the matter of J. Robert Oppenheimer", in which a scientist was driven from his career by a whispering campaign; but I'm afraid that Herbert Schildt, the unwilling subject of a Wikipedia biography, is no Oppenheimer. Who is Herbert Schildt? He is merely a hard-working author of doorstopper and boat-anchor practical programming books which aren't great. Some of them describe the C programming language from a Microsoft perspective. Now, despite the fact that the C programming language was "standardized" in 1989 and again in 1999, this effort was nothing like the more serious and grown-up efforts at language standardization one saw in the past with Algol and more recently with Java. Instead, the standard made a sow's ear out of another sow's ear; a language with notable drawbacks, including free aliasing, was subjected to further abuse by making its semantics, in many cases, undefined, in order (as far as I can see) to make as many vendors of existing C compilers happy... because they would not have to change their compilers in order to label them standard. Mr. Schildt fell foul of this confused process because he wrote primarily for a Microsoft audience, starting at a time when Microsoft C compilers were quite different from those of other vendors. For example, John "A Beautiful Mind" Nash was brought to my office at Princeton in 1991 because he'd validly coded a limit test in an extended-precision package for C as a compile-time constant expression which was not evaluated correctly by the Microsoft compiler; I gave him the Borland compiler and his (beautiful) program worked. In Schildt's case, and in his early work, he used the idiom void main() for the main procedure of a C executable, because then and now this does no harm on specific platforms.
However, in a Unix/Linux command-line environment it MAY cause unexpected behavior when the command shell branches based on what it thinks is a return code from the executable, for void main() returns no defined status; int main() does. Of course, one could well argue that diligent shell programmers would know that it's their job to find out whether an executable in fact returns anything useful, but instead, this putative error became the basis for a highly personalized attack on Schildt that's been going on for almost twenty years. The most egregious harm has been the snarky publication of a Wikipedia "biography" of Schildt. Although its originator claims in the Discussion section to have no intention to harm Schildt, the biography was clearly meant as a container, a framework, for Schildt's enemies to wax prolix about how "bad" his books are. One thing I find disturbing about their language is that to intelligent people, there are few or no books that are known to be bad, just as for your real dog lover, there are "no bad dogs". This is easy to prove, for books if not for dogs. If an intelligent person picks up a book, on programming or anything else, let's say at Borders, and peruses it, she may well find things that don't make sense or that she thinks are false. What does she do? She chucks it aside, or, if considerate, she refiles it so the Borders staff doesn't have to. But note that at this point she has no "justified true belief" THAT the book is "bad". To really know this, one has to do something that only paid book reviewers, students assigned a book, or OCD (obsessive-compulsive disorder) people do: read the book from cover to cover. In fine, "you can't tell a book from looking at the cover". Therefore, intelligent people are wildly enthusiastic about good books that they have read. The sort of people who like to focus on pernicious bad books? Hmm, *imams* issuing *fatwas* to their *taliban* in their *madrassahs* about The Satanic Verses.
The old Roman Catholic Church hierarchy, which as recently as 1948 published its *Index Librorum Prohibitorum*. And, to Godwin-converge to unity and get it over with, Nazis. "I hate Illinois Nazis" - John Belushi as Jake Blues in The Blues Brothers. Now, I have long been aware that technology is for many white males (and I do not introspectively exempt myself) a sort of laager in which we white males can escape the messiness of relationships, literature, and, in general, what T.S. Eliot called the pain of living and the drug of dreams. It's fun to be right for a change instead of subaltern and corrected. But the fact is that even computer books and the sample code found in them cannot be right all the time. In my very first computer class, I was assigned Sherman's Programming and Coding for Digital Computers. It described how to program the IBM 7094. But my university did not have a 7094, a one-address, general-register machine with a 36-bit word. It had the truly strange but very popular IBM 1401, a smaller machine by far, with two-address instructions, no general registers, and a variable-length word... so variable, in fact, that I floored the prof with the exact value of 100 factorial calculated by a simple multiply. Since we were on strike because Nixon had invaded Cambodia (in violation of the will of Congress and the people), I did not attend enough classes to get this difference explained. Instead, I LEARNED that computers can be different-and-the-same, because Sherman had a chapter on how to write an interpreter for a different machine. I learned "Turing equivalence". Somehow, the Schildt-bashers missed Hegel's class, where one learns that the truth is partial, and the question is how the active reader (the only reader who learns from a book) makes progress, even by realizing that if he calls a C program with a void main procedure, he'd better not examine the return code, or he should change void main() to int main().
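The void main() point can be sketched at the shell level. In this minimal sketch, `sh -c 'exit N'` stands in for a hypothetical compiled program whose int main() returns N; with void main(), the status the shell sees would simply be unspecified.

```shell
# Stands in for a program whose int main() returns 0 (success):
# the shell's if-test branches on that exit status.
sh -c 'exit 0'
if [ $? -eq 0 ]; then
  echo "shell sees success and takes the success branch"
fi

# Stands in for a program whose int main() returns 3 (an error code);
# the || capture keeps the script going even under 'set -e'.
status=0
sh -c 'exit 3' || status=$?
echo "shell sees exit status $status"
```

A shell script that branches on $? after running a program built with void main() is therefore relying on a value the program never defined.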
Another way in which the anti-Schildt campaign repels me instinctively is its adolescent personalization. Hero computer scientist Dijkstra was in fact very acerbic. He wounded a lot of people, which may be why he fled the corporation for academia [Peter? Do you agree?]. But you search his writings in vain for the proper names of his "enemies" if indeed he considered John McCarthy or Ken Iverson his enemies. Dijkstra did say "APL is a mistake, carried through to perfection. It is the language of the future for the programming techniques of the past: it creates a new generation of coding bums". He said many things: concerning *programming languages* and *ideas*, but one searches the EWD archive in vain for him to call John McCarthy a turkey or say that Ken Iverson's Mom wears Army boots. This despite the fact that many nasty things were said about Dijkstra, such as "ivory tower theorist"; for in fact, "speaking truth to [group/ collective] shibboleth" is a good way to become real unpopular, at warp speed today on the Internet; this explains the childish regression to adolescent bullying: if you're filled with anger and afraid for your job, it's much safer to attack an individual, whereas if you talk about ideas, you may offend some corporation who's invested in its negation. But all this would be by the way, ho hum, same stuff, different day. Where's the RISK? The Risk is that Herbert Schildt clearly *does not want* a biography in wikipedia. He *does not want* his name to be morphed into "Bullschildt", an adolescent stunt calculated to wound even a grownup because it's an attack on one's family. The Risk is that many scholars of the American constitution agree that the "reserved rights" clause of the Ninth Amendment includes a right to personal privacy as does ordinary libel law. These rights are trumped only in the case of well-known public figures. No respect has been shown Schildt's Ninth Amendment rights. 
That he "asserts" these rights is shown by the fact that not once has he spoken out in his own defense. It is clear to me that this silence means that he considers himself, not a public figure, not some sort of tinpot Zola like me, but more a Dreyfus who would very much like to be left alone as a private professional who happens to write computer books... which, like it or not, are indemnified against lawsuits based on "errors" because they might contain errors in code snippets. As in the more celebrated case of Kathy Sierra, a private person is exposed to unwarranted shame (and in her case death threats) while working essentially as an employee of the publisher, who in the case of computer books maintains employer-level control over content, schedule, and quality. Yes, Wyoming attorney Gerry Spence lost a case in which a (former) Miss Wyoming sued Hustler for a foul sexual innuendo that caused Miss Wyoming to lose a job she needed, to become unemployable, and to have to join the Army. But Spence's reasoning is sound: although that more celebrated case (Hustler v. Falwell) did create a somewhat unprotected class of truly well-known figures who could be the target of hyperbolic satire, the SCOTUS did not mean to remove protection from beauty queens, computer authors without distinction, scholarship winners, or kids on YouTube playing Chopin etudes. The criticisms of Schildt were jejune. They are for the most part based on a document created by someone who has never taken a single computer science class (Schildt has a BS and an MSCS in computer science from the University of Illinois). The document ("C: The Complete Nonsense") claims that there are "hundreds" of errors in the targeted Schildt book ("C: The Complete Reference") but lists, as "currently known", only 20.
It has been recently updated to a document of equal foolishness; in neither case did the author of these documents read the book; instead, he flipped through it to find errors, therefore my reasoning above, that he does not KNOW that the book is "bad", applies. However, and this is where Wikipedia comes in, this document (and documents citing it directly or indirectly) was the basis for the Wikipedia article. No one seems to note a major RISK in Wikipedia. Ask yourself: where on a Wikipedia article does a sort of Dewey Decimal number appear? The books in a library, and the articles in some encyclopedias, are arranged by means of a number which shows the position of their knowledge or content in some sort of coherent scheme of "what's fit to know". Sure, in Dewey Decimal, numbers starting in zero are for the strange, the mysterious, the bizarre, including computer science. But everything has its place, and a criterion applies which has to be above discussion. In Wikipedia, nobody has to figure out where in an over-arching scheme of knowledge his new article fits. But "minor postmodern Grub Street author makin' a livin' writin' computer books for people who don't read but gots jobs" is de minimis. Educators complain that individual Wikipedia articles are inaccurate. But the real problem is more global: the whole scheme is a complete mess. Basically, a random search shows Wikipedia to be a sort of Boomer Bible or toxic waste dump of the worst sort of popular culture, informed in fact by American white male and upper-class Asian bias throughout: articles about cigarettes that were fun to smoke, "virgin killers", "duh-bates" about issues that should never have arisen, including the veracity of the Holocaust and evolution, and hey, maybe the American South should secede again.
In this mess, it's easy to insert an "article" about someone you don't like, and only if you're a person of some distinction, like John Seigenthaler (who got an article associating him with the assassination of JFK taken down), can you do anything about it. The case of Seigenthaler created the Wikipedia "biographies of living persons" policy, but the Schildt article violates this policy; the policy states that "biographies of living persons" must have solid, "neutral point of view" references, but the Schildt references are biased tirades based on "C: The Complete Nonsense", a document whose very title reveals bias. Recently, to be "fair and balanced", some editors have added McGraw-Hill promotional puffery which claims, more or less, that Schildt is the Greatest Programmer in the World. This is worse, because, of course, he isn't. This affair is what Jaron Lanier (in "You Are Not a Gadget", Allen Lane, 2010) calls "digital Maoism". The "gods in the clouds", as he calls them, the merely lucky sharpies like Jimbo Wales, are entirely too busy, these days, enjoying their millions to do any "work", and controlling the rest of us helots is "work" akin, it's been said, to herding cats. Therefore they have instinctively turned to Mao Zedong Thought: let thugs, bullies, and lunatics form mobs in order to out, to persecute, to hound, and to bully ordinary people just trying to get by. The article on Herbert Schildt is a RISK. Wikipedia's lack of organization is a RISK.