On 7 Oct 2008, at 37,000 feet off Western Australia, a Qantas A330 experienced two sudden nosedives, injuring roughly one-third of those on board: 110 passengers and nine crew members. The first dive dropped 150 feet in two seconds, part of a 690-foot descent lasting 23 seconds. Two minutes later, the second dropped 400 feet in 15 seconds. Twelve people were seriously injured, and 39 were hospitalized. (At least 60 passengers were reportedly not wearing seat belts.) A three-year investigation has determined that one of the plane's three airspeed sensors malfunctioned, intermittently supplying erroneous data. The cabin pressurization and the plane's auto-braking system also failed. (This malfunction was one of only three known worldwide in 128 million operating hours, although one of the others involved the same sensor unit on an A330, on 12 Sep 2006.) [Source: Andrew Heasley, PGN-ed; thanks to Lauren Weinstein for this item.]
http://www.stuff.co.nz/travel/australia/6163633/Qantas-terror-blamed-on-computer
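  [An aside on redundancy: flight systems commonly vote across redundant sensors so that a single wild reading can be discarded. Below is a minimal Python sketch of median-select voting over three hypothetical airspeed readings. It illustrates the general technique only -- not the A330's actual fault-handling logic, which the intermittent errors in this incident evidently defeated.]

    # Minimal sketch of median-select voting across three redundant sensors.
    # Illustration of the general technique only, NOT the A330's actual
    # fault handling. The sample values below are hypothetical.

    def median_of_three(a: float, b: float, c: float) -> float:
        """Return the median reading; a single wild value is discarded."""
        return sorted((a, b, c))[1]

    # Hypothetical samples: sensor 1 spikes intermittently.
    samples = [
        (252.0, 251.0, 250.0),   # all healthy        -> 251.0
        (980.0, 251.0, 250.0),   # sensor 1 spikes    -> 251.0 (outvoted)
        (251.5, 250.5, 251.0),   # all healthy again  -> 251.0
    ]

    for s1, s2, s3 in samples:
        voted = median_of_three(s1, s2, s3)
        print(f"readings={s1:6.1f} {s2:6.1f} {s3:6.1f} -> voted={voted:6.1f}")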
Vint Cerf, Why data matters for public policy
First posting in the new Google "Policy by the Numbers" Blog
http://j.mp/ucuW0U (Google - Policy by the Numbers) [via NNSquad]

  "As a computer scientist and engineer, I've always been fascinated by the process that determines how policies and institutions are created. Unlike computing systems, policymaking is anything but binary. An unpredictable combination of special interests, money, hot topics, loyalties and many other factors shape legislation that passes into law. Now, more than ever, we need to use data to build sound policy frameworks that facilitate innovative breakthroughs. In order to inspire confidence in the future (and the markets), governments have to lead by using today's facts to place big bets on -- not against -- a better tomorrow."
[Brett Glass via Dewayne Hendricks via Dave Farber's IP]

Here's an article that might interest readers of your list. In it, Scott Wallsten and Amy Smorodin track the incidence of claims that an event, a technology, or a piece of legislation will be the "end of the Internet" or "break the Internet," and propose an Internet Hysteria Index (IHI) to measure the level of Internet hysteria. The article also expresses concern that the US may be lagging in the business of spreading Internet hysteria and could soon be surpassed by other nations, including the UK. Excerpt and link below.  Brett Glass

Scott Wallsten, Internet Hysteria: Are We Losing Our Edge?, 15 Dec 2011

From Anthony Weiner's wiener to the FCC's brave stand on Americans' shameful inability to turn down the damn volume by themselves, 2011 has been a big year for tech and communications policy. But how has one of the Washington tech crowd's most important products -- Internet hype -- fared this year? In this post, we seek to answer this crucial question.

The Internet Hysteria Index

The Internet is without doubt the most powerful inspiration for hyperbole in the history of mankind. Some extol the Internet's greatness, like Howard Dean, who called the Internet "the most important tool for re-democratizing the world since Gutenberg invented the printing press." Others fret about the future, like Canada's Office of the Privacy Commissioner, which claimed, "Nothing in society poses as grave a threat to privacy as the Internet Service Provider." Sometimes the hyperbole is justified. For example, thanks to Twitter, attendees at this past summer's TPI Aspen Summit were privy to a steady stream of misinformation even before the DC-area earthquake stopped.

In the same spirit, we present the Internet Hysteria Index (IHI). The IHI, which the DOJ and FCC should take care not to confuse with the HHI, is the most rigorous and flexible tool ever conceived for gauging the Internet's "worry zeitgeist". It's rigorous because it uses numbers and flexible because you can interpret it in so many different ways that it won't threaten your preconceived ideas no matter what you believe.

The IHI has two components. The first tracks fears of an unrecognizable, but certainly Terminator-esque, future Internet. We count the number of times the exact phrases "the end of the internet as we know it" and "break the internet" appear in Nexis news searches each year since 2000. Figure 1 shows that 2011 produced a bumper crop of "break the internet" stories, mostly related to the Stop Online Piracy Act and the Protect IP Act. The spike in 2006 reflects a wave of Net Neutrality stories after AT&T's then-CEO proclaimed that "what they [content providers] would like to do is use my pipes free, and I ain't going to let them do that because we have spent this capital and we have to have a return on it."

As our research illustrates, the "End of the Internet" hyperbole shows a healthy, generally upward trend, reflecting the effectiveness of our collective fretting and hand-wringing. Our data do not allow us to identify whether the trend is due to clever Washington PR, lazy hacks retreading old lines, real concerns, or collusion among interest groups simply ensuring they can all stay in business by responding to each other.

More at:
http://www.techpolicyinstitute.org/blog/2011/12/internet-hysteria-%E2%80%93-are-we-losing-our-edge/
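  [The IHI's first component is easy to reproduce in spirit: count, per year, the articles containing either tracked phrase. The Python sketch below runs over a small hypothetical corpus standing in for a Nexis search; the phrases are the ones Wallsten and Smorodin name.]

    # Toy re-implementation of the IHI phrase count: tally exact-phrase
    # hits per year over (year, text) records. The records below are
    # hypothetical stand-ins for a Nexis search.

    from collections import Counter

    PHRASES = ("the end of the internet as we know it", "break the internet")

    def hysteria_index(articles):
        """Count articles per year containing any tracked phrase (case-insensitive)."""
        counts = Counter()
        for year, text in articles:
            lowered = text.lower()
            if any(p in lowered for p in PHRASES):
                counts[year] += 1
        return counts

    # Hypothetical sample corpus.
    corpus = [
        (2006, "Net neutrality fight could break the Internet, critics warn"),
        (2011, "SOPA would be the end of the Internet as we know it"),
        (2011, "Protect IP Act may break the Internet, engineers say"),
        (2011, "Markets rally on strong earnings"),  # no hit
    ]

    for year, n in sorted(hysteria_index(corpus).items()):
        print(year, n)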
BufferBloat: What's Wrong with the Internet?
A discussion with Vint Cerf, Van Jacobson, Nick Weaver, and Jim Gettys
<http://queue.acm.org/detail.cfm?id=2076798> [via Dave Farber's IP]

Internet delays are now as common as they are maddening. That means they end up affecting system engineers just like all the rest of us. And when system engineers get irritated, they often go looking for what's at the root of the problem. Take Jim Gettys, for example. His slow home network had repeatedly proved to be the source of considerable frustration, so he set out to determine what was wrong, and he even coined a term for what he found: bufferbloat.

Bufferbloat refers to excess buffering inside a network, resulting in high latency and reduced throughput. Some buffering is needed; it provides space to queue packets waiting for transmission, thus minimizing data loss. In the past, the high cost of memory kept buffers fairly small, so they filled quickly and packets began to drop shortly after the link became saturated, signaling to the communications protocol the presence of congestion and thus the need for compensating adjustments. Because memory now is significantly cheaper than it used to be, buffering has been overdone in all manner of network devices, without consideration for the consequences. Manufacturers have reflexively acted to prevent any and all packet loss and, by doing so, have inadvertently defeated a critical TCP congestion-detection mechanism, with the result being worsened congestion and increased latency. Now that the problem has been diagnosed, people are working feverishly to fix it. This case study considers the extent of the bufferbloat problem and its potential implications.

Working to steer the discussion is Vint Cerf, popularly known as one of the "fathers of the Internet." As the co-designer of the TCP/IP protocols, Cerf did indeed play a key role in developing the Internet and related packet data and security technologies while at Stanford University from 1972 to 1976 and with DARPA (the U.S. Department of Defense's Advanced Research Projects Agency) from 1976 to 1982. He currently serves as Google's chief Internet evangelist.

Van Jacobson, presently a research fellow at PARC, where he leads the networking research program, is also central to this discussion. Considered one of the world's leading authorities on TCP, he helped develop the RED (random early detection) queue-management algorithm that has been widely credited with allowing the Internet to grow and meet ever-increasing throughput demands over the years. Prior to joining PARC, Jacobson was a chief scientist at Cisco Systems and later at Packet Design Networks.

Also participating is Nick Weaver, a researcher at ICSI (the International Computer Science Institute) in Berkeley, where he was part of the team that developed Netalyzr, a tool that analyzes network connections and has been instrumental in detecting bufferbloat and measuring its impact across the Internet.

Rounding out the discussion is Gettys, who edited the HTTP/1.1 specification and was a co-designer of the X Window System. He now is a member of the technical staff at Alcatel-Lucent Bell Labs, where he focuses on systems design and engineering, protocol design, and free software development.

VINT CERF What caused you to do the analysis that led you to conclude you had problems with your home network related to buffers in intermediate devices?
JIM GETTYS I was running some bandwidth tests on an old IPsec (Internet Protocol Security)-like device that belongs to Bell Labs and observed latencies of as much as 1.2 seconds whenever the device was running as fast as it could. That didn't entirely surprise me, but then I happened to run the same test without the IPsec box in the way, and I ended up with the same result. With 1.2-second latency accompanied by horrible jitter, my home network obviously needed some help. The rule of thumb for good telephony is 150-millisecond latency at most, and my network had nearly 10 times that much.

My first thought was that the problem might relate to a feature called PowerBoost that comes as part of my home service from Comcast. That led me to drop a note to Rich Woundy at Comcast, since his name appears on the Internet draft for that feature. He lives in the next town over from me, so we arranged to get together for lunch. During that lunch, Rich provided me with several pieces to the puzzle. To begin with, he suggested my problem might have to do with the excessive buffering in a device in my path rather than with the PowerBoost feature. He also pointed out that ICSI has a great tool called Netalyzr that helps you figure out what your buffering is. Also, much to my surprise, he said a number of ISPs had told him they were running without any queue management whatsoever -- that is, they weren't running RED on any of their routers or edge devices.

The very next day I managed to get a wonderful trace. I had been having trouble reproducing the problem I'd experienced earlier, but since I was using a more recent cable modem this time around, I had a trivial one-line command for reproducing the problem. The resulting SmokePing plot clearly showed the severity of the problem, and that motivated me to take a packet capture so I could see just what in the world was going on. About a week later, I saw basically the same signature on a Verizon FiOS [a bundled home communications service operating over a fiber network], and that surprised me. Anyway, it became clear that what I'd been experiencing on my home network wasn't unique to cable modems.

VC I assume you weren't the only one making noises about these sorts of problems?

JG I'd been hearing similar complaints all along. In fact, Dave Reed [Internet network architect, now with SAP Labs] about a year earlier had reported problems in 3G networks that also appeared to be caused by excessive buffering. He was ultimately ignored when he publicized his concerns, but I've since been able to confirm that Dave was right. In his case, he would see daily high latency without much packet loss during the day, and then the latency would fall back down again at night as flow on the overall network dropped.

Dave Clark [Internet network architect, currently senior research scientist at MIT] had noticed that the DSLAM (Digital Subscriber Line Access Multiplexer) his micro-ISP runs had way too much buffering -- leading to as much as six seconds of latency. And this is something he'd observed six years earlier, which is what had led him to warn Rich Woundy of the possible problem.

VC Perhaps there's an important life lesson here suggesting you may not want to simply throw away outliers on the grounds they're probably just flukes. When outliers show up, it might be a good idea to find out why.

NICK WEAVER But when testing for this particular problem, the outliers actually prove to be the good networks.
JG Without Netalyzr, I never would have known for sure whether what I'd been observing was anything more than just a couple of flukes. After seeing the Netalyzr data, however, I could see how widespread the problem really was. I can still remember the day when I first saw the data for the Internet as a whole plotted out. That was rather horrifying.

NW It's actually a pretty straightforward test that allowed us to capture all that data. In putting together Netalyzr at ICSI, we started out with a design philosophy that one anonymous commenter later captured very nicely: "This brings new meaning to the phrase, 'Bang it with a wrench.'" Basically, we just set out to hammer on everything -- except we weren't interested in doing a bandwidth test, since there were plenty of good ones out there already. I remembered, however, that Nick McKeown and others had ranted about how amazingly over-buffered home networks often proved to be, so buffering seemed like a natural thing to test for. It turns out that would also give us a bandwidth test as a side consequence.

Thus we developed a pretty simple test. Over just a 10-second period, it sends a packet and then waits for a packet to return. Then each time it receives a packet back, it sends two more. It either sends large packets and receives small ones in return, or it sends small packets and receives large ones. During the last five seconds of that 10-second period, it just measures the latency under load in comparison to the latency without load. It's essentially just a simple way to stress out the network.

We didn't get around to analyzing all that data until a few months after releasing the tool. Then what we saw were these very pretty graphs that gave us reasonable confidence that a huge fraction of the networks we had just tested could not possibly exhibit good behavior under load. That was a very scary discovery.

JG Horrifying, I think.

NW It wasn't quite so horrifying for me, because I'd already effectively taken steps to mitigate the problem on my own network -- namely, I'd paid for a higher class of service on my home network specifically to get better behavior under load. You can do that because the buffers are all sized in bytes. So if you pay for the 4x bandwidth service, your buffer will be 4x smaller in terms of delay, and that ends up acting as a boundary on how bad things can get under load. And I've taken steps to reduce other potential problems -- by installing multiple access points in my home, for example.

JG The problem is that the next generation of equipment will come out with even larger buffers. That's part of why I was having trouble initially reproducing this problem with DOCSIS (Data over Cable Service Interface Specification) 3.0 modems. That is, because I had even more extreme buffering than I'd had before, it took even longer to fill up the buffer and get it to start misbehaving.

VC What I think you've just outlined is a measure of goodness that later proved to be exactly the wrong thing to do. At first, the equipment manufacturers believed that adding more buffers would be a good thing, primarily to handle increased traffic volumes and provide for fair access to capacity. Of course, it has also become increasingly difficult to buy a chip that doesn't have a lot of memory in it.

NW Also, to the degree that people have been testing at all, they've been testing for latency or bandwidth.
The problem we're discussing is one of latency under load, so if you test only quiescent latency, you won't notice it; and if you test only bandwidth, you'll never notice it. Unless you're testing specifically for behavior under load, you won't even be aware this is happening.

VAN JACOBSON I think there's a deeper problem. We know the cause of these big queues is data piling up wherever there's a fast-to-slow transition in the network. That generally happens either going from the Internet core out to a subscriber (as with YouTube videos) or from the subscriber back into the core, where a fast home network such as a 54-megabit wireless hits a slow 1- to 2-megabit Internet connection. [snip]

Dewayne-Net RSS Feed: <http://www.warpspeed.com/wordpress>
Archives: https://www.listbox.com/member/archive/247/=now
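  [The arithmetic behind these anecdotes is worth making explicit: a FIFO buffer sized in bytes adds up to buffer_bytes * 8 / link_rate seconds of standing queue once the link saturates. The Python sketch below works through a few cases with hypothetical but era-typical buffer sizes; note how both Dave Clark's six-second DSLAM latency and Nick Weaver's pay-for-a-faster-tier mitigation fall out of the same formula.]

    # Back-of-the-envelope bufferbloat arithmetic: worst-case queueing
    # delay of a byte-sized FIFO buffer on a saturated link. The buffer
    # sizes below are hypothetical illustrations, not measured values.

    def worst_case_delay_ms(buffer_bytes: int, uplink_mbps: float) -> float:
        """Milliseconds of standing queue when the uplink is saturated."""
        return buffer_bytes * 8 / (uplink_mbps * 1e6) * 1000

    cases = [
        ("cable modem, 1 Mbps up, 256 KB buffer", 256 * 1024, 1.0),
        ("same modem on a 4 Mbps tier",           256 * 1024, 4.0),
        ("DSLAM, 768 kbps up, 576 KB buffer",     576 * 1024, 0.768),
    ]

    for label, buf, mbps in cases:
        print(f"{label}: ~{worst_case_delay_ms(buf, mbps):.0f} ms added latency")
    # -> roughly 2100 ms, 520 ms, and 6100 ms respectively: the 4x tier
    #    cuts the delay 4x, and the DSLAM case lands near six seconds.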
Can the U.S. Government close social media accounts? [via NNSquad] http://j.mp/s4Ek1i (Salon) "The Obama administration and *The New York Times* are teaming up to expose and combat the grave threat posed by a Twitter account, purportedly operated by the Somali group Shabab, and in doing so, are highlighting the simultaneous absurdity and perniciousness of the War on Terror. This latest tale of Dark Terrorist Evil began on December 14 when the NYT's Jeffrey Gettleman directed intrepid journalistic light on the Twitter account maintained under the name "HSMPress," which claims to be the press office of Harakat al-Shabab al-Mujahedeen, the Shabab's full name."
Computing On Encrypted Databases Without Ever Decrypting Them
http://j.mp/w0NLE3 (Forbes)

  "Now the Google- and Citigroup-funded work of three MIT scientists holds the promise of solving that long-nagging issue in some of the computing world's most common applications. CryptDB, a piece of database software the researchers presented in a paper at the Symposium on Operating Systems Principles in October, allows users to send queries to an encrypted set of data and get almost any answer they need from it without ever decrypting the stored information, a trick that keeps the info safe from hackers, accidental loss and even snooping administrators. And while it's not the first system to offer that kind of magically flexible cryptography, it may be the first practical one, taking a fraction of a second to produce an answer where other systems that perform the same encrypted functions would require thousands of years."

CryptDB: Protecting Confidentiality with Encrypted Query Processing
http://j.mp/u1INfV (MIT [PDF])

  "It works by executing SQL queries over encrypted data using a collection of efficient SQL-aware encryption schemes. CryptDB can also chain encryption keys to user passwords, so that a data item can be decrypted only by using the password of one of the users with access to that data. As a result, a database administrator never gets access to decrypted data, and even if all servers are compromised, an adversary cannot decrypt the data of any user who is not logged in."

Potentially a *very* big deal.  [Network Neutrality Squad: http://www.nnsquad.org]
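  [To make the equality-query idea concrete: if a column is encrypted deterministically under a client-held key, equal plaintexts yield equal ciphertexts, so the server can evaluate a WHERE clause without ever seeing plaintext. The Python sketch below is only a toy illustration of that single idea -- it uses an HMAC token as a stand-in for deterministic encryption (one-way, so it supports equality matching but not decryption), whereas CryptDB itself layers multiple SQL-aware schemes to also support ranges, joins, and aggregates.]

    # Toy illustration of equality queries over "encrypted" data. An HMAC
    # token stands in for deterministic encryption; sqlite3 plays the role
    # of the untrusted server, which never sees plaintext or the key.

    import hashlib, hmac, sqlite3

    KEY = b"client-side secret key (never sent to the server)"

    def det_token(value: str) -> str:
        """Deterministic, key-dependent token: equal plaintexts -> equal tokens."""
        return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name_tok TEXT)")
    for name in ["alice", "bob", "alice"]:
        db.execute("INSERT INTO users VALUES (?)", (det_token(name),))

    # The query ships only the token; the server never learns 'alice'.
    (count,) = db.execute("SELECT COUNT(*) FROM users WHERE name_tok = ?",
                          (det_token("alice"),)).fetchone()
    print("matches for 'alice':", count)   # -> 2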
Kim Zetter, WiReD.com, 8 Dec 2011

Four Romanian nationals have been charged with hacking card-processing systems at more than 150 Subway restaurants and 50 other unnamed retailers, according to an indictment unsealed Thursday. The hackers compromised the credit-card data of more than 80,000 customers and used the data to make millions of dollars of unauthorized purchases, according to the indictment (.pdf). From 2008 until May 2011, the hackers allegedly hacked into more than 200 point-of-sale (POS) systems in order to install a keystroke logger and other sniffing software that would steal customer credit, debit and gift-card numbers. They also placed backdoors on the systems to provide ongoing access. The hackers allegedly scanned the Internet to identify vulnerable POS systems with certain remote desktop software applications installed on them, and then used the applications to log into the targeted POS system, either by guessing the passwords or using password-cracking software programs.
http://www.wired.com/threatlevel/2011/12/romanians-subway-hack/

Jim Reisert AD1C, <email@example.com>, http://www.ad1c.us
Misty Williams and Joel Anderson, *The Atlanta Journal-Constitution*, 9 Dec 2011

Gwinnett Medical Center on Friday confirmed it has instructed ambulances to take patients to other area hospitals when possible, after discovering a system-wide computer virus that slowed patient registration and other operations at its campuses in Lawrenceville and Duluth. Staff members discovered the virus Wednesday afternoon and have been working since then with outside IT experts to fix the problem, spokeswoman Beth Okun said. In the meantime, the health system has been forced to switch back to paperwork.
https://www.ajc.com/news/gwinnett/ambulances-turned-away-as-1255750.html

1. The article doesn't say how the virus was able to infect the medical center's computers.  [Does that need much explanation for RISKS readers?]
2. Why didn't they have adequate anti-virus protection? If one of our computers at home gets infected, no one is likely to die as a result.  [AV "protection" typically fails to give complete coverage.]
3. The article was posted on Friday night, and the virus was discovered on Wednesday afternoon, so almost two days later they still don't have things patched up?  [Surprise?]

Jim Reisert AD1C, <firstname.lastname@example.org>, http://www.ad1c.us

  [Inserted comments from PGN, who could not resist.]
Jack Shafer, Reuters blog, 16 Dec 2011
http://j.mp/w1Ja2U (Reuters) [via NNSquad]

  "So grand is the entertainment complex's umbrage that I half expect its next move will be to petition the Department of Justice for the authority to shut down the electric utilities that provide power to any and all computers it suspects are pinching its intellectual property."
Electronic Frontier Foundation Media Release

Marcia Hofmann, Senior Staff Attorney, Electronic Frontier Foundation
email@example.com, +1 415 436-9333 x116
Seth Schoen, Senior Staff Technologist, Electronic Frontier Foundation
firstname.lastname@example.org, +1 415 436-9333 x107

Protect Yourself from Intrusive Laptop and Phone Searches at the U.S. Border
EFF's New Guide Helps Travelers Defend Their Data Privacy

San Francisco - Anytime you travel internationally, you risk a broad, invasive search of your laptop, phone, and other digital devices -- including the copying of your data and the seizure of your property for an indefinite time. To help travelers protect themselves and their private information during the busy holiday travel period, the Electronic Frontier Foundation (EFF) released a new report today with important guidance for safeguarding your personal data at the U.S. border.

Thanks to protections enshrined in the U.S. Constitution, the government generally can't snoop through your laptop for no reason. But the federal government claims those privacy protections don't cover travelers at the U.S. border, allowing agents to take an electronic device, search through all the files, and keep it for further scrutiny -- without any suspicion of wrongdoing whatsoever. For business travelers, that could expose sensitive information like trade secrets, doctor-patient and attorney-client communications, and research and business strategies. For others, the data at risk includes personal health histories, financial records, and private messages and photos of family and friends.

EFF's new report, "Defending Privacy at the U.S. Border: A Guide for Travelers Carrying Digital Devices," outlines potential ways to protect that private information, including minimizing the data you carry with you and employing encryption.

"Different people need different kinds of precautions for protecting their personal information when they travel," said EFF Senior Staff Technologist Seth Schoen. "Our guide helps you assess your personal risks and concerns, and makes recommendations for various scenarios. If you are traveling over the U.S. border soon, you should read our guide now and get started on taking precautions before your trip."

Over the past few years, Congress has weighed several bills to protect travelers from suspicionless searches at the border, but none has had enough support to become law. You can join EFF in calling on the Department of Homeland Security to publish clear guidelines for what they do with sensitive traveler information collected in digital searches by signing our petition. You can also test your knowledge about travelers' privacy rights and help spread the word about the risks by taking our border privacy quiz.

"We store detailed records of our lives on our laptops and our phones. But the courts have diminished our constitutional right to privacy at the border," said EFF Senior Staff Attorney Marcia Hofmann. "It's time for travelers to take action and protect themselves and their private information during international trips."

For "Defending Privacy at the U.S. Border: A Guide for Travelers Carrying Digital Devices":
https://www.eff.org/wp/defending-privacy-us-border-guide-travelers-carrying-digital-devices
To take the border privacy quiz:
https://www.eff.org/pages/border-search-quiz
To sign the petition:
https://action.eff.org/o/9042/p/dia/action/public/?action_KEY=8341
For this release:
https://www.eff.org/press/releases/protect-yourself-intrusive-laptop-and-phone-searches-us-border
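  [One hypothetical illustration of the minimize-and-encrypt advice -- our sketch, not a recipe from the EFF report, which discusses a broader range of precautions: encrypt sensitive files before the trip and travel without the key, retrieving it only after crossing. The Python sketch below uses the third-party 'cryptography' package's Fernet recipe; the filenames are hypothetical.]

    # Encrypt a file before travel; keep the key somewhere you are NOT
    # carrying (e.g., with a colleague, retrieved over the network later).
    # Requires the third-party package:  pip install cryptography

    from cryptography.fernet import Fernet

    # Hypothetical sensitive file, created here so the sketch is self-contained.
    with open("notes.txt", "wb") as f:
        f.write(b"meeting notes, client list, draft strategy ...")

    key = Fernet.generate_key()      # store this key apart from the device
    fernet = Fernet(key)

    with open("notes.txt", "rb") as f:
        token = fernet.encrypt(f.read())
    with open("notes.txt.enc", "wb") as f:
        f.write(token)
    # (Then securely erase notes.txt; simply deleting it is not secure erasure.)

    # After the trip, with the key retrieved:
    with open("notes.txt.enc", "rb") as f:
        print(fernet.decrypt(f.read()))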
Daniel Kahneman, Thinking, Fast and Slow
Farrar, Straus and Giroux, 499 pp., 2011 [with extensive appendices, notes, and index]

Kahneman, a Nobel laureate in economics, is known for challenging the rational model of judgment and decision making. This book examines the interrelations between fast, intuitive, emotional thinking on one hand and slow, more deliberative, logical thinking on the other. The book promises to "transform the way you think about thinking." The RISKS connection is quite evident. (The index has two dozen lines of entries on risk assessment, risk aversion, and risk seeking.) However, the book is written from a holistic perspective that is extraordinary in its scope. To me, the fast-slow dichotomy seems to bear a resemblance to right-brain (e.g., intuitive) vs. left-brain (e.g., logical, rational, linear) behavior, but it goes much deeper than anything I had seen along those lines before.

The back jacket has a strong set of squibs, including this one:

  * This book is a tour de force by an intellectual giant; it is readable, wise and deep. Buy it fast. Read it slowly and repeatedly.  (Richard H. Thaler)