A software bug in a telecom provider's phone-number blacklisting system caused the largest telephony outage in US history, according to a report released by the US Federal Communications Commission (FCC) at the start of the month. The telco is Level 3, now part of CenturyLink, and the outage took place on October 4, 2016.
http://www.bleepingcomputer.com/news/software/software-bug-behind-biggest-telephony-outage-in-us-history/

According to the FCC's investigation, the outage began after a Level 3 employee entered phone numbers suspected of malicious activity into the company's network management software. The employee wanted to block incoming calls from these numbers and entered each number into a field provided by the software's GUI. The problem arose when the technician left one field empty, without entering a number. Unbeknownst to the employee, the buggy software did not ignore the empty field, as most software does, but instead treated it as a wildcard. As soon as the technician submitted the input, Level 3's network began blocking all incoming and outgoing telephone calls -- over 111 million in total.
http://it.slashdot.org/story/18/04/01/215242/software-bug-behind-biggest-telephony-outage-in-us-history

[This was a 2016 event, but just posted on Slashdot. The discussion thread has entertaining bits on other design/programming failures -- like IBM code I reported that tested one byte for X"FF" when the parameter fence was defined as eight bytes of X"FF". What could go wrong? How about disallowing X"FF" as a parameter value when it was valid. The initial response was “So what?” vs. “Uh oh”. Gabe]

[[Interesting notion of “biggest outage”. RISKS followed the Martin Luther King Day AT&T long-lines outage in 1990, where calls across the U.S. could not be completed for half a day. That potentially affected every would-be caller in the country, which could easily have been more than 111 million customers. That case is a lovely history lesson for more recent RISKS readers. See RISKS-9.61, 62, 63, 64, 66, 67, 69, 70, 71. PGN]]
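A minimal sketch of the failure mode, in Python. The function name and the prefix-matching logic are hypothetical (the FCC report does not publish Level 3's code), but they show how a single blank blocklist entry can silently become a match-everything wildcard:

    def is_blocked(number: str, blocklist: list[str]) -> bool:
        # Prefix match: "1900" blocks any number starting with 1900.
        for entry in blocklist:
            # BUG: every string starts with the empty string, so a single
            # blank entry silently blocks every call on the network.
            if number.startswith(entry.strip()):
                return True
        return False

    numbers_to_block = ["19005550100", ""]   # one field left blank in the GUI
    print(is_blocked("2025550123", numbers_to_block))   # True: call blocked

The fix is a one-line validation: skip or reject entries that are empty after stripping whitespace, before they ever reach the matcher.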
http://www.nytimes.com/2018/04/01/technology/saks-lord-taylor-credit-cards.html
A ring of cybercriminals tapped into cash registers at Saks and Lord & Taylor retail stores to obtain customer card numbers, then offered them for sale, a security firm says.
Dominic Gates, Seattle Times, 2 Apr 2018
http://www.seattletimes.com/business/boeing-aerospace/boeing-hit-by-wannacry-virus-fears-it-could-cripple-some-jet-production/

Boeing was hit Wednesday by the WannaCry computer virus, and after an initial scare within the company that vital airplane-production equipment might be brought down, company executives later offered assurances that the attack had been quashed with minimal damage. Though news of the attack triggered widespread alarm within Boeing and among airline customers during the day Wednesday, by evening the company was calling for calm. [...]

Microsoft issued patches to plug the vulnerability. However, Corey Nachreiner, chief technology officer of Seattle security technology firm WatchGuard Technologies, said some companies with specialized equipment don't update very often, for fear their custom-built systems will be endangered. Microsoft declined to comment on the Boeing cyber-attack.

[Morals of this story: 1. Don't use Windows for systems that are not going to be kept up to date with patches. Besides, the shrink-wrap guidance usually says don't use it for critical applications -- even if always patched. 2. Back up everything frequently so that you never have to pay ransom. 3. Recognize that essentially every operating system is subvertible one way or another, and plan accordingly. PGN]
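On moral 2, a minimal sketch of the snapshot idea in Python (paths and scheduling are illustrative only, not a hardened backup system): each run copies the source tree into a fresh timestamped directory, so ransomware that encrypts the live data cannot rewrite earlier snapshots, provided the backup volume is kept offline or write-protected.

    import shutil
    import time
    from pathlib import Path

    def snapshot(src: str, backup_root: str) -> Path:
        # Each run lands in a fresh timestamped directory, so a later
        # infection that encrypts 'src' cannot touch older snapshots.
        dest = Path(backup_root) / time.strftime("%Y%m%d-%H%M%S")
        shutil.copytree(src, dest)    # raises rather than overwriting
        return dest

    # Hypothetical usage, e.g. from cron or Task Scheduler:
    #   snapshot("/var/production-data", "/mnt/offline-backups")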
Ethan Zuckerman, *The Atlantic*, 23 Mar 2018
https://www-theatlantic-com.cdn.ampproject.org/c/s/www

After five days of silence, Mark Zuckerberg finally acknowledged the massive data compromise that allowed Cambridge Analytica to obtain extensive psychographic information about 50 million Facebook users. His statement, which acknowledged that Facebook had made mistakes in responding to the situation, wasn't much of an apology -- Zuckerberg and Facebook have repeatedly shown that they have a hard time saying they're sorry. For me, Zuckerberg's statement fell short in a very specific way: he's treating the Cambridge Analytica breach as a bad-actor problem when it's actually a known bug.

In the 17-month-long conversation Americans have been having about social media's effects on democracy, two distinct sets of problems have emerged. The ones getting the most attention are bad-actor problems -- where someone breaks the rules and manipulates a social-media system for their own nefarious ends. Macedonian teenagers create sensational and false content to profit from online ad sales. Disinformation experts plan rallies and counterrallies, calling Americans into the streets to scream at each other. Botnets amplify posts and hashtags, building the appearance of momentum behind online campaigns like #releasethememo.

Such problems are the charismatic megafauna of social-media dysfunction. They're fascinating to watch and fun to study -- who wouldn't be intrigued by the team of Russians in St. Petersburg who pretended to be Black Lives Matter activists and anti-Clinton fanatics in order to add chaos to the presidential election in the United States? Charismatic megafauna may be the things that attract all the attention -- when really there are smaller organisms, some invisible to the naked eye, that can dramatically shift the health of an entire ecosystem. [... lots more. Good article. PGN]
Yonatan Zunger, *The Boston Globe*, 22 Mar 2018
http://www.bostonglobe.com/ideas/2018/03/22/computer-science-faces-ethics-crisis-the-cambridge-analytica-scandal-proves/IzaXxl2BsYBtwM4nxezgcP/story.html

Zunger (now at Humu) formerly worked for Google on security and privacy. Sadly, his long op-ed piece for *The Globe* begins with this: ``the field of computer science, unlike other sciences, has not yet faced serious negative consequences for the work its practitioners do.'' He ends with this: ``Computer science must step up to the bar set by its sister fields, before its own bridge collapse -- or worse, its own Hiroshima.'' However, it would seem he has never been a RISKS reader -- although he was quoted once, in RISKS-29.55.
"A newly discovered family of malware is being used to compromise Linux servers exposed to the Internet. The good news for IT pros is that it doesn't appear to be targeting traditional commercial servers, but is going after consumer devices." Is this an April Fool's hoax? It seems serious. http://www.itprotoday.com/endpoint-security/newly-found-malware-deliberately-avoids-government-networks http://blog.talosintelligence.com/2018/03/goscanssh-analysis.html
> I object to public speculation and the rush to draw lessons learned before
> the actual cause has been determined and released. Although someone is to
> blame (presumably), others are innocent, and there is a human cost to
> innocent people and their families caused by incorrect premature speculation
> as to causes.

I had assumed this to be a particularly British problem, but maybe not -- whenever there's a major disaster in the UK, pitchfork mobs spring into action demanding that whoever is at fault must face punishment immediately, while any lack of information is taken as evidence of a cover-up. What should happen is: a detailed investigation to find what went wrong (root causes); deciding what needs to be done to stop it happening again; and then criminal proceedings, if necessary, in cases of negligence or malicious action. Usually in real life, these happen in the reverse order.

This has probably been covered in RISKS before; I'm no expert, but as I see it, there are two factors in major disasters:

(1) There's usually a chain of events leading up to the actual tragedy. Potentially dangerous things happen all the time but are usually caught by safety procedures and systems before they develop too far; just sometimes, though, too many wrong-side failures align and disaster ensues.

(2) We are imperfect beings in an imperfect world. People make mistakes, instructions may be unclear, communications may be misunderstood, people may not follow rules for well-intentioned reasons, and so on. In particular, people may have to make difficult decisions quickly, under pressure, with incomplete information, in unfamiliar situations, so may not get them right. (I'm assuming that deliberate wrongdoing is rare, and of course should be punished.)

Therefore, a major disaster may involve quite a lot of people, each of whom may have made only a small contribution, and each of whom did what they thought was best at the time -- it's easy to be wise after the event. Demanding to know "whose fault was it?" isn't helpful.

In the UK it's unfortunately common for political factors to become implicated in tragedies. Commercial businesses are blamed for putting profit before safety, central government and local authorities are accused of underfunding public services, politicians are at fault for inadequate regulation and legislation, and so forth. The Grenfell Tower apartment block in west London burned in mid-2017, killing 71 people -- terrible in itself, but because it housed low-income tenants in a wealthy part of town, there were allegations that the whole thing was an attack on a disadvantaged section of society, making bad feelings even worse. Sigh...

[Blame can very often be widely distributed. That has been a theme here for many years. For example, remember the Deepwater Horizon fiasco (RISKS-29.49, 29.75, 29.80, 30.29). PGN]
Surely what was wanted was a semicolonectomy? [Regarding the improperly unmunged safelinks URLs, Good Catch. However, the semicolonoscopy had to come first to diagnose the problem. TNX!!! PGN]
Please report problems with the web pages to the maintainer