A good reminder that sometimes the least complex components are among the most important. http://www.bbc.com/news/uk-northern-ireland-28778728
There is a lot of unenlightened opinion doing the rounds about IOActive's discovery, reported at Black Hat recently, of vulnerabilities in Cobham's Aviator 700 and 700D SATCOM systems. Some of it is propagated by Black Hat reviewers (comment by Iozzo reported in http://www.reuters.com/article/2014/08/04/us-cybersecurity-hackers-airplanes-idUSKBN0G40WQ20140804 ) and some by IOActive themselves (see their White Paper at http://www.ioactive.com/pdfs/IOActive_SATCOM_Security_WhitePaper.pdf ). http://www.abnormaldistribution.org/2014/08/14/security-vulnerabilities-in-commercial-aircraft-satcom-kit/ contains what I hope is a sensible discussion of what this is all about. Peter Bernard Ladkin, Causalis Limited and University of Bielefeld www.causalis.com www.rvs.uni-bielefeld.de
http://lauren.vortex.com/archive/001079.html Direct from the UK comes word of one of the more dubious medical experiments I've heard of in some time, one that should raise ethical red flags around the world. If you live in the Welsh, West Midlands, North East, South Central and London Ambulance Service areas, and you take no action to opt out of a planned new University of Warwick study—and you're unfortunate enough to have a heart attack—you may randomly find yourself treated with a placebo rather than the conventional treatment of adrenaline. If you die from your heart attack, researchers will not actively seek out your relatives to inform them of how you were treated. Persons who happen to see advertisements about the study in those areas and so learn of its existence can in theory opt out; otherwise, you're a lab rat whether you want to be or not. Researchers have a legitimate question—does adrenaline therapy in these situations do more harm than good? Unfortunately, in their attempt to avoid study bias, they have violated a basic informed-consent principle of ethical experimentation. I suspect that this study stands a good chance of collapsing in the light of publicity, and the litigation potential appears enormous, even for the UK. If nothing else, I would expect to see campaigns urging UK residents in the affected areas to opt out en masse. I would opt out if I lived there. Sometimes ostensibly "good science" is unacceptably bad ethics. [Lots more discussion followed on Dave Farber's list, including this URL from Paul Ferguson. PGN] http://www.dailymail.co.uk/news/article-2723408/Paramedics-dummy-drug-heart-attacks-Controversial-trial-patients-given-placebo-instead-adrenaline-heart-stopped.html
http://www.wired.com/2014/08/isp-bitcoin-theft/ Not much to say: the problem has been known for a long time (the referenced article cites Mudge's 1998 Congressional testimony), but little publicly visible progress has been made in solving the problem.
[Data Center Knowledge (DCK) via NNSquad] http://www.datacenterknowledge.com/archives/2014/08/13/bgp-routing-table-size-limit-blamed-for-tuesdays-website-outages/ "The amount of routes TCAMs can store is finite, as a post on The IPv4 Depletion Site blog, ran by a group of network and IT experts, explains. While workarounds have been developed to deal with this limit, not all routing equipment (especially older routing equipment) has been upgraded to use them. On Tuesday morning, the Internet felt a very distinct tremor that resulted from the size of the routing table reaching that magic number of 512,000 BGP routes. BGP is the protocol used to communicate routing information." NOTE: The "magic number" is undoubtedly "512K" (where K = 1024), which is better known as 524,288.
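The arithmetic behind the "magic number", and the failure mode, can be sketched in a few lines of Python (an illustration only: `Tcam` and its `install` method are hypothetical stand-ins for a router's TCAM-backed forwarding table, not any vendor's actual API):

```python
# "512K" means 512 * 1024 = 524,288 entries, the widely cited default
# TCAM allocation that the global BGP table crossed that Tuesday.

TCAM_CAPACITY = 512 * 1024  # 524,288 routes

class Tcam:
    """Hypothetical model of a fixed-capacity TCAM routing table."""

    def __init__(self, capacity=TCAM_CAPACITY):
        self.capacity = capacity
        self.routes = set()

    def install(self, prefix):
        """Install a BGP prefix; fail once the table is full."""
        if len(self.routes) >= self.capacity:
            raise OverflowError("TCAM full, cannot install " + prefix)
        self.routes.add(prefix)

print(TCAM_CAPACITY)  # 524288
```

Once the global table crossed that threshold, routers whose TCAM had not been re-partitioned to hold more IPv4 routes began refusing or mishandling new ones, which is consistent with the outages described above.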
Bryce Covert, Think Progress, 12 Aug 2014 (via Dave Farber) <http://thinkprogress.org/economy/2014/08/12/3470204/women-pushed-out-engineering/> Nearly 40 percent of women who get a degree in engineering don't end up making it in that career, according to new research presented at the American Psychological Association convention. These women either never enter the field at all or end up leaving. Eleven percent never got an engineering job to begin with, while 21 percent left more than five years ago and 6 percent left less than five years ago. Of those who leave, “poor workplace climates and mistreatment by managers and co-workers are common reasons,'' according to the release. Two-thirds of those who left less than five years ago found a better opportunity in another field while a third stayed home with their children because their companies couldn't accommodate their caregiving needs. Of those who left more than five years ago, 17 percent cited their caregiving duties and 12 percent said they didn't have an opportunity for advancement. “These findings are likely to apply to women working in fields where there are less than 30 percent women,'' said Dr. Nadya Fouad of the University of Wisconsin-Milwaukee, who presented the findings, in a release. That makes them “more vulnerable to being pushed out because they typically aren't in the internal `good old boys' network.'' But the secret to keeping women isn't tough to figure out. The women who stayed in engineering jobs cited supportive bosses and coworkers, paths for advancement, and the ability to balance work and life. “The reasons women stay with their engineering jobs are very similar to why they leave -- advancement opportunities and work climate,'' Fouad said in the release. The research notes that women have made up more than 20 percent of engineering school graduates over the past two decades, yet just 11 percent of engineers are women. 
Generally, women make up 41 percent of graduates from science and engineering programs but only about a quarter of the workers in the science, technology, engineering, and math (or STEM) fields. And the steady march of progress has recently stalled. Most of the growth for women under 40 entering these fields happened between the 1970s and 1990s, but it has tapered off since then. That may be in part because women are leaving the field at a high rate: they are 45 percent more likely than men to leave a STEM job within a year of starting. Women and black people with advanced STEM degrees are more likely to end up with a job outside the field than white men. Work/life balance is one thing that gets in the way. Women in STEM are less likely than men to have children at home: 62 percent don't have kids, compared to 57 percent of men. That is likely a sign that children are an obstacle to women staying in the field in a way that they aren't for men. Women may also be facing outright discrimination. Science professors see their female students as less competent than their male ones, even when they have the same accomplishments and skills. Both genders are twice as likely to pick a man for a math job as a woman.
Antone Gonsalves, InfoWorld, 14 Aug 2014 Attackers used Google Developers and public DNS to disguise traffic between the malware and command-and-control servers http://www.infoworld.com/d/security/how-hackers-used-google-steal-corporate-data-247941 opening text: A group of innovative hackers used free services from Google and an Internet infrastructure company to disguise data stolen from corporate and government computers, a security firm reported.
Bruce Schneier, 7 Aug 2014 (via Dave Farber's IP) <https://www.schneier.com/blog/archives/2014/08/the_us_intellig.html> Ever since the *Intercept* published this story <https://firstlook.org/theintercept/article/2014/08/05/watch-commander/> about the US government's Terrorist Screening Database, the press has been writing <http://www.cnn.com/2014/08/05/politics/u-s-new-leaker/index.html?hpt=hp_t1> about a "second leaker": The Intercept article focuses on the growth in U.S. government databases of known or suspected terrorist names during the Obama administration. The article cites documents prepared by the National Counterterrorism Center dated August 2013, which is after Snowden left the United States to avoid criminal charges. Greenwald has suggested there was another leaker. In July, he said on Twitter "it seems clear at this point" that there was another. Everyone's miscounting. This is the third leaker: - Leaker #1: Edward Snowden. - Leaker #2: The person who is passing secrets to Jake Appelbaum, Laura Poitras and others in Germany: the Angela Merkel surveillance story <http://www.spiegel.de/international/germany/gchq-and-nsa-targeted-private-german-companies-a-961444.html>, the TAO catalog <http://leaksource.info/2013/12/30/nsas-ant-division-catalog-of-exploits-for-nearly-every-major-software-hardware-firmware/>, the X-KEYSCORE rules <https://www.schneier.com/blog/archives/2014/07/nsa_targets_pri.html>. My guess is that this is either an NSA employee or contractor working in Germany, or someone from German intelligence who has access to NSA documents. Snowden has said that he is not the source for the Merkel story, and Greenwald has confirmed that the Snowden documents are not the source for the X-KEYSCORE rules. I have also heard privately that the NSA knows that this is a second leaker. 
- Leaker #3: This new leaker, with access to a different stream of information (the NCTC is not the NSA), whom the *Intercept* calls "a source in the intelligence community." Harvard Law School professor Yochai Benkler has written an excellent law-review article on the need for a whistleblower defense. <http://benkler.org/Benkler_Whistleblowerdefense_Prepub.pdf> And there's this excellent article by David Pozen on why government leaks are, in general, a good thing. <http://harvardlawreview.org/2013/12/the-leaky-leviathan-why-the-government-condemns-and-condones-unlawful-disclosures-of-information/>
Ross Stapleton-Gray <email@example.com> wrote: On Wed, Aug 13, 2014 at 12:05 PM, Herb Lin quoted: So, in 2010, former DIRNSA Mike McConnell (from 28 Feb 2010, *The Washington Post*, How to win the cyber-war we're losing) wrote that ... “[The United States must] develop an early-warning system to monitor cyberspace, identify intrusions and locate the source of attacks with a trail of evidence that can support diplomatic, military and legal options -- and we must be able to do this in milliseconds. ... We preempt such groups by degrading, interdicting and eliminating their leadership and capabilities to mount cyber-attacks, and by creating a more resilient cyberspace that can absorb attacks and quickly recover. I was surprised to see the original Wired piece omit any mention of the DARPA Cyber Grand Challenge effort, now ramping up (they've made some awards to competitors), which is for a fully automated "capture the flag" exercise, i.e., autonomous systems, without human intervention, conducting defense (and attack) to win a bot-on-bot competition. http://www.darpa.mil/cybergrandchallenge/ Personally, I think if you've got to trust that an automated system will "hack back" in faster-than-human cycles, you're playing with fire. The Cyber Grand Challenge is about automatically finding weaknesses and automatically generating fixes. It is not about creating attacks (though, like most tools, its discoveries could feed into an offensive system). But the general point is well taken -- taking humans out of the loop in an offensive weapons system is ethically questionable, and can go horribly, horribly wrong. There's a reason booby traps are illegal everywhere, and the world is finally giving up landmines. Way back in 1983, in the movie 'War Games', the driver for the plot was the removal of humans from the loop for responding to a Soviet ICBM attack. Needless to say, this Did Not Go Well.
The failure mode wasn't that unrealistic either; during the Cold War, both US and Soviet defenses misidentified events as attacks, and if human approval hadn't been required, they might have started WW3. We're now seeing vigorous debate over the ethics of deploying LAWS (Lethal Autonomous Weapon Systems, aka 'killer robots'). What's being proposed is the cyber equivalent. Snowden, in the referenced article, points out what all of us in the field already know: that it's often very difficult to find a correct attribution for a cyber attack. If (hypothetically) a Russian government hacking group attacked US systems through servers located in China, would 'MonsterMind' start a US attack on Chinese systems? In the short term, fatalities could ensue if the apparent source controlled vital systems. In the long term, how would China react? Peter Trei
> Way back in 1983, in the movie 'War Games', the driver for the plot was the removal of humans from the loop for responding to a Soviet ICBM attack. It goes back earlier than War Games. For many of us, the canonical example is Dr. Strangelove. I can imagine some variation such as this occurring at Cyber Command: General Jack D. Ripper: Mandrake, do you recall what Clemenceau once said about war? Group Capt. Lionel Mandrake: No, I don't think I do, sir, no. General Jack D. Ripper: He said war was too important to be left to the generals. When he said that, 50 years ago, he might have been right. But today, war is too important to be left to politicians. They have neither the time, the training, nor the inclination for strategic thought. I can no longer sit back and allow Communist infiltration, Communist indoctrination, Communist subversion and the international Communist conspiracy to sap and impurify all of our precious bodily fluids. We can't afford a cyber mineshaft gap, can we?
David Meyer via Dave Farber, 13 Aug 2014 <http://gigaom.com/2014/08/13/snowden-us-developed-dangerous-cyberwar-tool-hacked-chinese-hospitals-and-knocked-syria-offline/> SUMMARY: In a new interview, Snowden explained how fears of an accidental cyber-war, together with concerns over surveillance of U.S. citizens' web traffic, turned him into a whistleblower. Wired's James Bamford interview of Snowden: <http://www.wired.com/2014/08/edward-snowden/> The U.S. developed a cyberwarfare tool called MonsterMind that would automatically `fire back' if it thought it detected an attempted attack on the U.S., NSA whistleblower Edward Snowden has revealed. In an interview published Wednesday in Wired, Snowden also said an intelligence officer had told him the U.S. was responsible for the 2012 disconnection of Syria from the Internet, albeit by accident. He also said the U.S. had “crossed lines'' by attacking civilian infrastructure in China.

MonsterMind

MonsterMind seems to have been one of the triggers for Snowden's decision to blow the whistle, along with the construction of a massive new data storage facility in Bluffdale, Utah. The tool was, according to Snowden, partly designed to look for Internet traffic patterns that could denote incoming cyber-attacks, and to block such attacks. However, it would also “automatically fire back, with no human involvement.'' This raises serious ethical implications because attacks are often routed through other countries, making it possible that automated counter-attacks could target the wrong people, perhaps civilian facilities such as hospitals. Snowden also expressed discomfort with the implications of MonsterMind for U.S. citizens communicating outside the country, telling reporter James Bamford: “The only way we can identify these malicious traffic flows and respond to them is if we're analyzing all traffic flows—that means we have to be intercepting all traffic flows.
That means violating the Fourth Amendment, seizing private communications without a warrant, without probable cause or even a suspicion of wrongdoing.''

Syria and China

When Syria briefly dropped off the Internet in late 2012, it was widely assumed to be the doing of President Bashar al-Assad—the country was, after all, descending into civil war. ... Dewayne-Net RSS Feed: <http://dewaynenet.wordpress.com/feed/>
FYI—If the U.S. govt claims to care so much about protecting American citizens from cybercriminals, then it must stop these cowboy NSA shenanigans, else the cyberwar version of "Perl" Harbor will likely be friendly fire from our very own NSA. It is precisely ham-handed mistakes of this sort that worry all of us about any attempt to weaken encryption. Jacob Kastrenakes, 13 Aug 2014 An elite hacking unit broke a router http://www.theverge.com/2014/8/13/5998237/nsa-responsible-for-2012-syrian-internet-outage-snowden-says When Syria's access to the Internet was cut for two days back in 2012, it apparently wasn't the fault of dissenting "terrorists," as the Syrian government claimed: according to Wired, it was the fault of the US government. In a long profile of Edward Snowden published today, Wired reports what Snowden says is the truth about the Internet outage. An elite hacking unit in the National Security Agency had reportedly been attempting to install malware on a central router within Syria—a feat that would have allowed the agency to access a good amount of the country's Internet traffic. Instead, it ended up accidentally rendering the router unusable, causing Syria's Internet connection to go dark. The NSA reportedly attempted to repair the router and cover its tracks, but was unable to do so. Until now, it appears, no evidence of the NSA's tampering had actually come out. It's a pretty dramatic change in the storyline, as it had been widely assumed that the outage was caused by one of the warring parties within Syria, be it the government itself or rebels. Syria's Internet has gone dark a number of times since then, so it isn't unreasonable to continue assuming that other parties are at play when outages occur. Snowden's account describes an embarrassing blunder for the US, though, and it will certainly widen the list of culprits that people consider should similar incidents occur in the future.
Scientists can now analyze the personal data on millions of people without their knowledge, and some want to bring ethical guidelines to such studies. http://www.nytimes.com/2014/08/13/technology/the-boon-of-online-data-puts-social-science-in-a-quandary.html
Brian Donohue, ThreatPost, Absolute Computrace Backdoor http://threatpost.com/millions-of-pcs-affected-by-mysterious-computrace-backdoor-2/107700 opening text: Nearly every PC has an anti-theft product called Computrace embedded in its BIOS PCI Option ROM or its Unified Extensible Firmware Interface (UEFI). Computrace is a legitimate, trusted application developed by Absolute Software. However, it often runs without user consent, persistently activates itself at system boot, and can be exploited to perform various attacks and to take complete control of an affected machine.
Jeremy Kirk, InfoWorld, 14 Aug 2014 Design quirks allow malware to be installed on iOS devices and cookies to be plucked from Facebook and Gmail apps http://www.infoworld.com/d/mobile-technology/the-biggest-iphone-security-risk-could-be-connecting-one-computer-248366
InfoWorld, Cloud Computing, 11 Aug 2014 Here are four cloud horror stories along with spoilers, so you can make it out alive. What happens when a cloud provider declares bankruptcy? Late last year, a cloud storage company called Nirvanix shut down and gave customers only a few weeks to move data to a different provider. According to Charles King, an IT analyst, this meant companies with terabytes or even petabytes of data in the cloud had to act quickly. "A business should always have a strong sense of the assets it has stored in the cloud, but it needs to consider those points in terms of the time and cost of retrieving them," King says. In the case of Nirvanix, one client noted that, due to the company's download bandwidth limitations, it would need 27 days, in a best-case scenario, to recover all data. "That was cutting things pretty close since they were given just 30 days' notice to remove everything," King says. [Note: The Nirvanix case was discussed in RISKS-27.49, almost a year ago. PGN]
Amid push for cash-free tolling, conscientious objectors stand strong Martine Powers, *The Boston Globe*, 13 Aug 2014 Suzanne DeLesdernier is part of a small but stubborn group of Massachusetts drivers who decline to order an E-ZPass, the state's electronic toll transponder—not because they do not know where to obtain one, or because they do not have a bank account, but because they do not agree with electronic tolling. Some of the reasons for their intransigence include: They are concerned about government surveillance. They are apprehensive about erroneous fees charged automatically to their credit cards. They disapprove of eliminating good jobs held by toll takers for decades. And they would miss the small social exchanges with toll takers, the face-to-face contact, as they pass over their fare. http://www.bostonglobe.com/metro/2014/08/12/amid-push-for-cash-free-tolling-conscientious-objectors-stand-strong/qsyXsJ8GrEnXYJfSny3EHI/story.html (I find the last reason somewhat hard to believe, but I do understand the elimination-of-jobs aspect.)
The rebuttal analogy fails, since a doctor is comparable to an architect or a structural engineer in the construction trade, not to a bricklayer. An EMT (Emergency Medical Technician) would be a more fitting parallel in the healthcare domain: an EMT who needs to dress a shallow wound and apply some iodine to it does not need to attend six years of medical school to perform the task; basic first-aid training will suffice. It would be great to have all medical professionals get an MD degree (and the quality of the resulting service would undoubtedly rise), but then the cost of healthcare would be even more stratospheric. There is a spectrum of software engineering tasks, and the article seems to make the point that at the low end of that spectrum the tasks can be accomplished by someone with vocational training, while the percentage of high-end tasks that _require_ (meaning cannot be accomplished to a satisfactory level without) a CS PhD or decades of experience is rather small. Ordinary CS grads with some experience would then occupy the middle, breezing through the low-end tasks and struggling when faced with an assignment beyond their depth. Web site development in particular often amounts to nothing more than installing Wordpress, enabling a fitting set of plugins, importing content, and applying the design -- none of which is classic "programming", and none of which requires a CS degree to perform well.
The WSJ is correct; programming is a trade. Just like investment banking is a trade that needs no education, training, or experience. After all, people with all of those things get it right no more often than people who have no such skills (e.g., compare index investing to "active" investing). Investment banking is just entering trades into a computer screen, right? So will the WSJ be advocating replacing all of the high-priced bankers with high-school students given a couple of weeks of training in how to use trading systems? No? Didn't think so.
You will get no argument from me that the underlying risks this alleged discovery is being used by all and sundry to illustrate are both real and important. However, I still find the specific claims made by Hold Security somewhat implausible (for a laundry list of reasons, amply cited elsewhere). The further claims advanced by the NYT that "a security expert not affiliated with Hold Security analyzed the database of stolen credentials and confirmed it was authentic" and that "another computer crime expert who had reviewed the data, but was not allowed to discuss it publicly, said some big companies were aware that their records were among the stolen information" frankly do nothing to dispel my suspicion; they may in fact increase it. I see no names mentioned in the NYT article. Why would either of those "experts" remain anonymous? They might be under an NDA covering forensic specifics, but that would hardly extend to their own identities. That anonymity prevents us from examining both their qualifications and the contention that they are unaffiliated with Hold.
> Why do you immediately rule out the obvious and completely effective fix of > having Google stop conducting what appear to be searches of my private > e-mail for potential criminal activity? ... There's a very simple reason why not: Google isn't scanning every e-mail message for child porn. It isn't even scanning them for spam. It is scanning them for targeted advertising, which is where it gets its money. Spam detection and child-porn detection are side effects. "Stop Google from scanning e-mail" == "stop Google from making money" == "shut down Google". Behind door #2 is: "let Google keep conducting searches of my private e-mail and doing what they will with some of the patterns they find; have them stop doing some of those things with some of those patterns. For example, reporting potential criminal activity to law enforcement agencies, that's a big no-no." I'm sure I don't have to spell out what's wrong with this picture. PS—now that I've read Dan's entire speech: Google is doing exactly what Dan says they should do. They inspect the content, and that makes them "responsible for that content if it is hurtful". "Forty-eight States vigorously penalize failure to report sexual molestation of children" and "the U.S. Code says that it is a crime to fail to report a felony of which you have knowledge". Therefore, if Google didn't report kiddie porn sitting in a gmail account to the feds, Google would be committing a crime. Now, Dan's alternative may not be available in real life: you'd have to find a non-surveillance state where non-content-inspecting providers like Lavabit are allowed to exist. In the case of Google, whose business is built on content inspection, they are doing the right thing. (All quotes from RISKS-28.15) Dimitri Maziuk, Programmer/sysadmin BioMagResBank, UW-Madison—http://www.bmrb.wisc.edu
Has anyone brought up the fact that some of the hashes in the database can be for things that require search warrants? There is sure to be a path that gives the details of where the hash match was found to entities that are not going to use the information for child pornography prosecution. Some examples:
1. hashes for copyrighted material: MIGHT be illegal, should need a warrant
2. corporate documents: MIGHT be illegal, but should need a warrant
3. "love notes": probably not illegal, but may be useful in civil cases. (My evidently not-so-special other sent a message to someone not me; I have the text, but I don't know where it really went.)
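For context, the scanning at issue amounts to hash matching along the following lines (a minimal sketch: `KNOWN_HASHES` and `matches_known` are hypothetical names, and real systems use robust perceptual hashes such as PhotoDNA rather than plain SHA-256):

```python
import hashlib

# Hypothetical stand-in for a database of digests of known material.
KNOWN_HASHES = {
    hashlib.sha256(b"known copyrighted file").hexdigest(),
}

def matches_known(content: bytes) -> bool:
    """Return True if content's digest is in the hash database.

    A match reveals only byte-for-byte identity with a known item;
    it says nothing about context, and any modified copy is missed.
    """
    return hashlib.sha256(content).hexdigest() in KNOWN_HASHES
```

The point above stands regardless of the hash function used: once a match report leaves the provider, nothing in the mechanism itself restricts what the receiving entity does with it, which is why the warrant question matters.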