Message to Our Customers, February 16, 2016

The United States government has demanded that Apple take an unprecedented step which threatens the security of our customers. We oppose this order, which has implications far beyond the legal case at hand. This moment calls for public discussion, and we want our customers and people around the country to understand what is at stake.

The Need for Encryption

Smartphones, led by iPhone, have become an essential part of our lives. People use them to store an incredible amount of personal information, from our private conversations to our photos, our music, our notes, our calendars and contacts, our financial information and health data, even where we have been and where we are going.

All that information needs to be protected from hackers and criminals who want to access it, steal it, and use it without our knowledge or permission. Customers expect Apple and other technology companies to do everything in our power to protect their personal information, and at Apple we are deeply committed to safeguarding their data.

Compromising the security of our personal information can ultimately put our personal safety at risk. That is why encryption has become so important to all of us.

For many years, we have used encryption to protect our customers' personal data because we believe it's the only way to keep their information safe. We have even put that data out of our own reach, because we believe the contents of your iPhone are none of our business.

The San Bernardino Case

We were shocked and outraged by the deadly act of terrorism in San Bernardino last December. We mourn the loss of life and want justice for all those whose lives were affected. The FBI asked us for help in the days following the attack, and we have worked hard to support the government's efforts to solve this horrible crime. We have no sympathy for terrorists.

When the FBI has requested data that's in our possession, we have provided it.
Apple complies with valid subpoenas and search warrants, as we have in the San Bernardino case. We have also made Apple engineers available to advise the FBI, and we've offered our best ideas on a number of investigative options at their disposal.

We have great respect for the professionals at the FBI, and we believe their intentions are good. Up to this point, we have done everything that is both within our power and within the law to help them. But now the U.S. government has asked us for something we simply do not have, and something we consider too dangerous to create. They have asked us to build a backdoor to the iPhone.

Specifically, the FBI wants us to make a new version of the iPhone operating system, circumventing several important security features, and install it on an iPhone recovered during the investigation. In the wrong hands, this software—which does not exist today—would have the potential to unlock any iPhone in someone's physical possession.

The FBI may use different words to describe this tool, but make no mistake: Building a version of iOS that bypasses security in this way would undeniably create a backdoor. And while the government may argue that its use would be limited to this case, there is no way to guarantee such control.

The Threat to Data Security

Some would argue that building a backdoor for just one iPhone is a simple, clean-cut solution. But it ignores both the basics of digital security and the significance of what the government is demanding in this case.

In today's digital world, the *key* to an encrypted system is a piece of information that unlocks the data, and it is only as secure as the protections around it. Once the information is known, or a way to bypass the code is revealed, the encryption can be defeated by anyone with that knowledge.

The government suggests this tool could only be used once, on one phone. But that's simply not true.
Once created, the technique could be used over and over again, on any number of devices. In the physical world, it would be the equivalent of a master key, capable of opening hundreds of millions of locks—from restaurants and banks to stores and homes. No reasonable person would find that acceptable.

The government is asking Apple to hack our own users and undermine decades of security advancements that protect our customers—including tens of millions of American citizens—from sophisticated hackers and cybercriminals. The same engineers who built strong encryption into the iPhone to protect our users would, ironically, be ordered to weaken those protections and make our users less safe.

We can find no precedent for an American company being forced to expose its customers to a greater risk of attack. For years, cryptologists and national security experts have been warning against weakening encryption. Doing so would hurt only the well-meaning and law-abiding citizens who rely on companies like Apple to protect their data. Criminals and bad actors will still encrypt, using tools that are readily available to them.

A Dangerous Precedent

Rather than asking for legislative action through Congress, the FBI is proposing an unprecedented use of the All Writs Act of 1789 to justify an expansion of its authority.

The government would have us remove security features and add new capabilities to the operating system, allowing a passcode to be input electronically. This would make it easier to unlock an iPhone by *brute force*, trying thousands or millions of combinations with the speed of a modern computer.

The implications of the government's demands are chilling. If the government can use the All Writs Act to make it easier to unlock your iPhone, it would have the power to reach into anyone's device to capture their data.
The government could extend this breach of privacy and demand that Apple build surveillance software to intercept your messages, access your health records or financial data, track your location, or even access your phone's microphone or camera without your knowledge.

Opposing this order is not something we take lightly. We feel we must speak up in the face of what we see as an overreach by the U.S. government.

We are challenging the FBI's demands with the deepest respect for American democracy and a love of our country. We believe it would be in the best interest of everyone to step back and consider the implications.

While we believe the FBI's intentions are good, it would be wrong for the government to force us to build a backdoor into our products. And ultimately, we fear that this demand would undermine the very freedoms and liberty our government is meant to protect.

Tim Cook

[Also noted by several others. PGN]
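The brute-force concern in the letter above is easy to make concrete with back-of-the-envelope arithmetic. The sketch below is illustrative only: the guess rate is an assumption (iOS deliberately slows key derivation, and the letter gives no figure), not a measured value.

```python
# Rough worst-case time to exhaust a numeric passcode space, assuming
# the guesses can be submitted electronically as the order demands.

def combinations(digits: int) -> int:
    """Number of possible numeric passcodes of the given length."""
    return 10 ** digits

def hours_to_try_all(digits: int, guesses_per_second: float) -> float:
    """Worst-case time to try every passcode, in hours."""
    return combinations(digits) / guesses_per_second / 3600

if __name__ == "__main__":
    # 12.5 guesses/s is an assumed rate for illustration only.
    for digits in (4, 6):
        print(digits, "digits:", round(hours_to_try_all(digits, 12.5), 1), "hours")
```

Even at a slow assumed rate, a four-digit space (10,000 combinations) falls in well under a day, which is why removing the retry limits matters so much.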
http://www.engadget.com/2016/02/17/google-ceo-response-to-fbi-apple-request/

Tim Cook did not mince words in a lengthy open response to the FBI's order that Apple create a backdoor to allow the agency access to a terrorism suspect's iPhone. Plenty of privacy groups and Apple customers have praised Cook's words thus far, and now one of Apple's biggest competitors is showing support for the company's stance. Google CEO Sundar Pichai just posted a series of tweets regarding Cook's letter, and he firmly comes down on the same side as Apple's leader.
http://arstechnica.com/security/2016/02/extremely-severe-bug-leaves-dizzying-number-of-apps-and-devices-vulnerable/

Since 2008, the vulnerability has left apps and hardware open to remote hijacking. Researchers have discovered a potentially catastrophic flaw in one of the Internet's core building blocks that leaves hundreds or thousands of apps and hardware devices vulnerable to attacks that can take complete control over them.

The vulnerability was introduced in 2008 in the GNU C Library, a collection of open-source code that powers thousands of standalone applications and most distributions of Linux, including those distributed with routers and other types of hardware. A function known as getaddrinfo() that performs domain-name lookups contains a buffer-overflow bug that allows attackers to remotely execute malicious code. It can be exploited when vulnerable devices or apps make queries to attacker-controlled domain names or domain-name servers, or when they're exposed to man-in-the-middle attacks in which the adversary can monitor and manipulate data passing between a vulnerable device and the open Internet. All versions of glibc after 2.9 are vulnerable.

[Also noted by Bob Gezelter. PGN]
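As a first triage step, administrators can compare a reported glibc version string against the affected range described above. This is a sketch, not a scanner: the report says the bug appeared in 2.9, and the 2.23 fix release used here is my assumption (not stated in the article); distributions frequently backport the patch without changing the version number, so a changelog check is still required.

```python
# Rough triage: does a glibc version string fall in the range the
# article describes? A version check alone is only a first pass,
# because distributions often backport security fixes.

def version_tuple(v: str) -> tuple:
    """Parse a dotted version string like '2.19' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def possibly_vulnerable(version: str, introduced: str = "2.9",
                        fixed: str = "2.23") -> bool:
    """True if `version` lies in [introduced, fixed).
    The 2.23 fix release is an assumption, not from the article."""
    v = version_tuple(version)
    return version_tuple(introduced) <= v < version_tuple(fixed)
```

For example, possibly_vulnerable("2.19") is True while possibly_vulnerable("2.8") is False; a definitive answer requires checking whether the distribution shipped the getaddrinfo() patch.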
Woody Leonhard, InfoWorld, 16 Feb 2016 The cumulative update not only knocks out PCs' default settings, it prevents users from resetting them http://www.infoworld.com/article/3032751/microsoft-windows/windows-10-forced-update-kb-3135173-changes-browser-and-other-defaults.html
Very few customers read the small-print Terms of Service (TOS); they just click OK. They can be clicking away a lot of consumer rights, privacy, and security. VTech's revised Terms of Service amount to:

* We make zero claims of privacy and security.
* If we get breached, tough luck for customers.
* We won't be legally accountable, because here we warn you of the risks.

"You acknowledge and agree that any information you send or receive during your use of the site may not be secure and may be intercepted or later acquired by unauthorized parties."

The company claims that this practice is commonplace on the web. Various people have been searching other TOS and say they cannot find any other company doing the same thing. "What makes this position even more absurd is that VTech is now heading into home security."

There are proposals to boycott VTech. The boycott should extend to insurance companies refusing to take them on as a customer, because they obviously are very high risk. It should include Internet Service Providers, trade shows, and places where companies advertise, because all those places need to protect their customers from such bottom feeders. I hope some authorities and competitors blast them hard enough about this to discourage imitators.

The EU data protection mandate goes into effect in two years: "As of spring 2018 any organization trading in any EU member state" - that'll include you, VTech - "that collects personal data is legally obliged to properly protect that data."

Last year, VTech "allowed itself to be hacked" by using the same practices that led to highly publicized hacks of other companies on the web. This included:

* the SQL injection risk the hacker originally exploited
* unsalted MD5 password hashes
* no SSL encryption anywhere
* SQL statements returned in API calls, and massively outdated web frameworks
* multiple serious direct object reference risks: the API that returned information on both kids and parents could be easily exploited just by manipulating an ID, with no authentication required

When the above was exploited, VTech claimed that "it was an extremely sophisticated attack." That was total BS.

http://www.govinfosecurity.com/blogs/vtech-security-fool-me-once-p-2059
http://www.troyhunt.com/2016/02/no-vtech-cannot-simply-absolve-itself.html
https://www.pentestpartners.com/blog/when-vtech-meets-the-gdpr/
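Two of the failures listed above, unsalted MD5 hashes and the unauthenticated direct object reference, are easy to contrast with ordinary safe defaults using only Python's standard library. This is a generic sketch of the techniques involved; the record-lookup function and its names are hypothetical, not VTech's actual API.

```python
import hashlib
import hmac
import os

# The VTech-style approach: unsalted MD5. Identical passwords produce
# identical hashes, and precomputed tables reverse common ones instantly.
def weak_hash(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# Safer default: a salted, deliberately slow key-derivation function.
def strong_hash(password: str, salt: bytes = None) -> tuple:
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

# The direct-object-reference flaw: any caller could fetch any record
# by manipulating an ID. The fix is an authorization check tying the
# requested ID to the authenticated session (hypothetical names).
def lookup_child_record(records: dict, requested_id: str, session_ids: set):
    if requested_id not in session_ids:  # the check VTech's API lacked
        raise PermissionError("record does not belong to this session")
    return records[requested_id]
```

Note that the salted hash of the same password differs on every call (fresh random salt), which is exactly the property the leaked VTech database lacked.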
Hollywood Presbyterian Medical Center Pays Hackers $17K Ransom http://www.nbcnews.com/tech/security/hollywood-presbyterian-medical-center-pays-hackers-17k-ransom-n520536
metadata + machine learning + blithe incompetence => mass murder by the state http://arstechnica.co.uk/security/2016/02/the-nsas-skynet-program-may-be-killing-thousands-of-innocent-people/ firstname.lastname@example.org https://www.brodie-tyrrell.org/
Interesting article about cognitive aspects of aircraft automation.

http://www.avweb.com/news/features/Steam-Gauges-Are-Safer-225682-1.html

Technically advanced aircraft (TAA)—those with a primary flight display (PFD), multi-function display (MFD), and GPS—are sexy. Pilots are drawn to them like Pooh Bear to honey. Besides being eye-catching, TAA attempt to address some of the biggest problems in aviation by providing pilots with a lot of supplementary safety information. Moving maps designed to improve situational awareness make it almost impossible to get lost. Databases store more information at the touch of a button than a thirty-pound chart case. We can display more weather information in the cockpit than was even available 30 years ago. Combine all that with an autopilot that provides time to gather and interpret, and you'd think we'd be a lot safer. We're not. Pilots of TAA kill themselves more often than steam-gauge aviators—almost twice the rate, according to the NTSB. Technology advances address many of the leading causes of GA fatalities: loss of control, controlled flight into terrain, fuel problems, midair collisions, and weather. So, where's that improved safety?
[Excerpted from] CRYPTO-GRAM, February 15, 2016
Bruce Schneier, CTO, Resilient Systems, Inc.
email@example.com https://www.schneier.com

For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.
This issue: <https://www.schneier.com/crypto-gram/archives/2016/0215.html>

Rob Joyce, the head of the NSA's Tailored Access Operations (TAO) group -- basically the country's chief hacker -- spoke in public earlier this week. He talked both about how the NSA hacks into networks and what network defenders can do to protect themselves. Here are his "Intrusion Phases": Reconnaissance, Initial Exploitation, Establish Persistence, Install Tools, Move Laterally, Collect Exfil, and Exploit.

The talk is full of good information about how APT attacks work and how networks can defend themselves. I was talking with Nicholas Weaver, and he said that he found these three points interesting:

1. A one-way monitoring system really gives them headaches, because it allows the defender to go back after the fact and see what happened, remove malware, etc.

2. The critical component of APT is the P: persistence. They will just keep trying, trying, and trying. If you have a temporary vulnerability -- the window between a vulnerability and a patch, temporarily turning off a defense -- they'll exploit it.

3. Trust them when they attribute an attack (e.g., Sony) on the record. Attribution is hard, but when they can attribute, they know for sure -- and they don't attribute lightly.

Nothing really surprising, but all interesting. Which brings up the most important question: why did the NSA decide to put Joyce on stage in public? It surely doesn't want all of its target networks to improve their security so much that the NSA can no longer get in. On the other hand, the NSA does want the general security of US -- and presumably allied -- networks to improve. My guess is that this is simply a NOBUS issue.
The NSA is, or at least believes it is, so sophisticated in its attack techniques that these defensive recommendations won't slow it down significantly. And the Chinese/Russian/etc. state-sponsored attackers will have a harder time. Or, at least, that's what the NSA wants us to believe. Wheels within wheels....

https://www.youtube.com/watch?v=bDJb8WOJYdA
http://www.wired.com/2016/01/nsa-hacker-chief-explains-how-to-keep-him-out-of-your-system/
http://www.theregister.co.uk/2016/01/28/nsas_top_hacking_boss_explains_how_to_protect_your_network_from_his_minions/

More information about the NSA's TAO:
http://www.spiegel.de/international/world/the-nsa-uses-powerful-toolbox-in-effort-to-spy-on-global-networks-a-940969.html
https://foreignpolicy.com/2013/06/10/inside-the-nsas-ultra-secret-china-hacking-group/

An article about TAO's catalog of implants and attack tools. Note that the catalog is from 2007. Presumably TAO has been very busy developing new attack tools over the past ten years.
http://www.spiegel.de/international/world/catalog-reveals-nsa-has-back-doors-for-numerous-devices-a-940994.html
http://leaksource.info/2013/12/30/nsas-ant-division-catalog-of-exploits-for-nearly-every-major-software-hardware-firmware/
[Excerpted from] CRYPTO-GRAM, February 15, 2016
Bruce Schneier, CTO, Resilient Systems, Inc.
firstname.lastname@example.org https://www.schneier.com

This week, I released my worldwide survey of encryption products. The survey identified 619 entities that sell encryption products. Of those, 412, or two-thirds, are outside the US, calling into question the efficacy of any US mandates forcing backdoors for law-enforcement access. It also showed that anyone who wants to avoid US surveillance has over 567 competing products to choose from. These foreign products offer a wide variety of secure applications—voice encryption, text message encryption, file encryption, network-traffic encryption, anonymous currency—providing the same levels of security as US products do today.

Details:

* There are at least 865 hardware or software products incorporating encryption from 55 different countries. This includes 546 encryption products from outside the US, representing two-thirds of the total.

* The most common non-US country for encryption products is Germany, with 112 products. This is followed by the United Kingdom, Canada, France, and Sweden, in that order.

* The five most common countries for encryption products—including the US—account for two-thirds of the total. But smaller countries like Algeria, Argentina, Belize, the British Virgin Islands, Chile, Cyprus, Estonia, Iraq, Malaysia, St. Kitts and Nevis, Tanzania, and Thailand each produce at least one encryption product.

* Of the 546 foreign encryption products we found, 56% are available for sale and 44% are free. 66% are proprietary, and 34% are open source. Some for-sale products also have a free version.

* At least 587 entities—primarily companies—either sell or give away encryption products. Of those, 374, or about two-thirds, are outside the US.
* Of the 546 foreign encryption products, 47 are file encryption products, 68 are e-mail encryption products, 104 are message encryption products, 35 are voice encryption products, and 61 are virtual private networking products.

I know the database is incomplete, and I know there are errors. I welcome both additions and corrections, and will be releasing a 1.1 version of this survey in a few weeks.

The report:
https://www.schneier.com/cryptography/paperfiles/worldwide-survey-of-encryption-products.pdf
The data:
https://www.schneier.com/cryptography/paperfiles/worldwide-encryption-product-survey-data.xls

Press articles:
http://arstechnica.com/tech-policy/2016/02/new-report-contends-mandatory-crypto-backdoors-would-be-futile/
https://theintercept.com/2016/02/11/new-survey-suggests-u-s-encryption-ban-would-just-send-market-overseas/
http://www.theregister.co.uk/2016/02/11/schneier_encryption_survey/
http://www.forbes.com/sites/ygrauer/2016/02/11/strong-crypto-is-widely-available-outside-the-us-according-to-bruce-schneiers-international-survey
http://www.theverge.com/2016/2/11/10964172/backdoor-laws-cant-contain-global-encryption-says-new-report
http://www.cso.com.au/article/593840/study-finds-anti-crypto-laws-won-t-work-an-international-stage/
http://www.csmonitor.com/World/Passcode/2016/0211/Most-encryption-products-far-beyond-reach-of-US-law-enforcement
http://www.net-security.org/secworld.php?id433

Old blog posts on the project:
https://www.schneier.com/blog/archives/2015/09/wanted_cryptogr.html
https://www.schneier.com/blog/archives/2015/12/worldwide_crypt.html
Contrary to Peter Ladkin's opinion, I can see the Korean ARAIB's point. Any aircraft manufacturer should take into account that there are many factors affecting the human interface, some of them cultural. The report acknowledges that the pilot had accidentally disconnected the auto-throttle mechanism; how did that happen? How long would it take for tired and inexperienced pilots to notice that their instruments are not functioning as expected? The answer to this question should also take into account that the delay might be significantly longer if the culture the pilots come from puts a high value on obedience to authority. In short, the human interface in a high-risk environment is much more complicated than just the design of the system. Manufacturers should acknowledge that not all pilots were at the top of their class at the Air Force Academy (as many test pilots are) and plan accordingly.
> For example, when staff at Amgen, a Californian drug company, attempted
> to reproduce the results of 53 high-profile cancer-research papers they
> found that only six lived up to their original claims.

This report is just hearsay: it's not clear at all what Amgen tried to do, or how seriously they tried to reproduce the results, and it's certainly not peer reviewed. Crucial details are missing, so it's hard to say what it means. (I personally bet that the conclusions are broadly right, but it's not a scientific study.) The Nosek study on psychology is a real example of testing reproducibility, however, and is worth reading in full.

There's an increasing recognition that our approach to publishing research is fundamentally flawed and drives reporting of irreproducible research. A good recent discussion:
http://bjoern.brembs.net/2016/01/even-without-retractions-top-journals-publish-the-least-reliable-science/

[p.s. Autocorrect fixed *notsp* to *not sp*. Fun!]
Rogier Wolff states: 'You can enforce rules like: "you are not allowed to go back and correct answers. Your first answer stands".'

1) Not reviewing one's answers is a very bad habit that could have nasty Real World consequences. For example, in the computing field, we have a process of checking and, as needed, correcting answers. Programmers cannot simply release the software; it has to get through QA.

2) When I was attending university, I reviewed my test answers, usually multiple times. Had I not been able to, I would have lost many marks. An exam is stressful enough without insisting that the first answer is the only answer allowed.
<email@example.com> talks about the expense of having rooms of dedicated computers, wired not to talk to the outside, plus staff to manage the exams. Staff is needed no matter how the exams are presented, and tests on paper can be scanned into a computer format for automated scoring. Be careful to include personnel qualified in the hardware and software needed, to avoid the kind of errors recently reported with science research data.

There are also disaster-recovery sites. A computer system can go down due to natural disaster, fire, etc., so an organization contracts with another site where backups will be restored, with any CPU serial-number licensing arranged in advance, so that the interruption to corporate computing is minimal. Those disaster-recovery sites sit idle most of the time, and would love to accept the work of exams, voting-machine usage, and other applications between disasters.

I have been to IBM school, more often at a university or at an enterprise marketing software on IBM computers than at an IBM office. These places have multiple classrooms, with computer terminals for every student, connected to their network, which controls what the students have access to and whether there is any outside access. I imagine IBM's competitors have similar educational facilities, which could also be used for other purposes.

If you have more students needing to take exams than computer terminals at one such facility, then do not require all students to take the exam at the same time. The second session can have different questions than the first, to undermine any advantage the second session might gain from information passed on by those in the first.

Rogier Wolff wrote about the risk of a better student, with a counterfeit student ID, taking an exam for a lesser student to get them a good score, with the practice then helping the better student do even better when tested under their own identity.
Security can be improved by cloning the Indiana State Police system. When they stop an Indiana motorist and ask for a driver's license, they compare the picture on the license with the face of the driver who handed it to them, then take the license to the patrol car, where the bar code on the back of the license connects them to the DMV photo taken when the license was issued. How could that data stream be spoofed? I think crooks in the false-ID business would need to hack the DMV and mess with the bar code in addition to the picture, or perhaps arrange some local jamming from the trunk of the car, so that the state police connection to the DMV is "down" during the traffic stop.
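The check described above can be sketched as a small protocol: everything printed on the card is forgeable, but the photo held server-side by the DMV is what binds the card to a face. The toy model below is my own illustration (the dictionary "DMV database", the license number, and the field names are all invented), not the actual Indiana system.

```python
# Toy model of the roadside check described above. The bar code on the
# card carries only a license number; the trusted photo lives in the
# DMV's records, outside the forger's reach. All data here is invented.

DMV_RECORDS = {
    "IN1234567": {"name": "A. Motorist", "photo_on_file": "face-A"},
}

def roadside_check(barcode_license_no: str, face_seen_by_officer: str) -> bool:
    """Pass only if the DMV's photo on file matches the driver's face.
    A forged card can print any picture, but cannot alter this record."""
    record = DMV_RECORDS.get(barcode_license_no)
    if record is None:
        return False  # bar code points at no real license
    return record["photo_on_file"] == face_seen_by_officer
```

A forged card that copies a real license number still fails unless the forger's face matches the DMV photo on file, which is why the jamming scenario (forcing the officer to skip the online lookup) is the more plausible attack in the comment above.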
Please report problems with the web pages to the maintainer