Christopher Mims, *The Wall Street Journal*, 3 Aug 2014, via ACM TechNews, 6 Aug 2014

One million programming jobs in the United States could go unfilled by 2020 due to the enormous mismatch between the supply of and demand for computer programmers, according to the U.S. Bureau of Labor Statistics. Fortunately, a computer science degree is not necessary to get a job in programming. University courses in computer science favor theory over making the websites, services, and apps that companies care about, writes Christopher Mims. Code-school founders say committed programming students are finding jobs whether or not they have a college degree. Computer programming is now a trade in which someone can develop basic proficiency within weeks or months, secure a first job, and get onto the same path to upward mobility offered to in-demand, highly paid peers, Mims says. He contends we have entered an age in which demanding that every programmer have a degree is like asking every bricklayer to have a background in architectural engineering. Anecdotal evidence also indicates that coding schools are more inclusive of women and people of color.
http://orange.hosting.lsoft.com/trk/click?ref=znwrbbrs9_5-c4ccx2b873x060863&
National Science Foundation, 31 July 2014, via ACM TechNews, 6 Aug 2014

The U.S. National Science Foundation's Secure and Trustworthy Cyberspace (SaTC) program has announced two new center-scale Frontier awards that will support large, multi-institution projects addressing grand challenges in cybersecurity and computer science. The SaTC program already supports some 225 projects in 39 states with more than $74 million in funding. These projects include education and training initiatives, and both basic and practical computer science research. The first of the new awards will fund the establishment of the Center for Encrypted Functionalities (CEF), a collaboration among the University of California, Los Angeles (UCLA), Stanford University, Columbia University, the University of Texas at Austin, and Johns Hopkins University. CEF is led by UCLA's Amit Sahai and based on research by his team that discovered the first mathematically sound approach to encrypting functionalities, with the specific goal of achieving program obfuscation. The second award will establish the Modular Approach to Cloud Security project, which seeks to build a modular, multi-layered cloud security system. The project is a collaboration of Boston University, Massachusetts Institute of Technology, University of Connecticut, and Northeastern University researchers, and will use the Massachusetts Open Cloud as a testbed for its research. Both new projects will also help create new education and training programs focused on cybersecurity and computer science.
http://orange.hosting.lsoft.com/trk/click?ref=znwrbbrs9_5-c4ccx2b877x060863&
FYI—Singapore has become John Poindexter's wet dream of a surveillance state. If it weren't for the public privacy outcry, Poindexter would have done TIA in the USA. What to do, what to do, ... I know, I'll do it anyway, but in secret!

The Social Laboratory
Shane Harris, *Foreign Policy*, Jul 2014
[The original is a very long item, which has been pruned for RISKS. PGN]
http://www.foreignpolicy.com/articles/2014/07/29/the_social_laboratory_singapore_surveillance_state

In October 2002, Peter Ho, the permanent secretary of defense for the tiny island city-state of Singapore, paid a visit to the offices of the Defense Advanced Research Projects Agency (DARPA), the U.S. Defense Department's R&D outfit best known for developing the M16 rifle, stealth aircraft technology, and the Internet. Ho didn't want to talk about military hardware. Rather, he had made the daylong plane trip to meet with retired Navy Rear Adm. John Poindexter, one of DARPA's then-senior program directors and a former national security advisor to President Ronald Reagan. Ho had heard that Poindexter was running a novel experiment to harness enormous amounts of electronic information and analyze it for patterns of suspicious activity -- mainly potential terrorist attacks. The two men met in Poindexter's small office in Virginia, and on a whiteboard, Poindexter sketched out for Ho the core concepts of his imagined system, which Poindexter called Total Information Awareness (TIA). It would gather up all manner of electronic records—emails, phone logs, Internet searches, airline reservations, hotel bookings, credit card transactions, medical reports—and then, based on predetermined scenarios of possible terrorist plots, look for the digital "signatures" or footprints that would-be attackers might have left in the data space. The idea was to spot the bad guys in the planning stages and to alert law enforcement and intelligence officials to intervene.
"I was impressed with the sheer audacity of the concept: that by connecting a vast number of databases, that we could find the proverbial needle in the haystack," Ho later recalled. He wanted to know whether the system, which was not yet deployed in the United States, could be used in Singapore to detect the warning signs of terrorism. It was a matter of some urgency. Just 10 days earlier, terrorists had bombed a nightclub, a bar, and the U.S. consular office on the Indonesian island of Bali, killing 202 people and raising the specter of Islamist terrorism in Southeast Asia. Ho returned home inspired that Singapore could put a TIA-like system to good use. Four months later he got his chance, when an outbreak of severe acute respiratory syndrome (SARS) swept through the country, killing 33, dramatically slowing the economy, and shaking the tiny island nation to its core. Using Poindexter's design, the government soon established the Risk Assessment and Horizon Scanning program (RAHS, pronounced "roz") inside a Defense Ministry agency responsible for preventing terrorist attacks and "nonconventional" strikes, such as those using chemical or biological weapons—an effort to see how Singapore could avoid or better manage "future shocks." Singaporean officials gave speeches and interviews about how they were deploying big data in the service of national defense—a pitch that jibed perfectly with the country's technophilic culture. [Entire middle section omitted. ... I recommend digging up the rest. PGN] The officials running RAHS today are tight-lipped about exactly what data they monitor, though they acknowledge that a significant portion of "articles" in their databases come from publicly available information, including news reports, blog posts, Facebook updates, and Twitter messages. ("These articles have been trawled in by robots or uploaded manually" by analysts, says one program document.) 
But RAHS doesn't need to rely only on open-source material or even the sorts of intelligence that most governments routinely collect: In Singapore, electronic surveillance of residents and visitors is pervasive and widely accepted. "In Singapore, the threshold for surveillance is deemed relatively higher," according to one RAHS study, with the majority of citizens having accepted the "surveillance situation" as necessary for deterring terrorism and "self-radicalization." Singaporeans speak, often reverently, of the "social contract" between the people and their government. They have consciously chosen to surrender certain civil liberties and individual freedoms in exchange for fundamental guarantees: security, education, affordable housing, health care. But the social contract is negotiable and "should not be taken for granted," the RAHS team warns. "Nor should it be expected to be perpetual. Surveillance measures considered acceptable today may not be tolerable by future generations of Singaporeans." At least not if those measures are applied only to them. One future study that examined "surveillance from below" concluded that the proliferation of smartphones and social media is turning the watched into the watchers. These new technologies "have empowered citizens to intensely scrutinise government elites, corporations and law enforcement officials—increasing their exposure to reputational risks," the study found. From the angry citizen who takes a photo of a policeman sleeping in his car and posts it to Twitter to an opposition blogger who challenges party orthodoxy, Singapore's leaders cannot escape the watch of their own citizens. Shane Harris is a senior staff writer at Foreign Policy and the author of the forthcoming book @War: The Rise of the Military-Internet Complex, which will be published in November 2014.
Bill Snyder, CIO.com via InfoWorld, 06 Aug 2014

A recent study of the 400 most popular iOS and Android apps reveals that nearly all free apps collect users' personal data.
http://www.infoworld.com/d/mobile-technology/user-beware-mobile-app-spying-you-247713

selected text: The vast majority of the most popular iOS and Android mobile apps collect a variety of personal data from users, including location details, address book contacts, and calendar information, according to a just-released survey by Appthority, a company that advises businesses on security. Here's a breakdown of the most frequently collected data:
* 82 percent of the top Android free apps and 49 percent of the top Android paid apps track user location.
* 50 percent of the top iOS free apps and 24 percent of the top iOS paid apps track user location.
You might not expect a flashlight app or a calculator to track your location, but many do.
Lucian Constantin, InfoWorld, 07 Aug 2014

Network-attached storage devices more vulnerable than home routers: a security review found serious vulnerabilities in 10 popular NAS systems from multiple manufacturers.
http://www.infoworld.com/d/security/network-attached-storage-devices-more-vulnerable-home-routers-247875

selected text: Jacob Holcomb, a security analyst at Baltimore-based Independent Security Evaluators, is in the process of analyzing NAS devices from 10 manufacturers and has so far found vulnerabilities that could lead to a complete compromise in all of them. "There wasn't one device that I literally couldn't take over," Holcomb said Wednesday during a talk at the Black Hat security conference in Las Vegas, where he presented some of his preliminary findings. "At least 50 percent of them can be exploited without authentication," he said. Researchers from Dell SecureWorks reported in June that a hacker made over $600,000 by hacking into Synology NAS devices and using them to mine Dogecoin, a type of cryptocurrency. More recently, some Synology NAS device owners reported that their systems had been infected by a file-encrypting malware program called SynoLocker. A big concern is that many NAS vendors use the same code base for their high-end and low-end devices, the researcher said. That means the same vulnerabilities in a low-cost NAS device designed for home use could exist in a much more expensive NAS system designed for enterprise environments.
(A little more on the USB item in RISKS-28.12. PGN)

Lucian Constantin, InfoWorld, 01 Aug 2014

The firmware in USB thumb drives is unprotected and can be easily overwritten by malware, researchers from Security Research Labs said.
http://www.infoworld.com/d/security/most-usb-thumb-drives-can-be-reprogrammed-infect-computers-247489

selected text: Researchers from Security Research Labs have developed several proof-of-concept attacks that they plan to present at the Black Hat security conference in Las Vegas next week. One of the attacks involves a USB stick that acts as three separate devices -- two thumb drives and a keyboard. When the device is first plugged into a computer and is detected by the OS, it acts as a regular storage device. However, when the computer is restarted and the device detects that it's talking to the BIOS, it switches on the hidden storage device and also emulates the keyboard, Nohl said. Acting as a keyboard, the device sends the necessary button presses to bring up the boot menu and boots a minimal Linux system from the hidden thumb drive. The Linux system then infects the bootloader of the computer's hard disk drive, essentially acting like a boot virus, he said. Another proof-of-concept attack developed by Security Research Labs involves reprogramming a USB drive to act as a fast Gigabit network card. As Nohl explained, OSes prefer a wired network controller over a wireless one and a Gigabit Ethernet controller over a slower one. This means the OS will use the new spoofed Gigabit controller as the default network card.
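The mode-switching behavior described above can be modeled as a small state machine. The sketch below is purely illustrative, not real firmware: the class name, method names, and the particular keystroke sequence are assumptions for illustration (boot-menu keys vary by machine).

```python
# Illustrative model (not real firmware) of the mode-switching logic
# described above: the device presents only ordinary storage to a running
# OS, but exposes a hidden drive and an emulated keyboard when it detects
# it is talking to the BIOS during boot.

class SpoofedUsbStick:
    def __init__(self):
        self.keyboard_enabled = False

    def enumerate(self, host):
        """Return the device personalities presented to the given host."""
        if host == "os":
            # Detected by a running OS: act as a regular storage device.
            return ["mass-storage"]
        if host == "bios":
            # Talking to the BIOS: switch on the hidden storage device
            # and emulate a keyboard as well.
            self.keyboard_enabled = True
            return ["mass-storage", "hidden-storage", "keyboard"]
        return []

    def boot_menu_keystrokes(self):
        """Key presses the emulated keyboard would send to bring up the
        boot menu and select the hidden drive (hypothetical sequence)."""
        if not self.keyboard_enabled:
            return []
        return ["F12", "DOWN", "DOWN", "ENTER"]


stick = SpoofedUsbStick()
print(stick.enumerate("os"))    # only ordinary storage is visible
print(stick.enumerate("bios"))  # hidden storage and keyboard appear
print(stick.boot_menu_keystrokes())
```

The point of the model is that nothing in the USB protocol stops one physical device from re-enumerating with a completely different set of personalities, which is exactly what makes the attack hard to defend against.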
Bill Snyder, InfoWorld, 07 Aug 2014

Patent trolls extort millions from developers and entrepreneurs, but help is on the way from the EFF and the Supreme Court.
http://www.infoworld.com/d/the-industry-standard/the-battle-against-stupid-software-patents-247841

selected text: Those patents are so silly it's hard to take them seriously. But you should. Predatory trolls holding preposterous patents suck millions of dollars from the pockets of entrepreneurs who don't have the time or the money to fight in court. So Ranieri, a young lawyer with a degree in math and computer science, has launched a humorous blog entitled "The Stupid Patent of the Month," in an effort to make an arcane, and frankly boring, subject more accessible to the nonlawyering public. [The link for the blog: <https://www.eff.org/deeplinks/2014/07/inaugural-stupid-patent-month>]

"We wish we could catalog them all, but with tens of thousands of low-quality software patents issuing every year, we don't have the time or resources to undertake that task," she says. Instead she'll poke fun at the really bad ones while she makes a serious point: the need to continue the slow process of fixing our broken patent system.

What passes as a patent innovation: 'Do it with a computer'

For August, the EFF has nominated U.S. Patent 8,762,173, titled "Method and Apparatus for Indirect Medical Consultation," which was granted in June. Here's how it works:
1. Take a telephone call from a patient.
2. Record patient info in a patient file.
3. Send the patient information to a doctor; ask the doctor if she wants to talk to the patient.
4. Call the patient back and transfer the call to the doctor.
5. Record the call.
6. Add the recorded call to the patient file and send it to the doctor.
7. Do steps 1-6 with a computer.

The original patent actually had steps 1-6, and it was rejected. Then step 7 was added, and it was approved. "This is a patent on a doctor's computer-secretary ... Somehow, something that wasn't patentable became patentable just by saying 'do it with a computer,'" says Ranieri.
In British Columbia, the meters presumably report usage data wirelessly, perhaps forwarding data from one installation to others (nearer a "hub") using multiple hops. The 902-928 ISM band is used to carry FH (Frequency Hopping) spread-spectrum signals generated by the meters. At least one hacking approach appears to have allowed relatively inexpensive TV/FM "dongles" working in conjunction with a PC app to receive and demodulate / display parts of the data being transmitted. One approach is documented at the following URL: http://bemasher.github.io/rtlamr/
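The hopping described above spreads each meter's transmissions across the 902-928 MHz ISM band. As a rough illustration of what a receiver must cover, the sketch below computes a set of evenly spaced candidate channel centers across that band; the channel count and spacing are assumptions for illustration only, since real meters use vendor-specific hopping plans.

```python
# Illustrative sketch: candidate hop-channel centers in the 902-928 MHz
# ISM band mentioned above. The channel count and even spacing are
# assumptions; actual meters follow vendor-specific hopping sequences.

BAND_LOW_HZ = 902_000_000
BAND_HIGH_HZ = 928_000_000

def hop_channels(n_channels):
    """Return n_channels center frequencies evenly spaced across the band."""
    spacing = (BAND_HIGH_HZ - BAND_LOW_HZ) / n_channels
    return [BAND_LOW_HZ + spacing / 2 + i * spacing for i in range(n_channels)]

channels = hop_channels(50)
print(f"first {channels[0] / 1e6:.2f} MHz, last {channels[-1] / 1e6:.2f} MHz")
```

A wideband SDR dongle centered in this band can capture several such channels at once, which is one reason the inexpensive TV/FM receivers mentioned above are enough to recover parts of the meter traffic.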
http://www.nytimes.com/2014/08/06/technology/russian-gang-said-to-amass-more-than-a-billion-stolen-internet-credentials.html Geoff.Goodfellow@iconia.com http://geoff.livejournal.com
Jeremy Kirk, InfoWorld, 04 Aug 2014 Two researchers will show at Defcon how a Dropcam monitoring camera could turn into a Trojan horse http://www.infoworld.com/d/security/your-dropcam-live-feed-being-watched-someone-else-247566
Wikimedia via NNSquad https://wikimediafoundation.org/wiki/Notices_received_from_search_engines - - - Yes!
Herb Lin wrote: Folks, please think about a fix for this.

I can think of several fix approaches. We need to hear from the NGOs and government agencies: do they in fact want to be receiving notifications of bad stuff from millions of people? Currently millions, perhaps even billions, of accounts receive unwanted stuff on a regular basis, and the owners of those accounts habitually delete the bad stuff without making any reports to any authorities. We are told that the Internet is getting clogged: 90% of the traffic is unwanted advertising, much of it for products and services that involve some kind of illegality. Does the law enforcement community want all of this forwarded to them? Or do they want a system where the bad stuff goes into some sort of database that can be analyzed by law enforcement, so they can choose to go after the most serious crimes affecting their jurisdictions and ignore the rest?

If Google, or any other ISP, is going to report bad stuff, then it should report the whole story:
* Not only who has it, but also
* From what address did it come?
* Was it forwarded via a series of people? Identify all of them as well.
* Which nations and states are associated with these accounts, if known? Some of them may be places where this is not illegal.
* What's the date on this? Was it dated before the date the legal community says it became illegal?
* When was the last time the user did anything with his or her Gmail account, and did this arrive since then?

Make it easier to OPT IN / OPT OUT of what happens with our unwanted spam. Gmail has a spam filter to catch suspected spam. They could add an OPT IN / OPT OUT option to automatically notify some anti-spam operation, or provide a box beside the subject line (or the entire contents of the spam box): REPORT AND THEN DELETE, meaning report this spammer to the relevant authorities, then delete it from my e-mail. Other e-mail software providers could supply a similar feature.
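The REPORT AND THEN DELETE option proposed above amounts to a very small mail-filter hook. The sketch below is hypothetical: the function name is invented, and plain Python lists stand in for a real mail store and reporting channel.

```python
# Hypothetical sketch of the REPORT AND THEN DELETE option proposed above:
# forward a flagged message to a reporting queue, then remove it from the
# user's mailbox. Lists stand in for a real mail store and report channel.

def report_and_delete(message, inbox, report_queue):
    """Report a suspected-spam message, then delete it from the inbox."""
    report_queue.append(message)  # report this spammer to the authorities
    inbox.remove(message)         # then delete it from the user's e-mail

inbox = ["suspected-spam", "legitimate-mail"]
report_queue = []
report_and_delete("suspected-spam", inbox, report_queue)
print(inbox)         # the spam is gone from the inbox
print(report_queue)  # and is queued for reporting
```

The design point is simply ordering: the report is captured before deletion, so the user's habitual "delete it all" reflex no longer destroys the evidence.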
I forward some of my spam to KNUJON ("no junk" backwards), which combines what I forward with submissions from other customers to identify the spammers and put them in the slammer, working with different kinds of non-profits for the different kinds of spam. I cannot forward all my spam to KNUJON, because my ISP has flawed security. They can conclude that I am the bad guy, or that my computer is infected, just because I am forwarding some problem that they did not detect when it was incoming. Their customer service does not seem to grasp the concept of the KNUJON service.

Our e-mail and browser software now has aids to help us spot suspected spam and other problems on arrival, have it automatically go to a spam mailbox, or have other actions taken. The state of the art has false positives and false negatives because there is a constant war between the developers of this software and the developers of the badware. We could ask the protection developers to add a "suspected illegal" flag that routes such mail into a mailbox by nature of the suspicion. We could then OPT IN / OPT OUT of what to do about this:
* Report all of it, including false positives
* Report some of it
* Report none of it; just delete it

Browsers could get an add-on. We arrive at some site that seems suspicious to us. I have found sites promoting the assassination of political leaders, selling products I believe illegal for me to buy, and promoting all sorts of hatreds. In the past, I have screen-printed the offending info, including the URL, then taken this to my local police station, suggesting or asking that it be forwarded to the FBI, Secret Service, or whatever agency seems relevant. I show my ID when I do this. My proposed add-on would have a clickable icon among the browser options, with an oops-exit in case we did not mean to do that.
Up pops a screen where we can select what's objectionable from a list of possibilities, or key in text describing something else, such as:
* Activity promoted which I think is illegal (key in text describing what)
* Child porn here
* Nigerian scam
* Phishing suspected
* Stock pump-and-dump suspected
* Terrorist site
* Treason here
* Virus delivery

Each of the standard possibilities would have a standard NGO or government organization to which the link report would be sent -- such as stock swindles to the SEC -- and/or the relevant government organization in some other nation, if either the witness or the site is apparently located in that nation, based on the URL and any street address or phone number involved.

A few years ago, I received an advertisement in the snail mail trying to sell me child porn. I took it to the local post office and tried to have something constructive done about what I thought was a crime. Well, this is in the eyes of the beholder, a matter of opinion: it was not objectionable material to the local postmaster's representative. An offer was made to me, which I accepted: the Post Office would notify the sender address that they were to cease sending anything to my address, and that if they persisted, there would be legal trouble for them. In consequence, I became flooded for a short time with the exact same advertising from other from-addresses. Apparently the first place had sold my address to their peers, since the postal notification to them had confirmed that my address was valid.

Many years ago, before caller ID, I was the recipient of harassing phone calls. I contacted my phone service provider to ask what could be done about it. I learned that the phone company had a limited budget to deal with this particular issue, and that I was not getting enough of these calls to justify them doing anything for me. It was sufficiently annoying that I got my phone number changed. The phone company charged me a fee for changing my phone number, for no good reason.

Alister William Macintyre (Al Mac)
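The category-to-agency routing described above can be sketched as a lookup table with a fallback for free-text "other" reports. The mapping below is illustrative only: the SEC for stock swindles comes from the text, while the other recipient names are placeholders, not real reporting endpoints.

```python
# Hypothetical sketch of the routing idea above: each standard report
# category maps to a default recipient organization. Only the SEC entry
# comes from the text; the other recipients are illustrative placeholders.

ROUTING = {
    "stock pump-and-dump suspected": "SEC",
    "phishing suspected": "anti-phishing clearinghouse",
    "nigerian scam": "consumer-protection agency",
    "virus delivery": "national CERT",
}

def route_report(category, url, note=""):
    """Build a report record addressed to the default organization for a
    category, falling back to a general queue for free-text reports."""
    recipient = ROUTING.get(category.lower(), "general review queue")
    return {"to": recipient, "category": category, "url": url, "note": note}

print(route_report("Stock pump-and-dump suspected", "http://example.com")["to"])
print(route_report("something else entirely", "http://example.com")["to"])
```

Routing by default recipient keeps the burden off the reporting citizen: they pick a category, and the system decides which jurisdiction or agency should see it.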
In RISKS-28.13, Alister Wm Macintyre comments on the NEST matter and asks the question: "Wasn't the TARGET breach a variation on this?" Based on the information at http://krebsonsecurity.com/2014/02/email-attack-on-vendor-set-up-breach-at-target/ ... I'd have to say the answer to his question is "No."