via NNSquad http://worldabcnews.com/911-emergency-services-go-down-across-the-us-after-centurylink-outage-techcrunch/ CenturyLink, one of the largest telecommunications providers in the U.S., provides Internet and phone backbone services to major cell carriers, including AT&T and Verizon. Datacenter or fiber issues can have a knock-on effect to other companies, cutting out service and causing cell site blackouts. In this case, the outage affected only cellular calls to 911, and not landline calls.
https://www.nytimes.com/2018/12/27/magazine/air-force-hypoxia-pilots-navy.html The Air Force and the Navy are rolling out new hardware and software for their trainer aircraft to stop the oxygen-deprivation problems that have plagued pilots for several years.
I don't know if Huawei devices have "kill switches" or "backdoors", but if they did, the big iron-y would be rich indeed. For decades, the 5i's have pushed for "backdoors" in encryption and pseudo RNG's, and the laughable number of "backdoors" in Cisco and Juniper routers belies any plausible deniability; thar be NSL's, most likely. Let's call a spade a spade; let's call them "blowbackdoors", because the U.S. intelligence community is now reaping what it sowed. The intel community's current freakout is raw jealousy—if the 5G device manufacturers in the 5i countries could be forced to insert "kill switches" and "backdoors" into their own products and distribute them worldwide, it would be done in a New York minute—so they know full well whereof they speak. Also, ignoring its own history of winning wars through manufacturing prowess, the U.S. has now ceded the vast majority of its manufacturing to China, only to wake up just yesterday to the fact that it is now more "taker" than "maker"? Duh!! https://www.schneier.com/blog/archives/2018/08/backdoors_in_ci.html https://www.reddit.com/r/technology/comments/90gpd5/backdoors_keep_appearing_in_ciscos_routers/ https://www.computerworlduk.com/security/security-backdoors-that-heped-kill-faith-in-security-3634220/ https://www.schneier.com/blog/archives/2015/12/back_door_in_ju.html https://www.technologyreview.com/s/612556/the-6-reasons-why-huawei-gives-the-us-and-its-allies-security-nightmares/ The 6 reasons why Huawei gives the US and its allies security nightmares The biggest fear is that China could exploit the telecom giant's gear to wreak havoc in a crisis. Martin Giles and Elizabeth Woyke December 7, 2018 The detention in Canada of Meng Wanzhou, Huawei's CFO and the daughter of its founder, is further inflaming tensions between the US and China. Her arrest is linked to a US extradition request. 
On December 7 a Canadian court heard that the request relates to Huawei's alleged use of Skycom Tech, a company that dealt with Iranian telecom firms, to sell equipment to Iran between 2009 and 2014 in contravention of US sanctions on the country. China says her detention is a human rights violation and is demanding her swift release. Behind this very public drama is a long-running, behind-the-scenes one centered on Western intelligence agencies' fears that Huawei poses a significant threat to global security. Among the spooks' biggest concerns: There could be "kill switches" in Huawei equipment ... The Chinese firm is the world's largest manufacturer of things like base stations and antennas that mobile operators use to run wireless networks. And those networks carry data that's used to help control power grids, financial markets, transport systems, and other parts of countries' vital infrastructure. The fear is that China's military and intelligence services could insert software or hardware "back doors" into Huawei's gear that they could exploit to degrade or disable foreign wireless networks in the event of a crisis. This has led to moves in the US to block Chinese equipment from being used. ... that even close inspections miss Since 2010, the UK has been running a special center, whose staff includes members of its GCHQ signal intelligence agency, to vet Huawei gear before it's deployed. But earlier this year, it warned that it had "only limited assurance" that the company's equipment didn't pose a security threat. According to press reports, the center had found that some of Huawei's code behaved differently on actual networks from the way it did when it was tested, and that some of its software suppliers weren't subject to rigorous controls. Back doors could be used for data snooping Huawei claims its equipment connects over a third of the world's population. It's also handling vast amounts of data for businesses. 
That's why there's fear in Western intelligence circles that back doors could be used to tap into sensitive information using the firm's equipment. This would be tricky to do undetected, but not impossible. Huawei doesn't just build equipment; it can also connect to it wirelessly to issue upgrades and patches to fix bugs. There's concern that this remote connectivity could be exploited by Chinese cyber spies. The company is also one of the world's biggest makers of smartphones and other consumer devices, which has raised the prospect that China might exploit these products for espionage. In May, the US Department of Defense ordered retail stores on US military bases to stop selling phones from Huawei and ZTE, another big Chinese tech giant, because of fears they could be hacked to reveal the locations and movements of military personnel. The rollout of 5G wireless networks will make everything worse Telecom companies around the world are about to roll out the next generation of cellular wireless, known as 5G. As well as speeding up data transfers, 5G networks will enable self-driving cars to talk to each other and to things like smart traffic lights. They'll also connect and control a vast number of robots in factories and other locations. And the military will use them for all kinds of applications, too. This will dramatically expand the number of connected devices--and the chaos that can be caused if the networks supporting them are hacked. It will also ramp up the amount of corporate and other data that hackers can target. Both Australia and New Zealand have recently banned the use of Huawei equipment in new 5G wireless infrastructure. This week, the UK's BT followed suit. Chinese firms will ship tech to countries in defiance of a US trade embargo The US has been investigating claims that Huawei shipped products with US tech components to Iran and other countries subject to a US embargo. 
In the court hearing, a lawyer for the Canadian government said that Ms Meng is accused of telling US bankers there was no connection between Skycom and Huawei, when in fact there was. The alleged fraud caused the banks to make transactions that violated US sanctions against Iran. Chinese officials have repeatedly said they don't consider China's companies to be bound by other nations' trade edicts. Huawei isn't as immune to Chinese government influence as it claims to be Huawei has repeatedly stressed it's a private company that's owned by its employees. The implication is that it has no incentive to cause customers to lose confidence in the integrity of its products. On the other hand, its governance structures are still something of a mystery, and its founder, Ren Zhengfei, who was once an officer in the Chinese People's Liberation Army, keeps a low profile. Such things "make you question just how much independence it really has," says Adam Segal, a cybersecurity expert at the Council on Foreign Relations in New York. In its defense, Huawei can point to the fact that no security researchers have found back doors in its products. "There's all this concern, but there's never been a smoking gun," says Paul Triolo of the Eurasia Group. While that's true, it won't change the view of the US, which is stepping up its efforts to persuade its allies to keep Huawei out of all their networks. This story was updated on December 7 to include details of a court hearing in Canada about Ms Meng's detention.
https://www.nytimes.com/2018/12/31/us/waymo-self-driving-cars-arizona-attacks.html While idled at a controlled stop without backup driver, can an AV passenger issue a command to "rabbit through the intersection" and evade onslaught by "stick and stone" wielding protesters? Should an AV w/o backup driver, under hostile domestic operating conditions, be required to satisfy a specific regulation? See https://www.nhtsa.gov/laws-regulations for airbags, etc. Certain individuals perceive AV technology as a threat. Their aggressive behavior mirrors the 19th Century Luddites who damaged mechanized weaving machinery in protest against job displacement. Neo-Luddites will likely become more effective if and when they deploy WiFi and Bluetooth disruption stacks and tools targeting AVs. https://en.wikipedia.org/wiki/Luddite "According to a manifesto drawn up by the Second Luddite Congress (April 1996; Barnesville, Ohio), Neo-Luddism is 'a leaderless movement of passive resistance to consumerism and the increasingly bizarre and frightening technologies of the Computer Age.'"
Here's the other side of the Oregon Software Engineer story from 2014. PGN https://motherboard.vice.com/en_us/article/yw798m/oregon-unconstitutionally-fined-a-man-dollar500-for-saying-i-am-an-engineer-federal-judge-rules?utm_source=reddit.com A federal district court has ruled that the state of Oregon illegally infringed on a man's First Amendment rights for fining him $500 because he wrote "I am an engineer" in a 2014 email to the state's Engineering Board. The court ruled that the provision in the law he broke is unconstitutional, which opens the door for people in the state to legally call themselves "engineers."
https://www.npr.org/2018/12/29/680920575/computer-virus-infects-print-production-systems-tribune-publishing-says Also: Suspected malware attack causes major LA Times newspaper delivery interruptions https://www.latimes.com/local/lanow/la-me-ln-times-delivery-breakdown-20181229-story.html
The computers, sensors and software in cars are getting so smart they may eventually detect whether the driver and passengers are happy or sad, comfortable or uncomfortable, alert or distracted. And as a result, driving automobiles can be made safer and more enjoyable. "Driver monitoring is extremely important for active safety systems as well as automated systems," says Phil Magney, founder and principal at VSI Labs, an automotive technology applied research firm based in St. Louis Park, MN. "It may have been a nice-to-have feature beforehand. Now you can say it definitely is a must-have feature." Nevertheless, he says, car occupant monitoring overall remains in a state of flux—particularly regarding user experience. ... For example, Shapiro says a camera inside the vehicle could determine a driver's attentiveness by detecting their eye blink rate and sensing their head pose. These checks could also determine if the driver is close to falling asleep. This could be merged with input from outside sensors that detect a pedestrian preparing to cross the car's path. And the car may then determine if it needs to issue a collision warning and autobrake sooner than it would otherwise, to give a tired or distracted driver extra time to react, Shapiro says. Emotion-sensing technology could lead the car to take actions proactively, such as playing certain music or adjusting cabin temperature. Or it may make suggestions and engage in a conversation with a passenger, for instance, offering to lower a window if lip-reading software senses someone complaining about being hot. https://www.cta.tech/News/i3/Articles/2018/November-December/Car-Smarts-The-Future-of-Vehicle-Tech.aspx Didn't HAL read lips? That didn't end well.
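The fusion Shapiro describes (in-cabin attentiveness signals shifting the collision-warning threshold earlier) can be sketched in a few lines. Everything below is hypothetical: the signal names, thresholds, and the 2x cap are invented for illustration, not drawn from any vendor's system.

```python
# Minimal sketch (all thresholds hypothetical) of shifting a collision
# warning earlier for a drowsy or distracted driver.
from dataclasses import dataclass

@dataclass
class DriverState:
    blink_rate_hz: float   # eye blink rate from the in-cabin camera
    head_pose_deg: float   # head deviation from road-ahead, in degrees

def drowsiness_score(s: DriverState) -> float:
    """Crude 0..1 score: slow blinking and an averted head both raise it."""
    blink = min(1.0, max(0.0, (0.5 - s.blink_rate_hz) / 0.5))
    pose = min(1.0, abs(s.head_pose_deg) / 45.0)
    return max(blink, pose)

def warning_distance_m(base_m: float, s: DriverState) -> float:
    """Issue the warning up to 2x earlier when the driver seems impaired."""
    return base_m * (1.0 + drowsiness_score(s))

alert = DriverState(blink_rate_hz=0.4, head_pose_deg=5.0)
drowsy = DriverState(blink_rate_hz=0.1, head_pose_deg=30.0)
print(warning_distance_m(30.0, alert))   # attentive driver: warn later
print(warning_distance_m(30.0, drowsy))  # drowsy driver: warn earlier
```

The design choice to take the max of the two signals (rather than their average) reflects the fail-safe bias such a system would presumably want: either signal alone should be enough to buy the driver extra reaction time.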
https://www.scientificamerican.com/article/drones-used-to-find-toy-like-butterfly-land-mines/ "In field trials conducted in September 2017 at a New York state park, the team was able to pick up almost 78 percent of the mines during four trials. That is not yet good enough to fully replace survey work by ground teams. But it could help narrow down the locations and general layout of [Russian-made] PFM-1 minefields, says Alex van Roy, deputy head of operations at the Geneva-based Swiss Foundation for Mine Action (FSD)—and a mine action specialist who formerly served in the Australian Army." Risk: Over-reliance on image processing to resolve and detect a mine to defuse, and eventually declare a "mine-free" zone when undetected residuals remain. Similar problems exist for other domains where image inspection is applied (diagnostic x-rays, etc.).
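The risk noted above has a simple arithmetic core: even a per-mine detection rate near the trial's 78 percent leaves a near-certainty of residual mines in any sizeable field. A quick sketch, assuming (for illustration only) that each mine is detected independently with probability p = 0.78:

```python
# If each mine is found independently with probability p, the chance that
# a field of n mines is declared "mine-free" while mines remain is
# 1 - p**n, which approaches certainty as n grows.

def p_all_detected(p: float, n: int) -> float:
    """Probability that every one of n mines is found."""
    return p ** n

def p_residual(p: float, n: int) -> float:
    """Probability that at least one mine is missed."""
    return 1.0 - p_all_detected(p, n)

for n in (1, 5, 20, 100):
    print(f"n={n:3d}  P(at least one mine missed) = {p_residual(0.78, n):.4f}")
```

Already at n = 20 the chance of a missed mine exceeds 99 percent, which is why the article positions the drone surveys as a way to narrow down minefield layout, not to certify clearance.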
"Just a test that went to a few orders of magnitude more people than intended... sorry about that," he wrote in one tweet. The feature was intended as a test but was released widely by mistake, the head of Instagram said. Closing and reopening the app turns it off. https://www.nytimes.com/2018/12/27/technology/instagram-update-horizontal-scroll.html The risk? Testing with live data and unsuspecting (live) users. Also, causing hysteria with first-world problems: Instagram briefly changed how users moved through their feeds Thursday morning, forcing them to swipe left or tap through horizontally rather than scroll vertically. If it had been Apple, it would have been a six-finger dance on the screen. <https://www.nytimes.com/services/mobile/apps/
https://lauren.vortex.com/2019/01/02/usa-wants-to-restrict-ai-exports-a-stupid-and-dangerous-idea When small, closed minds tackle big issues, the results are rarely good, and frequently are awful. This tends to be especially true when governments attempt to restrict the development and evolution of technology. Not only do those attempts routinely fail at their stated and ostensible purposes, but they often do massive self-inflicted damage along the way, and end up further empowering our adversaries. Much as Trump's expensive fantasy wall ("Mexico will pay for it!") would have little ultimate impact on genuine immigration problems—other than to further exacerbate them—his Commerce department's new plans for restricting the export of technologies such as AI, speech recognition, natural language understanding, and computer vision would be yet another unforced error that could decimate the USA's leading role in these areas. We've been down this kind of road before. Years ago, the USA federal government placed draconian restrictions on the export of encryption technologies, classifying them as a form of munitions. The result was that the rest of the world zoomed ahead in crypto tech. This also triggered famously bizarre situations like t-shirts with encryption source code printed on them being restricted, and the co-inventor of the UNIX operating system—Ken Thompson—battling to take his "Belle" chess-playing computer outside the country, because the U.S. government felt that various of the chips inside fell into this restricted category. (At the time, Ken was reportedly quoted as saying that the only way you could hurt someone with Belle was by dropping it out of a plane—you might kill someone if it hit them!) 
As is the case with AI and the other technologies that Commerce is talking about restricting today, encryption R&D information is widely shared among researchers, and likewise, any attempts to stop these new technologies from being widely available, even attempts at restricting access to them by specific countries on our designated blacklist of the moment, will inevitably fail. Even worse, the reaction of the global community to such ill-advised actions by the U.S. will inevitably tend to put us at a disadvantage yet again, as other countries with more intelligent and insightful leadership race ahead, leaving us behind in the dust of politically motivated export control regimes. Restricting the export of AI and affiliated technologies is shortsighted and dangerous, and will accomplish nothing but damage to our own interests by restricting our ability to participate fully and openly in these crucial areas. It's the kind of self-destructive thinking that we've come to expect from the anti-science, "build walls" Trump administration, but it must be firmly and completely rejected nonetheless. [Geoff Goodfellow noted this item. PGN] https://techcrunch.com/2018/12/31/this-clever-ai-hid-data-from-its-creators-to-cheat-at-its-appointed-task/ [This is really short-sighted, and resembles efforts from the early crypto wars by attempting to put export controls on cryptography, or backdoors that only trusted entities can use. PGN]
A while ago there were a number of stories about investigations into serious assaults at a private school. These assaults have gone way beyond hazing and into sexual assault, and even, because the episodes were filmed, possible child pornography charges. The situation has been very disturbing. Hazing, and hazing culture, are often "justified" by the assertion that "mild" forms of hazing are harmless. This statement ignores two important facts. One is that hazing inherently relies upon a culture of silence. In any such situation there can be no controls, and therefore no ability to ensure that "mild" hazing does not escalate. The second point is that hazing is a form of bullying, and it has been amply demonstrated that any form of bullying has long term negative impacts, both on the victims, and on the perpetrators. I've never understood hazing. Not really. It happens a lot in sports, and I've never been into sports. It happens in some professions, but not the ones I've worked in. As far as I can determine, hazing, and all the culture that goes with it, indicates that your job is either a) not very important, or b) doesn't require any skills. I can't say I've experienced it in information security. I started in malware research, which has always been occupied by charter members of "Egos-BackwardsR-Us." Back when I started you had to be not just a systems programmer, but a specialist systems programmer to make a significant contribution to the field, and those guys have always been the elite. Even so, if you studied and made even a modest contribution, you were in. I wasn't a systems programmer, and my contributions were very modest. But I was accepted. It's probably because the job was so big, and the workers so small (or too few). Anybody who wanted to help was welcome. Anybody who wanted to do some research would be given some tips and starters. Anybody who helped was in the community. 
Nobody had to jump through artificial hoops because the real barriers to entry were already formidable enough. (Actually, there were some in computer security, back then, who did try to keep you out. They were the ones who knew barely enough to charge for computer security consulting. Some of them claimed that computer viruses were not a security issue—mostly because they didn't know what computer viruses were.) Come to think of it, I've never really seen hazing in the tech field, as such. Oh, there are jerks, I grant you. There are those who are so into the technology that social skills, human communication, and even personal hygiene take a very distant back seat to whatever they are working on right now. But, generally speaking, they don't try to keep you out. They may have given up trying to teach people what it is that they are doing but, if you take the time to try and learn, they are generally delighted to have someone to talk to. (They may not show it very well.) (The closest I've ever seen hazing in tech is the ITIL certification. Since ITIL is a library, it's hard to figure out how to certify that someone knows it. Since it's hard to assess that, you just drown the poor candidate in work and hope that the ones who don't know it will give up before they get to the end of the process. I think PMP runs it a close second.) I suppose I have to address the issue of women in tech. Yes, women are definitely underrepresented in tech. And, yes, there is a lot of irrational bias against women, as you find in any male-dominated field. (Not just from the techies: I've found it in management, too, where you'd think they should know better.) I do not want to minimize the issue: misogyny anywhere is ridiculous and wasteful. But sometimes it doesn't take much to make geeks realize they've been sexist jerks, and make some (possibly modest) changes. 
In one department I came in to manage, they regularly went to strip bars after work, and had a signed poster from one of the "performers" up on the wall. As I was hiring the first (female) secretary for the office, I noted that this wall decoration might be subject to less prominent placement. I never saw the poster again, and the team also removed the porn directory on the development server (which I'd never mentioned). I can't claim that all is sweetness and light in the security community. There are in-fights: there are personality clashes. And there are those who are in it just to claim status. The community usually sorts them out in short order. Status, in security, is most often achieved by helping others. If you can answer newbie questions; if your answers are true and useful; then you have status. If you try and claim status, and try and hold others back in order to hold onto it, you are the one who gets shunned. In information security, we have too much work, and too few resources. We don't have time to waste on hazing. We also aren't going to block anyone who actually wants to help. In any security community I've been part of, newbies are welcome. OK, dumb questions may get sarcastic responses, but they have to be pretty seriously stupid to make the grade. Otherwise, if someone wants to get in and help, those of us who can answer questions do. Ask, and a pointer will be given. Seek, and ye shall be given a direction to go find. Knock, and the door will be opened, and you'll be hustled in, and usually put to work right away.
[From Lauren Weinstein's Network Neutrality Squad] https://techcrunch.com/2019/01/02/chromecast-bug-hackers-havoc/ A hacker, known as Hacker Giraffe, has become the latest person to figure out how to trick Google's media streamer into playing any YouTube video they want—including videos that are custom-made. This time around, the hacker hijacked thousands of Chromecasts, forcing them to display a pop-up notice that's viewable on the connected TV, warning the user that their misconfigured router is exposing their Chromecast and smart TV to hackers like himself. Not one to waste an opportunity, the hacker also asks that you subscribe to PewDiePie, an awful Internet person with a popular YouTube following. (He's the same hacker who tricked thousands of exposed printers into printing support for PewDiePie.) Google should have fixed this Chromecast bug years ago. On the other hand, UPnP has always been a train wreck, and that's not Google's fault.
via NNSquad http://www.kurdistan24.net/en/news/e6a0b65e-84fa-447b-9ed4-5df8390961d3 Google incorporation [sic] removed a map outlining the geographical extent of the Greater Kurdistan after the Turkish state asked it to do so, as a simple inquiry on the Internet giant's search engine from Wednesday onward can show. "Unavailable. This map is no longer available due to a violation of our Terms of Service and/or policies," a note on the page that the map was previously on read. Google did not provide further details on how the Kurdistan map violated its rules. The map in question, available for years, used to be on Google's My Maps service, a feature of Google Maps that enables users to create custom maps for personal use or sharing through search. Because the map was created and shared publicly by a user through their personal account, it remains unknown if their rights have been violated or if they will appeal. If the Turkish government wants the map removed as shown to users INSIDE TURKEY, fine. But Turkey should not have veto rights over maps shown to the rest of the planet, just as the EU should not have the right to censor search results globally. Google's willingness to slide down this lowest-common-denominator razor blade is dangerous in the extreme.
A problem that *I* have with this sort of thing is the idea that the answer to any problem is legislation—it can only fix things if it's enforceable *and* enforced. In the UK, a common response to the Gatwick drone incident was "why was this allowed to happen?", as if tougher regulations would stop illicit drones without further action. For instance, we have very strict firearms laws in the UK (to the extent that an Olympic sport is illegal), but most gun crime involves the use of illegal guns, so even-stricter laws won't help. In the case of Google, in recent years there have been occasional incidents like the following example in the UK: when one wants to obtain or renew a passport, driver's licence, travel visa, or suchlike, or pay a road toll, one normally contacts the appropriate authority and pays the fee directly. What happens is that people Google for "passport/licence/visa/whatever" and are presented with a choice of sites, then one of two things may happen: (1) They actually get the site of an agency, who promise a prompt, efficient service at a competitive price (the site may imply that this can only be done through an agency, as I believe is the case in some countries), so they get their new passport/licence/visa/whatever but unexpectedly pay an unnecessary supplemental charge. (As I understand it, this is cheeky but legal.) (2) They get an impostor site which looks exactly like an official one, but again they may either pay an unnecessary fee and/or personal details are captured for identity theft. The usual response of victims is to say "why is Google allowed to show links to unofficial sites?", as if Google should have a legal obligation to verify the bona fides of every site indexed, which sounds impractical to me, especially considering different countries' legal systems which may be encountered. 
In the days of printed directories for landline telephones, there used to be a note in the small print that any professional or business descriptions shown were as supplied by subscribers and not independently verified, so for instance a guy listed as 'dentist' may not actually be approved to practise dentistry, as it was not practical for a telephone company to check details like this. Incidentally, a commentator in a newspaper (don't have details to hand) recently predicted that the World Wide Web would likely become the 'Splinternet', i.e., a series of walled gardens, in 5-10 years; I feel that this will happen much sooner.
It's worth reading the one-page report and doing a little background check, e.g., https://www.nextgov.com/it-modernization/2018/03/irs-system-processing-your-taxes-almost-60-years-old/146770/ and https://en.wikipedia.org/wiki/Customer_Account_Data_Engine For those too lazy to follow the links, two of the applications involved are, at sixty years old, "the oldest code in .gov", and are made of 20M lines of assembly that IRS, IBM, Northrop Grumman, and a bunch of assorted subcontractors have been failing to port to the Big Iron since the turn of the century. Back in 2000 it was COBOL and Java that were going to show what they could do for the American taxpayer. Now it's zLinux to the rescue.
> * 33% of U.S. Nobel Laureates since 1901 have been immigrants. > * 40% of American doctoral degrees were awarded to noncitizens. > * More than 25% of American entrepreneurs were born overseas. People in New York live longer than people elsewhere in the U.S. Why? Because 38% of New Yorkers are immigrants, and immigrants live longer than people born in the US, per this 2014 paper: https://www.popcouncil.org/uploads/pdfs/councilarticles/pdr/PDR401Preston.pdf
So, are we to conclude from this item that Google ought to be faithful to the laws and regulations of the respective nations in which it operates? If so, then what about Google's censored searches for China? Oh, but that's different, right?
Umm, not using Gmail? You agreed to Terms which give Google the right to MODIFY contents too (the right persists in the new Terms that take effect on 22 January 2019). The "limitations" on that right are woolly enough to permit Google to even alter email, which may come in handy when people start sending or receiving too much email critical of Google. In the context of online agreements you may enjoy the Freefall cartoon at http://freefall.purrsia.com/ff2900/fc02870.htm
I know that sadly, anything Google does is perfectly legal; my post was meant mainly to attract attention to their unfair conduct.
https://www.mtr.com.hk/archive/corporate/en/press_release/PR-18-108-E.pdf Released on 19DEC2018, the incident investigation summary report found: "The Panel concluded that the root cause was the different software counter re-initialization arrangements of the two connected systems when the re-initialization was activated at the incident time on 16 October 2018. Since the four lines are connected, the inconsistent re-initialization situation led to repeated re-synchronization causing instability in sector computers. The software counter re-initialization algorithm, the differences in the counter re-initialization arrangements between the Alstom and Siemens systems and the possible impact on the train service were not known to the operators and maintainers, nor were they explicitly described in the Operation and Maintenance Manuals." RISK: Communication gap between multiple vendors governing train signaling system journey counter re-initialization protocol alignment. Appears that software engineers did not account for train journey counter reset conditions as part of system integration test for signaling system release qualification. Non-existent disclosure of counter re-initialization state dependency by either vendor implies there was a common operational assumption about, but not verification of, self-consistent signaling platform state management. No mention of counter word-length in the summary report.
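The failure mode the Panel describes can be caricatured in a few lines. The constants below are entirely hypothetical (the summary report gives neither counter word lengths nor reset rules); the point is only that two connected systems wrapping a shared counter at different limits disagree forever after the first wrap, with each side treating the mismatch as grounds for another re-synchronization.

```python
# Toy model of two vendors' sector computers sharing a journey counter
# but re-initializing it under different (undisclosed) rules.
MAX_A = 9999    # hypothetical: vendor A wraps its counter at 9999
MAX_B = 65535   # hypothetical: vendor B uses a wider word length

def step(counter: int, max_value: int) -> int:
    """Advance the counter, re-initializing to 0 at this system's limit."""
    return 0 if counter >= max_value else counter + 1

a = b = 9998                 # both systems in agreement, near A's limit
resyncs = 0
for _ in range(10):
    a, b = step(a, MAX_A), step(b, MAX_B)
    if a != b:               # each side sees the mismatch as corruption...
        resyncs += 1         # ...and forces yet another re-synchronization
print("forced re-syncs:", resyncs)
```

Once A wraps to 0 while B keeps counting, every subsequent step mismatches, which is the "repeated re-synchronization causing instability" the report identifies. An integration test that drove the counter through a re-initialization boundary would presumably have exposed this.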
Please report problems with the web pages to the maintainer