Please try the URL privacy information feature, enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of the Terms of Service for the site - however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…
http://www.bbc.co.uk/news/uk-23241791

As described, this incident has the symptoms that I saw more than 10 years ago when the National Airspace System (NAS) crashed. NAS does the flight data processing for en-route traffic in the London and Scottish FIRs and (unless it has been rewritten recently) is based on code written in the 1960s for US airport approaches, in the Jovial programming language and with substantial dynamic data overlays because 1960s hardware had little RAM.

If NAS crashed and the automatic restarts failed (typically because persistent data caused further crashes), then the flight data available to the controllers at the Swanwick ATC centre aged to the point where Swanwick had to enter SWIMM (Swanwick In Manual Mode). This involved stopping departures of aircraft on the ground that were destined for UK airspace and reducing Swanwick traffic as quickly as possible. The transition into SWIMM was a period of high workload, taking (as I recall) 20-30 minutes. The transition out of SWIMM was also stressful for the controllers, so it usually waited for a period of low traffic.

I suggested in the 1990s and early 2000s that a formal specification of the NAS should be commissioned, followed by a complete rewrite, because there were areas of the code that no one understood well enough to modify, and functional changes were being implemented either by changing the airspace data (I called it "lying to NAS") to bring about the desired behaviour or by making modifications outside the no-go areas of the NAS code. Unfortunately, I was insufficiently persuasive. I don't know the current state of the NAS (but would like to).
It shouldn't take a rocket scientist to design a system that prevents a mistake like this. But on 9 Aug 2013, Anatoly Zak reports on his own site, RussianSpaceWeb.com, that investigators have determined the culprit was the "critical angular velocity sensors, DUS, installed upside down." http://arstechnica.com/science/2013/07/parts-installed-upside-down-caused-last-weeks-russian-rocket-to-explode/
CNN reports that a PayPal customer received an electronic statement showing a credit balance of US$92 trillion. When he logged into his account, the balance was correct. PayPal apologized. Unanswered question: imagine the possible consequences if an automated system had processed the statement and swept the balance to another institution. A good example of why automated systems need safety traps. Original CNN article at http://www.cnn.com/2013/07/17/tech/paypal-error/index.html Bob Gezelter, http://www.rlgsc.com
"The online money-transfer firm said it would offer to make a donation to a charity of Mr Reynolds' choice." The article claims the erroneous sum was $92,233,720,368,547,800; the sum of the donation was not stated... Full story at: http://www.bbc.co.uk/news/world-us-canada-23352230 (Being a computer nerd I had to check—this number is almost exactly 2^63 * 0.01, so the credit was probably meant to be 1 cent).
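The arithmetic is easy to check. This sketch (plain Python, assuming only the figure quoted in the BBC article) confirms that the credit matches 2^63 interpreted as integer cents, i.e. the largest value of a signed 64-bit integer, which suggests an overflow or sentinel value rather than random corruption:

```python
# Compare the reported erroneous credit with 2^63 cents, the largest
# value representable in a signed 64-bit integer when money amounts
# are stored as integer cents.
max_int64 = 2 ** 63                   # 9,223,372,036,854,775,808
as_dollars = max_int64 * 0.01         # interpret that integer as cents

reported = 92_233_720_368_547_800.0   # figure quoted in the BBC article

relative_error = abs(reported - as_dollars) / as_dollars
print(f"2^63 cents = ${as_dollars:,.2f}")
print(f"relative error vs. reported figure: {relative_error:.1e}")
```

The relative error comes out well below 10^-12; the two figures differ only in the rounding of the published number.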
This is a story about government incompetence on the grossest, most unforgivable scale. Here's how the Economic Development Administration unnecessarily spent $2.75 million to fight a common case of malware. Warning: much innocent hardware was lost. Yes, even the mice.

In December 2011 the Economic Development Administration (an agency under the US Department of Commerce) was notified by the Department of Homeland Security that it had a malware infection spreading around its network. These things happen, but what came next was truly exceptional. The EDA's IT people -- including its CIO -- had a meltdown.

The EDA's IT crowd determined that its network had been infected with a persistent, nation-state attack on its systems. So they isolated their department's hardware from other government networks, cut off employee e-mail, hired an outside security contractor, and started systematically destroying $170,000 worth of computers, cameras, mice, etc. It gets crazier. From the report, prepared for the Dept. of Commerce:

EDA's CIO concluded that the risk, or potential risk, of extremely persistent malware and nation-state activity (which did not exist) was great enough to necessitate the physical destruction of all of EDA's IT components. EDA's management agreed with this risk assessment and EDA initially destroyed more than $170,000 worth of its IT components, including desktops, printers, TVs, cameras, computer mice, and keyboards. By August 1, 2012, EDA had exhausted funds for this effort and therefore halted the destruction of its remaining IT components, valued at over $3 million. EDA intended to resume this activity once funds were available. However, the destruction of IT components was clearly unnecessary because only common malware was present on EDA's IT systems.

Destroying cameras? And mice? Over malware? Are you serious? Worse, the EDA continued destroying components until it could no longer afford to destroy them.
In fact, the agency intended to continue destroying gear just as soon as it got more funds approved to do so. Uhh... okay! And no, it does not end there.

It turns out the malware infection was absolutely routine. All the EDA had to do was isolate the affected components, remove the malware, reconnect the hardware and move on. NOAA, which received a notice at the same time as the EDA, completed this operation in one month. The overall cost of the EDA's incompetence? $2.75 million -- approximately half of the agency's IT budget. Here it is, neatly enumerated into smaller idiotic segments:

Malware is scary, so in a way, we're sympathetic to the government agency that got infected and had a bit of a panic attack. But our sympathy disappears when we learn that its response to the malware betrayed a basic misunderstanding of malware and how it works.

And remember, kids! Those are your tax dollars working hard in all the wrong places. [Department of Commerce via Federal News Radio via The Verge] http://gizmodo.com/government-destroys-170k-of-hardware-in-absurd-effort-708412225
To be on topic: if you design a foolproof system, nature comes up with a new fool.

http://au.businessinsider.com/ubs-has-been-fined-30000-for-a-typo-that-caused-it-to-sell-shares-at-1-of-their-worth-2013-7
http://www.asic.gov.au/asic/asic.nsf/byheadline/13-180MR+UBS+Securities+Australia+Ltd+pays+$30%2C000+infringement+notice+penalty?openDocument

A trader enters a sell order into the system with a sell price of $0.165 (instead of the intended $16.50). A filter is triggered and is ignored by the trader. Because of the low price, the system forwards the order to a second party for authorization, who authorizes it despite having received two warnings and despite the price being 99% below the current market price. The (obviously) fast-filled order was later cancelled, so no real damage was done here, but the Australian ASIC has now fined UBS over the incident. The system will now be changed to be foolproof (again).
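The pattern here -- warnings that can be clicked through -- is a recurring one. A hard pre-trade check that rejects, rather than merely flags, orders priced far from the market would have stopped this order cold. A minimal sketch of such a check (the 10% threshold and the function shape are illustrative assumptions, not UBS's actual controls):

```python
def check_limit_price(order_price, market_price, max_deviation=0.10):
    """Reject orders whose limit price deviates too far from the market.

    A hard block forces the trader to re-enter the order rather than
    click through a warning. The 10% threshold is illustrative.
    """
    deviation = abs(order_price - market_price) / market_price
    if deviation > max_deviation:
        raise ValueError(
            f"limit {order_price:.4f} deviates {deviation:.0%} from "
            f"market {market_price:.4f}; order rejected"
        )

# The fat-fingered order from the article: $0.165 entered instead of $16.50.
try:
    check_limit_price(order_price=0.165, market_price=16.50)
except ValueError as e:
    print(e)  # the 99%-below-market order is refused outright
```

The design point is that a rejection, unlike a warning, cannot be acknowledged away by a hurried trader or a second authorizer.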
Scot J. Paltrow and Kelly Carr, Reuters, 2 Jul 2013

"DFAS [Defense Finance and Accounting Service], for its part, inherited a pay operation that even at the time was an antique -- a 20-year-old Air Force system that DFAS renamed the Defense Joint Military Pay System, or DJMS. It ran, and still runs, on Cobol, a computer language that dates to 1959. Most of the Cobol code the Pentagon uses for payroll and accounting was written in the 1960s, according to 2006 congressional testimony by Zack Gaddy, director of DFAS from May 2004 to September 2008. Wallace, the Army assistant deputy chief of staff, says the system has "seven million lines of Cobol code that hasn't been updated" in more than a dozen years, and significant parts of the code have been "corrupted". The older it gets, the harder it is to maintain. As DFAS itself said: "As time passes, the pool of Cobol expertise dwindles." Further, the system is nearly impossible to update because the documentation for it -- explaining how it was built, what was in it, and how it works -- disappeared long ago, according to Kevin McGraw. He retired recently after working 30 years in DFAS's Cleveland office, most of that time responsible for maintaining the part of DJMS that handles Navy pay. "It's hard to make a change to a program if you don't know what's in there." http://preview.reuters.com/2013/7/9/wounded-in-battle-stiffed-by-the-pentagon Jim Reisert AD1C, <jjreisert@alum.mit.edu>, http://www.ad1c.us
The Post Office has admitted that software defects have occurred with a computer system at the centre of a bitter dispute with some of its 11,500 sub-postmasters across the UK. More than 100 say they were wrongly prosecuted or made to repay money after computers reported non-existent shortfalls. Some of them lost their homes as a result and a few went to prison. The Post Office said the report showed its system was effective but said it would improve training and support.

Over the past year, independent investigators Second Sight, who were employed by the Post Office, have been examining a handful of the sub-postmasters' claims. Although their review found no evidence of systemic problems with the core software, it did find bugs in it. It pinpointed two specific occasions, in 2011 and 2012, when the Post Office identified defects itself that resulted in a shortfall of up to 9,000 pounds at 76 branches. The Post Office later made good those losses and the sub-postmasters were not held liable. Full story at: http://www.bbc.co.uk/news/uk-23233573

Yet another case of "the computer being taken as infallible". It reminds me of recent chip-and-pin card fraud, where the banks insist the customer is liable rather than accepting that somebody was able to exploit a security flaw in the chip-and-pin system. The customer has next to no chance of proving they were not responsible for the fraud. [Also noted by Andy Cole and Martyn Thomas. PGN]
http://thenextweb.com/uk/2013/07/15/sony-drops-appeal-and-pays-250000-uk-fine-for-data-lost-in-2011-playstation-network-hack/
One of the major security threats that remains poorly understood and poorly addressed in today's systems involves the risks of insider misuse. The risks are relevant to the ongoing discussions of surveillance, critical national infrastructures, election integrity, and lots more. Providing better system and networking technologies as well as better administrative practices is a huge challenge. A recent article exposes some misuses in the National Crime Information Center (NCIC) database, although this should not be news to RISKS readers. It discusses "a batch of corruption cases in recent years against NYPD officers accused of abusing the FBI-operated National Crime Information Center database to cyber snoop on co-workers, tip off drug dealers, stage robberies and—most notoriously—scheme to abduct and eat women." (A police academy instructor testified at the trial of Gilbert Valle, who was convicted in March in a bizarre plot to kidnap, cook and cannibalize women.) [Source: Tom Hays, Associated Press, NYC cases show crooked cops' abuse of FBI database, PGN-ed] http://news.yahoo.com/nyc-cases-show-crooked-cops-abuse-fbi-database-162152158.html Many years ago, I noted that the NCIC access had almost no authentication once enabled, and no differential access controls—typically with a police officer signing a paper log at the beginning of each shift and the system then accessible to anyone with physical access (including the nighttime cleaning crew). Long ago, we noted the case of the Arizona ex-officer who tracked down his ex-girlfriend and murdered her. In my Computer-Related Risks book (Addison-Wesley 1995, but unfortunately mostly all still relevant), I cited a GAO report that enumerated a surprisingly large number of insider misuses of law-enforcement databases. Yes, this is a very old problem, and yes, it still exists. Perhaps it is a little better now in some respects? But apparently not: with much more information online, more opportunities exist for misuse!
I recently came across two interesting anecdotes in the medical IT field, which illustrate the risks that come with access controls when they are insufficiently granular and poorly understood. Some care is needed in the design of access controls, and users/administrators need appropriate education in their use. Both of these issues appeared in the context of medical imaging systems, but they could equally appear in other forms of medical record management system.

Case 1

An administrator for a radiology department information system had previously attended the hospital and so had a file on the system which he managed. He had set the software's "confidential file" flag on his record, to protect against nosy colleagues. One day, when unwell himself, he was admitted to the hospital at which he worked. The duty emergency doctor ordered a chest X-ray. There was a problem, however: the X-ray receptionist could not find his record in order to book the X-ray attendance. The confidential flag had rendered his file completely invisible and inaccessible to non-administrator users. As there was no system administrator on duty, the most practical solution was to hand over his user name and password to a trusted member of the X-ray staff and ask them to remove the confidential flag, so that he could have his X-ray (and have his doctor be able to view it). [Technically a new file could have been created, but this would have later required the delicate housekeeping task of merging multiple case-records, and would still have meant that medical staff were unable to access his historical records for comparison.] The issue in this case was that the "confidential flag", intended for the protection of staff or celebrity records, rendered records accessible only to system administrators, with no way for an individual user to override this.
Only system administrators could override the flag, with no way to delegate this right to certain groups, e.g. senior doctors.

Case 2

Many hospitals in the UK have recently come to the end of a long contract for their PACS (Picture Archiving and Communication System) infrastructure: large servers holding medical imaging files. There was a need to migrate the data to new systems which were being procured by individual hospitals. In some cases, the incumbent provider was less helpful than hoped, refusing to offer a method of migrating the data to a new system until after the old system had been decommissioned. (An unacceptable risk, as this would have meant a period, of unknown duration, when the hospital had no access to historical medical records.) In view of this, many hospitals employed specialist consultancy firms to migrate the data from the incumbent systems onto a temporary store in an industry-standard format. The normal way this was done was for a software tool to masquerade as an image-processing workstation, run a query (e.g. MRI scans performed on 18/07/2013), and then archive all the images returned by the search. This had gone well. The industry-standard data had been indexed and imported into the new system in time for "go live". The original servers were decommissioned at the end of the contract, and their hard drives shredded. A few months later, one of the consultants sent a circular to their clients, explaining that they had come across a problem: any files with the "confidential flag" set would not have been returned by the search, and therefore would not have been migrated....

- - -

These two anecdotes illustrate the problem with access controls, especially "silent" access controls that give no indication that a dataset has been pruned.
This is a particular problem in the medical field, as the work necessarily includes emergencies, and there are a large number of staff who may have a legitimate requirement to access confidential data at short notice. Software vendors have varied widely in their handling of this requirement. The above "administrator only" solution is a common one in the industry, despite its significant risks.
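One commonly discussed middle ground between "visible to everyone" and "administrators only" is a break-glass override: any authenticated clinician can open a flagged record in an emergency, but doing so writes an audit entry that is reviewed afterwards. A rough sketch of the idea (the function, record layout, and names are hypothetical, not any vendor's API):

```python
import datetime

audit_log = []  # break-glass accesses are recorded here for later review

def open_record(user, record, emergency_reason=None):
    """Return a patient record, honouring its confidential flag.

    A confidential record stays hidden by default, but an emergency
    override ("break glass") grants access while writing an audit
    entry, instead of blocking care outright until an administrator
    can be found.
    """
    if record.get("confidential") and emergency_reason is None:
        raise PermissionError("confidential record: emergency override required")
    if record.get("confidential"):
        audit_log.append({
            "user": user,
            "record_id": record["id"],
            "reason": emergency_reason,
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
    return record

# The Case 1 scenario: an ER doctor opens a flagged record in an emergency.
record = {"id": "R123", "confidential": True}
open_record("er_doctor", record, emergency_reason="ED admission, chest X-ray ordered")
print(len(audit_log))  # access granted, and an audit trail remains
```

The deterrent is the audit review, not the lock: the record is never silently invisible, but every override is attributable to a named user.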
http://www.huffingtonpost.com/2013/07/08/florida-banned-computers_n_3561701.html?utm_hp_ref=mostpopular

Florida Accidentally Banned All Computers, Smart Phones In The State Through Internet Cafe Ban: Lawsuit

Rick Scott reportedly called the ban "the right thing to do for our state." When Florida lawmakers recently voted to ban all Internet cafes, they worded the bill so poorly that they effectively outlawed every computer in the state, according to a recent lawsuit. In April Florida Governor Rick Scott approved a ban on slot machines and Internet cafes after a charity tied to Lt. Governor Jennifer Carroll was shut down on suspicion of being an Internet gambling front -- forcing Carroll, who had consulted with the charity, to resign. Florida's 1,000 Internet cafes were shut down immediately, including Miami-Dade's Incredible Investments, LLC, a café that provides online services to migrant workers, according to the Tampa Bay Times. The owner, Consuelo Zapata, is now suing the state after her legal team found that the ban was so hastily worded that it can be applied to any computer or device connected to the Internet, according to a copy of the complaint obtained by The Miami Herald.

The ban defines illegal slot machines as any "system or network of devices" that may be used in a game of chance. And that broad wording can be applied to any number of devices, according to the Miami law firm of Kluger, Kaplan, Silverman, Katzen & Levine, which worked with constitutional law attorney and Harvard professor Alan Dershowitz. The suit maintains that the ban was essentially passed "in a frenzy fueled by distorted judgment in the wake of a scandal that included the Lieutenant Governor's resignation" and declares it unconstitutional.

[I am reminded once again of California's first attempt at a computer crime bill, which would have made it illegal to `read, write, alter, or delete' information in a computer.
When I confronted one of the law enforcement folks, the response was “But we would never use that against someone who was not doing anything wrong.'' Weak on the concept? PGN ]
[via Robert N. M. Watson] New York City recently installed a bike sharing program, named Citibike, sponsored in part by Citibank. The system consists of a set of heavy 3-speed bicycles with integrated GPS and a set of "Bike Stations" where people can check bicycles out and back in. Annual members get an RFID key that they use to take out a bicycle for up to 45 minutes, after which there are extra charges. Casual users pay, using a credit or debit card, at a kiosk attached to the bike station. One feature of the system is that if a bicycle needs repair, someone can press a repair button, marked with a wrench, next to the bike that is having problems. Once the button is pressed, a red light comes on and the bicycle cannot be removed from the station until a maintenance person comes by to look at the bike. There is no timeout, and there is no requirement to put in your key or type in the code that you get from the kiosk. This DOS against the system is already being applied by casual vandals, who push the repair button on whole racks, taking them out of service until a repair person can show up. Citibike says that they are working to fix this problem in the system.
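The missing controls are exactly the obvious ones: tie each repair report to an identified user, and let unconfirmed reports expire. A toy sketch of that logic (the class, names, and timeout are hypothetical; Citibike's actual fix has not been described publicly):

```python
import time

REPORT_TIMEOUT = 4 * 60 * 60  # unconfirmed repair reports expire after 4 hours

class Dock:
    """One bike dock with an abuse-resistant repair button."""

    def __init__(self):
        self.report = None  # (reporter_id, timestamp) when flagged

    def flag_repair(self, reporter_id):
        """Accept a repair report only from an identified user
        (RFID member key or casual-rider kiosk code)."""
        if reporter_id is None:
            raise PermissionError("repair reports require a member key or ride code")
        self.report = (reporter_id, time.time())

    def is_locked(self, now=None):
        """The bike stays locked only while an unexpired report is pending."""
        if self.report is None:
            return False
        now = time.time() if now is None else now
        return (now - self.report[1]) < REPORT_TIMEOUT

dock = Dock()
dock.flag_repair("member-42")
print(dock.is_locked())                                  # locked: report pending
print(dock.is_locked(now=time.time() + REPORT_TIMEOUT))  # unlocked: report expired
```

Either measure alone raises the cost of the prank: attribution deters casual vandals, and the timeout bounds the damage when it happens anyway.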
Glenn Greenwald on security and liberty * Secret files show scale of Silicon Valley co-operation on Prism * Outlook.com encryption unlocked even before official launch * Skype worked to enable Prism collection of video calls * Company says it is legally compelled to comply http://www.guardian.co.uk/world/2013/jul/11/microsoft-nsa-collaboration-user-data
Chris Paoli, *Redmond Magazine*, 12 Jul 2013 New information alleges Microsoft provided a backdoor for the NSA to access chat and e-mail information from Outlook and Skype. http://redmondmag.com/articles/2013/07/12/microsoft-prism-involvement.aspx opening paragraph: Recently leaked documents described Microsoft working hand in hand with the National Security Agency (NSA) to break encryption and provide access to its customers' data through the NSA's Prism surveillance program.
Ted Samson, InfoWorld, 12 Jul 2013 Malicious hackers could exploit the backdoors into StoreOnce and StoreVirtual hardware to gain root access http://www.infoworld.com/t/data-security/hp-admits-undocumented-backdoors-in-two-separate-storage-lines-222614 opening paragraph: HP has owned up to undocumented backdoors in members of its StoreOnce D2D Backup and StoreVirtual Storage product lines that can grant malicious hackers root access to the systems' OS. A fix for the HP StoreOnce D2D Backup Systems already exists; the company said it would deliver a patch for the StoreVirtual gear by July 17.
Richard Pérez-Peña, *The New York Times*, 16 Jul 2013 America's research universities, among the most open and robust centers of information exchange in the world, are increasingly coming under cyberattack, most of it thought to be from China, with millions of hacking attempts weekly. Campuses are being forced to tighten security, constrict their culture of openness and try to determine what has been stolen. University officials concede that some of the hacking attempts have succeeded. But they have declined to reveal specifics, other than those involving the theft of personal data like Social Security numbers. They acknowledge that they often do not learn of break-ins until much later, if ever, and that even after discovering the breaches they may not be able to tell what was taken. Universities and their professors are awarded thousands of patents each year, some with vast potential value, in fields as disparate as prescription drugs, computer chips, fuel cells, aircraft and medical devices. http://www.nytimes.com/2013/07/17/education/barrage-of-cyberattacks-challenges-campus-culture.html
Nicole Perlroth and David E. Sanger, *The New York Times*, 13 Jul 2013 On the tiny Mediterranean island of Malta, two Italian hackers have been searching for bugs—not the island's many beetle varieties, but secret flaws in computer code that governments pay hundreds of thousands of dollars to learn about and exploit. The hackers, Luigi Auriemma, 32, and Donato Ferrante, 28, sell technical details of such vulnerabilities to countries that want to break into the computer systems of foreign adversaries. The two will not reveal the clients of their company, ReVuln, but big buyers of services like theirs include the National Security Agency—which seeks the flaws for America's growing arsenal of cyberweapons—and American adversaries like the Revolutionary Guards of Iran. All over the world, from South Africa to South Korea, business is booming in what hackers call "zero days," the coding flaws in software like Microsoft Windows that can give a buyer unfettered access to a computer and any business, agency or individual dependent on one. Just a few years ago, hackers like Mr. Auriemma and Mr. Ferrante would have sold the knowledge of coding flaws to companies like Microsoft and Apple, which would fix them. Last month, Microsoft sharply increased the amount it was willing to pay for such flaws, raising its top offer to $150,000. But increasingly the businesses are being outbid by countries with the goal of exploiting the flaws in pursuit of the kind of success, albeit temporary, that the United States and Israel achieved three summers ago when they attacked Iran's nuclear enrichment program with a computer worm that became known as "Stuxnet." The flaws get their name from the fact that once discovered, "zero days" exist for the user of the computer system to fix them before hackers can take advantage of the vulnerability. 
A "zero-day exploit" occurs when hackers or governments strike by using the flaw before anyone else knows it exists, like a burglar who finds, after months of probing, that there is a previously undiscovered way to break into a house without sounding an alarm. ... http://www.nytimes.com/2013/07/14/world/europe/nations-buying-as-hackers-sell-computer-flaws.html
Complaints by those in the federal Do Not Call Registry are on the rise as telemarketers use robocalls and 'spoofing.' David Lazarus, *Los Angeles Times*, 15 Jul 2013 Dan Yeh has been on the federal government's Do Not Call Registry for years. And for a while, it seemed like the leave-me-alone system worked just fine. Not anymore. "There's been a real surge recently," Yeh, 72, told me. "I've been getting five or six calls a day, at all hours, seven days a week." The Huntington Beach resident isn't alone. I've heard similar complaints from dozens of other people. Regardless of having registered a phone line with the Federal Trade Commission as a telemarketer-free zone, a growing number of consumers are saying that some businesses are ignoring their stated preference and calling anyway. A particular annoyance: automated robocalls that get you on the line before looping in a human telemarketer. Such calls frequently use "spoofed" lines that hide their origin or make it look as if the call is from someone you know. ... http://www.latimes.com/business/la-fi-lazarus-20130716,0,6060357,full.column
DH Kass, Google patches a gap in security on Android (finally), ITBusiness, 10 Jul 2013 http://www.itbusiness.ca/article/google-patches-a-gap-in-security-on-android-finally opening paragraph: Google Inc. has only just patched a security vulnerability in the Android operating system that could have allowed hackers to change 99 per cent of all applications into malware. Yet Google has known about the flaw for months, says a new report.
Jeremy Kirk, InfoWorld, 17 Jul 2013 Alternative fixes released for Android 'master key' vulnerability Many Android devices may still be vulnerable if operators haven't sent out updates http://www.infoworld.com/d/mobile-technology/alternative-fixes-released-android-master-key-vulnerability-222879 [selected text] More fixes are appearing for a pair of highly dangerous vulnerabilities exposed earlier this month in the Android mobile operating system. Security vendor Webroot and ReKey, a collaboration between Northeastern University in Boston and vendor Duo Security, released software on Tuesday that detects if an Android device is vulnerable and applies a patch. Application markets and websites not run by Google have posed a risk for Android users. Security researchers have found numerous examples of popular applications that have been modified to deliver secret code that can spy on users.
selected text: [The article has a lot of percentages, but these are especially noteworthy:] The Bit9 data showed that 93 percent of organizations have a version of Java on some of their systems that's at least five years old. Fifty-one percent have a version that's between five and 10 years old.
Lucian Constantin, InfoWorld, 16 Jul 2013 The malware is digitally signed and is probably used in targeted attacks, researchers from F-Secure said http://www.infoworld.com/d/security/new-mac-malware-confuses-users-right-left-file-name-tricks-222817
"The demand stunned the hospital employee. She had picked up the emergency room's phone line, expecting to hear a dispatcher or a doctor. But instead, an unfamiliar male greeted her by name and then threatened to paralyze the hospital's phone service if she didn't pay him hundreds of dollars. Shortly after the worker hung up on the caller, the ER's six phone lines went dead. For nearly two days in March, ambulances and patients' families calling the San Diego hospital heard nothing but busy signals. The hospital had become a victim of an extortionist who, probably using not much more than a laptop and cheap software, had single-handedly generated enough calls to tie up the lines." http://j.mp/13DkBIt (*LA Times* via NNSquad)
I take good ideas where I can find them, and this article [1], which surprisingly enough is in "Law Technology News", discusses how to build versatile and reusable software. The article asks some good questions, and its focus is not really just on software for law offices but on custom software generally. So if you have to work with software developed in-house or by contractors for your company or department's use, it's worth a read. In case you're not really up on these two words, "versatility" means the software handles problems more robustly, which reduces risk. "Reusable" means that it is capable of being modified (without a huge amount of effort) to do other things than what it was originally developed for, which conceivably can reduce total cost of ownership (and the risk involved in the investment to develop it in the first place). [1] http://www.law.com/jsp/lawtechnologynews/PubArticleLTN.jsp?id=1202611396558&kw=How_to_Build_Versatile_and_Reusable_Software Paul Robinson <paul@paul-robinson.us> http://paul-robinson.us (My Blog)
Bill Vlasic, 5 Jul 2013 The engineers working on Honda's new Acura MDX luxury sport utility vehicle were obsessed with giving customers more -- more space in the rear seat, more fuel economy from a high-tech engine, and, above all, more apps, maps and connectivity. But there was one feature they wanted less of: buttons. In an effort to simplify the newest Honda vehicle, which went on sale in June 2013, the product team was determined to streamline the instrument panel. For the new MDX model, more than 30 buttons have been eliminated. The change was emblematic of the challenge confronting automakers in the age of the connected car. How does a car company give customers the technology they crave without overwhelming them with complicated controls that can impair their ability to drive safely? ... http://www.nytimes.com/2013/07/06/business/designing-dashboards-with-fewer-distractions.html
http://www.nbcwashington.com/traffic/transit/Metro-Identifies-Problem-With-Emergency-Call-Buttons-on-Trains-212165001.html One sentence in the above article highlights another risk. "Metro also began spot checks by safety officers of intercoms on trains in service Wednesday morning." There is a terrible tendency, once a system is in place and working, to simply assume that it continues to work. It would be a good idea to check these sorts of things regularly. What else is not getting checked?
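Automating such spot checks is straightforward; the hard part is the organizational discipline to keep running them. A toy sketch of a periodic self-test over a fleet of units (the check function and failure modes here are hypothetical):

```python
def run_spot_checks(unit_ids, check):
    """Exercise every 'working' unit end-to-end instead of assuming it works.

    `check` performs a real test of one unit (e.g. places an intercom
    call and verifies the audio path) and raises on failure. Returns
    the list of failed units so someone can be dispatched to fix them.
    """
    failures = []
    for uid in unit_ids:
        try:
            check(uid)
        except Exception as exc:
            failures.append((uid, str(exc)))
    return failures

# Hypothetical check in which units 3 and 7 have silently failed.
def fake_intercom_check(uid):
    if uid in (3, 7):
        raise RuntimeError("no audio path")

print(run_spot_checks(range(1, 10), fake_intercom_check))
```

The point of the sketch is only that silent failures stay silent until something actively exercises the device; scheduling is the easy half of the problem.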
Long-time readers of RISKS may recall when URLs were frowned on here because of their transience. Now, the problem is walled gardens. Judging from the summary in 27.36, I would find the article hilarious. Unfortunately, without signing in to Google or Facebook, I cannot read it. I do not have an account with either. Security concerns, you know. Quora has a lovely non-answer on the page: "Why do I need to sign in? Quora is a knowledge-sharing community that depends on everyone being able to pitch in when they know something." How does locking me out until I sign in help me pitch in?
Alarming number of databases across US are storing details of Americans' locations -- not just government agencies Ed Pilkington in New York, guardian.co.uk, 17 Jul 2013 Millions of Americans are having their movements tracked through automated scanning of their car license plates, with the records often held indefinitely in vast government and private databases. A new report from the American Civil Liberties Union has found an alarming proliferation of databases across the US storing details of Americans' locations. The technology is not confined to government agencies -- private companies are also getting in on the act, with one firm, National Vehicle Location Service, holding more than 800m records of scanned license plates. http://www.guardian.co.uk/law/2013/jul/17/million-american-license-plate-privacy-tracking You Are Being Tracked: How License Plate Readers Are Being Used to Record Americans' Movements http://www.aclu.org/technology-and-liberty/you-are-being-tracked-how-license-plate-readers-are-being-used-record http://www.aclu.org/files/assets/071613-aclu-alprreport-opt-v05.pdf
I read the submission by Henry Baker about license plate readers. It's what we call Automatic Number Plate Recognition (ANPR) here in the UK. The system doesn't take pictures, just reads the plates electronically. The ANPR systems have a real-time link to the Police, Insurance and Driver Licensing databases. If a car is scanned that shows a potential offence, an alert sounds and displays the reason why the car is suspected to be illegal. The officers can then stop the car as soon as it is safe to do so in order to investigate. They have powers to impound unsafe and uninsured cars on the spot. I live 4 miles from Silverstone race circuit and last weekend we had the Formula 1 race here. A series of these ANPR systems are used to scan the vehicles arriving at the car parks, and it's amazing how many people they catch each year. The overwhelming majority of people here like the fact that the police and government use it. The people who don't like it are, for the most part, the irresponsible ones who they catch driving without insurance, road tax, in 'cloned' cars on false plates, while disqualified for some other offence, in cars reported as stolen or who haven't had the mandatory annual safety inspection for cars more than 3 years old (MOT). It means there is less chance of my car being hit by someone without insurance, or who has poorly maintained brakes or whatever. It also improves the amount of revenue raised by the government which (ostensibly) is for road repairs and improvements. Yes, there are systems that scan plates on all cars on major roads, but since the authorities can use a court order to obtain my cellular records to track my movements if they have just cause, I'm not concerned. There is also the issue of potential abuse by those with access to the systems. I know we have a very tough auditing regime for any manual enquiries made to any of the databases by the users.
A significant percentage of them are chosen at random and they have to explain why they made their enquiry. I do have an issue if they keep the data on law-abiding citizens for an inordinate length of time, but our European data protection laws provide checks and balances that I am (mostly) happy with. I think that is the essence of this article -- some countries don't have sufficient checks and balances for this kind of technology to be used responsibly. My point is this: don't change the surveillance system, change the legislation governing its use.
Here's something you can add to the summary. You can read RISKS through Google Groups (what used to be Deja News before Google bought it), and you can even send responses back to the moderator through the interface. http://groups.google.com/group/comp.risks
Please report problems with the web pages to the maintainer