Please try the URL privacy information feature, enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of the Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…
FYI—There's only an outcry because we happened to find out about this one; it wouldn't surprise me to find that this type of activity is going on at hundreds (thousands?) of locations around the world. I believe that Seattle's muni wifi mesh network can (& does) do this type of tracking. I'm curious whether iOS 8's new MAC spoofing interferes with this system...
http://www.theverge.com/2014/6/9/5792970/ios-8-strikes-an-unexpected-blow-against-location-tracking

http://www.telegraph.co.uk/travel/travelnews/10997539/Big-Brother-airport-installs-worlds-first-real-time-passenger-tracking-system.html
'Big Brother' airport installs world's first real-time passenger tracking system
Civil liberty groups criticise a new tracking device at Helsinki Airport that can monitor passengers' footsteps, from arrival at the car park to take-off
By Soo Kim, 29 Jul 2014

All mobile phones logged into the Wi-Fi network at Helsinki Airport will be monitored by an in-house tracking system that identifies passengers' real-time movements. The technology has been criticised by privacy advocacy groups, but is said to be aimed at monitoring crowds and preventing bottlenecks at the airport, which sees around 15 million passengers a year, Bloomberg reports. About 150 white boxes, each the size of a wireless Internet router, have been placed at various points around the airport. Equipped with tracking technology from the Finland-based retail analytics company Walkbase, each device is designed to collect the unique identifier numbers of all mobile phones which have Wi-Fi access switched on, without the user being notified. Passengers can also "opt in" to other services, by logging into the network via an application such as an airline app or retail store app, to receive sales offers from the airport's 35 shops and 32 restaurants and cafes, in addition to any relevant flight information.
Currently in its initial phase, the full tracking system is expected to be in place by the end of this year, which could enable shops to target passengers in their immediate vicinity; a deli, for example, could alert a passing passenger to an item on sale. All data collected is said to be in aggregated form, preventing any personal information from being seen by Finavia Oyj, the Finnish Civil Aviation Administration operating the airport, as the software discards any unique identifiers of devices, claims Tuomas Wuoti, the CEO at Walkbase. But software security analysts find it hard to believe that “location tracking is only left at 'statistics' levels''. “The fact that my movements are tracked is a scarier thought than someone knowing which websites I visit,'' Antti Tikkanen, director of security response at the software maker F-Secure Oyj (FSC1V), told Bloomberg. The technology was also met with concern from customers of the US-based department store retailer Nordstrom, where it was tested last year; they criticised it for monitoring unwary customers. Passenger privacy concerns are *extremely important* to the group, and the anonymous monitoring respects customers' privacy, according to Heikki Koski, vice president of new services at Finavia Oyj. “We're looking at great paybacks from this investment. We can manage the airport better, we can predict where bottlenecks might come and analyse everything more thoroughly.''
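Walkbase's claim that identifiers are discarded before anyone sees them can be illustrated with a small sketch. This is purely hypothetical code, not Walkbase's actual method; the function names and the per-day salting scheme are my own assumptions. The idea is that a salted hash stands in for the MAC address just long enough to deduplicate counts, and the raw address is never stored.

```python
import hashlib
import secrets

# Hypothetical sketch: count unique devices per zone without retaining
# MAC addresses. A random per-day salt means hashes cannot be linked
# across days, or reversed by hashing the (small) MAC address space.
DAILY_SALT = secrets.token_bytes(16)

zone_counts = {}  # zone -> set of salted hashes seen today

def record_sighting(mac: str, zone: str) -> None:
    digest = hashlib.sha256(DAILY_SALT + mac.encode()).hexdigest()
    zone_counts.setdefault(zone, set()).add(digest)
    # The raw MAC is discarded here; only aggregate counts survive.

record_sighting("aa:bb:cc:dd:ee:01", "gate-12")
record_sighting("aa:bb:cc:dd:ee:01", "gate-12")  # same phone, counted once
record_sighting("aa:bb:cc:dd:ee:02", "gate-12")
print({zone: len(s) for zone, s in zone_counts.items()})  # → {'gate-12': 2}
```

Of course, as the F-Secure comment suggests, nothing in such an architecture prevents the operator from simply not discarding the identifier; the privacy property rests entirely on the operator's code, not on anything the passenger can verify.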
http://www.washingtonpost.com/local/crime/federal-review-stalled-after-finding-forensic-errors-by-fbi-lab-unit-spanned-two-decades/2014/07/29/04ede880-11ee-11e4-9285-4243a40ddc97_story.html
A Washington Post investigation reveals that Justice Department officials have known for years that flaws in forensic techniques and weak laboratory standards may have led to the convictions of innocent people across the country, raising the question: How many more are out there?
This paper details dollar costs, but not the biggest cost of the NSA's surveillance program: the chilling effect on free speech and the destruction of the Constitution.
http://oti.newamerica.net/publications/policy/surveillance_costs_the_nsas_impact_on_the_economy_internet_freedom_cybersecurity

Danielle Kehl, Kevin Bankston, Robyn Greene, Robert Morgus, Surveillance Costs: The NSA's Impact on the Economy, Internet Freedom & Cybersecurity, New America Foundation, 29 Jul 2014
Full paper (PDF): http://oti.newamerica.net/sites/newamerica.net/files/policydocs/Surveilance_Costs_Final.pdf
Short summary (PDF): http://oti.newamerica.net/sites/newamerica.net/files/policydocs/Surveillance_Costs_Short%20Version.pdf

It has been over a year since The Guardian reported the first story on the National Security Agency's surveillance programs based on the leaks from former NSA contractor Edward Snowden, yet the national conversation remains largely mired in a simplistic debate over the tradeoffs between national security and individual privacy. It is time to start weighing the overall costs and benefits more broadly. While intelligence officials have vigorously defended the merits of the NSA programs, they have offered little hard evidence to prove their value—and some of the initial analysis actually suggests that the benefits of these programs are dubious. Three different studies—from the President's Review Group on Intelligence and Communications Technologies, the Privacy and Civil Liberties Oversight Board, and the New America Foundation's International Security Program—question the value of bulk collection programs in stopping terrorist plots and enhancing national security. Meanwhile, there has been little sustained discussion of the costs of the NSA programs beyond their impact on privacy and liberty, and in particular, how they affect the U.S. economy, American foreign policy, and the security of the Internet as a whole.
This paper attempts to quantify and categorize the costs of the NSA surveillance programs since the initial leaks were reported in June 2013. Our findings indicate that the NSA's actions have already begun to, and will continue to, cause significant damage to the interests of the United States and the global Internet community. Specifically, we have observed the costs of NSA surveillance in the following four areas:

* Direct Economic Costs to U.S. Businesses: American companies have reported declining sales overseas and lost business opportunities, especially as foreign companies turn claims of products that can protect users from NSA spying into a competitive advantage. The cloud computing industry is particularly vulnerable and could lose billions of dollars in the next three to five years as a result of NSA surveillance.

* Potential Costs to U.S. Businesses and to the Openness of the Internet from the Rise of Data Localization and Data Protection Proposals: New proposals from foreign governments looking to implement data localization requirements or much stronger data protection laws could compound economic losses in the long term. These proposals could also force changes to the architecture of the global network itself, threatening free expression and privacy if they are implemented.

* Costs to U.S. Foreign Policy: Loss of credibility for the U.S. Internet Freedom agenda, as well as damage to broader bilateral and multilateral relations, threaten U.S. foreign policy interests. Revelations about the extent of NSA surveillance have already colored a number of critical interactions with nations such as Germany and Brazil in the past year.
* Costs to Cybersecurity: The NSA has done serious damage to Internet security through its weakening of key encryption standards, insertion of surveillance backdoors into widely used hardware and software products, stockpiling rather than responsibly disclosing information about software security vulnerabilities, and a variety of offensive hacking operations undermining the overall security of the global Internet.

The U.S. government has already taken some limited steps to mitigate this damage and begin the slow, difficult process of rebuilding trust in the United States as a responsible steward of the Internet. But the reform efforts to date have been relatively narrow, focusing primarily on the surveillance programs' impact on the rights of U.S. citizens. Based on our findings, we recommend that the U.S. government take the following steps to address the broader concern that the NSA's programs are impacting our economy, our foreign relations, and our cybersecurity:

* Strengthen privacy protections for both Americans and non-Americans, within the United States and extraterritorially.
* Provide for increased transparency around government surveillance, both from the government and companies.
* Recommit to the Internet Freedom agenda in a way that directly addresses issues raised by NSA surveillance, including moving toward international human-rights based standards on surveillance.
* Begin the process of restoring trust in cryptography standards through the National Institute of Standards and Technology.
* Ensure that the U.S. government does not undermine cybersecurity by inserting surveillance backdoors into hardware or software products.
* Help to eliminate security vulnerabilities in software, rather than stockpile them.
* Develop clear policies about whether, when, and under what legal standards it is permissible for the government to secretly install malware on a computer or in a network.
* Separate the offensive and defensive functions of the NSA in order to minimize conflicts of interest.
*Foreign Policy* via NNSquad http://www.foreignpolicy.com/articles/2014/07/29/the_crypto_king_of_the_NSA_goes_corporate_keith_alexander_patents "Keith Alexander, the recently retired director of the National Security Agency, left many in Washington slack-jawed when it was reported that he might charge companies up to $1 million a month to help them protect their computer networks from hackers. What insights or expertise about cybersecurity could possibly justify such a sky-high fee, some wondered, even for a man as well-connected in the military-industrial complex as the former head of the nation's largest intelligence agency?" - - - A world-class jerk. That's all I can say without violating my own public language guidelines.
https://www.eff.org/privacybadger "Privacy Badger is a browser add-on that stops advertisers and other third-party trackers from secretly tracking where you go and what pages you look at on the web. If an advertiser seems to be tracking you across multiple websites without your permission, Privacy Badger automatically blocks that advertiser from loading any more content in your browser. To the advertiser, it's like you suddenly disappeared. Privacy Badger blocks spying ads and invisible trackers. It's there to ensure that companies can't track your browsing without your consent. This extension is designed to automatically protect your privacy from third party trackers that load invisibly when you browse the web. We send the Do Not Track header with each request, and our extension evaluates the likelihood that you are still being tracked. If the algorithm deems the likelihood is too high, we automatically block your request from being sent to the domain. Please understand that Privacy Badger is in beta, and the algorithm's determination is not conclusive that the domain is tracking you." Justin C. Klein Keane, MA MCIT, IT Sr Project Leader, Information Security University of Pennsylvania, School of Arts & Sciences
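EFF documents Privacy Badger's core heuristic as roughly: if the same third-party domain appears to be tracking you across three or more distinct first-party sites, block it. The following is my own toy sketch of that rule, not EFF's code; the real extension also weighs cookies, supercookies, and other signals, and maintains an exception list for domains that break sites when blocked.

```python
from collections import defaultdict

BLOCK_THRESHOLD = 3  # the "seen tracking on three sites" rule of thumb

# third-party tracker domain -> set of first-party sites it was seen on
observed = defaultdict(set)
blocked = set()

def on_third_party_request(tracker: str, first_party: str) -> bool:
    """Return True if the request to `tracker` should be blocked."""
    if tracker in blocked:
        return True
    observed[tracker].add(first_party)
    if len(observed[tracker]) >= BLOCK_THRESHOLD:
        blocked.add(tracker)  # crossed the threshold: block from now on
        return True
    return False

assert not on_third_party_request("ads.example", "news.com")
assert not on_third_party_request("ads.example", "shop.com")
assert on_third_party_request("ads.example", "blog.com")  # third site: blocked
```

The appeal of this design is that it needs no blacklist: a domain earns blocking purely by behaving like a cross-site tracker, which is why the description above stresses that the algorithm's determination is probabilistic rather than conclusive.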
Nest as a peeping tom turkey. I sure hope that Google gives a hoot, because otherwise the hackers will be robin us! Smart Nest Thermostat: A Smart Spy in Your Home https://www.blackhat.com/us-14/briefings.html The Nest thermostat is a smart home-automation device that aims to learn your heating and cooling habits to help optimize your scheduling and power usage. Debuted in 2010, the smart Nest devices proved such a huge success that Google spent $3.2B to acquire the whole company. However, the smartness of the thermostat also breeds security vulnerabilities, as with all other smart consumer electronics. The severity of a security breach has not been fully appreciated, owing to the traditional assumption that a thermostat cannot function as more than a thermostat, even while users are enjoying its smartness. Equipped with two ARM cores, in addition to WiFi and ZigBee chips, this is no ordinary thermostat. In this presentation, we will demonstrate our ability to fully control a Nest over a USB connection within seconds (in our demonstration, we will show that we can plug in a USB device for 15 seconds and walk away with a fully rooted Nest). Although OS-level security checks are available and are claimed to be very effective in defeating various attacks, instead of attacking the higher-level software we went straight for the hardware and applied OS-guided hardware attacks. As a result, our method bypasses the existing firmware signing and allows us to backdoor the Nest software in any way we choose. With Internet access, the Nest could now become a beachhead for an external attacker. The Nest thermostat is aware of when you are home and when you are on vacation, meaning a compromise of the Nest would allow remote attackers to learn the schedule of its users. Furthermore, saved data, including WiFi credentials, would now become available to attackers.
Besides its original role of monitoring the user's behavior, the smart Nest is now a spy inside the house, fully controlled by attackers. Using the USB exploit mentioned above, we have loaded a custom-compiled kernel with debug symbols added. This enables us to explore the software protocols used by the Nest, such as Nest Weave, in order to find potential vulnerabilities that can be remotely exploited. Loading a custom kernel into the system also shows how we have obtained total control of the device, introducing the potential for rootkits, spyware, rogue services, and other network-scanning methods, further allowing the compromise of other nodes within the local network. Presented by Yier Jin, Grant Hernandez, and Daniel Buentello.
In RISKS 28.10, there was a report of a canceled presentation at BlackHat on a Tor compromise. A recent Tor blog article identifies a security breach involving malevolent nodes. From the posting: “On July 4 2014 we found a group of relays that we assume were trying to deanonymize users. They appear to have been targeting people who operate or access Tor hidden services. The attack involved modifying Tor protocol headers to do traffic confirmation attacks..." The original Tor blog article can be found at: https://blog.torproject.org/blog/tor-security-advisory-relay-early-traffic-confirmation-attack Bob Gezelter, http://www.rlgsc.com
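The essence of a traffic confirmation attack is correlation between two vantage points: a malicious relay at one end of a circuit injects a recognizable pattern into the traffic, and a colluding relay elsewhere watches for that pattern, linking what each end sees. The toy model below is my own deliberately simplified illustration; in the actual attack described in the advisory, the signal was encoded in "relay-early" cell headers, which this sketch does not attempt to reproduce.

```python
# Toy model of traffic confirmation. A malicious guard relay tags the
# first cells of a circuit with a bit pattern; a colluding relay that
# later sees the same pattern can link the user (observed by the guard)
# to the service (observed by the colluder). Illustrative only.
SIGNAL = (1, 0, 1, 1, 0, 0, 1, 0)  # pattern injected into cell metadata

def guard_tag(cells):
    """Malicious guard: overwrite a header field in the first 8 cells."""
    return [dict(c, marker=bit) for c, bit in zip(cells, SIGNAL)] + cells[8:]

def colluder_detect(cells):
    """Colluding relay: check whether the known signal is present."""
    return tuple(c.get("marker") for c in cells[:8]) == SIGNAL

circuit = [{"payload": i} for i in range(20)]
tagged = guard_tag(circuit)
assert colluder_detect(tagged)       # both ends controlled: user deanonymized
assert not colluder_detect(circuit)  # untagged traffic does not match
```

This is why the attack required the adversary to operate relays at both ends; encryption between the relays does not help, because the signal rides in metadata each relay can see and modify.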
Woody Leonhard, InfoWorld, 28 Jul 2014
Slow/dropped Wi-Fi, heating issues, and lockups persist, but Surface Pro 3's manual driver installation procedure may help
http://www.infoworld.com/t/tablets/surface-pro-3-problems-linger-despite-three-firmware-patches-in-month-247079
Lucian Constantin, InfoWorld, 29 Jul 2014 Attackers can impersonate trusted developers to gain powerful privileges on the OS, researchers from Bluebox Security say http://www.infoworld.com/d/mobile-technology/android-vulnerability-allows-malware-compromise-most-devices-and-apps-247208
FYI—I guess it's time for a USB condom? Andy Greenberg, *WiReD*, 31 Jul 2014 http://www.wired.com/2014/07/usb-security/ Computer users pass around USB sticks like silicon business cards. Although we know they often carry malware infections, we depend on antivirus scans and the occasional reformatting to keep our thumbdrives from becoming the carrier for the next digital epidemic. But the security problems with USB devices run deeper than you think: Their risk isn't just in what they carry, it's built into the core of how they work. That's the takeaway from findings security researchers Karsten Nohl and Jakob Lell plan to present next week, demonstrating a collection of proof-of-concept malicious software that highlights how the security of USB devices has long been fundamentally broken. The malware they created, called BadUSB, can be installed on a USB device to completely take over a PC, invisibly alter files installed from the memory stick, or even redirect the user's Internet traffic. Because BadUSB resides not in the flash memory storage of USB devices, but in the firmware that controls their basic functions, the attack code can remain hidden long after the contents of the device's memory would appear to the average user to be deleted. And the two researchers say there's no easy fix: The kind of compromise they're demonstrating is nearly impossible to counter without banning the sharing of USB devices or filling your port with superglue. “These problems can't be patched,'' says Nohl, who will join Lell in presenting the research at the Black Hat security conference in Las Vegas. “We're exploiting the very way that USB is designed.'' Nohl and Lell, researchers for the security consultancy SR Labs, are hardly the first to point out that USB devices can store and spread malware. But the two hackers didn't merely copy their own custom-coded infections into USB devices' memory. 
They spent months reverse engineering the firmware that runs the basic communication functions of USB devices—the controller chips that allow the devices to communicate with a PC and let users move files on and off of them. Their central finding is that USB firmware, which exists in varying forms in all USB devices, can be reprogrammed to hide attack code. “You can give it to your IT security people, they scan it, delete some files, and give it back to you telling you it's *clean*,'' says Nohl. But unless the IT guy has the reverse engineering skills to find and analyze that firmware, “the cleaning process doesn't even touch the files we're talking about.'' The problem isn't limited to thumb drives. All manner of USB devices, from keyboards and mice to smartphones, have firmware that can be reprogrammed—in addition to USB memory sticks, Nohl and Lell say they've also tested their attack on an Android handset plugged into a PC. And once a BadUSB-infected device is connected to a computer, Nohl and Lell describe a grab bag of evil tricks it can play. It can, for example, replace software being installed with a corrupted or backdoored version. It can even impersonate a USB keyboard to suddenly start typing commands. “It can do whatever you can do with a keyboard, which is basically everything a computer does,'' says Nohl. The malware can silently hijack Internet traffic too, changing a computer's DNS settings to siphon traffic to any servers it pleases. Or if the code is planted on a phone or another device with an Internet connection, it can act as a man-in-the-middle, secretly spying on communications as it relays them from the victim's machine. Most of us learned long ago not to run executable files from sketchy USB sticks. But old-fashioned USB hygiene can't stop this newer flavor of infection: Even if users are aware of the potential for attacks, ensuring that their USB's firmware hasn't been tampered with is nearly impossible.
“Nobody can trust anybody,'' says Nohl. But BadUSB's ability to spread undetectably from USB to PC and back raises questions about whether it's possible to use USB devices securely at all. “We've all known that if you give me access to your USB port, I can do bad things to your computer,'' says University of Pennsylvania computer science professor Matt Blaze. “What this appears to demonstrate is that it's also possible to go the other direction, which suggests the threat of compromised USB devices is a very serious practical problem.'' Blaze speculates that the USB attack may in fact already be common practice for the NSA. He points to a spying device known as Cottonmouth, revealed earlier this year in the leaks of Edward Snowden. The device, which hid in a USB peripheral plug, was advertised in a collection of NSA internal documents as surreptitiously installing malware on a target's machine. The exact mechanism for that USB attack wasn't described. “I wouldn't be surprised if some of the things [Nohl and Lell] discovered are what we heard about in the NSA catalogue.'' Nohl says he and Lell reached out to a Taiwanese USB device maker, whom he declines to name, and warned the company about their BadUSB research. Over a series of e-mails, the company repeatedly denied that the attack was possible. When WIRED contacted the USB Implementers Forum, a nonprofit corporation that oversees the USB standard, spokeswoman Liz Nardozza responded in a statement. “Consumers should always ensure their devices are from a trusted source and that only trusted sources interact with their devices. Consumers safeguard their personal belongings, and the same effort should be applied to protect themselves when it comes to technology.'' Nohl agrees: The short-term solution to BadUSB isn't a technical patch so much as a fundamental change in how we use USB gadgets.
To avoid the attack, all you have to do is not connect your USB device to computers you don't own or don't have good reason to trust—and don't plug untrusted USB devices into your own computer. But Nohl admits that makes the convenient slices of storage we all carry in our pockets, among many other devices, significantly less useful. “In this new way of thinking, you can't trust a USB just because its storage doesn't contain a virus. Trust must come from the fact that no one malicious has ever touched it. You have to consider a USB infected and throw it away as soon as it touches a non-trusted computer. And that's incompatible with how we use USB devices right now.'' The two researchers haven't yet decided just which of their BadUSB device attacks they'll release at Black Hat, if any. Nohl says he worries that the malicious firmware for USB sticks could quickly spread. On the other hand, he says users need to be aware of the risks. Some companies could change their USB policies, for instance, to use only a certain manufacturer's USB devices and insist that the vendor implement code-signing protections on its gadgets. Implementing that new security model will first require convincing device makers that the threat is real. The alternative, Nohl says, is to treat USB devices like hypodermic needles that can't be shared among users—a model that sows suspicion and largely defeats the devices' purpose. “Perhaps you remember once when you've connected some USB device to your computer from someone you don't completely trust. That means you can't trust your computer anymore. This is a threat on a layer that's invisible. It's a terrible kind of paranoia.''
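The underlying point, that antivirus inspects the files a drive exposes while the controller firmware sits outside anything the host normally reads, can be shown schematically. This is a deliberately simplified model of my own; the names, hashes, and structures are invented for illustration.

```python
# Simplified model of why file-level scanning cannot see firmware malware.
# A scanner hashes every visible file, but a USB device's controller
# firmware is not exposed as a file at all, so it is never examined.
KNOWN_BAD_HASHES = {"deadbeef"}  # stand-in for an AV signature database

def av_scan(device):
    """Typical AV behavior: flag visible files with known-bad hashes."""
    return [name for name, h in device["files"].items()
            if h in KNOWN_BAD_HASHES]

drive = {
    "files": {"report.pdf": "1a2b3c", "notes.txt": "4d5e6f"},  # all clean
    "firmware": "reprogrammed-to-emulate-a-keyboard",  # invisible to the scan
}
assert av_scan(drive) == []  # the scan reports the drive clean...
assert drive["firmware"] != "vendor-original"  # ...yet the controller is hostile
```

Reformatting has the same blind spot: it rewrites the storage region the host can address, while the firmware region is only reachable through vendor-specific update commands, which is exactly the channel Nohl and Lell abused.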
Antone Gonsalves, InfoWorld, 28 Jul 2014 Cyber criminals are able to launch distributed denial-of-service attacks against websites by pretending to be Google Web crawlers http://www.infoworld.com/d/security/cyber-criminals-ride-google-coattails-in-ddos-attacks-247075
Jeremy Kirk, Infoworld, 31 Jul 2014 Symantec has published recommendations for mitigating the danger. A zero-day flaw in a software driver in Symantec's widely used Endpoint Protection product may be tricky to fix. http://www.infoworld.com/d/security/no-patch-yet-zero-day-in-symantec-endpoint-protection-software-driver-247384
Loek Essers, InfoWorld, 29 Jul 2014 Authorities should act immediately to stop this new vast expansion of Facebook's data collection and user profiling, US and EU privacy groups said http://www.infoworld.com/d/security/privacy-groups-call-action-stop-facebooks-site-user-tracking-plans-247186
http://lauren.vortex.com/archive/001078.html If you ever wonder why it seems like politicians around the world appear to have decided that their political futures are best served by imposing all manner of free speech restrictions, censorship, and content controls on Web services, one might be well served by examining the extent to which Internet users feel that they've been mistreated and lied to by some services—how their trust in those services has been undermined by abusive experiments that would not likely be tolerated in other aspects of our lives. To be sure, all experiments are definitely not created equal. Most Web service providers run experiments of one sort or another, and the vast majority are both justifiable and harmless. Showing some customers a different version of a user interface, for example, does not risk real harm to users, and the same could be said for most experiments that are aimed at improving site performance and results. But when sites outright lie to you about things you care about, and that you have expected those sites to provide to you honestly, that's a wholly different story, indeed—and that applies whether or not you're paying fees for the services involved, and whether or not users are ever informed later about these shenanigans. Nor do "research use of data" clauses buried in voluminous Terms of Service text constitute informed consent or some sort of ethical exception. You'll likely recall the recent furor over revelations about Facebook experiments—in conjunction with outside experimenters—that artificially distorted the feed streams of selected users in an effort to impact their emotions, e.g., show them more negative items than normal, and see if they'll become depressed. When belated news of this experiment became known, there was widespread and much deserved criticism. 
Facebook and experimenters issued some half-hearted "sort of" apologies, mostly suggesting that anyone who was concerned just "didn't understand" the point of the experiment. You know the philosophy: "Users are just stupid losers!" ... Now comes word that online dating site OkCupid has been engaging in its own campaign of lying to users in the guise of experiments. In OkCupid's case, this revelation comes not in the form of an apology at all, but rather in a snarky, fetid posting by one of their principals, which also includes a pitch urging readers to purchase the author's book. OkCupid apparently performed a range of experiments on users—some of the harmless variety. But one in particular fell squarely into the Big Lie septic tank, involving lying to selected users by claiming that very low compatibility scores were actually extremely high scores. Then OkCupid sat back and gleefully watched the fun like teenagers peering through a keyhole into a bedroom. Now of course, OkCupid had their "data based" excuse for this. By their claimed reckoning, their algorithm was basically so inept in the first place that the only way they could calibrate it was by providing some users enormously inflated results to see how they'd behave, then studying this data against control groups who got honest results from the algorithm. Sorry, boy wonders, but that story would get you kicked out of Ethics 101 with a tattoo on your forehead that reads "Never let me near a computer again, please!" Really, this is pretty simple stuff. It doesn't take a course in comparative ethics to figure out when an experiment is harmless and when it's abusive. Many apologists for these abusive antics are well practiced in the art of conflation—that is, trying to confuse the issue by making invalid comparisons. So, you'll get the "everybody does experiments" line—which is true enough, but as noted above, the vast majority of experiments are harmless and do not involve lying to your users.
Or we'll hear "this is the same thing advertisers try to do—they're always playing with our emotions." Certainly advertisers do their utmost to influence us, but there's a big difference from the cases under discussion here. We don't usually have a pre-existing trust relationship with those advertisers of the sort we have with Web services that we use every day, and that we expect to provide us with honest results, honest answers, and honest data to the best of their ability. And naturally there's also the refrain that "these are very small differences that are often hard to even measure, and aren't important anyway, so what's the big deal?" But from an ethical standpoint the magnitude of effects is essentially irrelevant. The issue is your willingness to lie to your users and purposely distort data in the first place—when your users expect you to provide the most accurate data that you can. The saddest part though is how this all poisons the well of trust generally, and causes users to wonder when they're next being lied to or manipulated by purposely skewed or altered data. Loss of trust in this way can have lethal consequences. Already, we've seen how a relatively small number of research ethical lapses in the medical community have triggered knee-jerk legislative efforts to restrict legitimate research access to genetic and disease data—laws that could cost many lives as critical research is stalled and otherwise stymied. And underlying this (much as in the case of anti-Internet legislation we noted earlier) is politicians' willingness to play up to people's fears and confusion—and their loss of trust—in ways that ultimately may be very damaging to society at large. Trust is a fundamental aspect of our lives, both on the Net and off. Once lost, it can be impossible to ever restore to former levels.
The damage is often permanent, and can ultimately be many orders of magnitude more devastating than the events that may initially trigger a user trust crisis itself. Perhaps something to remember, the next time you're considering lying to your users in the name of experimentation. Trust me on this one.
*Daily Dot* via NNSquad http://www.dailydot.com/lol/amelia-bedelia-wikipedia-hoax/ "I jokingly edited an author's page five and a half years ago. Today, it's still there, and cited by numerous well-respected sources." - EJ Dickson - - - "Quality Control" ...
Lauren Weinstein's Blog Update, 29 Jul 2014 http://lauren.vortex.com/archive/001078.html If you ever wonder why it seems like politicians around the world appear to have decided that their political futures are best served by imposing all manner of free speech restrictions, censorship, and content controls on Web services, one might be well served by examining the extent to which Internet users feel that they've been mistreated and lied to by some services—how their trust in those services has been undermined by abusive experiments that would not likely be tolerated in other aspects of our lives. To be sure, all experiments are definitely not created equal. Most Web service providers run experiments of one sort or another, and the vast majority are both justifiable and harmless. Showing some customers a different version of a user interface, for example, does not risk real harm to users, and the same could be said for most experiments that are aimed at improving site performance and results. But when sites outright lie to you about things you care about, and that you have expected those sites to provide to you honestly, that's a wholly different story, indeed—and that applies whether or not you're paying fees for the services involved, and whether or not users are ever informed later about these shenanigans. Nor do "research use of data" clauses buried in voluminous Terms of Service text constitute informed consent or some sort of ethical exception. You'll likely recall the recent furor over revelations about Facebook experiments—in conjunction with outside experimenters—that artificially distorted the feed streams of selected users in an effort to impact their emotions, e.g., show them more negative items than normal, and see if they'll become depressed. When belated news of this experiment became known, there was widespread and much deserved criticism. 
Facebook and the experimenters issued some half-hearted "sort of" apologies, mostly suggesting that anyone who was concerned just "didn't understand" the point of the experiment. You know the philosophy: "Users are just stupid losers!" ...

Now comes word that the online dating site OkCupid has been engaging in its own campaign of lying to users in the guise of experiments. In OkCupid's case, this revelation comes not in the form of an apology at all, but rather in a snarky, fetid posting by one of its principals, which also includes a pitch urging readers to purchase the author's book.

OkCupid apparently performed a range of experiments on users—some of the harmless variety. But one in particular fell squarely into the Big Lie septic tank: lying to selected users by claiming that very low compatibility scores were actually extremely high scores. Then OkCupid sat back and gleefully watched the fun like teenagers peering through a keyhole into a bedroom.

Now of course, OkCupid had a "data based" excuse for this. By their claimed reckoning, their algorithm was basically so inept in the first place that the only way they could calibrate it was by giving some users enormously inflated results to see how they'd behave, then studying this data against control groups who got honest results from the algorithm. Sorry, boy wonders, but that story would get you kicked out of Ethics 101 with a tattoo on your forehead that reads "Never let me near a computer again, please!"

Really, this is pretty simple stuff. It doesn't take a course in comparative ethics to figure out when an experiment is harmless and when it's abusive. Many apologists for these abusive antics are well practiced in the art of conflation—that is, trying to confuse the issue by making invalid comparisons. So you'll get the "everybody does experiments" line—which is true enough, but as noted above, the vast majority of experiments are harmless and do not involve lying to your users.
Or we'll hear "this is the same thing advertisers try to do—they're always playing with our emotions." Certainly advertisers do their utmost to influence us, but there's a big difference from the cases under discussion here: we don't usually have a pre-existing trust relationship with those advertisers of the sort we have with Web services that we use every day, services we expect to provide us with honest results, honest answers, and honest data to the best of their ability.

And naturally there's also the refrain that "these are very small differences that are often hard even to measure, and aren't important anyway, so what's the big deal?" But from an ethical standpoint the magnitude of the effects is essentially irrelevant. The issue is your willingness to lie to your users and purposely distort data in the first place—when your users expect you to provide the most accurate data that you can.

The saddest part, though, is how this all poisons the well of trust generally, and causes users to wonder when they're next being lied to or manipulated by purposely skewed or altered data. Loss of trust in this way can have lethal consequences. Already, we've seen how a relatively small number of research ethics lapses in the medical community have triggered knee-jerk legislative efforts to restrict legitimate research access to genetic and disease data—laws that could cost many lives as critical research is stalled or otherwise stymied.

And underlying this (much as in the case of the anti-Internet legislation we noted earlier) is politicians' willingness to play on people's fears and confusion—and their loss of trust—in ways that may ultimately be very damaging to society at large. Trust is a fundamental aspect of our lives, both on the Net and off. Once lost, it can be impossible ever to restore to former levels.
The damage is often permanent, and can ultimately be many orders of magnitude more devastating than the events that initially triggered the crisis of user trust. Perhaps something to remember, the next time you're considering lying to your users in the name of experimentation. Trust me on this one.
Lindsey Boerma, CBS News, 30 Jul 2014 http://www.cbsnews.com/news/synthetic-id-theft-why-are-fraudsters-targeting-your-childs-identity/
I assume that you are referring to the "1,018" that should have been 10 to the 18th. I just looked at the page, and it did not have the wording around the error. Either it was a misquote, or a correction has been done and more than just the bad number was changed. When numbers look weirdly way too small, I suspect missing superscripting. [Also noted in part by Gary Hinson. PGN]
But I'm fairly certain that pirated bits don't smell any different than regular bits. I would expect the pirated bits to have a faint whiff of rum about them.
What about Bits in Heat? ;-)