Please try the URL privacy information feature, enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of the Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know if you find this useful or not. As a RISKS reader, you will probably not be surprised by what is revealed…
The risks involved in using Microsoft Word, which merely hides text that appears to have been deleted, have been covered before. Today, however, I encountered an extreme example which nearly fooled me. A computer company responded to my request for a quotation for disc drives by sending me an email with the quotation as a Word attachment. As a user of Unix and Linux systems, I find Word files mildly annoying, but I can decode most of them easily using the Unix utility word2x; this works quite well except on files which contain graphics. This time, however, the resulting text file revealed a quite different letter, intended for someone at the University of Strathclyde, for a completely different set of equipment. When I copied the file to a Windows box and used Word to view it, it did not show this at all, only the quotation which I had requested. So: one Word file is capable of producing two entirely disjoint texts. The Unix "strings" utility also revealed only the Strathclyde quotation, so it appears that the deleted text is left as ASCII, while the undeleted text is encoded in some other way. How odd. The risk: not only may you reveal information you did not want to reveal; in some cases you may reveal nothing else. Clive Page, Dept of Physics & Astronomy, University of Leicester
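The behaviour of the "strings" utility mentioned above, printing every run of printable ASCII found in a binary file, is simple enough to sketch in a few lines of Python. This is a simplified approximation, not the real utility (which handles more encodings and options), and the sample bytes below are invented for illustration:

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Return runs of printable ASCII at least min_len bytes long,
    roughly what the Unix 'strings' utility prints by default."""
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    return [m.group().decode("ascii") for m in pattern.finditer(data)]

# Invented sample: a few binary header bytes around two text fragments,
# standing in for a Word file read as raw bytes.
sample = b"\xd0\xcf\x11\xe0Quotation for disc drives\x00\x05ref 1234"
print(extract_strings(sample))  # ['Quotation for disc drives', 'ref 1234']
```

Run against a Word file read as raw bytes, a sketch like this would surface deleted passages stored as plain ASCII, even though Word itself no longer displays them.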
With Vodafone Australia, if you want to check your voicemail from a public phone (because your battery has gone flat), you just dial your own mobile number and then interrupt the voicemail greeting by pressing * for the menu. It then asks for your security code. What is my voicemail security code? I called Vodafone to ask. After they verified it was me (by a phone password), they told me that if I had never set it, the default password is 3333. Another girl in the office next to me just tried hers and it did the same thing. The risk? Need to check on your friends', your ex's, your boss's, or your children's voicemail?
Today's Washpost had a followup to the story of a tourist stricken at the FDR Memorial on the Mall. http://www.washingtonpost.com/wp-dyn/articles/A6959-2001May9.html It seems DC 911 was unable to process the call because the US Park Police on the scene had no street address for the Memorial. The 911 system didn't know how to find a major feature on the Mall. (They've now added entries to the system for at least some landmarks...) As a result, the victim waited 30 minutes (and had to be defibrillated at the scene) before a USPP helicopter finally came and picked him up. (The pilot clearly knew where FDR is sitting..) The Risk? We have replaced local dispatchers and their knowledge of geography with a dumb database of finite size, staffed by people many miles away. That database assumes every reported location has a *known* address. That's far from true; does anyone know a street address for a Metrorail station, or the T in Boston? (Irony: the emergency airshafts on Metro *do* have posted addresses!) And Richard Jewell had a similar problem in Atlanta while trying to report that bomb. Another angle: can your 911 PSAP accept a lat/long from someone hurt in an accident on a rural road, but equipped with a GPS? I've read accounts of those who gave up trying. Moral: your database had better be prepared for exceptions... like Les Earnest's race declaration.
The final step in the introduction of the Euro in most of Europe is imminent. There are just eight months left until, in a major operation sure to involve lots of chaos, national currencies will be exchanged for Euro coins and paper money. The introduction of Euro currency is the last step in a process that officially started on 1 January 1999, when the exchange rates between the various currencies comprising the Euro were fixed. Banks, stock exchanges and multinationals were quick to convert and have been doing business in Euros for over two years. This gradual introduction, in which transactions in local currency and in Euros are intermingled, gives rise to interesting errors. This is an account of the first of many I have run into, with many more sure to occur in the year to come. When I am in France, I regularly dine out in a lovely restaurant called Le Burgonde, in Nolay (Bourgogne). This January I picked up a discarded credit card receipt off the garage floor (I am very sloppy with those little pieces of paper). The slip recorded the payment for our last family visit to the restaurant and was for the grand total of FFr 3500 (about USD 490), which is pretty steep considering the restaurant doesn't even have one Michelin star. I checked my credit card statements at home, and it turned out that the restaurant bill was first debited as 560 Euro and later corrected to 560 FFr. Thursday I asked the 'patronne' about this error and the correction. She explained to me that in preparation for the Euro the restaurant was provided with a new card machine that can switch between francs and euros. She showed me how this works. The keypad of the card reader has a number of unlabeled coloured keys. The yellow one, which is apparently the correction key, has a convenient second function that switches the machine between franc and euro modes. Of course it's a key that can easily get pressed by accident when you pick up the reader from its cradle.
The patronne said that when she complained to the credit card company to have them correct the erroneous transactions, they confirmed that they have thousands of such errors every day. Paul van Keep
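The size of such a mode-switch error is easy to work out from the conversion rate, which was irrevocably fixed at FRF 6.55957 per Euro at the end of 1998. A small sketch, using the 560-unit bill from the account above:

```python
# Official conversion rate, irrevocably fixed on 31 December 1998.
FRF_PER_EUR = 6.55957

def wrong_mode_charge(amount_keyed: float) -> float:
    """FRF actually charged when an amount keyed in francs is
    processed by a terminal accidentally switched to euro mode."""
    return amount_keyed * FRF_PER_EUR

bill = 560.0  # the restaurant bill from the account above
print(f"Intended:  FRF {bill:.2f}")
print(f"Charged:   FRF {wrong_mode_charge(bill):.2f}")
print(f"Overcharge factor: {FRF_PER_EUR:.1f}x")
```

A stray press of the mode key thus multiplies every charge by roughly 6.6, which is why the credit card company sees thousands of corrections a day.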
The reality of the Euro is less than eight months away. So you'd expect companies, which have been allowed to use the Euro since 1999, to be used to it by now. But quite the opposite is true. My company (Sumatra) started pricing its products in Euros two years ago. Although our accounting was still done in guilders, we mailed out invoices using the Euro as the base currency and added prices in Dutch guilders as well. The problem is that the Euro amount is only slightly less than half the guilder amount, so a lot of our customers entered the wrong value in their systems and paid less than half of what they owed us. We managed to improve things for a while by adding a big bubble graphic pointing to the Euro total on the invoice, containing the text 'Bedrag in Euro' (amount in Euros). However, since the start of this year things got worse again. We decided to be smart and not wait until the last moment, so we switched our whole accounting over to the Euro on 29 December 2000. We now no longer send invoices in two currencies but in Euros only. Of course the old problem immediately returned. One customer whom we invoiced for EUR 1190 paid only EUR 540. This confusion is currently rampant throughout the E.U. The chances of this happening in Spain or Italy, where there are respectively two and three orders of magnitude between the local currency and the Euro, are very slim. But especially in Ireland, Germany and the Netherlands, where the difference is small (factors of 0.7, 1.9 and 2.2 respectively), a lot of incorrect payments are made. A quick guesstimate reveals enormous costs as a result of these errors. About 20% of companies have switched to Euro-based accounting. Taking just the three countries mentioned above, there are close to 300,000 companies invoicing in Euros. Let's assume that 1 in 20 payments is wrong, on, say, 500 invoices per company per year. At a cost of 8 Euros per error to correct, the cost over this year alone must be at least EUR 60M. Paul van Keep
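Paul's back-of-the-envelope estimate can be reproduced directly, together with a check of the EUR 1190 / EUR 540 payment against the fixed guilder rate (NLG 2.20371 per Euro). The invoice volume and error rate are his stated assumptions, not measured figures:

```python
# Figures as stated above; the middle two are assumptions.
companies = 300_000              # firms invoicing in Euros (IE, DE, NL)
invoices_per_year = 500          # assumed invoices per company per year
errors = invoices_per_year // 20 # assumed 1 in 20 payments go wrong
cost_per_error = 8               # assumed EUR to correct one bad payment

total = companies * errors * cost_per_error
print(f"Estimated annual cost: EUR {total:,}")  # EUR 60,000,000

# The EUR 540 payment: the customer apparently read the EUR 1190
# total as guilders and converted at the fixed rate of 2.20371.
NLG_PER_EUR = 2.20371
print(round(1190 / NLG_PER_EUR))  # 540
```

The second calculation shows the error is systematic rather than random: the wrong payment is exactly the invoice total divided by the old exchange rate.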
[From David Farber's IP distribution] Thieves R Us: Computer makers are building equipment on the assumption that we are all copyright outlaws Mike Godwin, The American Lawyer, 18 Apr 2001 Every year or two I upgrade to a newer, faster Mac laptop, and this means I go through a now-familiar ritual of hooking up the new machine to the old one through a cable or local area network and copying everything -- software, data (including my MP3 music collection), and settings -- to the new machine. So you can imagine my surprise and horror when I heard reports recently that a new standard for consumer hard drives would make this kind of copying difficult or maybe even impossible. The reports may have been at least partially wrong, as it turns out. But I think they raise important issues, and ones we ought to be thinking about now. The notion that hard drives might be hard-wired to prevent copying first collided with my consciousness in January. That's when I heard about a technology known as CPRM, which stands for Content Protection for Recordable Media. It's being developed by an industry group known as The 4C Entity, with the backing of IBM, Toshiba, and Matsushita. CPRM, it turns out, was the basis of a flood of criticism against The 4C Entity after a single news story appeared in December in a British online computer journal called The Register. Titled "Stealth Plan Puts Copy Protection Into Every Hard Drive," the article began with an arresting lead: "Hastening a rapid demise for the free copying of digital media, the next generation of hard disks is likely to come with copyright protection countermeasures built in." Okay, that got my attention. The article went on to say that standard-setting bodies were being asked to adopt CPRM for hard disks. Each disk would have a unique identifier that would help prevent unauthorized copies. The article suggested that this padlock could be built into drives as early as this summer. The reaction was quick and harsh.
By the next day, computer activists, including millionaire software entrepreneur John Gilmore, had circulated the story to mailing lists and other online forums. Gilmore called CPRM "the latest tragedy of copyright mania in the computer industry." He warned that under the standard, users "wouldn't be able to copy data from [their] own hard drive to another drive, or back it up, without permission from some third party." Industry spokesmen were quick to respond that the protesters misunderstand the technology and that their concerns are overblown. The 4C Entity said that CPRM isn't even designed or licensed for "generic hard disks." It is instead meant for use with other digital media, such as MP3 players and writeable DVDs. The group also says the technology will be optional for computer manufacturers. The standard would simply specify a common digital signal facilitating CPRM technology, but it would not mandate that the signal be present and turned on in a device. These qualifications have not mollified Gilmore and other critics, who raise the prospect that technologies like CPRM will push the digital electronics industry into producing only equipment and tools with little or no capability for unlicensed copying. Now, at this point you might say, "So what? What's wrong with designing hardware in a way that prevents you from breaking the law?" I think the best answer to this is: Nothing, so long as it doesn't block you from lawful stuff you need to do. Consider: It's certainly possible today to build a car that will never go over the legal speed limit. Perhaps speed-related injuries and fatalities are enough of a reason for the auto industry to produce low-speed cars. But then it would be impossible for drivers to do things they legally have a right to do, and often need to do, such as accelerating safely onto a freeway or accelerating to avoid a road hazard. And a car that can do those lawful things can also break the speed limit. 
Yet we don't assume that the owner of such a car is a likely speeder. Put more broadly: Technologies that empower people don't discriminate between good uses and bad. So if we build constraints into our computer systems that prevent infringement, we're also making it impossible for users to engage in all sorts of lawful copying. Except for the most ardent IP hard-liners, most people accept that it is a fair use to make private, personal copies of music and movies. But the proposed standard could prevent that sort of activity. It's worth comparing these digital rights management technologies to the copy protection schemes that were the rage back in the 1970s and early 1980s — the first decade and a half of the microcomputer revolution. Back then, plenty of commercial software — not just games, but also productivity software like word processors and spreadsheets — was coded to prevent copying. Routine tasks like backing up a hard drive and migrating to upgraded systems were an incredible chore. With backups in particular, the software discouraged activities that normal, prudent computer users ought to be doing. As you may remember (and certainly can imagine), this caused a lot of users to gripe. Some developers responded by creating programs that circumvented the copy protection. In the long term, however, most software vendors moved away from copy protection altogether; they began to rely on copyright enforcement and the customers' needs for support and upgrades to protect their interests. You generally need to own licensed copies of software in order to get support when you have problems. The vendors also began lowering the price of software so that it seemed both reasonable and equitable to pay for it rather than copy it. The primary reason that software vendors moved away from copy protection schemes is that they were confronted with competitors that offered similar products without copy protection and with lower prices. 
In other words, market forces (Microsoft was not yet considered a monopoly) pushed software companies into more rational setups and better relationships with their customers. But if copy protection is built into standard computer storage devices, whether hard drives or anything else, what competitors will I be able to turn to? Even my Macintosh PowerBook, which you might think is free from standards imposed in the Wintel world, relies on an IBM standard-issue hard disk. There's another complication. The Digital Millennium Copyright Act expressly outlaws the dissemination of tools that can be used to circumvent technologies that control access to, or copying of, copyrighted works. I can't even circumvent those technologies myself. Courts have said that it's illegal even when the underlying purpose of the copying (fair use for a classroom presentation or permitted by license) is lawful. Even if the license of my word processor allows me to make archival copies of the software, it's still illegal for me to use circumvention tools to do so. This combination of law and hardware means that there's a real possibility that someday soon I won't be able to choose between computer products that employ such schemes and those that don't. If that day comes, I don't know how the market will respond, but I know how I will. To the extent possible, I'll stop buying new computer equipment altogether. I'm guessing at least some other computer buyers will make that decision, too. This will mean I won't have the fastest and best computer equipment anymore, but I'm betting I can stay afloat by haunting used-computer stores for a long time to come. And I'll have the pleasure of knowing that the computer equipment, MP3 device, or CD burner, etc., that I'm buying doesn't have built into it the assumption that I'm a copyright infringer. Mike Godwin is chief correspondent of IP Worldwide. His e-mail address is firstname.lastname@example.org. 
[For IP archives see: http://www.interesting-people.org/ .]
In other words, what you are saying is "information can be shared with group A when box B is checked and with group C when box B is not checked, therefore information can be shared with group A or C regardless of the state of box B." That is true if and only if groups A and C have exactly the same membership. It is easy to get trapped in logical fallacies if one does not include the legal context of contractual agreements in the analysis. In reasonable jurisdictions (*), there are two classes of third parties: those with whom Citibank can unconditionally share information (e.g., law enforcement officials), and those with whom Citibank cannot lawfully share information without your permission. Checking boxes on the appropriate form would seem to be a reasonable indication of your desire to grant or deny such permission, which would affect the legal status of certain third parties. Indeed, quoted sections of the Citibank agreement would seem to acknowledge that this is the case, although other sections of the document would seem to contradict it. The Citibank privacy agreement seems to be written in no language I can understand, whether legal, plain, or otherwise... (*) Of course, I'm not making any assertion about whether Citibank is actually located in a reasonable jurisdiction...
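The fallacy can be made concrete with a toy model. The group memberships below are invented purely for illustration; the point is that "shared regardless of box B" holds only for parties in the intersection of the two groups, and holds universally only when the groups coincide:

```python
# Hypothetical third-party groups, invented for illustration.
group_a = {"affiliates", "law enforcement"}          # sharing OK when box B is checked
group_c = {"marketing partners", "law enforcement"}  # sharing OK when box B is unchecked

def may_share(party: str, box_b_checked: bool) -> bool:
    return party in (group_a if box_b_checked else group_c)

# Parties with whom sharing is allowed regardless of the state of box B:
regardless = {p for p in group_a | group_c
              if may_share(p, True) and may_share(p, False)}
print(regardless)  # {'law enforcement'}

# "Regardless of B" is exactly the intersection, not the union:
assert regardless == group_a & group_c
```

So the checkbox is not meaningless: it changes the status of every party outside the intersection.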
That reminds me of a story a friend of mine told about sixdegrees.com. For those who don't remember the service, it would let you enter a list of people you know, as well as how you know them (friend, coworker, brother, etc.), and then let you communicate with people who are two or three levels away from you. A few days after she broke up with her boyfriend, she was looking at her user preferences and decided to update them to reflect this fact. Imagine her surprise when sixdegrees cheerfully told her that the following message had just been mailed to her ex-boyfriend: "This is a notification that [my friend's name] has cancelled your status as boyfriend." The RISKS? Lending very personal information to a company and assuming that it will not do anything undesirable with it. - Nikita
The real risk here is a legacy of open-loop signaling. In this case, the billing algorithm is implicit -- there is no protocol that allows one to determine and manage the costs of connectivity. We see the same kind of problem when area codes are split instead of overlaid -- one's infrastructure changes invisibly and, for the user, perversely. These were all designed in a more naive era when it was assumed that every action had a human in the loop. What used to seem like clever ideas, such as using 1 to mean a toll (charged) call or 900 to charge to a special "card" (one's phone bill), now seem like kludges. This isn't to pick on the phone company. We see the same thing when a part-number code has implicit semantics. And, alas, the DNS is a flashpoint here -- a modern example of old thinking that rolls together disparate mechanisms. The risk is in carrying old thinking forward while the world changes. It's the antithesis of many of the "Risks" entries in that it comes from not embracing or, at least, understanding technology -- not so much the artifacts of technology but the concepts underlying it. There is a technical term for things that go bad because of external changes: bit-rot. (Not to be confused with bitrot, the gait of a semi-horse.) Autocatalysis is a related technical concept. Bob Frankston <http://www.frankston.com>
Here in Ottawa, we have an extensive bus system with many large bus stations along the routes - kind of like a poor man's above-ground subway. In these stations there are TV monitors displaying the number of minutes until the next bus arrives, with one line of information for each of the several routes serving that station. Recently, while passing one of these monitors, I noticed that it had a Windows-style pop-up box showing. This box was in turn covered by another pop-up, with a complaint from Dr. Watson that it couldn't write to some file. The monitor remained in this condition for several days, leading our group here at the office to conclude that there must be a PC in a closet at the station in need of some attention, rather than the monitors being driven from some central computer. Probably simply clicking the OK button on the second pop-up would clear the error boxes. Two weeks later, nothing has changed; the pop-ups are happily burning themselves into the monitor. I'm entertaining myself now watching to see when it finally gets fixed. (Is this what they mean by a bus error?) The risk? Deploying a field system that should work 24/7 with (apparently) no way of remotely monitoring it so that you know when it has failed, and requiring someone to physically visit the machine just to click a mouse to remedy an all too predictable error condition.
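A minimal remote-monitoring scheme would have each display report a periodic heartbeat to a central host, which flags any station that goes quiet. A sketch of the central side, with the station names and timeout invented for illustration:

```python
class HeartbeatMonitor:
    """Track the last report time of each field display and flag
    any that have gone quiet (crashed, hung on a dialog box, etc.)."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_seen = {}

    def beat(self, station: str, now: float) -> None:
        self.last_seen[station] = now

    def silent(self, now: float) -> list:
        return [s for s, t in self.last_seen.items()
                if now - t > self.timeout_s]

mon = HeartbeatMonitor(timeout_s=60.0)
mon.beat("station-07", now=0.0)
mon.beat("station-12", now=0.0)
mon.beat("station-07", now=90.0)  # station-12 has stopped reporting
print(mon.silent(now=120.0))      # flags the hung display
```

Even this much would have turned a two-week outage into a same-day service call.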
In RISKS-21.36 Kent Borg noted that his UPS failed to power his server when the mains power was restored. The problem can probably be attributed to the way Kent installed and used the UPS rather than to the UPS design. Powering down a UPS by expecting it to switch off when its batteries are completely drained, after they have signaled that they are close to empty, is incorrect due to a number of subtle race conditions (another risk). Consider first of all the scenario expected by Kent:
1. Mains power is interrupted.
2. Computer is now powered by the UPS.
3. UPS batteries signal a low condition.
4. Computer gracefully halts.
5. UPS dies as batteries are completely drained.
6. Computer switches off as UPS power is interrupted.
7. Mains power is restored.
8. Computer restarts.
Consider now the first race condition: mains power is restored between steps 4 and 5. The UPS will restore power and the computer will wait idly in its halted state. One can counter that many computers have automatic power management, so that (in step 4) they can be shut off instead of halted; when power is restored the computer will correctly restart. Enter now the second race condition: power is restored DURING the shutdown sequence (this sequence can last for several minutes on servers running database applications). The computer will now complete its shutdown sequence and switch itself completely off despite being fed with mains power. How can one handle these problems? The communication protocol of most UPSs supports a software command to switch off the UPS. Thus the last action of step 4 is to soft switch off the UPS (and consequently the computer). When power is restored, both will correctly restart. Note that the implementation of this sequence is not trivial: the UPS software I am familiar with operates as a user process; by the time the computer is ready to halt, user processes have died and filesystems have been unmounted, making it difficult to send that last command to the UPS.
Conclusions:
1) UPS software is not optional (if you are searching for an implementation, have a look at the open-source Network UPS Tools - NUT <http://www.exploits.org/nut/>).
2) The correct installation of UPS software is not trivial.
Diomidis Spinellis <http://www.eltrun.aueb.gr/dds/>
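The ordering Diomidis describes -- make the soft power-off command to the UPS the very last step, after services are stopped and filesystems flushed -- can be sketched as follows. The classes are stand-ins invented for illustration; a real implementation would speak the UPS's serial or USB protocol (as the NUT drivers do):

```python
class FakeUps:
    """Stand-in for a UPS that accepts a soft power-off command."""
    def __init__(self):
        self.off_requested = False
    def send_soft_off(self):
        self.off_requested = True  # real code: write protocol bytes to the port

class FakeSystem:
    """Stand-in for the operating system's shutdown hooks."""
    def __init__(self):
        self.log = []
    def stop_services(self):
        self.log.append("services stopped")
    def sync_filesystems(self):
        self.log.append("filesystems synced")
    def halt(self):
        self.log.append("halted")

def shutdown_sequence(ups, system):
    system.stop_services()     # graceful application shutdown
    system.sync_filesystems()  # flush and unmount
    ups.send_soft_off()        # LAST action: the UPS cuts power now and
                               # restores it when mains returns, so neither
                               # race condition can strand a halted machine
    system.halt()              # reached only if the UPS is slow to act

ups, system = FakeUps(), FakeSystem()
shutdown_sequence(ups, system)
print(ups.off_requested, system.log)
```

The hard part, as noted above, is that by this point the user process that knows how to talk to the UPS may itself already be gone.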
> the Kent who is now in the market for a UPS with
> a simple hard power switch that will stay "on".
This feature may be somewhat difficult to find. Even a simple risk assessment of such a feature makes it look like a really big Risk. If the UPS can't recharge until power returns, but immediately allows the attached equipment to start up, then there is a high probability that just a few moments later the mains power will drop out again, leaving the UPS unable to provide backup power and allowing the attached equipment to suffer an immediate power loss with no warning. I can't find a recent reference for this, but the idea still seems to make sense: the most likely time for power to cut out is just after it came back on. At one time I ran a PC as a power monitor (yes, letting it fail and restart), and this was a consistent pattern. Major outages were often preceded by several small ones, and also sometimes followed by small ones. Completely standalone outages were rather rare. Perhaps a Risks reader with more power-industry background can supplement my limited experience. The bigger lesson seems to be that power security, like data security, is a process. It is a Risk not to treat it as such. Alternatively, perhaps he can find a UPS which will restart the attached equipment AFTER it recharges. Chris Smith <email@example.com>
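Chris's closing suggestion, restarting the load only AFTER the battery has recharged, amounts to a simple policy: require both mains power and a minimum state of charge before re-enabling output. A sketch, with the 80% threshold chosen arbitrarily for illustration:

```python
def allow_output(mains_ok: bool, charge_pct: float,
                 min_charge_pct: float = 80.0) -> bool:
    """Re-enable load power only once mains is back AND the battery
    holds enough charge to ride out an immediate second outage."""
    return mains_ok and charge_pct >= min_charge_pct

print(allow_output(mains_ok=True, charge_pct=15.0))   # False: too soon
print(allow_output(mains_ok=True, charge_pct=85.0))   # True: safe to restart
print(allow_output(mains_ok=False, charge_pct=100.0)) # False: still on battery
```

This guards against exactly the pattern Chris observed: a flurry of small outages clustered around a major one.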
Please report problems with the web pages to the maintainer