It has been suggested in a previous RISKS that single-keystroke errors may just be an urban myth. Unfortunately not: in the GEORGE 3 operating system (which used to run on ICL 1900 series computers) the command to edit a file was "ed" and the command to erase a file was "er". The letters "d" and "r" are conveniently next to each other on the keyboard. Apart from this one aberration the George 3 system was a great improvement on all its successors!

Geoff Lane, University of Manchester Regional Computer Centre
For the "single-character" doubters: The Soviet Mars probe was mistakenly ordered to "commit suicide" when ground control beamed up a 20- to 30-page message in which a single character was inadvertently omitted. The change in program was required because Phobos 1 control had been transferred from a command center in the Crimea to a new facility near Moscow. "The [changes] would not have been required if the controller had been working the computer in Crimea." The commands caused the spacecraft's solar panels to point the wrong way, which would prevent the batteries from staying charged, ultimately causing the spacecraft to run out of power. [From the SF Chronicle, 10 Sept 88, item (page A11), thanks to Jack Goldberg.]
Stanford University's $115 million linear collider has been shut down after several months' efforts failed to get it running properly. Although there seems to be nothing basically wrong with the system, it is "simply so complicated that, despite the best efforts of more than 100 people, they have not been able to keep all its complex parts working together long enough to get results." Since spring they have "fought a succession of glitches and breakdowns in the machine's myriad magnets, computer controls, and focusing devices." [Source: San Francisco Chronicle, 13 September 1988, p. A2]
Recently, I was in a hotel room in the Washington, DC area. The TV in the room had a remote control that was not, as is often done, anchored to the bedside table, but did have this theft-deterrent notice on it: "This remote control will only work on Beeblebrox Hotel TVs. REMOTE WILL DAMAGE your home TV sets." The first sentence I believe; the second I absolutely do not. I cannot imagine what form the damage might take, unless the IR coming from the remote is so bright that it would burn out the sensor in an "ordinary" TV or VCR. So, is this notice a lie, to decrease the likelihood of theft? That's all I could figure, but it sure reduced my opinion of Beeblebrox Hotel for putting such a silly notice on the thing.

Why am I posting this to RISKS? Well, suppose it's true! What damage could I do with this "infrared laser"? Will it hurt my eyes? If I had an HP-28 calculator, or similar device, which uses an optical connection for the printer, could I accidentally damage that? Had I been an actual paying guest I would have harassed them about it, but I was just visiting and it was on a weekend, so I doubted I'd find out anything useful.

Technical information: The remote (and TV) were made by General Electric; it was powered by two AAA cells and seemed to be a typical IR controller, but with minimal functions. "Beeblebrox" is not the true name of the hotel ;-).

Jim Williams
Path: sq!utfyzx!utgpu!utzoo!attcan!uunet!mcvax!unido!sbsvax!greim
From: greim@sbsvax.UUCP (Michael Greim)
Date: 7 Sep 88 09:29:05 GMT
Organization: Universitaet des Saarlandes, Saarbruecken, West Germany

Here are some computer follies published some time ago.

>From Jack Campin (jack@cs.glasgow.ac.uk) on Nov 27 1987:
>I have had the doubtful privilege of looking after an ICL 3930 over the last
>couple of years. This machine has a prodigious number of ways to reboot; most
>of them are reasonably documented, but one - I think the one you use when you
>want to save an image of a set of virtual machines to disk to speed up future
>routine reloads - comes up with a prompt:
>ENTER DATE AND TIME
>and leaves you guessing. It only accepts ONE date format, and the manuals
>nowhere say what it is. I first got the answer on Monday morning on the first
>of January 1986 and it's this:
>TUE.1986/01/07__08.32
>Intuitive, eh? - I think it took about four phone calls before I found someone
>at ICL who knew what it ought to be.

>From Tim Olson (tim@amdcad.UUCP) on Dec 1 1987:
>Back to the original discussion, here is an example Alan Kay gave in a
>talk at Stanford about 2 years ago (paraphrased by me and my potentially
>faulty memory!):
>
>To test out new user interfaces, Xerox would videotape novice users
>working with the system. In one particular instance, one person was to
>perform a task that required a DoIt command at the end (from a pull-down
>menu). He kept repeating the cycle of performing everything up to the
>DoIt, pulling down the menu, going to the DoIt entry in the menu,
>muttering something under his breath, then quitting out of the menu.
>
>Upon review of the tape, the researchers discovered that the person was
>muttering "DOLT!.. I'm not a dolt". They then realized that DoIt (with
>an uppercase I) *did* look like the word "dolt" in the sans-serif font
>they had for the system. They later changed it to "doit" (lowercase 'i').
>From Clif Flynt (clif@chinet.UUCP) on 15 Dec 1987:
>In article <1943@ncr-sd.SanDiego.NCR.COM> matt@ncr-sd.SanDiego.NCR.COM (Matt Costello) writes:
>>The real problems in interface design generally occur because of
>>unstated assumptions. We had a hilarious incident occur here
>>recently...
>>
>>Imagine our surprise when a worried secretary called
>>to say that she had been able to fit only 5 of the disks into the
>>disk drive.
>
>A similar incident happened to a friend, diagnosing a floppy disk read
>problem over the phone.
>"Have you cleaned the disk?" he inquired, thinking that the heads might
>be dirty.
>"I'll try it and call you back", said the person at the other end, and
>about 10 minutes later called back to inform my friend: "I took the disk
>out of that black wrapper, and you were right, it was covered with brown
>dusty stuff. I cleaned that all off, but it still doesn't work."
>
>There is also the tale of the DP manager who wanted to make sure that
>nobody would overwrite the data on his tapes. He filled the slots where
>the write-enable rings would go with epoxy, so that no one could put a
>write-enable ring in. He didn't realize that ANYTHING in that slot will
>enable the tape for writing.
>
>Another friend of mine tells the tale of a system where people
>could log in OK as long as they sat in front of the terminal.
>If they stood in front, then their password was rejected.
>It finally turned out that two key-caps on the keyboard had been swapped.
>When people sat, they put their fingers on the 'home row' and typed,
>but standing, they typed with two fingers, and looked at the key-caps to
>see which keys to press.

Michael T. Greim, Universitaet des Saarlandes, FB 10 - Informatik (Dept. of CS)
Bau 36, Im Stadtwald 15, D-6600 Saarbruecken 11, West Germany
voice: +49 681 302 2434
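[For the curious, the undocumented GEORGE/ICL timestamp that Jack Campin quotes above can be reproduced in a few lines of Python. This is only a sketch inferred from his single example, TUE.1986/01/07__08.32: the uppercase three-letter weekday, the double underscore, and the HH.MM time are assumptions based on that one string, and the helper name is mine.]

```python
from datetime import datetime

def icl_timestamp(dt):
    # Uppercase 3-letter weekday, period, YYYY/MM/DD, two underscores, HH.MM.
    # Format inferred entirely from the one example quoted in the anecdote.
    return f"{dt:%a}".upper() + f".{dt:%Y/%m/%d}__{dt:%H.%M}"

# 7 January 1986 was indeed a Tuesday, matching the quoted string:
print(icl_timestamp(datetime(1986, 1, 7, 8, 32)))  # TUE.1986/01/07__08.32
```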
In response to the Geoff Lane message of Mon, 12 Sep 88 09:32:13 BST ("IFF and the Vincennes"), in which he stated: "a) NO combat fighter plane will ever go into combat with its IFF system operating - for obvious reasons!"

I must disagree. My understanding is that there are 3 categories in which a "bogey" will be placed, depending on the IFF, or absence of IFF:

  1> Friend
  2> Foe
  3> Unknown

If a ship finds itself in a COMBAT situation and detects an approaching aircraft which is not of category 1, then the ship will more than likely fire. The only way that an aircraft can be determined to be a FRIEND is either by having correct IFF or by visual confirmation. An aircraft with NO IFF will be of category 3 (Unknown), but if considered to be approaching in a threatening manner (the ship's determination, not the pilot's) it will quickly be changed by default to category 2 (Foe) and will be fired upon.

You might ask what is to prevent an "enemy" aircraft from being classified as a FRIEND. Elaborate measures ARE in place to prevent this from happening; HOPEFULLY they are adequate. It is because it is easier to "turn the IFF off" (becoming category 3 rather than 2) than to break the codes necessary to become category 1 that an Unknown aircraft is so likely to be fired upon in a combat situation.

So my argument is that if a friendly aircraft is operating in an area where there are also friendly forces, it had best keep its IFF "ON" or "risk" that its own forces may shoot it down. In the "heat of battle" each individual ship must make fast decisions based on the information it has available at that time (IFF). Those decisions ultimately determine the fate of the ship/crew/mission.

Case in point: Vietnam, 1972. I was the operator of MR3, Missile Radar #3 (AN/SPG-51C) on the USS Towers (DDG-9) off the coast of Haiphong Harbor, North Vietnam, at approx. 3 AM.
We were in the process of shelling various railway yards and also taking fire from 175mm shore batteries when a low-flying, high-speed aircraft was detected heading towards our ship at approx. 12 miles distance, with no IFF. The plane was immediately assumed hostile, and both MR2 and MR3 were assigned the target. MR3 "locked on" first. Two "birds" (Standard missiles) were loaded on the launcher, and the launcher was assigned to MR3. At that point the target was within only 1-2 seconds of being fired upon.

It was a US F-4 Phantom fighter. He detected our "intent to launch" and QUICKLY turned on his IFF. The launcher was unloaded (you don't want to leave live missiles on the rail when you're taking hostile fire from shore batteries!) and MR3 was then unassigned. IFF was the only thing that prevented us from firing at, and more than likely shooting down, one of our own aircraft.

I guess my point is that having your IFF "turned off" doesn't really buy you anything, at least not in a "combat" situation. Perhaps in a "sneak attack" during peacetime, when you would more than likely be given the benefit of the doubt, but not once a conflict has started. Anti-ship weapons (and their launch platforms) have become too sophisticated, their warheads too powerful, for a Captain to risk his ship and crew on being wrong.

Dennis
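[Dennis's three-category rule can be sketched as a toy decision function. This is a deliberate oversimplification for illustration only, not actual doctrine; the function and parameter names are invented here.]

```python
def classify_contact(valid_iff, visual_friend, threatening_approach):
    """Toy model of the bogey classification described above."""
    if valid_iff or visual_friend:
        return "friend"    # category 1: correct IFF or visual confirmation
    if threatening_approach:
        return "foe"       # category 3 is reclassified to category 2 by default
    return "unknown"       # category 3: no IFF, not (yet) threatening

# A fast, low aircraft with its IFF off gets engaged:
print(classify_contact(False, False, True))   # foe
# The same aircraft after switching its IFF on:
print(classify_contact(True, False, True))    # friend
```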
Clifford Johnson (RISKS-7.51) complained about the public's lack of interest in disasters vs. their interest in the lottery, even though the former's odds of occurring are much greater. I'm afraid the public's view is understandable even from the statistical point of view: the odds of winning the lottery are slim, but it does happen to somebody somewhere every week; a nuclear disaster is rare, and so far each of the few that did happen caused fewer casualties than a major airliner crash, and all the victims were concentrated in a small area. Anyone outside such an area is safe. It's this 'lumping' of consequences that distorts the calculation of statistical odds.

Amos Shapir, National Semiconductor (Israel), P.O.B. 3007, Herzlia 46104, Israel
Tel. +972 52 522261, TWX: 33691, fax: +972-52-558322
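[Amos's "it happens to somebody every week" observation is just the gap between individual and collective odds, which is easy to put numbers on. The figures below are purely illustrative, not any real lottery's:]

```python
# Illustrative numbers only: a 1-in-10-million ticket, 20 million tickets sold weekly.
p_single = 1e-7           # probability that any one ticket wins
n_tickets = 20_000_000

# Probability that *somebody* wins this week: 1 - (1 - p)^n
p_somebody = 1 - (1 - p_single) ** n_tickets
print(f"{p_somebody:.3f}")   # ~0.865: a winner most weeks, while each player's odds stay tiny
```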
In RISKS-7.49, Mark Moore writes about a public-domain software catalog containing an article claiming that MS-DOS "virus" programs do not exist. I view this with a certain glee, because for several years I've been attempting to follow up each story about viruses I hear; so far, the story has either faded into the distance, or I have been told that they have the virus isolated but won't show it to me. While I accept that people running academic computer centers, in particular, have some justification for taking a paranoid attitude (though I wasn't approaching them from within as a student), I've been telling people for some time that by covering up viruses the way they do, they are going to lead people to believe it's all a myth, which in the long run is bad. So let me just say "I told you so" to those who've been concealing the evidence.

David Dyer-Bennet, Terrabit Software
...!{rutgers!dayton | amdahl!ems | uunet!rosevax}!umn-cs!ns!ddb
ddb@Lynx.MN.Org, ...{amdahl,hpda}!bungia!viper!ddb
Fidonet 1:282/341.0, (612) 721-8967 hst/2400/1200/300
> Modified games must have some sort of mechanism (either mechanical or human)
> to pay off a win. ... jim frost

The gambling mechanism already exists in most vending machines these days, and could be easily justified as part of a video game. This mechanism is a change slot. If the game gives change under computer control, it can easily be modified to handle the payoff as well. Also, many video games these days have a 'challenge mode', where you can send in for a T-shirt if you beat a particularly hard level. Perhaps this could be considered gambling?

Peter da Silva, Ferranti International Controls Corporation
Readers seeking more information about car engine computer hacking are directed to the article "Electronics puts its foot on the gas" in the May 1988 issue of "IEEE Spectrum." The article profiles a couple of companies working in this area. While one company had reverse-engineered source code and was using in-circuit emulators to debug their changes, another was merely substituting values into an array they'd located. The tone of the article was not as negative as that quoted from "The Australian" by George Michaelson in RISKS DIGEST 7.39. A company specializing in BMWs had done a lot of business directly with dealers desperate to fix acceleration problems in some customers' cars.