The Federal Aviation Administration's Fremont (Oakland) air-traffic control center was "off-the-air" yesterday (9 Aug 1995) due to a local power outage. Power died at 7:13 a.m. as technicians were doing maintenance. Both the archaic system and the even more archaic backup system were down. The center lost all radar and radio contact with airborne planes, which is most unusual, although it has happened before. The immediate effects continued for over an hour, and the aftereffects lasted long after that as planes were kept on the ground waiting for the mess to clear. (The Oakland center covers an 18-million-square-mile circle including the northern California border and San Luis Obispo to the south, more than halfway to L.A.) Fortunately, the weather over northern California was good, and en-route flights could operate visually. [Source: David Dietz, San Francisco Chronicle, 10 Aug 1995, p. 1. David noted that in the past four months, computers have failed 20 times at the ATC centers in Chicago, Washington DC, Dallas-Fort Worth, Cleveland, and New York. Long-time RISKS readers are not surprised.]
I'm merely re-forwarding this item, whose original author is not given. I saw it in sci.engr.safety, to which it was forwarded by Kermit Carlson (email@example.com) at Fermilab.
" ...The defective units are Fluke 70 Series 2 and Fluke 77 Series 2 The problem occurs when the DMM is used to test D.C. voltages between 500 and 1000 volts. If the test leads are connected to the D.C. source terminals with reversed polarity, the display will read 0 volts. If the leads are then reversed to the correct polarity, the display remains locked on the zero volts reading. Therefore, someone could be testing a 1000 volt D.C. source, and obtain a reading of zero. If this meter is being used to verify a de-energised state, there is a risk of unintended contact with a lethal voltage...."
The memo does indicate that this false zero reading and the locked-up state have been verified by the manufacturer.
I'm not familiar with the particular model in question, but presumably it has a digital display, and hence is fair game for RISKS. For those who don't know, the odd-sounding Fluke is a genuine company name.

Mark Brader, firstname.lastname@example.org, SoftQuad Inc., Toronto
The Usenet sci.math FAQ list discusses, among other things, the question of whether it is appropriate to consider that 0^0 (where ^ means exponentiation) has a well-defined value or not. The following paragraph, arguing that 0^0 should be defined as 1, is stated to appear at page 162 of "Concrete Mathematics: A Foundation for Computer Science" (Addison Wesley, 1989, ISBN 0-201-14236-8) by Ronald L. Graham, Donald E. Knuth, and Oren Patashnik:
| Some textbooks leave the quantity x undefined, because the functions
| 0^0 and x^0 have different limiting values when 0^x decreases to 0.
| But this is a mistake. We must define x^0=1 for all x , if the
| binomial theorem is to be valid when x = 0 , y = 0 , and/or x = -y .
| The theorem is too important to be arbitrarily restricted! By
| contrast, the function 0^x is quite unimportant.
Obviously the first two quoted lines are garbled. In fact, the corresponding text in the book actually reads:
Some textbooks leave the quantity 0^0 undefined, because the functions x^0 and 0^x have different limiting values when x decreases to 0.
(Except that it uses proper exponentiation signs.)
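For the record, the book's argument can be made concrete with a short derivation (mine, not taken from the FAQ or the book):

```latex
% Why the binomial theorem forces 0^0 = 1 (a worked instance):
\[
  (x+y)^n \;=\; \sum_{k=0}^{n} \binom{n}{k} x^k y^{n-k}.
\]
% Setting x = 0, every term with k >= 1 contains a factor 0^k = 0, so
\[
  y^n \;=\; (0+y)^n \;=\; \binom{n}{0}\, 0^0\, y^n \;=\; 0^0\, y^n,
\]
% which holds for all y only if we take 0^0 = 1.
```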
I emailed the list's maintainer, Alex Lopez-Ortiz, to point out what I assumed was a typo. He replied to say that actually the error was the result of a bug — he had used a program to remove LaTeX math formatting codes from an online copy of the paragraph, and it had apparently shifted each formula by one position in the text!

Mark Brader, email@example.com, SoftQuad Inc., Toronto
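A speculative reconstruction of the bug in Python: pull each $-delimited formula out of the text, strip the math markup, and put the formulas back one slot off. The code and the rotation direction are my guesses; only the symptom matches what appeared in the FAQ.

```python
import re

# The correct sentence from the book, with LaTeX-style $...$ math:
text = ("Some textbooks leave the quantity $0^0$ undefined, because the "
        "functions $x^0$ and $0^x$ have different limiting values when "
        "$x$ decreases to 0.")

# Extract every formula and leave a placeholder where each one stood.
formulas = re.findall(r"\$([^$]*)\$", text)
template = re.sub(r"\$[^$]*\$", "{}", text)

# Correct reinsertion:
correct = template.format(*formulas)

# Buggy reinsertion: each slot gets the formula from the neighbouring
# position (rotated by one), reproducing the FAQ's garbled version.
rotated = [formulas[-1]] + formulas[:-1]
garbled = template.format(*rotated)

print(correct)  # "... quantity 0^0 ... functions x^0 and 0^x ... when x ..."
print(garbled)  # "... quantity x ... functions 0^0 and x^0 ... when 0^x ..."
```

The garbled output matches the broken FAQ text quoted above, which suggests a simple off-by-one in whatever table mapped placeholders back to formulas.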
It is not clear that the possibility of emulating hardware in software, and thus of creating software that emulates hardware so closely as to be subject only to the same failure modes as hardware, says anything at all about the potential reliability of software in other contexts. If I understand things correctly, most hardware is self-referential, intended to do a limited number of clearly specified things well. Most software, on the other hand, is directed at a real-world task or tasks ... which can easily be either incompletely understood or misunderstood altogether. It may be that what hardware is normally supposed to do is better understood in the CS world than all the strange things non-computer-scientists ask software to do. If so, software is less reliable not because of the medium, but because a vaguer definition of, and lesser understanding of, the task to be performed leaves more room for error.
Or: the reliability of software is lower because the risk of human error is greater in the tasks to which it is customarily applied.

Raymond Turney
Well, I am now one of those whose cellular-phone number was duplicated. According to AirTouch Cellular, this was discovered when the pattern of calls using my number changed (from 1-2 minutes a day to 1 hour a day!). The detection of this type of change is clearly (IMHO) a good use of technology.
While talking to the customer-service rep, I asked about the risk of digital phones to the hearing impaired (and their hearing aids), as reported in an earlier RISKS issue. She said that the European phones were higher powered and that the American phones, because of their lower power, were safe. I asked her to have this checked anyway.
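The sort of usage-pattern check AirTouch describes could be sketched roughly as follows; the function name, threshold, and averaging scheme are all my own invention, not anything the carrier disclosed:

```python
def looks_cloned(recent_daily_minutes, baseline_daily_minutes, factor=10.0):
    """Flag a line whose recent airtime is wildly out of line with its history."""
    if not recent_daily_minutes or not baseline_daily_minutes:
        return False
    baseline = sum(baseline_daily_minutes) / len(baseline_daily_minutes)
    recent = sum(recent_daily_minutes) / len(recent_daily_minutes)
    # Treat a tenfold jump over the (at least one-minute) baseline as suspect.
    return recent > factor * max(baseline, 1.0)

# 1-2 minutes a day historically, then roughly an hour a day:
print(looks_cloned([60, 55, 62], [1, 2, 1, 2, 1]))  # True
print(looks_cloned([2, 1, 2], [1, 2, 1, 2, 1]))     # False
```

Even a crude threshold like this would have caught the jump from a couple of minutes a day to an hour a day described above.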
Date: Wed, 9 Aug 1995 08:24:46 -0500
From: Susan Kinney <skinney@MAIL.MBA.WFU.EDU>
Subject: Kane v. McDonnell Douglas
To: Multiple recipients of list ISWORLD <ISWORLD@IRLEARN.UCD.IE>
Have any of you written up the Kane Carpet Co. v. McDonnell Douglas Corp. suit as a case? Below are details if you are unfamiliar with the case. Susan Kinney
MISTRIAL DECLARED IN KANE CARPET CASE, The Record, April 6, 1994; Pg. D03
Seven months into a jury trial in which the Kane Carpet Co. claimed that it had gone bust because of a computer, a Superior Court judge has declared a mistrial. Judge Mark A. Baber issued the decision this week after Kane's former president, Richard Lehmbeck, testified that he had suffered a nervous breakdown affecting his recall of events in 1989, when the computer went on line, and in 1990, when Kane folded. The onetime Secaucus-based distributor of floor coverings has claimed that a McDonnell Douglas Corp. computer system destroyed its business in a few weeks as complaints about incorrect invoices and duplicate deliveries piled up. McDonnell Douglas denies any responsibility for Kane's demise. The case is expected to be rescheduled for a new trial.

Dan Stone, Associate Professor, Univ. of Illinois, Dept. of Accountancy, Champaign, IL 61820, 217-333-4537, firstname.lastname@example.org, fax: 217-244-0902
Date: Tue, 01 Aug 1995 15:29:05 -0400
From: Dave Farber <email@example.com>
Subject: Australia next to ban PGP [unverified info ...]
From: firstname.lastname@example.org (Ross Anderson)
Australia's proposed crypto policy:
p 34: `the needs of the majority of users of the infrastructure for privacy and smaller financial transactions can be met by lower level encryption which could withstand a normal but not sophisticated attack against it. Law enforcement agencies could develop the capability to mount such sophisticated attacks. Criminals who purchased the higher level encryption products would immediately attract attention to themselves.'
He mentioned that his department considered itself a suitable repository for the government's central decrypting unit, which would decrypt traffic for local police forces. He also wants keys to be escrowed for banks and other organisations allowed to use strong crypto.
Centralising the wiretap capability with the AG is represented as a useful safeguard against abuse of power by local police forces. It would be presented as a `data recovery' facility in order to reassure the voters.
Centralisation will enable the AG to acquire the capability to use ``more sophisticated techniques in circumstances where the key cannot, for whatever reason, be recovered from escrow''.
So the technical parameters would appear to be: 40 bit keys for the masses, 56-bit escrowed keys for the banks, and a Wiener machine sitting in Orlowski's office. Belt, braces and string.
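The gap between those two key lengths is easy to quantify; the search rate below is an illustrative assumption of mine, not a figure from the policy:

```python
# A 56-bit keyspace is 2^16 = 65,536 times larger than a 40-bit one.
keys_40 = 2 ** 40
keys_56 = 2 ** 56
print(keys_56 // keys_40)  # 65536

# At a hypothetical 10^9 trial decryptions per second, the average
# (half-keyspace) exhaustive search takes:
rate = 10 ** 9
avg_40_min = keys_40 / (2 * rate) / 60             # roughly 9 minutes
avg_56_years = keys_56 / (2 * rate) / 86400 / 365  # roughly 1.1 years
```

In other words, 40-bit keys fall to a modest dedicated attacker in minutes, while 56-bit keys hold out only against attackers without serious hardware, which is presumably the point of reserving them for escrowed use.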
Curiously enough, he quotes a `Review of Long Term Cost Effectiveness of Telecommunications Interception' as saying that ``Encryption by targets of their communications (both voice and data) is not considered as a problem for TI at present in Australia'' and goes on to say that ``there has been comparatively little market for voice encryption products, although they have been readily available''.
He even produces some good arguments for the EFF, such as that much of the intelligence comes from the call log data and from calls to third parties such as airlines and hotels which are not encrypted.
He also says that the OECD countries will hold a meeting on National Cryptography Policies later this year. While at the conference, I found out that a classified meeting took place this March in Germany between the signals intelligence agencies of the developed countries, plus Australia and South Africa, at which the assembled spooks agreed to press their governments to bring in escrow and/or weak crypto.
Australia seems rather eager to lick Uncle Sam's boots on this issue. I wonder what the payoff was?
Prof. David Parnas makes a good point that it would be nice for each of us to contribute more original success or failure stories and to rely less on the popular press for our data. I'd like to expand on this idea, and, in the future, I will try to contribute some unpublished stories.
As someone who works for a company that sells fault-tolerant computers, which are designed never to fail, I've often wondered why failures of our systems (unfortunately, being human beings, neither we nor our systems are perfect) aren't reported in the press. It finally dawned on me that the answer is very simple: our customers do not want their failures reported. Failures reflect poorly on the person who selected or implemented the system, on the department responsible, and on the business and its image in the marketplace. Failures hurt customers and lose business. (After all, that's why they are buying our equipment in the first place!)
If you call a company whose systems are dead on the floor, some may admit it, but I also know of instances where they have claimed to be "updating their files" or "on holiday" or almost anything that would get you to go away and not get angry at them.
Another aspect of this denial and cover-up is that even when the fault and responsibility for the failure lie clearly within the MIS department, we, the vendor, will be blamed. I have just had to learn to accept that aspect of the computer business. (We certainly ARE to blame some of the time, but we get blamed far more often than that.)
While it is healthy for society as a whole to expect perfection and to not tolerate failure, we also have to find a way to get society to expect honest explanations and total ownership of responsibility. Having worked with Japanese, European, and American companies, my feeling is that the Japanese are the best at this, the Europeans are second, and the Americans are far behind. All of them will berate you for a failure and want a quick workaround and a solid fix, but typically only the Japanese want to know what the root cause of the failure was, and what process changes you are instituting to avoid the problem in the future. This kind of customer pressure to "do the right thing" is incredibly helpful within the company towards ensuring that we understand and truly correct the problems. I wish more customers did this.
Total Quality Management (TQM) programs seem to be a pretty good tool for fixing these problems within a company. In my personal, everyday life I've made two small changes to try to help the world at large. One is to refuse to accept the stock phrase "it was a computer error" when I receive it; the other is to ask why the failure occurred and what will be done to prevent it from happening again.
The end result of this human tendency to cover up failures, deny responsibility, and let vendors off too easily is that we are getting neither the awareness nor the understanding (to borrow from Parnas) that is possible. I'm sure that the press is able to report on only a small fraction of the actual failures that occur; I'd guess less than 1%. Airline pilots have a way to report problems anonymously; perhaps we need a similar program in the systems business.

Paul Green, Director, VOS Planning, Stratus Computer, Inc., Marlboro, MA 01752, (508) 460-2557, Paul_Green@vos.stratus.com, FAX: (508) 460-0397
The FACTS: These stories are NOT TRUE.
The app does not send any user info that the user is not aware of and has not explicitly agreed to. In particular, the app does not send any files such as config.sys, autoexec.bat, or the registry; it sends just the info that was on the screen and that the user said Yes to.
Nor does the registration application look out on the network. It only looks at the PC the app is being run on.
[My sincere apologies to Microsoft, and to Paul Saffo who was a completely innocent bystander. He did not write the piece in RISKS-17.21, and I should either have not run it or else run it without his identity, because he did not submit it to RISKS. Thanks to Brad for making the effort to clarify the issues. I always greatly appreciate first-hand accounts in RISKS. PGN]
[The FAQ is too long for RISKS, but is available for anonymous FTP in RISKS-17.24msfa . PGN]