I was entering exam grades in my class spreadsheet file. I formatted the column as percent, then started typing in the grades as usual. Sometimes I typed 89% as 89%; other times, I typed in .89.
Fortunately, before I printed the results, I sorted in decreasing order of score on the exam. I noted that one of my usual top students was missing. Checking, I found that he had the lowest score in the class.
The new version of Excel for Windows 95 allows you to type numbers like 89% as 89%, .89, OR 89. It figures that if you are typing 89, you mean percent.
Well, my student received a 100% score, so I typed a 1 (1.00). It literally gave him 1%.
The new Microsoft algorithm had run into an ambiguous case. Numbers below 1 were interpreted as decimal fractions (.89 becomes 89%), while numbers above 1 were interpreted as whole percentages (89 becomes 89%). Logically, however, the two ways of treating numbers clash at 1. Is the proper thing to do in such cases to throw up a dialog box, explain the situation, and ask the person if he or she wants to make this 1% or 100%? The dialog box could even explain the (I believe) new way of handling percentages.
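A minimal sketch of such an entry heuristic (hypothetical; this is not Microsoft's actual code) shows exactly where the boundary case lands:

    #include <iostream>

    // Hypothetical sketch of a "smart" percent-entry heuristic for a
    // cell formatted as percent.  Values of 1 or more are taken as
    // whole percentages (89 -> 89%); values below 1 are taken as
    // fractions (.89 -> 89%).  The two rules collide exactly at 1.
    double interpret_percent_entry(double typed) {
        if (typed >= 1.0)
            return typed / 100.0;   // "89" means 89%
        return typed;               // ".89" means 89%
    }

    int main() {
        for (double v : {0.89, 89.0, 1.0}) {
            std::cout << "typed " << v << " -> stored "
                      << interpret_percent_entry(v) * 100 << "%\n";
        }
        // 0.89 -> 89%, 89 -> 89%, but 1 -> 1%: a student with a
        // perfect score entered as "1" lands at the bottom of the class.
    }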
As we add more and more intelligence to software, it becomes easier to use in most cases but rather dangerous in other cases. Just as programmers need to consider boundary values when they test programs with data, designers of such intelligence will need to consider boundary values. They should then "open the box" and bring the user into the decision.

Raymond R. Panko, Decision Sciences Dept., College of BusAdmin, U. of Hawaii
About a week ago I received an electricity bill, which I paid the same day.
The following morning I received another bill. Since I usually receive each bill once per quarter I phoned the electricity company. A search on their computer revealed that the second bill had been issued because my account had been closed. The lady who answered the phone agreed to reinstate my account and told me to ignore the bill.
Yesterday I received another bill, but this time in somebody else's name - somebody I had never heard of. I phoned the electricity company again. A search on the computer showed that this person had given my address as their new address when they closed their account - a check on the paper record showed that this wasn't just a keying error at the electricity company.
It seems that to close an account you must leave a forwarding address. This person had for their own reasons (I have a plausible but unsubstantiated hypothesis) given my address. The side-effect is that my account got closed as well.
I was amused, however, to note the magic letters "CR" next to the amount on his bill: the person responsible had disappeared while the electricity company still owed him 20 pounds, because they had overestimated his previous bill.

Mark
I've seen no discussion of a risky behavior that I think is prevalent in the software industry: isolating yourself from your customers.
Most software companies make it hard to submit bug reports to them. I have tried many times to submit bugs and suggestions to Microsoft, for example, via e-mail addresses, automated "product wish" phone lines, etc. In most cases, the suggestion disappears into thin air, with no feedback to indicate that it ever reached a human. And the same bugs appear in release after release. Other companies are similar.
Manufacturers should actively solicit this kind of feedback. They even could offer some kind of certificate or "gold star" or small amount of money to the first person to report each serious bug.
The risks (of not communicating) to the manufacturer include:
The following story, which the *Providence Sunday Journal* picked up from the *Seattle Post-Intelligencer*, might be of interest to RISKS readers.  --John Mello
BELLEVUE, Wash. - While the computer industry celebrated the Internet as the Next Big Thing at the Comdex trade show in Las Vegas last week, a computer crime expert offered a cautionary tale of the perils that may await those who venture online. Richard Bernes, supervisor of the FBI's high-tech squad in San Jose, Calif., said that in 90 percent of the cases his office handles, the Internet is the unlocked window in cyberspace through which thieves crawl. "This whole phenomenon is still new and still growing," Bernes said. "In time, the dust will settle, but now it's pretty wild and woolly."
In the rush to commercialize the Internet's World Wide Web, companies are working to develop standards and software that safeguard vital corporate databases as well as financial information for electronic banking and online commerce. But hackers pose a significant threat. The potential for theft of credit card numbers transmitted over the Internet "is phenomenal at this point," Bernes said.
Bernes was the keynote speaker at a Bellevue seminar on high-tech theft, sponsored by the American Electronics Association, the Washington Software Association and the Chubb Group of Insurance Companies.
Industrial espionage and theft of intellectual property and trade secrets are also on the rise. Technology-related crimes cost U.S. businesses up to $8 billion a year, and that will likely balloon to $200 billion in four
years, according to the American Insurance Service Group. In roughly two-thirds of the cases, the theft was an inside job pulled by employees, former employees, or suppliers and vendors.
High-tech crimes are no longer exclusively of the white-collar variety. The scarcity of computer chips has fostered a nationwide rise in burglary and strong-arm robbery involving semiconductor manufacturers, computer makers and other electronics firms, Bernes said. Up until 1990, there had never been a report of an armed chip theft. This year, he said, 46 cases were reported through May 25. The incidents have been increasingly violent. In Portland's Silicon Forest, thieves wearing Halloween costumes smashed through a plate glass window in a late-night raid at a semiconductor plant. Thirteen workers were handcuffed and gagged, and $2 million worth of memory chips were lifted.
Bernes urges companies to beef up security and educate workers on how they can prevent breaches in workplace safeguards. To curtail hacking and network incursions, companies should install software "firewalls," use encryption for sensitive e-mail information and databases, and change employee passwords frequently. "We like to call chip theft the crime of the '90s," Bernes said. "I think cyberspace (theft) will be the crime of the next century."
> ... has the demo guy try to transfer a 50MB file
> from a 370 a few miles away. The catch is the PC has only a 20MB drive.
> Dr. J's argument is that if the software is written properly it will
> error out and say what's wrong. So they start the transfer and *poof*
> down goes the PC and the 370.
I can't speak to MVS, but a couple of lifetimes ago, when I was a systems programmer assigned to maintain VM/SP for my organization, one of the easiest ways to kill a VM system was to try to load a program larger than the available real memory space. This was reported and demonstrated to IBM many, many times, but as of 1981 (when I quit playing on "big iron"), it was never fixed.

Tom Zmudzinski  email@example.com

The preceding _may_ have been the greatest work of fiction since vows of fidelity were included in the French wedding ceremony. Make of it what you will.
<> ... in the S.F. Chronicle about the "Solomon Project" from New York Univ.
<> Law School. This is a grand and implausible scheme to decide legal
<> cases by computer without bothering with a jury.
Gee, someone must have been reading _Monument_ by J. G. Biggle (?).
This hoax parallels a recurring court scene wherein two legal counsels insert dime-sized disks of "evidence" into a justice machine, to pachinko-style bings and bongs [Japanese pinball machines, very lively] -- finally one inserts a super-zap citation (there being no law [in the book] that limits property taxes, so long as they are applied uniformly), and promptly wipes out his opponent's case (BONG!). Announcing this to be a hoax is the BONG!
Maybe there's too much "S.F." in the "S.F. Chronicle"? ;-D
[This calls for Al Pacinko playing an electronic samurai wargame to the death against an old-style warlord.]
While this particular project may be a hoax, the inspiring idea is certainly not. Prof. Robert Kowalski at Imperial College in London is/was closely associated with a project to encode UK immigration law (so far as I remember) as an expert system of logic program clauses.
I personally find the idea naive and, more importantly, deeply repugnant (actually `deeply repugnant' doesn't capture the half of what I think). Kowalski would, of course, disagree with me. If anyone is interested in following the debate, there have been several books (collections of philosophical papers) on the use of computers in law, which contain discussions of the implications of this sort of use of computers. They should be available in any decent jurisprudence library.

Sean Matthews, Max-Planck-Institut fuer Informatik, D-66123 Saarbruecken,
Oh dear. What a can of worms. Surely the aim of computers is to carry out those tasks that are too complex, too boring, or too expensive for humans to do?
Is there not a major RISK that the extremely labour-intensive process of justice is rapidly being priced out of reach of the majority of citizens, crumbling away one of the twin pillars of the state? Computerisation, though a RISKy process, should be considered. The mere fact that I'm d****d if I can see how to introduce automation to the judicial process doesn't mean that it shouldn't be attempted.
BTW, why on earth can't the decision be appealed? Is some sort of agreement to binding arbitration involved?

Max Hadley, Stewart Hughes Ltd., School Lane, Chandlers Ford,
Steve Witham writes:
> If you ignore issues other than the encryption (& assume it's RSA, say),
> and ignore what quantum computers might be able to do, then the difficulty
> of encrypting goes up with something like the cube of the size of the key,
> but the difficulty of cracking goes up with something like the exponential
> of the size of the key.
<> The problem is that CPU time becomes less costly at the same ratio for
<> the encrypter as for the encryption breaker.
> This assumption is wrong. ... the encrypt and break functions are of
> different complexity.
> For example the RSA public key cipher used for key exchange, when small
> public exponents are used the encryption process is of O(n^2), and
> decryption O(n^3), but the time to break is a function running in super
> polynomial time involving factoring a product of two large primes.
Both of these arguments assume something that is manifestly not true; namely that we understand why cryptosystems are hard to break and have confidence in their strength.
In fact there is only one cryptosystem whose strength rests on a firm theoretical foundation: the Vernam one-time pad. Other systems have proven to be resistant to attack *in practice* (DES is the best example), but the bases for their strength are not at all well understood. RSA in particular depends for its strength upon the intractability of a classical problem (factoring) that is *believed* but not *proven* to be hard.
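The pad itself is almost trivial to sketch (illustrative only; the fixed pad value below stands in for the truly random, never-reused key that the proof of security requires):

    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <vector>

    // Sketch of the Vernam one-time pad: ciphertext is plaintext XORed
    // with a random key of the same length, used once.  Its security is
    // provable, but only under exactly those conditions -- the constant
    // "pad" below is a placeholder, not a real random source.
    std::vector<std::uint8_t> xor_with_pad(const std::vector<std::uint8_t> &data,
                                           const std::vector<std::uint8_t> &pad) {
        std::vector<std::uint8_t> out(data.size());
        for (std::size_t i = 0; i < data.size(); ++i)
            out[i] = data[i] ^ pad[i];    // same operation encrypts and decrypts
        return out;
    }

    int main() {
        std::string msg = "attack at dawn";
        std::vector<std::uint8_t> plain(msg.begin(), msg.end());
        std::vector<std::uint8_t> pad(plain.size(), 0x5a);  // placeholder pad
        auto cipher    = xor_with_pad(plain, pad);
        auto recovered = xor_with_pad(cipher, pad);
        std::cout << std::string(recovered.begin(), recovered.end()) << '\n';
    }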
When cryptography was used only in limited contexts and not widely understood, this situation was fairly safe. If cryptography is, as some advocate, to become the guarantor of integrity of some substantial portion of our national wealth, via electronic commerce, digital cash, or other technologies, we ought to step back and ask whether we really have enough confidence in our cryptography to make this kind of a bet. A $20B US reward (or more) puts an entirely new spin on the problem of fast factoring.
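As a rough numerical illustration of the asymmetry the quoted posts rely on -- and it is only an illustration, since the shape of the factoring curve is exactly the unproven assumption at issue -- here is a sketch comparing polynomial encryption work against the usual number-field-sieve estimate. The constants are asymptotic shapes, not measured costs:

    #include <cmath>
    #include <cstdio>

    // Illustrative only: relative work factors as a function of RSA key
    // size in bits.  Encryption with a small public exponent is taken as
    // roughly quadratic in key length; breaking by factoring is taken as
    // the sub-exponential L-notation estimate for the number field sieve.
    // Neither says anything about unknown attacks.
    double encrypt_work(double bits) {
        return bits * bits;                   // ~O(n^2) modular arithmetic
    }

    double factor_work(double bits) {
        double n = bits * std::log(2.0);      // ln N for an n-bit modulus
        double c = std::cbrt(64.0 / 9.0);     // NFS constant
        return std::exp(c * std::cbrt(n) * std::pow(std::log(n), 2.0 / 3.0));
    }

    int main() {
        for (double bits : {512.0, 1024.0, 2048.0, 4096.0}) {
            std::printf("%5.0f-bit key: encrypt ~%.2e, factor ~%.2e\n",
                        bits, encrypt_work(bits), factor_work(bits));
        }
    }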
As an aircraft mechanic who once witnessed a returning aircraft run out of fuel on the taxiway with 5,000 pounds of fuel showing on the gauges, I tend to disagree. Fuel quantity instrument systems on aircraft do fail frequently and are not very accurate. Personal experiences include refueling aircraft and just happening to notice that the amount of fuel pumped did not add up properly to what the quantity gauge showed. I've also defueled and depuddled fuel tanks and the gauges still showed nearly 1,000 pounds of fuel on an aircraft that only held 7,000 pounds total. When filled, the aircraft read correctly. (An aside: The fuel low light on this particular model of aircraft is connected to the quantity gauge and not some independent sensor or float. If the gauge malfunctioned, you didn't get a low fuel warning either!)
In a nutshell, I believe that the aircrew should use their original estimates and not depend on an in-flight fuel measurement.

Jerry Whittle, Belleville, Illinois, USA  firstname.lastname@example.org
Andrew Koenig describes a flight he took, for which the flight plan gave a destination other than where he thought he was going, with the idea that it would be revised in flight.
I find I must differ about how well the risk is truly managed in this case. It seems to me that this is instead a case of the system (in this case, the 10% fuel padding rule) getting in the way and being circumvented. Would it
not be more straightforward to use a realistic fuel estimate, with a planned refueling stop should consumption prove higher than expected? It was not too long ago that a plane landed in Brussels when the passengers thought they were going to Frankfurt; certainly the filing of fictional flight plans can only serve to increase the risk of a recurrence of this kind of incident?
My point, I guess, is that a risk management system that forces the airlines to use ruses to get around (evidently) overly repressive requirements can only add risks of its own.

Jonathan Corbet, Natl Center for Atmospheric Research, Atmospheric Technology
RISKS-17.46 and 17.47 had a few mentions of programming assertions. I want to point out that not all assertions are created equal.
Asserting that a memory pointer is not NULL before dereferencing it is a good test to leave in. Usually. Unless you're inside a tight loop of heavily-used code, and you've verified that all routes to that code already guarantee that the pointer is non-NULL -- this is often a trivial proof.
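For instance (a generic sketch, not tied to any particular project), the standard C assert macro compiles away entirely when NDEBUG is defined, which is exactly the lever being argued over:

    #include <cassert>
    #include <cstddef>

    // Guard a dereference with an assertion.  Built normally, a NULL
    // pointer stops the program at the point of the error; built with
    // -DNDEBUG, the assert vanishes and costs nothing, the usual choice
    // for a hot loop whose callers are already proven to pass non-NULL.
    double sum(const double *values, std::size_t count) {
        assert(values != nullptr && "caller must supply a buffer");
        double total = 0.0;
        for (std::size_t i = 0; i < count; ++i)
            total += values[i];
        return total;
    }

    int main() {
        const double xs[] = {1.0, 2.0, 3.0};
        return sum(xs, 3) == 6.0 ? 0 : 1;
    }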
Another example: Barton Massey states that:
+ I'd rather have my hospital heart-rate monitor crash than report that my heart is still beating when it has, in fact, stopped.
That assumes that the behavior sans assertions is to continue assuming the previous state. That's a depressing possibility, but what if your choice were between crashing and continuing to display current status but without a printout? At some stage of testing, killing the program if it can't get a buffer with which to print may make sense. But closer to live testing, you'd rather keep functioning as much as possible.
Yes, there are other tools besides assertions (log to a message file, emit a sound, dial a modem and wake up the programmer). But those tend to suffer similar types of limitations, especially in embedded systems.
There is no simple rule about keeping assertions in production code that will achieve the best results in even a majority of the cases.
In my view, each sizable project needs to include analysis and design of its hierarchy of failure modes, and this is in fact what C++ is trying to formalize with its "exception" classes: classes of "exceptional conditions", detected by the program or the run-time library or the O/S, for which the response can be different (and can depend on the context in which the exceptional condition arose). It's a new feature, not particularly simple, so we'll see just how well it's utilized over the next few years.
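A small sketch of what such a hierarchy might look like (the names are invented for illustration, not taken from any of the systems discussed): the detecting code only classifies the condition, and the catching context decides whether to degrade gracefully or shut down.

    #include <iostream>
    #include <stdexcept>

    // Illustrative failure-mode hierarchy with invented names.
    struct MonitorError : std::runtime_error {
        using std::runtime_error::runtime_error;
    };
    struct PrinterUnavailable : MonitorError {   // degraded output only
        using MonitorError::MonitorError;
    };
    struct SensorFailure : MonitorError {        // readings untrustworthy
        using MonitorError::MonitorError;
    };

    void print_strip() { throw PrinterUnavailable("no print buffer"); }

    int main() {
        try {
            print_strip();
        } catch (const PrinterUnavailable &e) {
            // Keep displaying live status; log and carry on without printouts.
            std::cerr << "printing disabled: " << e.what() << '\n';
        } catch (const SensorFailure &e) {
            // Readings can no longer be trusted: alarm and stop, rather
            // than keep showing a stale heartbeat.
            std::cerr << "fatal: " << e.what() << '\n';
            return 1;
        }
    }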
Oh, yes, Peter da Silva wrote:
> Suns maze of patches might be tough to figure out, but at least they admit
> they've got bugs and do their best to fix them. Bill Gates, ruler of the
> free world, remains in his state of denial.
I don't normally rush to the defense of Microsoft, but... FYI, Microsoft publishes *extensive* bug lists, with workarounds or fixes when appropriate. It's available on Compuserve, by CD ROM (with a good search engine), and maybe via the Internet (I haven't checked). My understanding of Chairman Bill's statement was that there weren't any showstoppers in current shipping Microsoft products, and this may be true: they're all usable for most people. Recall that Sun has much more control over the computer platform, faces many fewer combinations of third-party hardware and software, and has higher prices. And probably a better-trained class of user.

Larry West  LarryWest@msn.com  (MS Exchange or UUEncoded attachments only)
It appears as though I was hasty in implicating a software bug in the crash of the X-31 aircraft. Some others have, correctly, pointed out that the software was most likely not designed to detect icing of the air data probe, and acted exactly as the programmers intended. Airspeed is generally measured by an air data computer, and subsequently passed on to other aircraft systems, such as a flight control computer. Problems with air data probes (pitot-static systems) are normally detected by the air data computer comparing readings from 2-3 different probes, but the X-31 may have had only one probe, being a small experimental research aircraft.
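A toy sketch of the cross-comparison just described (purely illustrative; real air data computers are considerably more involved): with three probes a single bad reading can be out-voted, while with one probe there is nothing to compare against.

    #include <algorithm>
    #include <array>
    #include <optional>

    // Toy illustration of probe cross-comparison, not real avionics code.
    // With three airspeed readings, take the median and reject the set if
    // the spread is implausible; with a single probe (as the X-31 may have
    // had) a bad reading cannot be detected this way at all.
    std::optional<double> voted_airspeed(std::array<double, 3> knots,
                                         double max_spread = 15.0) {
        std::sort(knots.begin(), knots.end());
        if (knots[2] - knots[0] > max_spread)
            return std::nullopt;      // probes disagree: flag air data invalid
        return knots[1];              // median reading
    }

    int main() {
        auto ok   = voted_airspeed({{212.0, 210.5, 211.2}});  // ~211 knots
        auto iced = voted_airspeed({{212.0, 210.5, 80.0}});   // rejected
        return (ok && !iced) ? 0 : 1;
    }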
One colleague noted that the probe heater (which normally prevents icing) was disconnected during this flight, and the pilot was not correctly briefed. Thus, the immediate cause of this incident was a break in communication between maintainers and the pilot.
Note that this is the only incident in recent aviation history that could be attributed to pitot tube icing, but as aircraft become more difficult to control without the assistance of computers, this information takes on added criticality.
I think we all remained in agreement, however, that there is a hole in the system design, and that the crash is directly attributable to the use of a computer to drive the control surfaces. A well-known failure of a single sensor (or set of sensors) can cause catastrophic failure of this aircraft. This might be appropriate for research aircraft, but it is instructive to remember it so that we may avoid similar actions in systems to which the public is exposed.

Andrew C. Fuller, E-Systems, ECI Div., Box 12248, St. Petersburg, FL 33733
Andrew Fuller and Martin Gomez in RISKS-17.46 continue the discussion on the X-31 airspeed sensor icing, and come to the reasonable conclusion that there wasn't a software bug. But should we Kiel-haul Fuller for suggesting that design failures are requirements failures?
Two points made by a Dryden colleague, contained in my article on the X-31 accident in RISKS-16.89, seem to be worth repeating. Firstly, airplanes with mechanico-hydraulic control systems (F4s) have been lost for similar reasons. Therefore, calling the causal sequence a `software bug' is misleading. Secondly, multiple systems may be involved: a flight control system, which implements the flight control laws; systems that send signals to wiggle control surfaces.
Even though there was a failure, it is premature to conclude it may have been a requirements failure, as Andrew Fuller seems to suggest. A failure in the interaction between systems may be a design failure, without being a requirements failure. Consider the AFTI F-16 example from John Rushby's WWW-pages
The software was not at fault -- the processors did what they were programmed to do. The requirements were fine -- the system didn't fulfill them. The design was at fault. Best to keep the three concepts separate.

Peter Ladkin
[Kiel pasa? Kielbasa nova. PGN]
SOFTWARE DOCUMENTATION AND INSPECTION
An Intensive Course for the Software Professional by David L. Parnas
McMaster University, Hamilton, Ontario
January 22-25, 1996
The Telecommunications Research Institute of Ontario (TRIO) is pleased to present an intensive course on software engineering. A previously presented three-day course on software inspection and documentation has been combined with an optional introductory day that provides a broader, more basic grounding in software design principles.
Dr. David L. Parnas is an internationally known expert and researcher in the area of software engineering. He is well known as the first person to write about "information hiding", which is the basis of modern object-oriented techniques. He has also been a leader in the development of mathematical models for software description. Dr. Parnas, whose motto is "connecting theory to practice," initiated and led the US Navy's Software Cost Reduction project. He has also been a consultant to the Canadian Atomic Energy Control Board, US Navy, IBM's Federal Systems Division and many other organizations.
He has won several "best paper" awards, has an honorary doctorate from the ETH in Zurich, and has provided education for many industrial organizations. Dr. Parnas is also a project leader for the Telecommunications Research Institute of Ontario, a Fellow of the Royal Society of Canada and the ACM, and a Senior Member of the IEEE.
Software is critical to the operation of modern companies and is frequently a key component of modern products. Some pieces of software are particularly critical; if they are not correct, the system will have serious deficiencies. Systematic inspection is essential. This course teaches a procedure for software inspection that is based on a sound mathematical model and can be carried out systematically by large groups. The software inspection procedure combines methods used at IBM, work originally done at the US Naval Research Laboratory for the A-7E aircraft, and procedures applied to the inspection of software at the Darlington Nuclear Power Generating Station. The method has been refined and enhanced by the Software Engineering Research Group at McMaster University. It can be applied to software written in any imperative programming language.
This three-day course will give participants an understanding of the way that mathematics can be used to document and analyze programs. Participants should be experienced programmers and willing to use some basic mathematics. The mathematics is classical and takes up only a few hours of the course; however, it is fundamental to the method. It is expected that the participants will be accustomed to reading code written by others, and it will be helpful if they can read Pascal.
The method is language independent. On the third day, participants inspect code written in a language that they use in their workplace. This course presents an approach to active design reviews that has the reviewers writing precise documentation about the program and explaining their documentation to an audience of other reviewers. A significant part of each day is spent using the ideas that have been presented. On the last day, participants inspect a small, working program that they bring with them from their company. In previous courses, unexpected discrepancies were found in every program that was inspected.
For more information on the course (23-25 Jan 1996) and on the optional Introduction to Software Engineering (22 Jan 1996), please contact Ms. Jan Arsenault, McMaster University, Faculty of Engineering - JHE-201A Hamilton, ON Canada L8S 4L7 email@example.com
Tel: (905) 525-9140 ext. 24910, Fax: (905) 577-9099