[Forwarded by John Rushby <RUSHBY@csl.sri.com>. PGN]

The German Pilot Union confirms that the European A320 does have navigation problems (abnormal behaviour of the altimeter), but ONLY during ONE of the COMMON LANDING manoeuvres, namely the manoeuvre carried out by the crashed A320! Lufthansa (German Airlines) admits that AIRBUS has advised all the companies operating the A320 to avoid THIS landing manoeuvre, but added that there is no doubt about the safety of the plane itself: "There is no reason to withdraw the A320 from service!" the chief pilot said.

Previous A320 accidents:
  26.06.1988  Habsheim near Mulhouse (France)  Air France        3 dead
  14.02.1990  Bangalore (India)                Indian Airlines  90 dead

But none of these accidents had any impact on the success of the A320!

Philippe
T.C.Bennett@syse.salford.ac.uk wrote:
>In Risks 13.05 Romain Kroes, Secretary General of SPAC is quoted as saying
>"... it has been clear to us that the crews were caught out by cockpit layout"
>
> Surely this statement implies that there is a problem with training
>rather than the software or the crew.

No, it means there's a problem with the cockpit. There's a great tendency in the industry to fix poor interfaces by training. The majority of a transition program, for example, is devoted to keyboarding on the flight management system, at the expense of basic skills. Indeed, the trend is to shorten training programs, not expand them.

In highly automated cockpits there is also a well-recognized problem of the pilot being too far out of the loop; however, manufacturers do not seem inclined to change current design practices (although there are some promising refinements in the 777). This was a problem even with the "second generation" aircraft of the late 1960s; it doesn't take much imagination to see how much further glass-cockpit aircraft have gone. My tolerance for "fixing it through training" extends to one (1) mistake. Airliner manufacturers, however, are carrying complete cockpit management concepts through to broad families of aircraft, which is absurd.

email@example.com writes:
>Before we speculate over the cause of the crash we ought to bear in mind that
>there are vested interests in the A320. Airbus Industrie is knocking spots
>off Boeing and MD in sales. The US FAA can easily take the side of US
>manufacturers, using "software safety" as an excuse.

But it probably won't: Boeing wants to sell its 777, which is to be a fly-by-wire airliner, and the majority of its customers will come from Europe and the Third World. The time for the FAA to take a stand was in 1987, and it didn't. You may recall Europe dragging its heels when certifying the 747-400, in apparent retribution for the lengthy FAA special conditions for the A320 (less than a third of which dealt with the FBW control system, and even then only very superficially).

>We should also be aware that the A320 is far from unique in having computer
>control. Many many commercial aircraft have computers controlling some or all
>aircraft subsystems.

The 757/767/A310/A300-600 were the first generation even to step in this direction, and, as you say, all four relegated authority to *systems*. Safety-critical systems generally had a failsafe human interface, and the airplanes all left the pilot complete flight control. The A320/A330/A340/777 are the first generation which genuinely filter ALL inputs through a computer. The Airbus FBW aircraft have hard limits; the 777 will have "soft" limits (which can be overridden if the pilot so desires). The latter simplifies design and relies on pilot competence; the former assumes pilot INcompetence, and greatly increases the complexity of the code needed. (A toy illustration of the distinction is appended after this message.)

>The 747-400 manifested a very serious auto-throttle bug, causing loss of
>engine power (See RISKS 10.04).

To which the solution was to "click it off" -- the autopilot, that is -- and assert manual control. It isn't clear whether one would always have the same success with the A320's full-authority digital engine control (FADEC -- a central component of almost all the large fans, including the -400, but Airbus exploited its features extensively in the A320).

>security by the autopilot. It could also be argued that the training of
>pilots when flying in "glass cockpits" is inadequate. However, the A320 is
>not unique in having these problems.
I differ: the A320 cockpit is unique in the secure feeling it generates. I kid you not: I've been in a lot of cockpits; the A320 comes closest to being a spaceship. It has a TOTALLY insulated feel to it. Even a veteran A320-basher like myself felt decidedly cozy when I visited one a couple of years ago.

The A320 pilots I know are aware of the threat, and make an attempt to fly "manual" as much as possible, with or without the airline's blessing. This concept has somewhat belatedly come to most airlines running third-generation airplanes as well, which makes one wonder whether ALL the automation--which is designed to be operated from takeoff to landing, for maximum profitability--is even worth it, if nobody wants to use it beneath 18,000'.

>banning the 747-400, etc), we must consider what things were like before
>computers. [...]
>
>Do RISKS readers seriously believe that ground crews with clipboards and
>pocket calculators made less mistakes than the A320 fuel control system?

Yes. The issue is not reliability, it's economy. The dispatch reliability of the A320 still lags a couple of points behind aircraft such as the 737 or 727. But it *may* be more profitable (the airplane has yet to deliver on a few promises). In the case of the A320 fuel control system, the issue was how best to eliminate the F/E position (at $2 million/year/plane in savings). As far as the fuelling mechanism goes, the ones on the L-1011 and 747 are at least as sophisticated as the one on the A320. And, when all else fails, there are always the non-tech dripsticks, regardless of airplane. :-)

IMHO, the A320 is an aircraft with a considerable degree of notoriety (much of it exaggerated), but I think it is premature for the A320-bashers to start beating the war drums until more information comes to light from the crash investigation. Airplanes do crash from time to time; not every one is tied to some catastrophic flaw in design or operation.

Robert Dorsett
Internet: firstname.lastname@example.org
UUCP: ...cs.utexas.edu!cactus.org!rdd
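P.S. To make the hard-vs.-soft limit distinction above concrete, here is a toy sketch in C. It is purely illustrative, not any manufacturer's actual control law; the load-factor values, function names, and override switch are all invented.

    /* Toy illustration of "hard" vs. "soft" envelope limits.
       All numbers and names here are invented for illustration. */
    #include <stdio.h>

    #define G_MAX  2.5   /* hypothetical load-factor ceiling */
    #define G_MIN -1.0   /* hypothetical load-factor floor   */

    /* Hard limit (Airbus-style): the clamp is unconditional. */
    double hard_limit(double commanded_g)
    {
        if (commanded_g > G_MAX) return G_MAX;
        if (commanded_g < G_MIN) return G_MIN;
        return commanded_g;
    }

    /* Soft limit (777-style): the same clamp, but the pilot can
       override it, so the design trusts pilot competence instead
       of legislating against incompetence. */
    double soft_limit(double commanded_g, int pilot_override)
    {
        return pilot_override ? commanded_g : hard_limit(commanded_g);
    }

    int main(void)
    {
        printf("hard: %.2f\n", hard_limit(3.0));     /* clamped to 2.50 */
        printf("soft: %.2f\n", soft_limit(3.0, 1));  /* passes 3.00     */
        return 0;
    }

The extra complexity that the hard-limit philosophy demands comes from everything this toy omits: deciding, for every flight regime and failure mode, what the clamp should be.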
>...Romain Kroes, Secretary General of SPAC is quoted as saying "...it has
>been clear to us that the crews were caught out by cockpit layout"

In Risks 13.06, T.C.Bennett@syse.salford.ac.uk asserts: "Surely this statement implies that there is a problem with training rather than the software or the crew."

T.C.'s statement reflects a popular, but inaccurate, claim among manufacturers of safety and security systems. The manufacturer will say, "It doesn't matter if the design naturally lures the system administrator into leaving the system in an insecure/unsafe state--we'll put something in the manual that tells him/her NOT to do that."

An example of fatal man-machine interaction that is taught in most user interface classes is that of a plane used in WWII. In said plane, when a lever in the cockpit was up, the landing gear was down. The plane experienced a number of accidents until the manufacturer reversed the convention so that lever down = landing gear down. There are certain things that humans naturally expect machines to do in response to certain actions, and training and manuals are not entirely successful at changing those expectations.

Kraig R. Meyer
Just some random thoughts on the subject, based on what's been posted here thus far:

>>1. The extension of the concept of "pilot error" to "pilot computer error"
is interesting. (Not only can those dumbos not fly a 'plane, they can't even
program a computer! :-)<<

Then how in the world did they manage to get 14,000 flight-hours between them, I wonder? It would appear that inexperience with flying in general is not a factor here.

>>3. Unless we assume that two experienced pilots simply forgot that there
are a few mountains in the way on the Strasbourg run, they would only have
selected a descent mode prematurely if they did not know where they were.<<

This would appear to be the best answer. Consider: if there was equipment failure, either A: the impact would have been far more sudden than this report indicates, or B: they would have radioed a mayday, since they'd have had enough time to do so. But since, as is pointed out:

>>4. The aircraft did not "smash into a mountainside". It crashed into trees
in an area where there is a fair amount of reasonably level terrain, as shown
by the fact that the rear section was slowed down by the tail catching in
trees.<<

it would appear that the pilots never knew that something was wrong, and all equipment appeared to be working, until it was FAR too late to do anything about it; this is the only logical reason I've seen offered as to why they were unable to get a message off before going down.

All of this is made garbage, however, if you believe the Times article, where it states: "Jean-Paul Maurel, general secretary of the French pilots' union, said the aircraft had been on a normal approach path, well above the Vosges peaks when it suddenly plunged and hit the ground in less than a minute." This scenario would indicate some sort of major systems failure, and not the fault of the pilots in question. (But then, would anyone really expect a pilots' union rep to say that their members screwed up and killed X number of people in doing so?)

>>In Risks 13.05 Romain Kroes, Secretary General of SPAC is quoted as saying
"... it has been clear to us that the crews were caught out by cockpit
layout" Surely this statement implies that there is a problem with training
rather than the software or the crew.<<

Not being a flyer, I wouldn't know, but new equipment, particularly if it has controls in places other than where you expect them to be, or a different command structure, will foul even the best of us up in a crunch. I drive a stick-shift car, for example. I'm forever poking a hole in the floorboard of my wife's automatic, trying to find the clutch when I need to stop in a hurry. Some of you, I'm sure, foul your typing when trying to get used to a new keyboard. Could this be a factor in this crash? I mean, it seems established that the plane was below a safe approach level well before the crash. Could it be that the pilots realised this, attempted corrective action in a hurry, and, the system being new to them, fouled things up?

>>Finally, I would like to quote an `expert' on aviation who in a recent TV
interview said that "the common theme to all three of the A320 crashes is
lack of altitude"<<

Then, if true, could the pilot/computer interface with regard to altitude be different enough to be causing problems in panic situations? Pure speculation here, of course.
Final question: if the tower was able to record that the plane was below a safe path, why was there no message radioed to the plane about the situation?
Between 10 and 27 December 1991, Leading Edge Products shipped up to 6000 IBM-compatible personal computer systems, each of which included among the hard-disk software the Michelangelo virus -- which wipes the hard disk on the artist's 6 March birthday, although it has some earlier destructive effects as well. [See San Francisco Chronicle, 25 Jan 1992, p. B1]
In the PC/AT environment, some adapter cards (i.e., interface circuit boards) have ROMs containing extensions to the motherboard's own BIOS ROM for dealing with the peculiarities of accessing the interface board. These programs are executed when someone accesses the device, and could contain code that does anything to the host system, such as writing a virus onto the hard disk. I've only seen these ROMs on graphics cards, but I think other devices, perhaps including printer interfaces, might have them too.

Mark Thorson
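P.S. For concreteness, here is a sketch of the header check a PC/AT BIOS performs during its option-ROM scan at boot: the conventional 55h/AAh signature, a length byte in 512-byte units, and an 8-bit checksum of zero. Any region that passes gets its code executed with full access to the machine. The function name is invented, and this is a simplified reconstruction of the convention, not any particular BIOS's code.

    /* Sketch of the PC/AT option-ROM header check. At boot the BIOS
       scans the C0000h-EFFFFh region in 2 KB steps looking for blocks
       that pass this test, then far-calls offset 3 of each one --
       running whatever code the adapter vendor put in the ROM. */
    #include <stddef.h>
    #include <stdint.h>

    int option_rom_valid(const uint8_t *rom, size_t region_size)
    {
        if (region_size < 3 || rom[0] != 0x55 || rom[1] != 0xAA)
            return 0;                        /* no 55h/AAh signature */
        size_t len = (size_t)rom[2] * 512;   /* byte 2: length / 512 */
        if (len == 0 || len > region_size)
            return 0;
        uint8_t sum = 0;
        for (size_t i = 0; i < len; i++)     /* must sum to 0 mod 256 */
            sum += rom[i];
        return sum == 0;
    }

Note that nothing in the check says anything about what the code *does*: a ROM that hooks the disk interrupt and writes a virus passes exactly as easily as a legitimate video BIOS.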
It seems to me that there are many ways in which something that the media would consider a "printer" could contain a virus, or more likely a worm. The most obvious one is if the "printer" were a console printer terminal. Some of the less obvious ones:

The printer could simply print the wrong information. This is quite easy if you know the output is coming from some well-known program. Of course, then it wouldn't act the way it was reported to... which leads to the next possibility:

My best guess is that it's more of a meme than a virus. That is to say, an idea that you plant in the heads of your enemies that causes them to waste time checking *everything*, no matter how ridiculous it may be. Once you propagate the meme of a "printer virus", people without local printer technologies have to worry every time they get a printer from somewhere else, and spend time reverse-engineering it (assuming that's even possible). I often think some of the PC and Mac viruses fall into this category more than any other.

Watch what you believe; it determines who you are.
I agree that the Iraqi printer-virus story was almost certainly a hoax. But I perceive some vagueness in your explanation ...

"The print server (on, say, DECnet) is actually a networked computer acting as a print server; accepting files from other network sources and spooling them to a printer. True, this computer/printer combo is often referred to simply as a printer, but it would not, in any case, be able to submit programs to other hosts on the net."

I disagree. With a bus-style network such as Ethernet, what is to prevent a subverted print server from mimicking one of the VAX hosts? I've seen it done. (See the sketch at the end of this message.)

"However, it is unlikely that the Iraqi air defence system was Mac based ..."

Can you substantiate this? Why is it unlikely?

"In a situation like that, the first thing to do when the system malfunctions after a new piece of equipment has been added is to take out the new part. Unless the "chip" could send out a program which could survive, in the network or system, by itself, the removal of the printer would solve the problem."

But "... a program which could survive by itself ..." is pretty much the defining characteristic of a virus: it installs itself into a new host.

"Coming from the popular press, "chip" could mean pretty much anything, so my initial reaction that the program couldn't be large enough to do much damage means little."

Indeed. In the scenario I envision, the "chip" would be a set of ROMs which fully replace the 2 megabytes of instruction storage on a PostScript printer (which is a typical size for an Adobe Level 2 system such as the LaserWriter IIf).

"However, the programming task involved would be substantial."

The NSA is capable of performing substantial programming tasks.

"The article mentions that a peripheral was used in order to circumvent normal security measures, but all systems have internal security measures as well in order to prevent a printer from "bringing down" the net."

And this security is woefully inadequate against a virus designed with full knowledge of that security.

"The program would have to be able to run/compile or be interpreted on the host, and would thus have to know what the host was, and how it was configured."

Yes, and it seems quite reasonable that a federal spook agency would have that information.

"It would also have to be sophisticated enough in avoiding detection that it could masquerade as a "bug" in the software, and persistent enough that it could avoid elimination by the reloading of software which would immediately take place in such a situation."

But viruses do this every day!

"It has also been rumoured, and by sources who should know, that the US military has sent out an RFP on the use of computer viri as computer weapons."

The military confirmed the RFP, but pointed out that the objective of the study was *defense* against such virus weapons.

-=- Andrew Klossner (uunet!tektronix!frip.WV.TEK!andrew)
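P.S. On the mimicking point: on a shared bus like Ethernet, the source address in a frame is just six bytes that the sender fills in itself, so a subverted station can claim to be anyone. A minimal sketch in C, with invented names and addresses, that only assembles the frame in memory rather than transmitting it:

    /* Why a bus network can't trust source addresses: any station
       writes its own "source" field. Names and addresses here are
       invented; the frame is only built, never sent. */
    #include <stdint.h>
    #include <string.h>

    struct eth_header {
        uint8_t  dst[6];
        uint8_t  src[6];      /* nothing on the wire verifies this */
        uint16_t ethertype;   /* network byte order on the wire */
    };

    void forge_frame(uint8_t *frame, const uint8_t dst_mac[6],
                     const uint8_t victim_mac[6], uint16_t ethertype)
    {
        struct eth_header h;
        memcpy(h.dst, dst_mac, 6);
        memcpy(h.src, victim_mac, 6);  /* claim to be the VAX host */
        h.ethertype = ethertype;
        memcpy(frame, &h, sizeof h);
        /* payload follows; a subverted print server can put this on
           the shared cable exactly like any legitimate station */
    }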
Fishy as the "printer virus" story is, one should note that the statement:

| ``A printer is a receiving device. Data does not transmit from the
| printer to the computer,'' said Winn Schwartau, executive director of the
| International Partnership Against Computer Terrorism.

is not necessarily true anymore. With the advent of printers that speak TCP/IP and hook directly to the network, the idea that a printer could wreak havoc on computers and networks is no longer such a far-fetched one.

Jerry McCollom, Hewlett-Packard, Colorado Networks Division
email@example.com
>The first question is: could a chip in a printer send a virus? Doesn't a
>printer just accept data?

I read that set of articles in my copy of USN&WR, and it made me think about what a printer does and the software that drives it. No, today's printers are vastly smarter than the dot-matrix (or band) printers of yesterday. The PostScript one sitting 20 feet away is smart enough to tell the computer feeding it that it is out of paper, and not just by raising an interface signal, but by saying "out of paper". It can also tell me exactly what it didn't like about the print commands I gave it.

>However, the "information" which comes back over the line is concerned
>strictly with whether or not the printer is ready to accept more data. It
>is never accepted as a program by the "host".

Ah, but is that true? It was not but a short while ago that an Internet worm was released whose method of entry was to overflow the data input of a certain program, causing the extra material to be written onto the stack and executed as a program. This is, I am sure, just one example of how DATA can become PROGRAM (and, as a programmer, I have had that happen too many times). (A minimal illustration appears at the end of this message.)

>The case of "network" printers, is somewhat more complex. ...
> ... True, this computer/printer combo is often referred to simply
>as a printer, but it would not, in any case, be able to submit programs
>to other hosts on the net.

Perhaps it could. Networked printers (the ones that hang on the Ethernet) are assigned Internet addresses just like systems are. They have access to all the information passing over the Ethernet, including passwords and login IDs from remote logins. Unless the network admin specifically excludes the printer addresses from being able to log in (if that can even be done, I would have to spend some time investigating the question), there is nothing to stop a printer from generating a login similar to one it sees coming across the net, dumping an executable to the system, and running it. Thinking about this, the danger of terminal servers suddenly becomes apparent: they are capable of monitoring any login coming in through them, and the systems they talk to are unable to keep them from using it to log in "by themselves" without also closing out the users.

>Third question: could a virus, installed on a chip, and entered into
>the air defence computer system, have done what it was credited with?

I don't recall if it was explicit or merely implied that the NSA knew what systems the Iraqis were using. You wouldn't need to know much about the specific software; as long as you knew they were using X (a fact a smart printer could probably determine by watching the type of packets on the net), you could screw up the display. All of this is, of course, probably highly secret and will never be known for sure. It does look like the technology is there to do something like this. Whether the NSA invested the time and effort to do it is the question.
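As promised above, a minimal illustration in C of data becoming program, in the style of the hole the Internet worm used. This is a deliberately unsafe toy, not the worm's actual code:

    /* Toy version of the worm-style hole: a fixed stack buffer
       filled with no length check. Input longer than the buffer
       overwrites the stack, including the saved return address, so
       carefully chosen DATA runs as PROGRAM. (gets() is exactly
       this kind of unbounded read; never use it.) */
    #include <stdio.h>

    void read_request(void)
    {
        char line[512];          /* fixed-size buffer on the stack */
        gets(line);              /* no bound check on the read */
        printf("got: %s\n", line);
    }

    int main(void)
    {
        read_request();          /* >512 bytes of input can redirect
                                    control into attacker-chosen bytes */
        return 0;
    }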
Noting that the Kentucky Public Service Commission recently changed its mind and decided to require per-line as well as per-call blocking (Telecommunications Reports, Jan. 6), GTE said that it is now "sadly and seriously" considering withdrawing the service. Because of the uncertain status of Caller ID in several states, GTE has chosen not to file Caller ID tariffs in Arizona, Arkansas, Idaho, Iowa, Michigan, Nebraska, New York, Ohio, Oregon, Pennsylvania, South Carolina, Texas, Washington and Wisconsin.

Professor Lance J. Hoffman, Department of EECS, The George Washington University, Washington, D.C. 20052  (202) 994-4955
To quote Christopher Stacy: "Computer privacy ("security") systems need to be flexible, human engineered, understood by their users, and have their policies advertised and in conformance with the social setting in which they are used. It's very easy for counter-productive security measures to infect a group's thinking - a real case of a "computer virus" infecting people! :)"

This was precisely the point of Multics access control -- developing effective security meant usable and "friendly" security. I do question whether the fishbowl atmosphere of ITS necessarily contributed to the success of the lab's research. (Though there were lots of valuable ideas in (and on) ITS independent of AI, and ignoring privacy issues did simplify aspects of project administration.)

An organization can choose a fishbowl model and reflect it throughout the organization. To a large extent I do like the relatively open atmosphere. But we shouldn't mistake a total lack of privacy for something fundamentally virtuous. The fishbowl approach is easy, but it misses the point of computer systems as a part of the cultural infrastructure. I must have confidence in the privacy of email in order to use it as a fundamental means of communication and not just as a novelty. I don't discuss confidential information on my cellular phone for just that reason. The cellular phone system is a prime example of lessons not learned, and a reason to periodically remind people of Camelot, oops, Multics. The Ohio case (given my guesses as to what happened) is another example where the fishbowl model is not appropriate.

In the PC world we have physical position as the model of privacy. This works nicely as an extension of one's desk. In order to share resources more effectively, we need to have faith that the shared systems will also respect our privacy. As an aside, I fear that misguided organizational policies may pressure people into contributing their machines as Linda servers. After all, a CPU is a terrible thing to waste; let's recycle them. (Luckily there are the new power-management chips like the 386SL that will allow me to argue that noncomputing is nonpolluting.)

Lotus Notes is a modern example of a system with a high regard for nonintrusive privacy. It achieves mixed success in regard to usability, but does demonstrate how one can provide Multics-like security (and then some) in a loosely coupled PC environment. (I'll have to admit some bias here, being a heavy Notes user myself.)
In RISKS 13.06, Christopher Stacy responds to an earlier remark about the fact that Multics normally protected files against access by others, [noting the ITS system ...].

A still different computer security scheme was in use at the Stanford A.I. Lab (SAIL) in the same period, with still different social conventions. Default file and screen protection was used, and protected resources were regarded as private. However, special commands made it possible for anyone to remove any security that got in their way. Such actions "left tracks", so that questions could be asked later about why it was done. If it was done without adequate justification, social or employment sanctions were applied. (A sketch of the idea follows at the end of this message.)

This approach probably would not work well in situations where users have little contact with each other, but in our laboratory environment it worked very well. We knew that there were enough computer security loopholes, and enough smart people around who knew how to exploit them, that there was no hope of strictly computer-enforced security working. By making security-breaching software available to everyone, we levelled the playing field and diverted energies that might otherwise have been expended on creative "locksmithing" into more productive channels. The un-secure commands proved useful in a number of situations in which individuals or groups had failed to foresee the need to access certain protected files whose owners were not available.

Les Earnest, 12769 Dianne Drive, Los Altos Hills, CA 94022  415 941-3984
Internet: Les@cs.Stanford.edu  UUCP: . . . decwrl!cs.Stanford.edu!Les
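P.S. A minimal sketch in C of the "override, but leave tracks" idea described above. The log path, function name, and record format are invented (SAIL's actual commands were of course system-specific); the essential property is only that the override succeeds *and* an append-only record survives for later questioning:

    /* Sketch of override-with-audit: anyone may remove protection,
       but the act is logged so questions can be asked afterward.
       Path, names, and format are invented for illustration. */
    #include <stdio.h>
    #include <time.h>

    int override_protection(const char *user, const char *file,
                            const char *justification)
    {
        FILE *log = fopen("/var/log/unprotect.log", "a");
        if (log == NULL)
            return -1;               /* no tracks, no override */
        time_t now = time(NULL);
        fprintf(log, "%.24s  %s unprotected %s: %s\n",
                ctime(&now), user, file, justification);
        fclose(log);
        /* ... here one would actually clear the protection bits ... */
        return 0;
    }

The "no tracks, no override" branch is the whole policy: the mechanism is deliberately weaker than the social sanctions standing behind it.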
Marc Shannon describes a computerized telephone call from a credit agency, misdirected by a wrong phone number to him, concerning another person's account. Of course, the call violated the privacy of the intended recipient. He concludes by wondering if we are "starting to rely on computers a bit much."

Probably true in many cases, but you don't need to involve a computer for this particular problem. Although this kind of program can dehumanize the situation even more by moving the parties further apart, a simple answering machine will suffice. I frequently get all kinds of misdirected messages containing private information left on my answering machine, from all different places calling all different people. A popular one is for one or another doctor's office to call and leave exasperated messages telling me how the payment for someone's particular medical service is overdue. Sometimes they bother to leave a return number. It doesn't seem to matter what the outgoing greeting on my answering machine says. I believe that most people leaving messages simply assume that if they reach an answering machine, they have reached the correct number.

Or perhaps it has little to do with technological risks, and is instead really a social issue. For example, maybe these kinds of incidents can be attributed to people just not caring about the quality of their work, rather than to technological naivete. Perhaps these problems contribute to each other. One risk that we run in RISKS is attempting to analyze broad social issues with our pronounced technocratic tendencies.
A gas station near my house recently added credit card readers to their gas pumps, so one no longer has to go inside to pay. At first I thought it was convenient. It even gave me a little receipt. How special.

I went home and thought I'd calculate my gas mileage. I whipped out the receipt and noticed that it was for $2.00 less than what I had put in. Hmmm, that's strange. Then I noticed that it was somebody else's card number and name on the slip. I had watched the receipt print, so I know it wasn't a case of someone not taking it after the sale. If I got someone else's receipt, maybe that person got mine. Somehow, I don't get a warm fuzzy feeling from that. Sounds like a risk to me...

Mike Keeler, Alisa Systems, Inc., firstname.lastname@example.org