I went on vacation last week, which is irrelevant, I know, except for the following. I flew on American Airlines, which had for the amusement of its passengers an in-flight magazine called "American Way," issue dated February 18, 1986. It contained an article written by Isaac Asimov, which I have reproduced here. The clever pictures by Kent Robbins that also appeared in the article were omitted for obvious technical reasons.

Robots! Beware!
by Isaac Asimov
reprinted from American Way, February 18, 1986

I invented the Three Laws of Robotics in 1942, and these laws, which are built into the robots of my science-fiction stories, prevent them from harming human beings, force them to follow orders, and make them protect themselves, in that order of importance. Of course, the robots in which I imagined these laws to exist are complex fictional robots, far more advanced than anything in real life (as yet). In contrast, the robots in industrial assembly lines right now are just computerized arms, capable of doing simple tasks over and over.

But they are capable of doing harm, and, as the inventor of the laws, I always feel guilty. Two workmen in Japan were killed by robots, and in July 1984, there was the first fatality in the United States. When the first American was killed by a robot, there were 13,000 robots in industrial use in the United States. One such accident with 13,000 robots in existence doesn't seem like a bad ratio, but it is estimated that by 1990 the number of industrial robots will reach 100,000. Will the rate of robot-caused fatalities also increase eightfold?

One may argue that accidents occur in connection with almost every mechanical device, however simple and small. Yet robots are different. Because they seem more intelligent than other machines, a fatal accident seems more likely to be the result of their malevolence. There is the feeling that intelligent machines should be more careful and avoid hurting a human being.
In short, even if I hadn't invented the Three Laws of Robotics, people would take it for granted that they ought to exist. People therefore would resent robots more than they would resent other devices that do harm; a robot should know better. If we're living in a society that is going to be more and more robotized, then a public that resents and fears robots is likely to cripple what we think of as progress.

Yet the serious accidents that have taken place so far in connection with robots have been the result, at least in part, of human carelessness. Perhaps in place of the first law we need a substitute that puts the onus on human beings. The first law -- "A robot may not injure a human being, or, through inaction, allow a human being to come to harm" -- cannot be built into the simple robots of today, so maybe it should be replaced with "A human being must not approach a robot in operation or one that may suddenly become operable." In other words, the human being must stay away.

To reinforce that, the robot must be surrounded by a barrier, ideally one with a gate that, when opened to allow human beings access, will cut off all power to the robot. Unfortunately, a barrier is sometimes insufficient. If it can be climbed or crawled under, there is nothing to prevent someone from doing that rather than taking the trouble to open the gate. (Why? It's hard to explain, but we see human beings risking their lives every day in order to save 15 seconds of time.) As a result, the barrier must not simply consist of railings or a low fence. It should consist of an elaborate fence that can be penetrated only by way of a gate.

Furthermore, people who work with robots (of the kind we have now) must be thoroughly indoctrinated with the understanding that a robot that is not in operation may have inactivity as part of its cycle, and that, if the power is not off, the robot may suddenly move into operation as another part of its cycle begins.
There might be emergencies when human beings must approach robots in operation. If so, it is unsafe to suppose that they can count on a robot continuing a motion indefinitely, no matter how often it repeats the motion. It is possible that the robot's programming calls for repeated motions of a particular sort, but eventually a set of different motions will start as another part of the cycle begins. To help guard against this, there should be clear markings on the floor and other work areas representing the extreme range of all robot movements in all directions.

Since, no matter what one does, experienced workers begin to be overconfident of their own abilities and contemptuous of the robot's ability to do harm, indoctrination should be repeated periodically, and any violation of safety rules invariably should be followed with disciplinary action.

Eventually, of course, when robots have grown sufficiently complex, the three laws may be built into them; they will then take over the responsibility for human safety, and we can relax.

====================

Isaac Asimov reports that the word "robot" is of Slavic origin and was first used in a play, "R. U. R.," written by a Czech playwright, Karel Capek, in 1921. The initials stand for Rossum's Universal Robots. In Czech the word refers to "involuntary servitude."
Expert systems are inherently untrustworthy. If you claim or imply otherwise, and if the system subsequently causes harm, and if those harmed sue you, you get what you deserve.

John Shore
Nancy Leveson <nancy@ICSD.UCI.EDU> writes:

>I have recently seen several risks contributions which assumed that humans
>are the cause of most system accidents and that if the human was somehow
>replaced by a computer and not allowed to override the computer (i.e. to
>mess things up), everything would be fine.

I really don't think anyone is proposing this. What people are proposing is the use of computers to monitor data and alert humans to potentially dangerous situations. My understanding is that even minor failures at nuclear power plants activate hundreds of alarms and warning indicators. Clearly what is needed is an expert system to analyze the mass of incoming data and summarize the situation to the human staff. It could also react more quickly than humans can, but presumably it would be designed to seek human approval before taking any drastic action.
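The alarm-triage-with-human-approval idea above can be sketched in a few lines. This is a hypothetical illustration, not any real plant's software; the severity scale, the function names, and the "controlled shutdown" action are all invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    source: str
    severity: int   # invented scale: 1 = informational ... 5 = critical
    message: str

def summarize(alarms):
    """Condense a flood of alarms into a short, prioritized summary."""
    critical = [a for a in alarms if a.severity >= 4]
    return {
        "total_alarms": len(alarms),
        "critical": [f"{a.source}: {a.message}" for a in critical],
    }

def propose_action(summary, approve):
    """Recommend a drastic action, but act only with human approval.

    `approve` stands in for the human operator: it receives the proposed
    action and returns True or False.
    """
    if summary["critical"]:
        action = "initiate controlled shutdown"
        if approve(action):
            return action
        return "awaiting operator decision"
    return "no action needed"

# A minor failure raises many alarms; the system boils them down.
alarms = [
    Alarm("coolant pump 2", 5, "flow below threshold"),
    Alarm("turbine hall", 2, "door ajar"),
    Alarm("coolant pump 2", 4, "temperature rising"),
]
s = summarize(alarms)
# s["total_alarms"] is 3, but only the 2 critical items reach the staff,
# and nothing drastic happens until a human says yes.
```

The point of the `approve` callback is exactly the design constraint in the paragraph above: the program may react faster than a human, but the drastic branch is gated on human consent.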
The following article, reproduced here in its entirety, appeared in the 25 February 1986 edition of the San Diego Tribune.

Can Nevada handle new slot gimmick?

LAS VEGAS (AP) - A slot machine promotion promising payoffs of $10 million to $15 million has been given the green light by the Nevada Gaming Commission, but not without some misgivings. Commission Chairman Paul Bible said he had reservations about slot cheats who might rig the machines for phony payoffs.

The progressive slot machine network, known as Megabucks, would be available in numerous hotels throughout Nevada and would be linked by a computer system to build up the huge jackpots. Ray Pike, an attorney for Megabucks manufacturer International Gaming Technology, said the company has made every effort to make the machine cheat-proof.

It sounds like they are using some sort of computer network to link a bunch of slot machines together. Without knowing more than the above about the system, it's hard to tell whether it has vulnerabilities that other financial networks (like ATMs) don't have. Cheating a slot machine is not the same (in most people's minds, I suspect) as stealing from a bank, so — with $10+ million at stake — I'll bet (pun intended) that someone will try to break the system soon.
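The linking scheme the article describes -- many machines feeding one growing jackpot -- can be sketched as a toy model. Everything here is invented for illustration (the class, the seed amount, the contribution rate, and the token check); nothing is known about IGT's actual protocol. The deliberately naive payout check shows the shape of the forged-claim fraud the commission worried about.

```python
class ProgressiveJackpot:
    """Toy model of a central jackpot pool shared by linked machines."""

    def __init__(self, seed=1_000_000.0, contribution_rate=0.05):
        self.pool = seed          # jackpot starts at a seed value
        self.rate = contribution_rate  # fraction of each wager fed to the pool

    def record_wager(self, amount):
        """Each linked machine forwards every wager to the central system."""
        self.pool += amount * self.rate
        return self.pool

    def payout(self, machine_id, token, expected_token):
        """Pay the pool to a winning machine -- if its claim authenticates.

        A real network would need far stronger authentication than a single
        shared token; a forged win message that passes this check drains
        the entire pool.
        """
        if token != expected_token:
            raise PermissionError(f"rejected payout claim from {machine_id}")
        won, self.pool = self.pool, 1_000_000.0  # reseed after a win
        return won

net = ProgressiveJackpot()
net.record_wager(100.0)  # pool grows by $5 on a $100 wager
```

The interesting risk is not the arithmetic but the `payout` path: the whole point of linking machines is that one winning claim collects contributions from every casino on the network, so the authentication step is where a $10+ million attack would aim.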