[Reprinted from comp.parallel Usenet newsgroup (Dennis Stevenson, moderator)]

From: firstname.lastname@example.org (Andy Glew)
Newsgroups: comp.parallel
Subject: Poetic Justice in a Machine Crash
Date: 14 Sep 90 12:56:47 GMT
Organization: Center for Reliable and High-Performance Computing,
              University of Illinois at Urbana-Champaign

Our Encore Multimax just crashed in a way that seems like poetic justice: on the console terminal appeared a fragment of somebody's paper about multiprocessor interconnection networks.  The last readable sentence was:

   "...it is difficult to build shared memory procesors "
   "^%%^$#$%#%$#it is difficult to build shared memory$%#%$#it is difficult to
   "build shared memory$%#%$#it is difficult to build shared memory$%#%$#it is
   "difficult to build shared memory%^$%^$%^$@#$@!$@#$@difficult to build
   "shared memory$%#$%$%#$%shared memory%^$%^$%^ shared memory %&$%^^%$$%^%^$
   "shared memory&*^&*&*^&*^shared memory ...

Almost too good to be true :-)  (The screen garbage and control characters are not recorded verbatim.)

  [Once again I am reminded of the prophetic nature of Vic Vyssotsky's
  Chaostron piece, reprinted in CACM, April 1984, pp. 356-7.  PGN]
>The commander is unlikely to ignore the advice of the expert system,
>unless it is clearly perverse. This means that the decision (say,
>to launch a weapon) is being taken, in practice, by the expert system.

This remark incited two responses asserting that the retaliatory decision in the case of the Vincennes was a matter of human, not mechanical, judgment, and that the computer system merely provided humans with better information than they would otherwise have had, so that the human decision becomes more meaningful.

This is ridiculous.  In the case of the Vincennes, it cannot be disputed that a mistake was made.  The Pentagon found no human responsible for it, so it must have been a mechanical error.  (Recently, Captain Rogers was awarded a special medal of honor for his courage in commanding the Vincennes through the shootdown.)

The assertion that humans have time to meaningfully evaluate the computers' information in a few minutes is patent nonsense (as proven by the Vincennes).  All humans can do is *gamble* on whether the computers (or their readings of the computers' consoles) are right, and so they act as nothing more nor less than randomizing agencies -- i.e., one would get the same level of "judgment" by card shuffling.

Such decisionmaking is de facto *governed* by computer: without computer prompts, no retaliatory decision at all would be taken; simply because of computer prompts, a virtually immediate retaliatory decision is mandated; and that decision is based entirely on the information provided by the computers.  In view of these circumscribing facts, the construction that in practice computers "make" the controlling decisions is required both as a matter of common sense and as a matter of law, under the realistic interpretive standards unhesitatingly applied by the Supreme Court in Bowsher v. Synar (1986), 106 S.Ct. 3181-3191, which ruled that the facial freedom of a proposed officer's decisionmaking was nullified by the circumscribing constraints:

   To permit the execution of the laws to be vested in an officer only
   answerable to Congress would, in practical terms, reserve in Congress
   control of the execution of the laws...  There is no merit to the
   contention that the [officer] performs his duties independently and is
   not subservient to Congress.  Although nominated by the President... the
   [officer] is removable only at the initiative of Congress... the
   political realities do not reveal that the [officer] is free from
   Congress' influence... [a]lthough he is to have 'due regard' for
   [executive rulings]...  The congressional removal power created a
   'here-and-now subservience' of the [officer] to Congress...  In
   constitutional terms, the removal powers... dictate that he will be
   subservient to Congress...  Unless we make the naive assumption that the
   economic destiny of the Nation could be safely entrusted to a mindless
   bank of computers, the powers that this Act vests in the [officer] must
   be recognized as having transcendent importance.

Just so, minimal retaliatory timelines "in practical terms" assure the dominance of military computers in Vincennes-style decisions, which gives rise to a "here-and-now subservience" of military to mechanical bodies.
>From: email@example.com (Steven Philipson)
>In Risks 10.37, Martyn Thomas <firstname.lastname@example.org> writes .....
<>The commander is unlikely to ignore the advice of the expert system,
<>unless it is clearly perverse. This means that the decision (say,
<>to launch a weapon) is being taken, in practice, by the expert system.
>
>   Neither conclusion follows.  First, the proposed system is intended
>to "advise commanders".  It is NOT stated that the system is intended
>to act on its own or to be blindly followed.  Commanders will be very
>likely to ignore the advise of such a system -- they tend to be very

There is truth in both these viewpoints.  My observations indicate that people tend to place more faith in the 'judgment' of machines than is warranted.  Steven believes that commanders are suspicious enough of expert systems that this tendency is overridden.

The real issue is getting good information to the person making the decisions (can you tell I'm in M.I.S.?), and making sure that the decision maker understands the system(s) that is (are) giving him information well enough to evaluate that information.  Most of the problems I've seen with automated systems aren't intrinsic to the form.  They're implementation errors.

 * When you put in your 'Expert System', do you train the users to evaluate
   its output, or do you train them to follow its directions?  If the people
   who MAKE the system are the ones doing the training, I'd bet on the latter.

 * Does the system tell you WHY it thinks you should do something, or does
   it just tell you to DO it?  Deciding what a person does and does not need
   to know is always a tricky task.

 * Is it obvious or explained HOW the system works?  It's much easier to
   predict bizarre, erroneous, or just less-than-optimal performance if
   you've got a good idea of how the system works.

 * When you put the system in, are you removing other sources of
   information?  Putting a TV camera on a vehicle will let you see into
   blind spots, zoom, and edit in other information.  That DOESN'T mean you
   ought to plate over the windshield...

The problems we have using technological artifacts are the same as those we have making the parts work within the artifact.  The pieces have to work with the system, and the system always includes all the parts AROUND whatever you're changing.
> We all wish to minimize risk, but we must recognize that we can
> not eliminate it; there are significant risks in human activities
> regardless of how they are undertaken.  There will be grave errors
> in military operations regardless of the technology that we use.

The captain of the Vincennes was faced with a decision that had four possible outcomes:

  1. Destroy approaching plane; plane is hostile           (CORRECT OUTCOME)
  2. Destroy approaching plane; plane is not hostile       (ERRONEOUS OUTCOME)
  3. Don't destroy approaching plane; plane is hostile     (ERRONEOUS OUTCOME)
  4. Don't destroy approaching plane; plane is not hostile (CORRECT OUTCOME)

Mr. Philipson's statements read as if erroneous outcomes such as case #2 are unavoidable given the nature of decision-support systems and human decision-making, and are qualitatively similar to errors such as case #3.  They are neither.

Military personnel have, by joining or accepting induction into armed service, accepted certain risks.  Civilians have not.  If there was any doubt whatsoever that the approaching plane was hostile, the Captain should have decided not to destroy it, accepting the risk of outcome #3, i.e., that his ship might come under attack (note: not even necessarily that his ship or crew would sustain any injuries).  He and his crew signed up for that risk when they went to sea.  The passengers of the airliner had accepted no such risk.  The Captain should have waited.

Jeff Johnson, HP Labs, Palo Alto
The following items appeared in the 9/14/90 edition of Action Line, a consumer advocacy department of the San Jose Mercury News.

  I recently shopped at PW Market at Landess and Morrill roads.  When I gave
  the clerk my check, she immediately accused me of writing a bad check --
  several, in fact.  I was totally embarrassed.  I've never bounced a check
  in my life!    -- L.T.L., San Jose

[Response by Action Line.]  There was a communication problem, says Mike McMaster, store manager, who says he's sorry you were embarrassed.  He will be sending you a letter saying so.  However, McMaster says you weren't accused of writing a bad check but misunderstood the chain's check-clearing system, which is different from most other stores'.  PW's system records bad checks by listing the last six digits of the checking account number; the _complete_ checking account number is listed in a separate booklet.  The last six digits of your account number matched one in the store's computer, which is what caught the clerk's attention.  After the clerk checked the book, however, she realized the rest of your account number did not match.  McMaster says you didn't want to listen when employees tried to explain it to you.  McMaster says PW is trying to rework its computer system so it will accept 10-digit numbers to avoid a similar situation in the future.

  [To me, it seems like there is quite a range of quality in the machines
  used to verify my credit.  Some are solid-looking hardware from NCR or IBM
  with expensive keyswitches and plasma displays.  Others are cheapo stuff
  with LED displays and calculator-style keypads.  I guess PW went with the
  system from Ma & Pa Kettle POS Systems.  mmm]
Let's pretend to be a smart password cracker who has heard of "doing things in parallel" and design a system that would allow us to crack passwords on many machines simultaneously.

One might conceive of an RPC service that accepts incoming requests for (user-name, plaintext password) pairs and tests the validity of each pair against that system's password file.  One would run this service on every workstation, and obtain parallelism by making requests to all workstations virtually simultaneously.

Example:  Let's say we have a typical office environment of a few file servers with 30 workstations.  We are security conscious, so we enable SUN's C2 security package.  Typically, these workstations would share the same password file using SUN's Yellow Pages.  All we need to do now is to install the above-mentioned service on all of these workstations and we can check about 30 passwords in parallel.  Not bad, if we could just find a way to get that service running on all workstations...

Gee...  I wonder what rpc.pwdauthd does?  Bingo!  Here is EXACTLY the desired service.  It already runs on every SUN with C2 enabled!  When C2 is enabled, the password part of the password file is hidden from users.  Rpc.pwdauthd fills the gap by providing a service whereby a (user-name, plaintext password) pair may be verified; this service runs on every workstation.  One makes RPC calls to the daemon to verify a password.  Unfortunately, anyone from anywhere may do this at any time, thus leaving the door open for distributed password cracking.  The cracker doesn't have to provide his own CPUs -- he can just use all the CPUs that are in your domain!

To make matters worse, rpc.pwdauthd does NOT generate any audit records even though it makes the appropriate calls.  This is a bug -- the code neglects to set the process audit-uid and audit-state, so all auditing is ignored.

Conclusion:  In order to hide passwords from crackers, we have instead offered crackers the IDEAL means to do what they wanted to do in the first place:  Crack YOUR passwords, using YOUR machines, possibly using ALL your workstations!!!

00c - Caveh Jalali
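The structural point here -- that an unauthenticated per-host verification service turns N workstations into roughly N-fold checking throughput -- is just an RPC fan-out.  A minimal sketch of that fan-out pattern follows; verify() is a hypothetical placeholder (the actual rpc.pwdauthd RPC interface is not reproduced here), and the only thing the sketch is meant to show is why the exposure scales with the number of hosts in the domain:

    # Sketch only: verify() is a hypothetical stand-in for one RPC round
    # trip to a per-host verification daemon; it always answers False here.
    from concurrent.futures import ThreadPoolExecutor

    def verify(host, user, guess):
        # Placeholder for asking the daemon on `host` whether (user, guess)
        # is a valid pair.
        return False

    def check_in_parallel(hosts, user, guesses):
        # One outstanding guess per workstation: roughly len(hosts)-fold
        # parallelism, which is the whole point of the article above.
        with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
            futures = {pool.submit(verify, h, user, g): g
                       for h, g in zip(hosts, guesses)}
            return [g for f, g in futures.items() if f.result()]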
Harvard has just finished installing (at some expense) a new phone system for undergraduate use.  By various threats and persuasion, HU has set up this system so that no sensible undergraduate would buy phone service from any other vendor (only HU subscribers get listed in Harvard's Centrex directory, get Centrex service, etc.).

To handle long distance (which subscribers to HU's local service automatically buy from HU as well), they have set up a Personal Authorization Code (PAC) system; to make a long-distance call from any 493- (student) phone, the caller must enter a five-digit code that is assigned at the beginning of the year.  Students cannot change their codes except by going to the phone people and asking for a new one, which costs money.

A conversation I have heard at *least* five times since my arrival two days ago:

  "Gee, there are 100,000 possible PAC codes, 00000-99999.  And there are
  6400 undergraduates [essentially all of whom live in dorms and subscribe
  to Harvard's phone service].  That makes a 1/16 chance that a randomly
  chosen code is a valid PAC code."

  "So, choosing about [fiddles with calculator] 10 or 11 codes at random
  gives a 50% chance of hitting a valid one."

The RISK is obvious.  This is obviously a frequent thread, but if an institution with so many intelligent people (like, in the CS department or Math department) can be so STUPID...

PS.  My friends at MIT tell me that their system is similar.  Is every college this asinine, or only the decent ones?  ;-)
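For what it's worth, the overheard arithmetic holds up.  A quick sketch of the calculation, assuming (as the speakers implicitly do) that the 6400 valid codes are spread uniformly over the 100,000 possibilities and that random guesses are independent:

    # Chance that one uniformly random 5-digit guess is a live PAC code.
    p_valid = 6400 / 100000
    print(p_valid)                    # 0.064, i.e. roughly 1 in 16

    # Chance that at least one of n independent random guesses hits a
    # valid code.
    for n in (10, 11):
        p_hit = 1 - (1 - p_valid) ** n
        print(n, round(p_hit, 3))     # 10 -> ~0.484, 11 -> ~0.517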
I would like to know if anyone has any experiences with, citations, and/or thoughts about DTP fraud.  With a relatively low investment in scanning devices, a person could easily use desktop publishing for copying checks, certificates of deposit, currency, diplomas, grade transcripts, licenses, etc.  A few cases have already been found, and a relatively recent article in FORBES spelled out the problem in some clear detail.

Here are some questions that come to mind about the risks.  Comments welcomed.

 * Is this truly a problem today or in the near future?

 * Are there techniques to detect scanned versus original documents?

 * Are there sufficient restrictions over the types of specialized paper
   stock that are used for currency and other financial instruments?

 * What are some of the developments in desktop publishing (or other related
   technologies) that may worsen (or even possibly contain) this type of
   copying?

 * What are some of the other types of documents that people have copied or
   might copy for illicit purposes?

 * If there is the capability to scan and/or manipulate financial
   instruments, what will happen to the national economies of nations that
   are (somewhat) dependent upon restricted opportunities to counterfeit
   these instruments?

Need some extra money?  Warm up that copying machine.

Sandy Sanford Sherizen, Natick, MA 01760 USA
PHONE (508) 655-9888, FAX 508-879-0698, MCI MAIL: SSHERIZEN (396-5782)
'Business Week', September 24, 1990, carries a story by Jeffrey Rothfeder detailing not only the all-too-common tales of mistaken identity in personal data, but a new category of "data cowboy" selling data it is illegal for employers to have:

  LOOKING FOR A JOB?  YOU MAY BE OUT BEFORE YOU GO IN
  Background checks are nosier now, and harder to fix when wrong

The lead-in tells of James Russell Wiggins being fired after six weeks on a new job when a background check turned up a drug conviction.  "It turned out that Equifax Services Inc., the company that investigated Wiggins' past, had goofed: It pulled the criminal record of James RAY Wiggins.  Wiggins was the accidental victim of increasingly common practice -- combing data bases to find information on job applicants....."

"Providing employee data to companies ... is a booming business, say data vendors.  Sales of pre-employment data are growing as much as 75% a year for some suppliers.  The larger players -- Equifax, Fidelifacts Metropolitan New York, and Apscreen, among others -- provide more than raw data.  They mix information from various data bases and produce summaries that describe the applicant's financial condition, criminal and driving records, and business relationships.  Despite the occasional mix-up, the big data companies have earned a reputation for thoroughness."

Such checks can cost as little as $100, but some employers with high turnover find even that too much, and turn to cut-rate data sellers who "assemble raw, unchecked data from credit bureaus, motor-vehicle departments, courthouses, and other sources.  Problem is, some of the information may not be legal to use when hiring.  'These data cowboys worry me,' says Apscreen Pres. Thomas C. Lawson, who fears that a backlash against them could prompt restrictions on the sale of legitimate pre-employment information...."

"Information Resource Service Co. (IRSC) in Fullerton, Calif., for example, sells lists of arrests that ended in acquittal, discharge, and no disposition...."  It's illegal for employers to have such information.  Other data bases, such as Employers Information Services Inc. (EIS) in Gretna, La., track employees who have filed for workers' compensation.  "Ernest Trent, a Pennzoil Co. roustabout who has 15 years experience, ripped his right arm on an oil rig in 1986 and collected workers' compensation.  Since then, he has been turned down for nearly 200 jobs.  'I'm blacklisted.'"  If so, it's illegal.  "Both EIS and IRSC say they can't control how their clients use the information they buy."
My piece in the September CACM (in the inside back cover section called INSIDE RISKS) has a really strange error, in which a bulleted item appears near the top of the last column, instead of at the beginning of the conclusions section, with the other bulleted items. Constantly having to live with flaky networking, I am not surprised by anything, but do not recall having an EMail message arrive with the order of paragraphs scrambled. (We have of course had numerous reports of compression algorithms going astray, lost messages, duplicate messages, etc.) In this case, BITNET could be the culprit, because my copy of the same message was fine. A context editor problem might also be suggested.