A KAL pilot said that the pilot of the downed plane may have been the indirect victim of his autopilot computer. [For those of you new to this problem, the most plausible theory thus far seems to be that the copilot had inadvertently left the autopilot set on HDG 246 instead of switching to INERTIAL when passing over the outbound checkpoint, at which point they should have changed course.] I contest the summary characterization of the inadvertent setting of the autopilot as "the most plausible theory thus far" to explain KAL007's winding course. My honest opinion is that the most plausible explanation of all the facts is that the route was calculated to stimulate Soviet radars for intelligence-gathering purposes. To this day there has been no public congressional investigation into the KAL007 incident, even though the Air Force irregularly destroyed radar tapes of the flight, and even though Japanese tapes of the incident, inter alia, strongly indicate that the course of KAL007 was deliberate. A statutorily required investigation by the National Transportation Safety Board was inexplicably cancelled, documents lost, and gag orders placed on all civilian employees. See Shootdown (Viking, 1986), by R.W. Johnson, for a thorough review of the astonishing evidence that KAL007 was in fact on an espionage mission. He carefully *eliminates* the accidental autopilot setting theory, and all other seriously-taken specific navigational-error hypotheses. If you haven't heard of KAL015's bizarre duplicity, or of the deceptive maneuvers shown on Japanese radar tapes, you haven't begun to understand the weight of affirmative evidence that KAL007's route was wilful. It is silly for books like Hersch's on the subject to dismiss the espionage hypothesis in a footnote while simply ignoring such evidence.
As for my main point re the autopilot explanation, KAL007's route was more "organic" than linear, with in-flight course changes, including remarkable curves, each of which would have had to have been mistakenly made in order for the autopilot error explanation to hold. This is a case where the characteristics of the several course "errors" do not conform, in a basic sense, to the characteristics of computer error. In particular, the 246 degree fix simply does not account for KAL007's route. This hypothesis is not "most plausible"; it's not even a possible explanation. True, a pressured and hopelessly understaffed international inquiry, before the release of most of the still pitiful evidence now published, concluded that the 246 degree hypothesis provided a possible explanation of the incident, but this was an illogical (and apparently political) statement, so plainly untrue that the international pilots' organization took the trouble to formally denounce the assertion. Perhaps computer professionals likewise have a responsibility to make it clear that the hypothesis is woefully insufficient, and amounts to little more than an application of the convenient Electronic Warfare Theorem: "If possible, get an expensive electronic device (i.e. a computer) to make a decision; if the decision turns out to be wrong, one of its tape units can be disconnected and two programmers fired in retribution." (Bourland, "Non-Decision Theory", Memorandum to the Director of Research, DOD, Dec. 1961.) In conjunction with all the other facts, Occam's razor forces me to prefer the espionage hypothesis, at least until the Congress publicly investigates the incident. In the meantime, I think it is objectively clear that all the autopilot-error-cum-sleeping explanations that have done the rounds are fatally inadequate.
If we suggest that such explanations are plausible, or seek only the "least implausible" sequence of snafus, we may erroneously squelch the rightful reasonings of those who will continue, against the political odds, to press for a public inquiry. As for the "new" hypothesis that the tardy realization of error caused a continuation of the erroneous course, this fails to account for the fact that KAL007 suddenly swooped, late in its course, even further into Russian territory, rather than away from it, as would have been the obvious reaction upon discovery of error. Nor was this due to panic, for even at the last KAL007 radioed its position in perfectly normal tones, even reporting, quite casually, in its last moments, that it had ascended to a new altitude, whereas *three* Japanese radars indicated that KAL007 completed a steep dive before making this final false report. (This followed upon consecutive false position reports for KAL007 that had been relayed by the follow-on flight KAL015, despite an order from a ground controller that KAL007 should report its position directly.)
A new venture in token-based ID authentication — and a hint of a broad new thrust in EDP security — has emerged with the first product from the Applied Information Technologies Research Center, a little-known R&D consortium organized in 1984 by a number of universities and leading U.S. vendors of information service products. AITRC, in Columbus, Ohio, is about to beta test a credit card-sized calculator which implements a challenge-response ID authentication. A software module on a host CPU sends a 7-digit challenge to a remote terminal, the user keys that number into his "calculator," presses a special authentication button to process that number (and a token-specific seed) through a one-way crypto algorithm — then reads off the 7-digit response code on the calculator's LCD screen. That number, transmitted to the host, verifies the token as one issued to a specific user. Tokens (also called "hand-held password generators") are said by IBM to increase the certainty of end-user authentication by at least a full order of magnitude over mere passwords. Tokens implement the second of the three ID authentication options (something known, something held, something inherent to the user) and have drawn rising interest as the relative frailty of classic password systems becomes apparent and risks proliferate. The two leading vendors, Security Dynamics in Cambridge, Ma., and Sytek of Mountain View, Ca., are NSA-certified — so their tokens can be integrated into access control systems for secure DoD computers — and SD last week obtained a GSA Schedule contract which allows no-bid purchases by federal agencies. But the AITRC development may mark tokens even more forcefully as the future direction for the industry.
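The challenge-response exchange described above can be sketched in a few lines. The article does not name the one-way crypto algorithm, so SHA-256 stands in for it here, and the seed and challenge values are hypothetical:

```python
import hashlib

def respond(challenge: str, seed: str) -> str:
    """Run the challenge plus the token-specific seed through a
    one-way function and return a 7-digit decimal response code,
    as a hand-held token would show on its LCD."""
    digest = hashlib.sha256((seed + challenge).encode()).hexdigest()
    # Reduce the hash to 7 decimal digits for the display.
    return f"{int(digest, 16) % 10_000_000:07d}"

# Host side: the host knows the seed registered for this user's token,
# so it can compute the expected response independently.
challenge = "4819203"             # 7-digit challenge sent to the terminal
seed = "user-42-secret-seed"      # hypothetical token-specific seed
expected = respond(challenge, seed)

# User side: key the challenge into the token, read off the response,
# and send it back. The host compares the two values.
assert respond(challenge, seed) == expected
```

The security rests on the one-wayness of the function: an eavesdropper who sees many (challenge, response) pairs still cannot recover the seed, and each login uses a fresh challenge, so replaying an old response fails.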
AITRC is jointly funded by CompuServe, Mead Data Central, Chemical Abstracts, the Online Computer Library Center and John Wiley & Sons; as well as Carnegie Mellon University, University of Pittsburgh, Wright State University, Ohio State, the Ohio State University Research Foundation, BDM Corp., and Battelle Institute. No lightweights there. AITRC hopes to see licensed token/calculators marketed at $10 apiece by the end of this year, according to AITRC president George Minot -- although the members of the AITRC consortium could potentially use and offer them to their clients for even less, he said, since consortium members get royalty-free access to the technology. At $10 per unit, AITRC would revolutionize the pricing of tokens -- which currently range between four and ten times that for comparable devices. Minot conceded, however, that the projected price is based on high-volume production (minimum 100,000 units) overseas. The AITRC token is built upon a 4-bit NEC calculator chip, works as a standard calculator, and is powered by a 2-year lithium battery. According to Minot, the device is also designed to be "initialized," or registered on the host, from any remote terminal or push-button telephone.

Vin McLellan, The Privacy Guild, Boston, Ma. (617) 426-2487
Relayed from: INFO-MAC Digest, Saturday, 23 Apr 1988, Volume 6 : Issue 40
From: firstname.lastname@example.org Mon Apr 18 10:11:09 1988
Subject: The Scores Virus
Date: 18 Apr 88 16:11:09 GMT

My colleague Bob Hablutzel got a copy of the Scores virus last Thursday and disassembled it, and I've been studying and testing it ever since. So far I've reverse-engineered about half the code and have a thorough understanding of how it works. This note is a preliminary report on what I know so far, after four days of research. It also outlines plans for a disinfectant program.

The virus is definitely targeted against applications with signatures VULT and ERIC. I don't know if any applications with these signatures exist or are planned to be released. The virus infects your system folder when you run an infected program. The virus lies dormant for two days after your system folder is first infected. After two, four, and seven days various parts wake up and begin doing their dirty work.

Two days after the initial infection the virus begins to spread to other applications. I haven't completely finished figuring out this mechanism, but it appears that only applications that are actually run are candidates for infection.

After four days the second part of the virus wakes up. It begins to watch for the VULT and ERIC applications. Whenever VULT or ERIC is run it bombs after 25 minutes of use. If you don't have a debugger installed you'll get a system bomb with ID=12. If you have MacsBug installed you'll get a user break.

After seven days the third part of the virus wakes up. Whenever VULT is run the virus waits for 15 minutes, then causes any attempt to write a disk file to bomb. If you don't do any writes for another 10 minutes the application will bomb anyway, as described in the previous paragraph. There's also more code to force a bomb after 45 minutes, but I can't see any way that this code can be reached, given the forced bomb after 25 minutes.
The virus identifies VULT and ERIC by checking to see if the application contains any resources of type VULT or ERIC. Applications with signatures VULT and ERIC normally contain these resources, but other applications normally don't. I verified the behaviour of the virus by using ResEdit to add empty resources of types VULT and ERIC to the TeachText application. TeachText bombed as described above on an infected system, even though TeachText itself was not infected! While running my experiments I was in ResEdit on the infected system and heard the disk whir. Sure enough, ResEdit was infected. I've been running on an infected system with an infected ResEdit for three days. I reset the system clock to fool the various parts of the virus into thinking it was time for them to wake up. The Finder has also become infected. ResEdit, Finder, and the rest of the system seem to be functioning normally. Only my version of TeachText modified to look like VULT or ERIC has been affected by the virus. If you repeat any of these experiments be very careful to isolate the virus. I'm using a separate dual floppy SE to perform my experiments, and I've carefully labelled and isolated all the floppies I'm using. My main machine is an SE with a hard drive, where I have MPW and my other tools installed. It's OK to look at infected files on the main machine (e.g. with ResEqual, DumpCode, etc.), but don't run any infected applications on the main machine - that's how it installs itself and spreads. Children should not attempt this without adult supervision :-) An infected application contains an extra CODE resource of size 7026, numbered two higher than the previous highest numbered CODE resource. Bytes 16-23 of CODE resource number 0 are changed to the following: 0008 3F3C nnnn A9F0 where nnnn is the number of the new CODE resource. You can repair an infected application by replacing bytes 16-23 of CODE 0 by bytes 2-9 of CODE nnnn, then deleting CODE nnnn. 
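The byte-level repair just described can be sketched in Python. Real Macintosh resource-fork I/O is platform-specific and not shown; the resource contents below are fabricated purely to exercise the patch (the viral CODE resource saves the original bytes 16-23 of CODE 0 at its own offset 2):

```python
def repair_code_zero(code0: bytes, viral_code: bytes) -> bytes:
    """Undo the Scores patch: restore bytes 16-23 of CODE 0 from
    bytes 2-9 of the viral CODE resource, which preserved them.
    The viral CODE resource itself is then deleted separately."""
    patched = bytearray(code0)
    patched[16:24] = viral_code[2:10]
    return bytes(patched)

# Hypothetical 32-byte CODE 0 whose bytes 16-23 carry the viral jump
# 0008 3F3C nnnn A9F0 (here nnnn = 0x0003, the viral resource number).
infected = bytes(range(16)) + bytes.fromhex("00083F3C0003A9F0") + bytes(8)

# Hypothetical 7026-byte viral CODE resource holding the original
# eight bytes at offset 2.
viral = bytes(2) + bytes.fromhex("1122334455667788") + bytes(7016)

clean = repair_code_zero(infected, viral)
assert clean[16:24] == bytes.fromhex("1122334455667788")
```

Note that byte ranges in the report are inclusive ("bytes 16-23"), which is why the Python slice runs from 16 up to but not including 24.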
I've tried this using ResEdit on an infected version of itself, and it works. The MPW utility ResEqual reports that the result is identical to the original uninfected version. The virus creates two new invisible files named Desktop (type INIT) and Scores (type RDEV) in your system folder, and adds resources to the files System, Note Pad File, and Scrapbook File. Note Pad File and Scrapbook File are created if they don't already exist. Note Pad File is changed to type INIT, and Scrapbook File is changed to type RDEV. Both of these files normally have file type ZSYS. The icons for these two files change from the usual little Macintosh to the generic plain document icon. Checking your system folder for this change is the easiest way to detect that you're infected. Copies of the following five resources are created:

Type     ID    Size  Files
-----  -----  -----  -------------------------------------
INIT       6    772  System, Note Pad File, Scrapbook File
INIT      10   1020  System, Desktop, Scores
INIT      17    480  System, Scrapbook File
atpl     128   2410  System, Desktop, Scores
DATA   -4001   7026  System, Desktop, Scores

A disinfectant program would have to repair all infected applications and clean up the system folder, undoing the damage described above. I don't yet know exactly which files can be infected, but I know for sure that Finder (file type FNDR) can get infected, and that applications (file type APPL) can get infected. For safest results the disinfectant should examine and disinfect the resource forks of all the files on the disk. I recommend the following algorithm: Scan the entire file hierarchy on the disk, and for each file on the disk check its resource fork. Delete any and all resources whose type, ID, and size match the table above. Delete all files whose resource forks become empty after this operation.
If the resource fork's highest numbered CODE resource is numbered two more than the next highest numbered CODE resource, and if its size is 7026, then patch the CODE 0 resource as described above, and delete the highest numbered CODE resource. Also examine all files named Note Pad File and Scrapbook File. If their file type is INIT or RDEV, change it to ZSYS. I'm fairly confident that a disinfectant program implemented using the algorithm above would successfully eradicate the virus from a disk, restore all applications to their original uninfected state, and not harm any non-viral software on the disk. It should work even on disks with multiple infected system folders. I also believe that it should work even if run on an infected system, and even if the disinfectant program becomes infected itself! There's a small chance that it could delete too many resources, and hence damage some other application, but that's a small price to pay for a clean system. Getting rid of a virus is tricky, even with a disinfectant program. The disinfectant program should be placed on a floppy disk along with a system folder. Make a backup copy of this disk. The machine should be booted using the startup disk you just made, and then the disinfectant should be run on all the hard drives and floppies in your collection, including the backup copy of the startup disk you just made. Don't run any other programs or boot from any other disks while disinfecting - you might get reinfected. When you're all done, reboot from some other (disinfected) disk and immediately erase the startup disk you used to do the disinfecting, which may be (and probably is) infected itself. This should absolutely, positively get rid of all traces of the virus. The backup disk you made and disinfected should contain an uninfected copy of the disinfectant program in case you need to use it again. There are at least two red herrings in the virus.
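The resource-deletion step of the recommended algorithm can be sketched as follows. This assumes the resource fork has already been parsed into (type, ID, size) tuples; actual Macintosh resource-fork parsing is platform-specific and not shown:

```python
# Signature table from the report: (type, ID, size) of the five
# resources the Scores virus plants.
VIRAL_RESOURCES = {
    ("INIT", 6, 772),
    ("INIT", 10, 1020),
    ("INIT", 17, 480),
    ("atpl", 128, 2410),
    ("DATA", -4001, 7026),
}

def disinfect_fork(resources):
    """Drop every resource whose (type, ID, size) matches the table.
    `resources` is a list of (type, id, size) tuples."""
    return [r for r in resources if r not in VIRAL_RESOURCES]

def fork_is_empty(resources):
    """Files whose resource forks become empty should be deleted."""
    return len(resources) == 0

# A hypothetical Scrapbook File fork: one legitimate resource plus
# two of the viral resources.
fork = [("PICT", 128, 5000), ("INIT", 6, 772), ("INIT", 17, 480)]
cleaned = disinfect_fork(fork)
assert cleaned == [("PICT", 128, 5000)]
```

Matching on all three of type, ID, and size (rather than type alone) is what keeps the small "delete too many resources" risk small: an unrelated INIT 6 of a different size would survive.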
It uses a resource of type 'atpl', which is usually some sort of AppleTalk resource. As far as I can tell, however, the virus does not attempt to spread itself over networks. The 'atpl' resource is used for something else entirely. This is not a bug. Also, the virus creates the file Desktop in your system folder. This is done on purpose. It is not a failed attempt to modify the Finder's Desktop file in the root directory. The file is used by the virus, and has nothing to do with the Finder. I don't know why the virus seems to cause reported problems with MacDraw, printing, etc. Perhaps it's a memory problem - the virus permanently allocates 16,874 bytes of memory at system startup (four blocks in the system heap of sizes 772, 40, 8, and 334, and one block at BufPtr of size 15360). I've only found one possible bug in the virus code, and it looks pretty harmless. The code is very sophisticated, however, and I can easily understand how I might have overlooked a bug, or how it might interact in strange unintended ways with other applications and parts of the system. When we've finished completely cracking this virus we'll probably distribute another report. I've posted these preliminary results now to get the information out as quickly as possible. We also hope to write the disinfectant program, if someone else doesn't write it first. I've decided not to distribute detailed information on how this virus works. I'll distribute detailed technical information about what it does and how to get rid of it, but not internal details. This was a very difficult decision to make, because normally I firmly believe in the enormous benefit of the free exchange of code and information. The Scores virus is a very interesting and complicated piece of code, I've learned a great deal about the Mac by studying it, and I'm sure other people could learn a great deal from it too. But I don't want to teach twisted minds how to write these incredibly nasty bits of code.
If I write the disinfectant program, however, I will distribute its source, because I do want to teach untwisted minds how to get rid of them. So please don't bombard me with requests for more information. You may be the nicest, most honest, incredibly important person, but I won't tell you how it works. I'll make only two exceptions, and that's for a very few of my colleagues at Northwestern University, and for qualified representatives of Apple Computer. Thanks to Howard Upchurch for giving us a copy of the virus, and to Bob Hablutzel for helping me crack it.

John Norstad
Northwestern University Academic Computing and Network Services
2129 Sheridan Road, Evanston, IL 60208
Bitnet: JLN@NUACC   Internet: JLN@NUACC.ACNS.NWU.EDU
Monday morning, April 18, 1988.
Re: Paul Cudney (email@example.com)

Proper handling of timezones is a much harder problem than generally realized. Fortunately, it has been solved. An international group including Arthur David Olson, Robert Elz, and Guy Harris produced a public domain package for UNIX more than a year ago. It handles past time, future time, System V time, daylight time, double daylight time, partial hour shifts, multiple shifts in a year, and even solar time. Timezone rules are kept in files, not in compiled code. A rather complete database of rules has been compiled. This package has been adopted by Sun, and by Berkeley (shortly after the 4.3BSD release), among others. It is in use on at least three continents. PS: Don't confuse it with what's in the latest POSIX draft standard, which is useless.
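The Olson/Elz/Harris package described above lives on today as the IANA tz database. Python's zoneinfo module (3.9+; it needs tz data installed on the system or via the tzdata package) reads the same compiled rule files, so the rules-in-files design can be demonstrated directly:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # rules come from the Olson/IANA tz database

# The same instant rendered in another zone; the daylight-time offset
# is looked up in the compiled rule files, not hard-coded anywhere.
utc_moment = datetime(1988, 7, 1, 12, 0, tzinfo=ZoneInfo("UTC"))
ny = utc_moment.astimezone(ZoneInfo("America/New_York"))
print(ny.isoformat())  # 1988-07-01T08:00:00-04:00 (EDT, daylight time)
```

Because the rules are data, the same binary correctly handles past time too: the database records the DST transitions in force in 1988, not just today's.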
From ELECTRONICS ENGINEERING TIMES, April 11 1988, p. 18: IEE JUDGES SAFETY OF SOFTWARE by Roger Woolnough The Institute of Electrical Engineers, Britain's premier organization of professional EE's, has been awarded a government contract to study the use of software in safety-critical systems. The one-year project will be undertaken in collaboration with the British Computer Society (BCS). The IEE/BCS study will examine the present use of software in safety-related systems, and describe likely trends in regulations and codes of practice across all types of industries and application areas. It also will identify areas where regulations and codes are lacking, or where there are inconsistencies between those used in different sectors. The third part of the study will investigate the need for certification of products, organizations, and engineers. The certification of engineers could include both those involved in design and those undertaking safety assessment. - Jon Jacky, University of Washington
The UKAEA's Atomic Energy Research Establishment at Harwell is no more 'ultra secret' than (say) your local government food-testing lab. It is a secure site, I'll grant you, but you'd find it a lot easier to get into than many large companies.

Mike

Mike Salmon, Climatic Research Unit, University of East Anglia, Norwich, UK
JANET: firstname.lastname@example.org | BITNET: m.salmon%cpc865.uea@ukacrl | BIX: msalmon
Herb Lin's comments about the DoD funding bill asking for a specific report on viruses prompt me to ask a more general question: does anyone know of anyone systematically trying to do what one might call an epidemiology of viruses? Has someone been trying to keep track, say, of exactly what particular installations have reported (to whom? -- good question) having been hit by the MacWorld virus? by the Israeli virus(es)? by the "NASA" virus just mentioned? (General note: the epidemiology wouldn't help solve the problem — there really is only one technical solution, fraught with lots of administrative nightmares — but it might, but only just might, help signal whether the potential threat has materialized enough to create the kind of crisis atmosphere needed to implement the solution.)