The following excerpt from NASA's press release shows that, for the second time, a mix-up in units has resulted in an experiment failure; this time the experiment was a spacecraft.

> The peer review preliminary findings indicate that one team
> used English units (e.g., inches, feet and pounds) while the other
> used metric units for a key spacecraft operation. This
> information was critical to the maneuvers required to place the
> spacecraft in the proper Mars orbit.
>
> "Our inability to recognize and correct this simple error
> has had major implications," said Dr. Edward Stone, director of
> the Jet Propulsion Laboratory. "We have underway a thorough
> investigation to understand this issue."

Risks? Too many to list. But if, after 40 years, NASA can't sort out measurement units, what hope have we for Star Wars projects? It is a terrible indictment to have to admit to. It certainly ranks with the HST mirror as a fiasco. Perhaps Europe has got it right for once: a units mix-up within the metric system is off by a factor of ten or more, which tends to make the error show up. The English still have miles and pints and gallons (UK, not US), but they do their science in metric units.

Global Research Information Systems, Glaxo Wellcome Medicines Research Centre
Gunnels Wood Road, Stevenage SG1 2NY UK  +44 1438 76 3222  firstname.lastname@example.org

[Thanks to all of you, too numerous to cite, for noting this item. The measure in question was apparently kilograms per second vs. pounds per second of force, off by a factor of 2.2, which would seem to explain the too-close approach. The need for very strong typing strikes again. PGN]
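[PGN's "strong typing" point can be made concrete. The following is a minimal illustrative sketch, not NASA's software: the Impulse class, its unit strings, and the conversion table are all invented for the example. A value that carries its unit cannot be combined with one in another system without an explicit, correct conversion:]

    # Illustrative sketch only (invented names, not NASA's code): a quantity
    # type that carries its unit, so mixing unit systems triggers an explicit
    # conversion instead of a silent scale error.
    class Impulse:
        _TO_NEWTON_SECONDS = {"N*s": 1.0, "lbf*s": 4.44822}  # conversion table

        def __init__(self, value, unit):
            if unit not in self._TO_NEWTON_SECONDS:
                raise ValueError(f"unknown unit {unit!r}")
            self.value, self.unit = value, unit

        def to_newton_seconds(self):
            return self.value * self._TO_NEWTON_SECONDS[self.unit]

        def __add__(self, other):
            # every addition normalizes both operands; no silent mixing
            return Impulse(self.to_newton_seconds() + other.to_newton_seconds(),
                           "N*s")

    total = Impulse(10.0, "lbf*s") + Impulse(3.0, "N*s")
    print(total.value, total.unit)   # 47.4822 N*s, never an unlabeled number

[With such a type, a bare number never crosses a team boundary without its unit attached.]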
By now, many of you are aware of the Japanese nuclear accident, in which a large amount of uranium solution was placed in one condensation container, achieved the "critical" condition for runaway fission, and released high-energy particles (and heat) and radioactive material generally. For example, http://www.washingtonpost.com/ has an article titled "Radiation Leak in Japan". The incident has already seriously injured three people, and 21 more were exposed to high doses of radiation. (The number of people found to have been exposed has been increasing over the last few hours.)

That a certain amount of uranium solution will reach such a critical condition has been known for years. For example, Richard Feynman's "Surely You're Joking, Mr. Feynman!" chronicles the author's experience of seeing rather sloppy handling at the Oak Ridge laboratory, where the army people were not aware that a water solution needs a much smaller amount of uranium to reach the critical condition than a solid or powder, since the water acts as a moderator, slowing the neutrons and increasing the chance of their interacting with the uranium. Feynman and his superior Segre (spelling?) explained the basics of nuclear material and how to calculate the critical mass (of solutions) in order to work out practical avoidance guidelines. This happened before the atomic bombs of 1945. So WHY ON EARTH, in today's fuel-processing facility, can such a concentration of uranium solution happen, or be allowed to happen?!

According to a news article (in Japanese) at the Mainichi Shimbun newspaper site http://www.mainichi.co.jp, a few bad design and operation decisions emerge. First, there was no strict oversight of the line operators who moved the solution during a condensation process. The operators move the solution from a container into a large condensation tank, but there is no checking mechanism to prevent a critical amount of uranium being deposited in the tank. It seems that reckless handling of solution marginally below the limit was routine. (I may be wrong. I hope I am wrong, but the article seems to suggest that this was the case.) It is reported that about 7 times the allowed amount of solution was dumped by mistake. Argh.

Furthermore, the designers and managers of the processing plant don't seem to believe in Murphy's Law. There was no automatic warning or similar when more than the allowed amount (below the critical amount, of course) was put into the condensation tank, so the line operators had no incentive to stay carefully under the threshold (see the interlock sketch below). (Weren't they taught the basics?!) No procedural manual exists to handle such a "criticality" event, should it happen. The three operators who directly caused the incident were found lying on the floor when three colleagues entered the area after hearing the siren warning of the high radiation level.

The village people near the plant were furious, since they were not notified promptly. They learned of the accident TWO (2) hours after it happened; by then, radioactive material had escaped the building. A clearly written, well-prepared manual could at least have warned these people much more quickly. After all, the plant people had time to check the radiation level outside the plant around 11:45 a.m. The villagers were notified around 12:30. The accident is believed to have happened around 10:35 a.m.
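[The missing interlock is worth a sketch. The following is purely hypothetical; the class, the API, and the 2.4-kg limit are assumptions for illustration, not taken from the plant or the article. The point is simply that the running total is checked *before* a transfer is accepted:]

    # Hypothetical batch-limit interlock; all names and the limit are
    # invented for illustration, not taken from the actual plant.
    SUBCRITICAL_LIMIT_KG = 2.4     # assumed administrative limit for U in solution

    class CondensationTank:
        def __init__(self, limit_kg=SUBCRITICAL_LIMIT_KG):
            self.limit_kg = limit_kg
            self.contents_kg = 0.0

        def add_batch(self, uranium_kg):
            new_total = self.contents_kg + uranium_kg
            if new_total > self.limit_kg:
                # refuse *before* the transfer happens, and alarm loudly
                raise RuntimeError(f"interlock: {new_total:.2f} kg would exceed "
                                   f"the {self.limit_kg} kg subcritical limit")
            self.contents_kg = new_total

    tank = CondensationTank()
    tank.add_batch(1.0)            # ok: 1.0 kg total
    tank.add_batch(1.0)            # ok: 2.0 kg total
    tank.add_batch(1.0)            # raises: 3.0 kg would exceed the 2.4 kg limit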
Aside from those three, plant workers in the neighboring buildings were found to have been exposed to large doses, and small radioactive particles were found in some people's hair. Oh well. As of now, 3 a.m., the local provincial government where the plant is located has sealed off an area of 350 meters radius (it seems to be extending it to 500 meters as I write this). People living within a 10-km radius (310,000 people) are being advised not to go outside until tomorrow morning (as if something would change by then).

A few bad designs indeed. Why can't people today get right what the people back in the 1940s managed to handle? I have had some bad things to say about the Japanese nuclear power industry before. I have no idea HOW these guys can continue operating in this manner. We need a Saturday-night-massacre-style shuffling of heads, and an injection of some sense of scientific integrity into newly hired workers and management, I guess. That a renowned nuclear physicist turned politician, Dr. Arima, is in charge of the government agency overseeing the industry could be a good omen. However, whether he can deal effectively, and quickly, with people who have such a shoddy history behind them is an open question. In any case, Dr. Arima was quickly assigned the chair of the ad-hoc oversight committee yesterday. I expect a criminal investigation of the plant operators and others will commence. (I hope it does.)

PS: Right now, the water jacket that surrounds the condensation tank is being emptied. I hate to think who is doing this, and where. The water surrounding the tank is believed to act as a "mirror" for the neutrons that sustain the runaway fission; by emptying it, it is hoped that neutrons will no longer be reflected back into the tank and the fission will subside. Oh, I forgot to mention: after more than 12 hours, the critical condition still continues! Neutron counts have remained high, meaning that the runaway fission continues on a small scale. (But not many dust particles seem to have been blown into the air.) Can we call this a micro-Chernobyl? It's up to you. I am not sure whether the reported better preparedness against this type of accident at a processing plant currently planned for Aomori (the northern part of Honshu island) is a blessing or too little, too late.

Chiaki Ishikawa <email@example.com.NoSpam> Personal Media Corp., Shinagawa, Tokyo, Japan 142-0051 [slight spelling corrections, including one in archive copy. PGN]
[from Dave Farber's IP distribution]

Massive Fiber Cut Pauses East-West Traffic
Updated 11:42 AM ET, 29 Sep 1999, by Max Smetannikov, Inter@ctive Week

At least four Internet service providers are experiencing severe traffic backlogs because of a massive fiber-optic cable cut that knocked out four OC-192 lines connecting data networks on the East and West Coasts. Industry sources told Inter@ctive Week that the cut was made accidentally by an unidentified gas company in Ohio around 12:30 EST today. The news is sending shockwaves through the networking community, with many carrier operators struggling to understand why, all of a sudden, their traffic is routed through London and Denmark. Various online sources have named AboveNet; GTE Internetworking; and MFS Communications, a WorldCom subsidiary, as the ISPs hit worst. "Let me tell you, it really hurts right now," said Dave Rand, AboveNet's chief technology officer. "We were given a 1 hour estimate for this problem to be corrected." GTE Internetworking's public relations department had heard of an outage in Pennsylvania earlier today, but had no comment on the Ohio development. MCI WorldCom public relations didn't have an immediate answer to the query. [The cut apparently resulted from gas company workers during construction. Various ISPs were still down hours later. PGN]
The Federal Bureau of Investigation says that some of the Y2K-related programming fixes that were undertaken by foreign contractors may contain malicious code. "We have some indications that this is happening," says Michael Vatis, head of the inter-agency National Infrastructure Protection Center. "A tremendous amount of remediation of software has been done overseas or by foreign companies operating within the United States." A Central Intelligence Agency officer assigned to the Center said recently that India and Israel appeared to be the "most likely sources of malicious remediation" of U.S. software. "India and Israel appear to be the countries whose governments or industry may most likely use their access to implant malicious code in light of their assessed motive, opportunity and means," CIA officer Terrill Maynard wrote in the June issue of Infrastructure Protection Digest. Such code could contain "time bombs" set to detonate at some future date, disrupting service or compromising security and password protections. The Special Senate Y2K committee, in its final report last week, called such scenarios "unsettling." (Reuters/TechWeb, 1 Oct 1999, http://www.techweb.com/wire/story/reuters/REU19991001S0001; NewsScan Daily, 1 October 1999, with permission) [See also http://dailynews.yahoo.com/h/nm/19991001/tc/yk_code_2.html . PGN]
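[For concreteness, the "time bomb" pattern the article alludes to is depressingly simple. The following is a purely hypothetical sketch (the function and trigger date are invented, not drawn from any actual remediated code) of the kind of construct an auditor would search for:]

    # Hypothetical "time bomb": an invented example of what auditors of
    # remediated code look for; behaves normally until a trigger date.
    import datetime

    TRIGGER = datetime.date(2000, 7, 1)    # invented future trigger date

    def process_record(record):
        if datetime.date.today() >= TRIGGER:
            return None                    # silently drop data after the trigger
        return record                      # normal behavior during acceptance tests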
I get a 'digest' of 'interesting' news stories once a month from a computer magazine here in the UK (not that interesting, but I have never gone to the trouble of cancelling the subscription). Just this minute, I received this month's issue, which contained the headline/abstract:

  SIX OUT OF SEVEN US-RUSSIAN "HOTLINES" WILL NOT SURVIVE Y2K.

Worried (or just interested), I followed the link and found:

  Six Out of Seven US-Russian Telephone "Hot Lines" Will Survive Y2K.

Note the clever insertion of 'not' in the first version. So, do we have a 14% or an 86% chance of an unfortunate misunderstanding? Do the operators at the ends of those hotlines suffer from the same misreading condition? 'Hello Russia, we are [not] really shooting at you.' [The original story came from DNWire and talks about a US Congressional Y2K committee.] Simon Hogg
Followers of Microsoft security bulletins have noticed a pattern lately: a security hole in Internet Explorer is announced together with a workaround, followed by a patch, followed by a new hole, another workaround, and so on. In fact, all of these holes are part of a much larger problem, one that Microsoft doesn't seem to know how to fix.

The difficulty, no surprise here to RISKS readers, lies in ActiveX and its interaction with the browser. ActiveX controls can be marked "safe for scripting," meaning that a script on any HTML page can activate them without requesting permission or giving notification. And the controls turn out to have holes. So far, Microsoft has identified two buffer overruns and one case of improper filesystem access among Microsoft-supplied, marked-safe controls (Security Bulletins MS99-033, -037, and -040). But the risks are a great deal worse than that. Anyone, it turns out, can write an ActiveX control and mark it safe for scripting. There is no validation and there are no enforceable rules. So it's not hard to imagine MyTrojans.com putting a really nasty control on a Web site. The only thing then standing between the user and disaster is Microsoft's flimsy requirement that controls be signed. Most users, confronted by an official-looking certificate, will just click OK, no matter who has signed it. Or a nasty control could be signed with a hijacked certificate.

For now, Microsoft recommends turning off Active Scripting. Unfortunately, that breaks a good many Web sites, including most of Microsoft's. A less draconian solution, suggested to me by a Microsoft developer, is to deny permission to run "safe for scripting" controls. But even this breaks a lot of sites, including Windows Update, which is most Windows 98 users' best hope of installing security patches. Fortunately, there don't appear to have been any Trojan control exploits yet.

Steve Wildstrom <firstname.lastname@example.org> Technology & You, *Business Week* 1200 G St NW Suite 1100 202-383-2203 Fax: 202-383-2125
[Courtesy of David Farber <email@example.com>'s IP distribution] http://www.inria.fr/Actualites/pre55-eng.html

INRIA leads nearly 200 international scientists in cracking code following challenge by Canadian company Certicom

Paris, 28 September 1999 - A new code-cracking challenge set by Certicom has been successfully overcome using 740 computers in 20 countries over a period of 40 days. The code, ECC2-97, is based on a technique known as elliptic curves. Led by Robert Harley, a member of the Cristal project at INRIA, France's National Institute for Research in Computer Science and Control, the 195 researchers involved showed that a 97-bit encryption system based on elliptic curves is more difficult to crack than a 512-bit system based on integers, such as RSA-155.

Encryption systems based on elliptic curves have been known since the mid-1980s, but have only recently been adopted by leading encryption companies such as RSA Security Inc. Certicom issued its "ECC Challenge" in November 1997, specifying a series of challenges of increasing difficulty; the company offers prizes of up to US$100,000. The aim of the challenge is to encourage research in the field of elliptic curves and their applications in encryption, and to strengthen the arguments in favor of using elliptic-curve cryptography instead of systems based on integer factorization.

The challenge dubbed "ECC2-97" took place in a set of about 10^29 points on an elliptic curve chosen by Certicom. To solve the problem, participants first computed 119,248,522,782,547 points (more than 10^14) using open-source software developed by Harley. Among these points, they screened out 127,492 "distinguished" points and collected them on an Alpha Linux workstation at INRIA, where further processing revealed two matching ("twin") points. Finally, Harley computed the solution using information associated with these two points, thus nailing the problem. The solution was found after less than one third of the predicted computation; the probability of finding the answer so quickly was less than one in ten. Two other twins were detected a few hours after the first - a less than one in 100 probability! Nevertheless, the computing power used, around 16,000 MIPS-years, was twice as much as that used for the factorization of RSA-155 announced by Herman te Riele of CWI (Amsterdam) and his colleagues on 26 August 1999.

"These results strengthen our confidence in codes based on properly chosen elliptic curves," said Harley. "This needs to be taken into account in standards for security and confidentiality on the Internet." According to Andrew Odlyzko, Head of Mathematics and Cryptography Research at AT&T Labs, the code-cracking operation was "a great achievement that demonstrates the value of fruitfully harnessing some of the huge computational power of the Internet that is idle most of the time". He added: "It validates theoretical security predictions, and demonstrates the need to keep increasing cryptographic key sizes to protect against growing threats." Arjen K. Lenstra, Vice President at Citibank's Corporate Technology Office in New York and one of the main contributors to the recent successful attack on the RSA-155 challenge, compared the two computational efforts and noted that the present result makes 160-bit ECC keys look even better compared to 1024-bit RSA keys from a security point of view: "Ideally we would like new theoretical advances to further reinforce these practical results, although such advances appear out of reach for the moment."
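[The "distinguished points" procedure described above is the parallel collision search variant of Pollard's rho method: many independent pseudo-random walks report only easily recognized "distinguished" points to a central store, and a match between two walks yields the discrete logarithm. Here is a minimal illustrative sketch that solves a toy discrete log in a small prime-order subgroup of the integers mod p, rather than the real 97-bit elliptic-curve group; all parameters and names are invented for the demo, and the real effort spread the walks across 740 machines:]

    # Toy distinguished-point collision search (illustrative only).
    import random

    p = 48611                      # small prime modulus (toy stand-in)
    q = 4861                       # prime order of the subgroup; q divides p - 1
    g = pow(2, (p - 1) // q, p)    # generator of the order-q subgroup
    secret = 1234                  # the "unknown" logarithm to be recovered
    h = pow(g, secret, p)

    def step(x, a, b):
        # One deterministic pseudo-random step; (a, b) track x = g^a * h^b.
        if x % 3 == 0:
            return (x * g) % p, (a + 1) % q, b
        if x % 3 == 1:
            return (x * h) % p, a, (b + 1) % q
        return (x * x) % p, (2 * a) % q, (2 * b) % q

    def solve():
        seen = {}                  # distinguished point -> (a, b) that reached it
        while True:
            a, b = random.randrange(q), random.randrange(q)
            x = (pow(g, a, p) * pow(h, b, p)) % p   # fresh random-start walk
            for _ in range(10000):
                x, a, b = step(x, a, b)
                if x % 16 != 0:                     # keep only "distinguished"
                    continue                        # points (4 low zero bits)
                if x in seen and seen[x] != (a, b):
                    a2, b2 = seen[x]                # two walks met ("twins"):
                    if (b2 - b) % q == 0:           # g^a h^b = g^a2 h^b2
                        break                       # degenerate; start a new walk
                    return (a - a2) * pow(b2 - b, -1, q) % q
                seen[x] = (a, b)

    print(solve())                 # prints 1234

[The distinguished-point trick is what makes the search parallelize: each machine only ships the rare points it finds to the central collector, rather than its entire walk.]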
Out of the $5,000 prize money, the team members will give $4,000 to the Free Software Foundation to encourage the creation of new free software. The remaining $1,000 goes to the team members who identified the twin points. Both were in fact found by Paul Bourke, using a network of Alpha workstations mainly used for studying pulsars at the Centre for Astrophysics at Swinburne University in Australia.

The most active teams in the project were:

  Astrophysics & Supercomputing, Australia
  INRIA, France
  University of New South Wales, Australia
  "Friends of Rohit Khare", USA and France
  Ecole Polytechnique, France
  Compaq, USA and Italy
  Technischen Universitaet Wien, Austria
  University of Vermont, USA
  "WinTeam", International
  British Telecom Labs, UK
  Internet Security Systems, UK
  Rupture Dot Net, USA
  "Jabberwocky", USA
  Ecole Normale Superieure de Paris, France

For a complete list of participants, consult the project's Web pages.

Further information:
  The ECDL Project: http://cristal.inria.fr/~harley/ecdl/
  The Certicom ECC Challenge: http://www.certicom.com/chal/

Technical contact: Robert Harley, INRIA: 33 1 39 63 51 57 - Robert.Harley@inria.fr
Media contacts: Christine Genest, INRIA: 33 1 39 63 55 18 - Christine.Genest@inria.fr
Sylvie Baranger, Andrew Lloyd & Associates: 33 1 43 22 79 56 - firstname.lastname@example.org

[Added note from Seth David Schoen <email@example.com> in Dave Farber's IP: Actually, they did not "show" this in the most important sense, which is the mathematical sense. They showed that, using generally available techniques, they found it more difficult; they did not show that the problem is inherently more difficult. [...]]
Greetings. An alert PRIVACY Forum reader recently brought a somewhat bizarre and certainly ironic situation to my attention. Intuit (makers of "Quicken" and other extremely widely used financial software packages) had a web site (http://privacy.intuit.com) that presented various information regarding their privacy policies. It also included a feature that allowed any registered Intuit customer to view and alter their "privacy preferences." This included data such as whether or not they wished to receive promotional materials from Intuit, how they should or should not be contacted (e.g., e-mail, phone, etc.), and whether or not their name and address would be released to outside firms.

To access this feature, the customer needed to supply their last name, zip code, and ... *nothing else*! Upon entering any last name and zip code (and given the number of Intuit customers, a hit would be pretty likely for most common names), the user would see the associated first name, city, and last four digits of the phone number for that person. The user could then freely modify the privacy preferences for that customer.

Needless to say, I immediately expressed my concern over this situation to Intuit officials. Within a few days, I was contacted by their VP of Corporate Communications, informing me that the preference-access features of the site had been shut down, and that any users attempting to access them would be directed to an 800 number; a live customer-service representative would then verify their contact information before performing any preference changes. Intuit plans to restore the web preferences feature to the site after making security enhancements, probably within a month or two.

That Intuit responded promptly to my concerns by closing down the feature is to be commended. One must still wonder, however, about the chain of events and review which permitted such an obviously flawed feature to have been implemented in the first place--it is, unfortunately, an all too common sort of situation.

Lauren Weinstein <firstname.lastname@example.org>, Moderator, PRIVACY Forum; Member, ACM Committee on Computers and Public Policy; Host, "Vortex Reality Report & Unreality Trivia Quiz" http://www.vortex.com/reality

From PRIVACY Forum Digest, Saturday, 25 September 1999, Volume 08, Issue 13 (http://www.vortex.com/privacy/priv.08.13), moderated by Lauren Weinstein (email@example.com), Vortex Technology, Woodland Hills, CA, U.S.A. http://www.vortex.com
In the most recent issue of *The New Yorker*, 4 Oct 1999, John Updike reviews the latest book by Henry Petroski, someone who has been mentioned in many previous issues of RISKS (e.g., 3.25, 9.15-16, 12.51, 18.61). The newest book is indeed a metabook: ``The Book on the Bookshelf'' (Knopf, $26), a book about books and how they evolved. Updike's review concludes with some of the risks of books that use computer technology as the medium itself: the constraints of reading from CD-ROMs, the effects of hackers and electromagnetic catastrophes on the computer forms, the gradual ebbing away of seldom-read books into computer warehouses, and the MIT Overbook. Both Petroski's book and Updike's review make fascinating reading.
I received this from a friend who works at A Very Large Corporation and has requested that both he and the company remain anonymous. From what I can tell, someone at said company was fiddling with a Linux box and configured it to be the Primary Domain Controller (instead of authenticating off of the Primary Domain Controller). Well, this hosed all NT domain authentication in the company and prevented anyone from authenticating until the offending PDC was removed from the network. The end result? The company is banning Linux.

Now, this *exact* same thing happened to a friend of mine at another company, but there it was quickly identified and fixed, and Linux is still in use there today. Same problem, different result. While I'm not by any means an NT guru, this seems to be a HUGE vulnerability in the NT domain-authentication mechanism--if I ran a network where anyone could plug in a box and stop all authentication this easily, I would be scared out of my wits.

Here's the body of the e-mail. I for one would like to send the author a copy of "On Writing Well." The names have been changed to protect the ignorant:

  We have encountered an incident with the Linux desktop operating system. A Linux box named <foobar> had assumed control of our domain yesterday and temporarily paralyzed our network. The box has been identified and shut down. Affective[sic] immediately, all use of Linux systems within the
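[The report doesn't say what software the Linux box was running, but the classic way for one to "assume control" of an NT domain is a misconfigured Samba server. A hypothetical smb.conf fragment of the sort that can do it (the workgroup name and values here are invented for illustration):]

    [global]
       workgroup = BIGCORP       ; same NetBIOS domain name as the real PDC
       domain logons = yes       ; offer NT-style domain logon services
       domain master = yes       ; claim the DOMAIN<1B> name, clashing with the PDC
       preferred master = yes    ; force a browser election at startup
       os level = 65             ; outbid genuine NT servers in that election

[That a workstation-class machine can win these NetBIOS name and election contests against a production domain controller, with no authentication required, is the vulnerability the contributor is pointing at.]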
Cyber-Speak

Ira J Rimson <firstname.lastname@example.org>
Tue, 28 Sep 1999 15:27:11 -0400

Maybe some of you can explain the logic behind the following note received from my UK friend Jonathan Berman:

  We've just bought a new colour printer for the office. The instructions included the following important message: "Note: The Starter CD includes a utility to easily copy the HP Deskjet 1120C printer software to 3.5-inch, high-density diskettes. This allows you to use the diskettes to install the software on systems that do not have a CD-ROM drive. See the Printer Software menu in the Starter CD"

Am I missing something obvious here?