Having just survived a hellish weekend due to Hurricane Fran here in North Carolina, I found it interesting that several technological RISKS have only now come to light. The area was clearly unprepared for a disaster of this magnitude, and is now paying the price for the "can't happen here" complacency apparent in the local utilities' failure to take preventive action that could have greatly reduced the suffering now happening here. Specifically, the electric utilities (and, by extension, their customers) have thus far resisted modernization in the form of buried power lines; presumably the rate increases necessary to finance this are anathema to existing customers. The lack of attention to trees growing close to power lines has now borne fruit, so to speak; about 100,000 subscribers have been without electricity for over three days, as of this writing. For what it's worth, I'm a recent transplant here myself, having lived for nineteen years in South Florida, where hurricanes are an established fact of life and building codes are strict enough to persuade most designers to do the Right Things.

The Risks Forum has had much discussion in the past of the engineering of critical and safety systems, and how they should be designed to fail in a "safe" mode. It turns out that this design principle was lost on the people who designed the apartment complex in which I live. This complex contains electronic card-access locks with no manual overrides, and a "security" gate that fails into a "lockdown" mode. This is the sort of "safe" mode that might be appropriate for a prison, but certainly not for the only entrance/exit of a residential community. Had a fire broken out in the wake of the storm, I would very probably not be here to write this.

This complex is also provided with a so-called "security system" that is automatically hooked into each unit's telephone line.
In the event of a power failure, these systems attempt to dial their monitoring stations to call for service. There is apparently no time-out interval for this; these alarms simply seized all affected phone lines and effectively kept them out of service until their backup batteries ran down after eight hours or so. This means, of course, that a power failure also guarantees loss of telephone service (for eight hours, anyway). The RISKS here are all too depressingly obvious. It's a near-miracle that more people did not have to pay with their lives for such an embarrassing lack of foresight.

Dave Schulman, Validation Engineer, Feature Test
Nortel, Inc., 400 Perimeter Park Drive, Morrisville, NC 27560
(919) 905-4844; (919) 905-2549 (FAX)
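The design flaw described above, an autodialer seizing a shared line indefinitely, has a simple remedy: bound each dial attempt and release the line between retries, backing off so the phone remains usable. A minimal sketch of that pattern, with hypothetical `dial` and `release_line` callables standing in for the alarm hardware (nothing here reflects any real alarm firmware):

```python
import time

def report_outage(dial, release_line, max_attempts=5, timeout=30.0, base_delay=60.0):
    """Try to reach the monitoring station without monopolizing the phone line.

    dial(timeout) should return True on success, False otherwise.
    release_line() must free the shared line after every attempt.
    """
    for attempt in range(max_attempts):
        try:
            if dial(timeout):                        # bounded attempt, not an open-ended seizure
                return True
        finally:
            release_line()                           # line is usable again, success or failure
        time.sleep(base_delay * (2 ** attempt))      # exponential backoff between retries
    return False                                     # give up rather than hold the line forever
```

The key property is that `release_line` runs in a `finally` clause, so even a hung or failed attempt cannot leave the line seized, which is exactly the failure mode the alarms above exhibited.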
>From: jmaddaus@gte.com (John S. Maddaus)
>Newsgroups: rec.aviation.military
>Subject: Another missile/airline incident
>Date: Sun, 8 Sep 1996 14:33:04 GMT
>Organization: GTE Labs Inc

The *New Hampshire Sunday News* [8 Sep 1996] reports that at 1:45pm on 29 Aug 1996, American Airlines flight 1170 was flying over Wallops Island, Virginia, en route from San Juan to Boston, when the captain reported (apparently only to the company at the time) "a missile off the right wing". The report has been confirmed by the NTSB, which has assigned an investigator. Apparently, the FAA is investigating on its own as well. The paper goes on to mention the proximity to the Wallops Flight Facility, with nearby Navy installations at Norfolk and Lexington Park. I'm assuming that normal cruise for the 757 would put it out of range of surface-to-air portables, and there is no way to infer the trajectory based on what was said in the paper. However, note the headline "American Airlines Pilot Says Missile Zoomed by His 757", which is not what the quote above relates. [Not surprising. Headlines often have little to do with articles, because they are written by a headline specialist. John's subsequent comments, speculations, and questions have been omitted for RISKS. PGN]

John Maddaus

[Wallops Island has long been a rocket-launching facility. Over 9 years ago, RISKS-04.96 reported the case in which a lightning strike on the launch platform ignited three rockets and accidentally launched two of them. NASA had been intending to test launch capabilities in the presence of lightning storms. PGN]
Regular RISKS readers may remember several earlier articles about this incident that seemed to lay the blame on software bugs. Now we hear a different story entirely, and the cause is alleged to be something completely different. [Note that a single person, Chiaki Ishikawa, has simply reported the sequence as it arose. See RISKS-17.65, -18.18, and -18.41. PGN]

There is a risk that occasional readers or researchers using search engines may come across the archived RISKS articles and use them to prove a point, or merely to sensationalize. They might never find the later articles that set the record straight. This is not a criticism of RISKS, but rather an attribute of any ongoing public discussion on the Internet. There is also a new element to the risk. The resources needed to search massive news archives were, until recently, affordable only to wealthy organizations. Now we can all do it inexpensively, but most of us aren't trained investigators or journalists.

I believe that we should make airplane accidents a special case, and voluntarily withhold public discussion and speculation until the accident report is in. Not forever, but until the reports are in and read. There are several reasons why airplane disasters in particular are exceptional.

a) Early speculations about why the incident occurred are frequently wrong.

b) The actual report, issued after all the physical and other evidence has been examined, will be available within a reasonable time (months to one or two years). There is ample opportunity to challenge the report's conclusions or offer other opinions after its release.

c) Speculations are often highly technical. This may lead non-technical people to dismiss what is actually said as not understandable, and to look only at the headline and the source. RISKS is a respected source, and some people may believe anything they read here is authoritative.

d) The sensational nature of air disasters makes the public and the media hungry for any tidbits.
Technical debates intended for a closed audience aren't likely to stay closed for long.

e) Speculation may cause additional grief for the families of victims and crew or others involved. Reputations can be ruined. Even when the speculations prove true, the full report can include mitigating details that color one's judgement differently than partial information might.

Pilots in the hangar engage in gossip just like anybody else. When an air disaster occurs, you can bet the gossip flows freely. However, it is considered bad form to do so within earshot of laymen. No doubt there are other risks that also deserve sensitive treatment, but to me airplane disasters stand out most clearly.

Dick Mills http://www.albany.net/~dmills

[It is intriguing that there are perhaps a dozen books on the KAL 007 case. It is not surprising that theories proliferate during times when definitive reports are not available, but it is also not surprising that, even with the presence of supposedly definitive reports, different theories continue to propagate indefinitely. The TWA 800 case may become another example. You may note that RISKS has been silent on that case, eschewing speculation; however, there are some rather startling hypotheses floating around. PGN]
The euthanasia-via-computer story in RISKS-18.05 reminds me of an anecdote I was told by a colleague in a previous job. He once worked for a pharmaceuticals firm and was required to find out if a particular candidate molecule had any potential as an anti-anxiety drug. He rigged up a set of cages for a number of rats, each of which could be warned (by a light, or something) and shocked (by wires in the floor of the cage). The idea, as I recall, was to warn the rats, then shock them briefly over a period of time to see if the light alone subsequently caused the same amount of agitation. If the drug was effective, it should make the rats less anxious (move around less) when the light came on. Unfortunately, the software he had written (in interpreted BASIC) had the undesirable feature that, if it stopped due to a runtime error, the output states [*] would remain as they were. During the night, the program hung during the "shock" routine, and when he came in the next day all the rats had been comprehensively electrocuted [*]. [* Rat-etat(s), especially if they were French rats? PGN]
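The general lesson of the anecdote above, that outputs must revert to a safe state when the controlling program dies, is easy to express in languages with structured cleanup. A minimal sketch, with a hypothetical `set_shock` output function standing in for the original interpreted BASIC:

```python
def run_trial(set_shock, trial_steps):
    """Run the trial steps, guaranteeing the shock output is de-energized
    even if a step raises a runtime error (the failure mode in the anecdote)."""
    try:
        for step in trial_steps:
            step()
    finally:
        set_shock(False)   # fail safe: never leave the shock output latched on
```

A hardware watchdog that cuts the output when the program stops petting it would be the more robust fix, since `finally` cannot help if the whole process hangs; the sketch only covers the runtime-error case described above.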
This week I had the frustrating experience of trying to register for a conference using an online form that provided no feedback. I'll omit the name of the conference to protect the guilty. (The conference is web-related, so its organizers really should know better.) The HTML form used the "mailto" action to deliver user input via e-mail rather than feeding it to a CGI program. Not only is "mailto" not supported by all web browsers, but it has the unfortunate feature of not providing any feedback to the user that the form has been submitted. In contrast, well-written CGI transaction handling programs will provide an acknowledgement screen and/or acknowledgement by e-mail (and very well-written ones may provide an encrypted electronic "receipt" as well). The risks of online transactions that lack immediate feedback? User confusion and anxiety over whether the form was really submitted and the transaction really took place; redundant submission of forms as users re-submit in expectation of feedback; multiple transactions (e.g., multiple charges to the user's credit card) unless the back-end system has a bulletproof way to detect duplicate submissions; possible *non*-execution of transactions if the user's web browser really didn't catch the click on the "Submit" button and the user has been trained to expect no feedback; and wasted time and money as the user gives up on the online transaction and tries to straighten it out by telephone. I do have some sympathy for small organizations trying to carry out transactions on the web without a prohibitive investment of resources. The marketplace for online commerce software is extremely chaotic right now, and do-it-yourself CGI programming is considerably more complex than learning to slap HTML tags into a document. 
But as the web moves from being a publishing system to being a more comprehensive system for commerce and other kinds of transactions, it is important that the people who set up online transaction systems recognize that their attention to reliability and user interface design must increase accordingly. Prentiss Riddle riddle@rice.edu
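One "bulletproof" way to detect the duplicate submissions mentioned above is an idempotency token: each rendered form carries a unique token, and the back end executes a given token at most once, replaying the stored result on any re-submission. A minimal in-memory sketch (a real CGI back end would persist the tokens; `charge_card` is a hypothetical transaction callable):

```python
import uuid

class TransactionHandler:
    def __init__(self):
        self._seen = {}                      # token -> result of the first execution

    def new_form_token(self):
        """Generate a token to embed in the form as a hidden field."""
        return uuid.uuid4().hex

    def submit(self, token, charge_card):
        """Execute the transaction once; replay the stored result thereafter."""
        if token in self._seen:
            return self._seen[token]         # duplicate click: no second charge
        result = charge_card()               # e.g., bill the credit card
        self._seen[token] = result
        return result
```

With this scheme an impatient user who re-submits the form simply gets the same acknowledgement (or "receipt") back, addressing the multiple-charge risk described above.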
The following happened to me about a year ago: Wanting to ftp a file to a remote server, I accidentally used a telnet client instead of an ftp client. Confused by the presented interface, I managed to type in some commands that crashed the ftp server on the remote site, dumping a core file. A little later, I found the core file in my directory, and for no other reason than sheer boredom, I loaded it into a text editor. After removing all the control characters, I suddenly found myself looking at a string that was very familiar to me: my password. It turned out that the password I had sent was stored in a variable that was allowed to live too long. (It got fixed quickly.) I don't think I need to explain the risks.

Abigail

[This is a very old risk, but worth reiterating for younger readers. PGN]
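The underlying defect, a secret kept in memory longer than needed and then surfacing in a core dump, is still current. One hedge is to hold the secret in a mutable buffer and overwrite it as soon as it has been used. A sketch only: in Python, immutable strings cannot be scrubbed this way, and the interpreter may still keep copies elsewhere, so this reduces rather than eliminates the exposure:

```python
def with_secret(get_secret_bytes, use):
    """Fetch a secret into a mutable buffer, use it, then overwrite it,
    so that a later core dump is less likely to contain the plaintext."""
    buf = bytearray(get_secret_bytes())
    try:
        return use(buf)                      # caller must not stash a copy
    finally:
        for i in range(len(buf)):
            buf[i] = 0                       # scrub the working copy in place
```

In C, the same idea is usually expressed with `explicit_bzero` (or `memset` plus a compiler barrier) immediately after the password has been consumed.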
A British insurance company is reported to be providing a new insurance policy. They have been offering an "Alien Impregnation Policy", which is said to have sold 300 policies in a week: fired up by their success, they are now offering a "Virgin Birth" policy, on the grounds that a number of women are worried about the millennium. However, religious authorities are said to be cool towards the idea. (Since one of the larger organised religions depends on just such a manifestation of the Holy Spirit, this seems on the face of it inconsistent.) This makes the Y2K computer-system date risks appear almost trivial. The market may be limited however — a straw poll of female colleagues indicated that this was the last thing on their minds at the moment. [It will be interesting to see what evidence will be required for insurance claims on an Alien Impregnation Policy. I presume this will be a classic challenge for computerized DNA matching, and a glorious opportunity to become famous for the lab technician who first identifies alien DNA, or its equivalent! PGN]
According to today's on-line Philadelphia Inquirer (see `http://www.phillynews.com/inquirer/96/Sep/06/business/AOL06.htm'), U.S. District Court Judge Charles R. Weiner filed a temporary restraining order against AOL, ordering it to stop blocking junk e-mail sent by Cyber Promotions Inc. [The trial is scheduled for mid-November.]

Brian Clapper bmc@telebase.com http://www.netaxs.com/~bmc/
[...] Undoubtedly, the plaintiffs cited _Consolidated Edison v. Public Service Commission_, 447 US 530 (1980) in the injunction. In that Supreme Court decision, the Court said that our right not to be bothered does not justify limiting First Amendment rights. The solution is "[simple as] transferring [it] from envelope to wastebasket." [_Sex, Laws and Cyberspace_, 1996, p. 36]

The obvious counterpoint is that the US Postal Service is a "universal" service, legally mandated to attempt delivery to all residents. With few exceptions (based on geography), receiving a letter requires no prior setup by the individual. Nor does the individual have to pay a monthly charge for access to his mailbox (in contrast to the _rental_ fee for boxes located in a post office). Nor does the individual have to sign contractual agreements with the postal service regarding issues such as "appropriate use" of the mail. (There are postal laws, but many "appropriate use" restrictions are more restrictive.)

In contrast, online service providers require a "setup" before they'll accept e-mail for an individual. They charge for access to the mail boxes. And nearly all providers require users to agree to various "conditions of use" before the user can access his mail box. These differences may not be enough to directly challenge the court's reasoning in the prior case, but they are enough to challenge the assumption that the matter has been definitively resolved.

In 1980 the USPS was the sole mail-delivery agent for the vast majority of Americans; it was also (until recently) literally a branch of the federal government. Today we have e-mail, fax/modems, FedEx, etc., all independent of direct government involvement. As a wild-eyed extrapolation, this case may kill the attempts by the USPS to form its own e-mail service.
If it's ultimately decided that private ISPs can block spammers while the USPS (as a quasi-government agency) can't, few people would choose the spammed version if they had an alternative. Bear Giles bear@indra.com
From what I gathered while scanning an article at lunch today, the spamming company had _1.5 million_ AOL e-mail addresses. Assuming a typical message of 2k or so and a typical target of only 1% of the AOL subscriber base, the effects on AOL mailers could still best be described as "mail bombing" (15,000 messages totaling 30 MB). Assuming the typical user has 2-3 such messages in his inbox at any time, the total amount of disk space wasted on spam is 6-9 _Giga_bytes, at a minimum. In practice, with a distributed architecture, the additional disk space (used or unused) required might easily total 20+ GB.

Another important factor to consider is the cost of backing up the extra material in users' mailboxes. The cost per AOL user is still modest, but why is it borne by AOL users, and not by the party sending the unsolicited mail? (The sender of physical mail is always responsible for postage.) And what happens when it's not one company sending mail to 1.5 million subscribers, but a thousand direct marketers?

Bear Giles bear@indra.com
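The figures above follow from straightforward arithmetic (2 KB per message, 1% of the 1.5 million addresses hit per mailing, 2-3 spams resident per inbox, decimal units as in the original):

```python
ADDRESSES = 1_500_000        # AOL addresses held by the spammer
MSG_KB = 2                   # typical spam size, per the estimate above

# One mailing hitting 1% of the address list:
msgs_per_mailing = int(ADDRESSES * 0.01)           # 15,000 messages
mailing_mb = msgs_per_mailing * MSG_KB // 1000     # 30 MB per mailing

# Steady-state disk cost: every subscriber holding 2-3 such messages:
low_gb = ADDRESSES * 2 * MSG_KB // 1_000_000       # 6 GB
high_gb = ADDRESSES * 3 * MSG_KB // 1_000_000      # 9 GB
```

The 20+ GB "in practice" figure above is a separate estimate of replication overhead in a distributed mail architecture, not derivable from these inputs alone.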
Perhaps this subject is better thought of as trying to make a silk purse out of a sow's ear? If you are going to put your reputation on (the) line by contracting with an Internet Service Provider, then it behooves you to have a service-level agreement with them. Assuming that they are going to provide the service that you silently expect does not seem like a wise business decision to me.

Pete Weiss at Penn State
Greg Lindahl <Greg-Lindahl@deshaw.com> writes about auto-responders that bounce messages with Precedence: {bulk,junk} and ignore the Errors-To header field, and admits that "writing correct mail handling programs is complex." In fact, the situation is even more complex than that. Both fields are undefined, nonstandard, and can cause incorrect mail handling.

Use of the Precedence field varies widely. It was originally used by sendmail to determine queueing priority and, more recently, to determine whether a nondelivery report returns the subject content. Some vacation programs use it (among other heuristics) to determine whether to respond to a message. Some X.400 gateways use it to encode the X.400 Priority field, and return as nondeliverable any message that contains an unknown Precedence keyword. Some mailing-list expanders use it as a means to prevent loops between peered lists, and therefore refuse to forward any message with certain Precedence values to the list membership. As a result, there is no value for Precedence that is recognized by vacation which does not result in mail delivery failure for some set of users.

Use of Errors-To violates a long-established (since 1980) standard that indicates where to send nondelivery reports. Within SMTP, they go to the MAIL FROM address, and mailing lists are required to set the MAIL FROM address to their list maintainer when distributing mail to the list membership. Outside of SMTP, nondelivery reports go to the address in the Return-Path field, which is set from MAIL FROM when the mail leaves the SMTP world. Some lists set Errors-To without setting the MAIL FROM address, some set only MAIL FROM, and some set both. MTAs that comply with the standards ignore Errors-To, but others use it to override MAIL FROM. Still others send nondelivery reports to both addresses, which in the worst case can cause a form of sorcerer's-apprentice syndrome.
In fact, neither field belongs in the message header while the message is "on the wire". Queueing priority and error-return addresses are both the concern of the message-transport layer (e.g., SMTP), while the message header is intended only for use by the user agent. MTAs are supposed to ignore message headers, but instead these fields end up affecting whether a message gets delivered. And since these fields appear visibly in much list traffic, there is a widely held perception that they are correct protocol. Plain-text protocols are easy to implement and debug, but they create the RISK that users will think they understand the protocol, and attempt to implement it without reading the specifications!

Keith Moore http://www.cs.utk.edu/~moore/
Computer Science Dept. / Univ of Tenn / 107 Ayres Hall / Knoxville TN 37996

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; OR ABRIDGING THE FREEDOM OF SPEECH, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances. - Amendment I, US Constitution (emphasis mine)
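The conservative heuristics discussed above (never auto-respond to bulk/junk traffic, and take the error-return address from the transport envelope as recorded in Return-Path rather than from the nonstandard Errors-To field) can be sketched as follows. This is a simplification under stated assumptions: real vacation programs apply many more checks (loop counters, per-sender rate limits, the later Auto-Submitted convention), and it operates on a parsed `email.message.Message` object:

```python
from email.message import Message

def should_auto_respond(msg):
    """Decide whether a vacation-style auto-responder may safely reply."""
    precedence = (msg.get('Precedence') or '').strip().lower()
    if precedence in ('bulk', 'junk', 'list'):
        return False                  # never answer list or bulk traffic
    if msg.get('Auto-Submitted', 'no').lower() != 'no':
        return False                  # never answer another robot
    # Reply target is the envelope sender preserved in Return-Path,
    # NOT the nonstandard Errors-To field discussed above.
    target = msg.get('Return-Path')
    return target not in (None, '<>')  # '<>' marks nondelivery reports
```

Declining to reply when Return-Path is empty (`<>`) is what prevents the sorcerer's-apprentice loop between two auto-responders bouncing nondelivery reports at each other.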
>Windows 95 allows you to specify one password on the system... and changing
>the password on the Win95 screensaver does _not_ require verification with
>any system-wide or user-specific passwords.

Having used this OS for about a year now, I find that it does not have a particularly secure setup. If Windows is waiting for a user login, there appear to be two ways around it:

1. Click OK. When Windows again prompts for the password, click Cancel. This seems to have the effect of logging the user in at the highest possible level if there are multiple users of the system; otherwise the user is logged in as the common user. From here it is simple enough to run regedit, causing all manner of havoc.

2. Press the Windows key, or Ctrl-Esc if there isn't one. This brings up the task manager, again with the ability to run regedit from the command line.

The only way I have seen around this is to disable the login box (change the primary login to Windows), which will circumvent the appearance of the login dialogue. Unfortunately, this means there can be only one user, and if there is only one user, it isn't sensible to remove Registry editing tools. For optimum security, enable passwords at the BIOS level, and switch the machine off when you leave it. Oh, and don't use Win3.1x screen savers; they can be skipped with Ctrl-Alt-Del.

Stewart Nolan
Microsquish Stealth Bug Insertion Technology

While I'm kind of dismayed by the "if you can't innovate, litigate" philosophy so often applied to Microsoft, this is a particularly lethal little gem from their MFC team. In many development environments, this problem will almost certainly guarantee that your app will crash the first time it is executed on customer machines, but the crash will only happen once, and will mystify tech support.

The problem arises in the use of property pages, otherwise known as tabbed notebook dialogs, as they are designed and implemented in the Visual C++/MFC environment. VC allows the developer to use interactive resource editors to design the property pages, but IT ONLY DOES THE FINAL STEP OF THE PROCESS WHEN EXECUTING THE APPLICATION, not in the development cycle. This step, in which the style of the page is changed in the resource, will cause most machines to abort the software as it attempts to change a read-only resource.

What really concerns me from a RISKS perspective is that the traditional development model is to release an exe to the test/QA group, and then to ship this exe to production when it receives a blessing from test/QA. This means that the exe shipped to production IS NOT THE SAME AS THE ONE THAT WAS TESTED, because the tested exe has been executed, and the one sent to production has not. Hence, every customer who launches the app for the first time will be rewarded with a crash, which can never be reproduced. Yes, you can get around it with careful use of filtered exceptions. The problem is that this is rather insidious, and outside the realm of thinking of most developers, who view an exe as the final product of the development process. In this case, the final product is an executed exe.

Personally, I feel this is a lot like the Monty Python "Frog chocolates" sketch. VC too should have a great big warning sticker on it saying "An EXE from the linker is NOT A PRODUCT. YOU MUST EXECUTE IT TO MAKE A PRODUCT."
---- ORIGINAL MICROSOFT DOCUMENTATION ----

Applies to class CPropertyPage, specifically the DoModal function that causes the page to be presented on the screen.

    virtual int DoModal( );

[... standard usage documentation deleted...]

[HERE IS THE INTERESTING BIT!!!]

Note: The first time a property page is created from its corresponding dialog resource, it may cause a first-chance exception. This is a result of the property page changing the style of the dialog resource to the required style prior to creating the page. Because resources are generally read-only, this causes an exception. The exception is handled by the system, and a copy of the modified resource is made automatically by the system. The first-chance exception can thus be ignored.

Since this exception must be handled by the operating system, do not wrap calls to CPropertySheet::DoModal with a C++ try/catch block in which the catch handles all exceptions, for example, catch (...). This will handle the exception intended for the operating system, causing unpredictable behavior. Using C++ exception handling with specific exception types, or using structured exception handling where the Access Violation exception is passed through to the operating system, is safe, however.
7th USENIX Security Symposium
26-29 January 1998, Marriott Hotel, San Antonio, Texas

Sponsored by the USENIX Association, the UNIX and Advanced Computing Systems Professional and Technical Association, in cooperation with the CERT Coordination Center.

Papers due: 9 September 1997
Program Chair: Avi Rubin, Bellcore
Conference home page: <http://www.usenix.org/sec/sec98.html>
Detailed guidelines for submission via e-mail to <securityauthors@usenix.org>, or telephone the USENIX Association office at (510) 528-8649.

USENIX Conference Office
22672 Lambert Street, Suite 613
Lake Forest, CA USA 92630
Phone: (714) 588-8649; Fax: (714) 588-9706
E-mail: <conference@usenix.org>