Please try the URL privacy information feature, enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of the Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know whether you find this useful. As a RISKS reader, you will probably not be surprised by what is revealed…
Bay Area Rapid Transit (BART) had another bad day on 19 Dec 1996. At 7am, a ghost train appeared at the San Francisco 24th Street station, requiring manual operation through that station. Independently, three trains had to be taken out of service because of mechanical problems. All of this caused a 15-minute delay systemwide. Later, a computer crash caused delays of up to 30 minutes systemwide, from 5:50pm to 9:45pm.

BART also had a serious power-cable outage in the transbay tunnel on 12 Dec 1996 (not previously reported here by itself, because of its lesser relevance to the computer systems -- although it certainly had a major effect on the system as a whole). That cable problem was traced to sloppy maintenance after the cable was damaged during its installation in the early 1970s. BART now observes that, even prior to the 12 Dec outage, an overall cable overhaul had been considered an urgent step in upgrading the aging infrastructure.
Reading some back issues of RISKS, I realised how many of the incidents and problems reported are due to systems (embedded and otherwise) getting old. The "system" (made up of the people who use it, as well as all the computer "systems") has to be able to cope with the oldest and newest technology in it, and with breakdowns in all the (non-upgradable) component systems. Given the pace of change in technology (try buying a 1-megabyte SIMM), we are going to find ourselves more and more unable to maintain the systems we build.

Current examples include: air-traffic-control computers kept going long after the manufacturer can officially no longer supply spares, by wizened (i.e., over-45) hardware engineers with soldering irons and a bag of those big fat 20-watt resistors; car "computers" which tell you the car's average speed over the last hour for the first four years of the car's life, and then flash "88 8888" at you unless you spend $1250 for a replacement unit; a four-year-old PC for which you can't buy a new disk drive (the BIOS can't handle anything bigger than 512 MB; see the arithmetic sketch below).

Future examples: anything with a smart card in it; the often-documented problem of reading information stored on 30-year-"guaranteed"-lifetime optical disks in 30 years' time, when the disk might be readable but no computer system can connect to the drive; and of course the year 2000.

People are very good at adapting to change (once they get over the emotional hurdle of accepting that it's going to happen) and at working within the limitations of old and new; just look at any home with items as diverse as a TV and a toilet. Computer systems aren't at all good at it, and anything with firmware is just disastrous.

Here's a little test for readers who work in systems specification. Have you ever seen a requirements, functional, or design specification for any computer application system that specified not just the hoped-for start date for production use of the system (you know, the one where you have to subtract a year for political reasons, even though you know there's no way it'll be ready in time), but also the END date, after which the system would be taken out of use and replaced with something else? For advanced students: what would have been the effect on any given project of doing so?

Nick Brown, Strasbourg, France

  [The juxtaposition of this item with the BART item is of course
  purely coincidental(!).  PGN]
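A note on the 512-MB figure above: the widely cited barrier comes from combining the BIOS INT 13h disk interface (10-bit cylinder numbers, sector numbers 1 to 63) with the IDE/ATA limit of 16 heads. A minimal Python check of that arithmetic follows, as an editorial sketch; whether the poster's PC hit exactly this barrier is an assumption.

  # The classic BIOS/IDE "504 MiB" addressing barrier, multiplied out.
  cylinders = 1024         # INT 13h carries only 10 bits of cylinder number
  heads = 16               # the ATA interface allows at most 16 heads
  sectors_per_track = 63   # INT 13h sector numbers run from 1 to 63
  bytes_per_sector = 512

  capacity = cylinders * heads * sectors_per_track * bytes_per_sector
  print(capacity, "bytes")        # 528482304
  print(capacity / 2**20, "MiB")  # 504.0 -- roughly the "512 MB" above

Anything larger simply has no address in that scheme, so the BIOS cannot see it.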
This week's L.A. Weekly (Vol. 19, No. 4, page 15) has a story about inaccuracies in the Los Angeles Police Department's Police Arrest and Crime Management Information System (PACMIS). The following is my summary of some RISKy aspects of the story.

The article focuses on a doughnut shop in Van Nuys. Because the shop is the establishment closest to the street in a mini-mall, many nearby incidents bear its address in police records. That may be useful for getting officers to the scene quickly, but it resulted in the shop being described as a "disorderly establishment" because of the number of crimes recorded at its address, the majority of which had no connection with the doughnut shop.

The article paints PACMIS records as being very inaccurate. The examples cited include multiple entries for the same incident, and an entry that described a nearby jay-walking citation as a "felony prostitution" arrest on the premises of the doughnut shop. PACMIS is not intended to be used for "evidentiary purposes", but the story claims PACMIS reports are used to influence zoning-board hearings and other nuisance-abatement measures. The story doesn't describe what PACMIS's original purpose was.

It sounds to me like a typical instance of the risks of inaccurate data, and of using data for purposes other than those for which it was collected.

Jeremy Leader, Tujunga, CA, USA
The Superintendent of the Palisades Park, NJ, school system had to explain to the School Board an $875 bill from a 16-year-old "hacker" who was hired to break into the school's computer system. The 19 Aug 1996 meeting was reported by the AP and carried by most national newspapers, such as the *Richmond Times-Dispatch*, 22 Aug 1996, "Hacker frees transcripts", page A16.

It seems that some students or former students, whose plans had changed, desperately needed transcripts sent off over the summer. The computer and database containing the students' records were locked via passwords, and no one who knew the passwords could be reached. The principal and former vice-principal were on vacation and unreachable. The school employee who had the codes had recently been incapacitated by a stroke. Members of the guidance department were furloughed for the summer because of the school's budget crunch and could not be found. The school's computer coordinator therefore asked the 16-year-old "computer whiz" to break into the system and unlock the files so the various transcripts could be printed. He did, and billed the school system $25 an hour for 35 hours of work.

I believe this story shows the risks of security, password access, and encryption systems, and the need for built-in "key recovery" architectures. Businesses and other organizations cannot be put at risk when critical information cannot be unlocked because of an employee's absence, incapacitation, or lost keys and passwords. Companies like AccessData, Orem, UT, do quite a lot of business unlocking password-protected files. Unfortunately, neither their techniques nor those used by the "hacker" would be effective against systems using modern security and cryptography.

"Key recovery" or "key archival" has long been recognized as a necessary function and service of security and cryptographic systems. While it is true that, in the U.S., a court could order one of the archived keys/passwords released to the court, there are legal and procedural safeguards in place. There are far greater risks, such as losing the key/password and having critical information rendered undecipherable. In the case above, the future livelihoods and careers of the students who immediately needed their transcripts would have been put at risk if the files could not have been unlocked.

Robert J. Perillo, CCP   Perillo@dockmaster.ncsc.mil

  [Reminder: RISKS readers will undoubtedly be able to make the
  distinction between "key recovery" for stored data and "key
  recovery/archive/escrow/or-whatever" for communications.  The
  situations are quite different.  RISKS readers will of course also be
  familiar with the potential risks inherent in the presence of any
  mechanism that allows key recovery by an untrustworthy third party.  PGN]
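To make the stored-data side of "key archival" concrete, here is a minimal Python sketch using the `cryptography' package's Fernet recipe. The scheme and every name in it are illustrative assumptions, not anything the school or any vendor actually ran: a random data key encrypts the records, and that key is archived wrapped under both the user's key and an organizational recovery key, so either party can unlock the data.

  # Sketch: key archival for stored data. Illustrative only.
  from cryptography.fernet import Fernet

  user_key = Fernet.generate_key()    # in practice derived from a passphrase
  escrow_key = Fernet.generate_key()  # held by the organization, offline

  # A fresh data key encrypts the actual records.
  data_key = Fernet.generate_key()
  ciphertext = Fernet(data_key).encrypt(b"student transcript ...")

  # Archive two independently decryptable copies of the data key.
  wrapped_for_user = Fernet(user_key).encrypt(data_key)
  wrapped_for_escrow = Fernet(escrow_key).encrypt(data_key)

  # Recovery path when the user is unreachable: unwrap with the escrow key.
  recovered = Fernet(escrow_key).decrypt(wrapped_for_escrow)
  assert Fernet(recovered).decrypt(ciphertext) == b"student transcript ..."

The point of the design is that a lost passphrase no longer means lost data, while the recovery key can sit offline under whatever procedural safeguards the organization requires.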
Much British media panic has been devoted to the recent conviction of an "ATM gang" of high ambition. A collection of high-grade villains, with impeccable pedigrees in robbery, gangsterism, and drug dealing over 30 years, compelled a software expert who was in prison for attacking his wife and child to help them in their enterprise. The man revealed his role to a prison chaplain and subsequently acted as an undercover informer on his release.

The plot was to tap telephone lines carrying encrypted ATM card details to and from the banks, especially the Visa and Access (Mastercard's name in the UK) centres in Essex. Having decrypted the information, the gang would manufacture fake ATM cards bearing genuine cardholders' details from the 140,000 blanks they had bought. Teams of associates armed with the cards would then assault thousands of ATMs in the UK and abroad.

The media response, based on the claims of the prosecution, said the plot had "put the entire banking system in danger". Not only that, but prosecution estimates of the potential proceeds were put at about £800 million, with chief counsel adding that in fact "the sky was the limit".

The claims of the prosecution were alarmist, even if the conspirators were in deadly earnest, and bear the hallmarks of media fantasies about the true level of risk in bank security (see suspicions about the Sunday Times reporting on this in RISKS-18.17). The logistics alone of the planned ATM raids would have been horrendous. It would have been extremely difficult for a "raider" to use one ATM card after another at a single machine without raising immediate suspicion. Typically, ATM cards have withdrawal limits no higher than £200 in the UK, often much lower, depending on the status of the account. If one imagined that one person could cover, say, 50 machines in a (very busy) day and got £200 from each machine, he would clear £10,000 in one day. If, say, 100 gang members could withdraw that much each per day (a generous estimate), it would take them 800 days of continuous raiding to reach the prosecution figure of £800 million. (A worked check of this arithmetic appears after this item.) It seems highly probable that, within a short time of the raids beginning, alarm bells would go off throughout the banking system, and ATMs would either be closed down, put under surveillance, or given new, lower limits on all cards by the banks. (There would be a loss of public confidence in ATMs, but hardly a collapse of the banking system.) Security cannot be good in a gang that big, and the chances of one of them being caught and spilling the beans, especially after the tremendous hue and cry that would have gone up, would have been very high.

Code-breaking gangsters?

But could they have got that far? Newspaper reports failed to emphasise the all-important question of whether the encrypted information could be decoded. Defence counsel were scathing about the possibilities and called experts to testify that it was effectively impossible. One of the defence barristers said: "The basic method was fatally flawed ... because the encryption system used by the banks is so secure that no current technology available in the world, not even the combined expertise of the world's leading scientists, is capable of breaking it." The judge appeared to accept this, with a proviso. Addressing the defendants, he said in sentencing them: "It was not possible for you, with the equipment and expertise then at your disposal, to carry out this fraud to a successful conclusion.
There is, in particular, no evidence that the cards recovered by the police would then work or that the codes had then been broken. However, beyond that I'm not prepared to go. I do not believe it is necessary to go further but for the avoidance of doubt I make it clear that it would, in my judgment, be irresponsible and wrong on the basis of the information before me to accept any additional assurances along the lines that this is a fraud that no one could ever commit."

Lawyers being what they are, the judge could not exclude the possibility that the decryption was possible, even though the remoteness of that possibility does not seem to have struck home, particularly considering that the gang's only computer expert was working against them. The gang's expert claimed no expertise in cryptography, yet said in evidence that there had been a successful decryption dry run. This was not corroborated elsewhere, and the judge did not accept it.

The case in court did not, as a result, measure up to the headlines. In fact, the prosecution had decided from the start to charge the men only with conspiracy to steal, which carries a maximum sentence of seven years, instead of conspiracy to defraud, which carries a maximum of 14 years. For the latter charge to succeed, the Crown would have had to convince the jury that the defendants had "a reasonable chance of success". Despite the gang spending some £100,000 on equipment, making numerous clandestine visits to British Telecom telephone exchanges, and suborning about three BT engineers, the prosecution realised such a charge would fail.

The judge kept his head and, in the end, the ringleader was sentenced to only five years in prison, some subordinates to four and three years, and the man whose heavily guarded country home was the headquarters for the plot got only a two-year suspended sentence. It is worth remembering that sentences reflect the convicted person's criminal record, and the "form" of these men was awesome.

Computerised fraud on a gigantic scale has been a major focus of media panic and large-scale exaggeration in the past. The criminal community, it seems, was guilty of believing everything it read in the papers.
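A back-of-the-envelope check of the logistics arithmetic in the item above, as a short Python sketch. All figures come from the item's own stated assumptions, not from the trial record:

  # Editorial sanity check of the ATM-raid logistics estimate.
  limit_per_card = 200        # assumed GBP withdrawal limit per card/machine
  machines_per_day = 50       # machines one raider might cover in a busy day
  raiders = 100               # "generous estimate" of usable gang members
  target = 800_000_000        # the prosecution's GBP 800 million figure

  per_raider_per_day = limit_per_card * machines_per_day  # GBP 10,000
  gang_per_day = per_raider_per_day * raiders             # GBP 1,000,000
  print(target / gang_per_day)  # 800.0 days of continuous, undetected raiding

More than two years of uninterrupted raiding, with no alarm ever raised, is what the £800 million figure quietly assumes.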
In a copyrighted story dated 19 Dec, Rex Nutting of TechWire reports that the U.S. Justice Department is considering new rules to restrict federal parolees' access to computers and the Internet. Restrictions being considered include "requiring parolees to keep a daily record of their computer use, requiring parolees to agree to unannounced inspections of their computers, banning the use of private or public computer networks without prior permission and banning the possession or use of data encryption programs." Lawyers and law-enforcement experts termed the new rules "silly" and "unenforceable".

Regardless of these opinions, I attach a more sinister meaning to this proposal. The U.S. Justice Department is reinforcing a prevalent government attitude that private citizens use the Internet primarily for criminal purposes. For some reason, it is all right for large corporations or organizations to surf the net, and also to use encryption to protect their data and financial transactions. But the Justice Department rules imply that private citizens doing the same thing must have something to hide, or are searching for illegal materials -- bombs, pornography, electronic vandalism methods, etc. To my knowledge, parolees are not required to report visits to their local libraries, where much of the same information can also be found. Perhaps we need laws requiring that borderline institutions such as libraries not be located in high-crime areas or near prisons...
Here's some extremely alarming information made available by Dan Farmer. It gives a good breakdown of the sites that were sampled and assessed, and of how many of them were vulnerable or misconfigured. I'm sure these sites probably didn't use SATAN (http://www.fish.com) or ISS (http://www.iss.net) to secure themselves. 8-) The survey is available at http://www.trouble.org/survey

P.S. In light of security experts conducting these surveys, I'd recommend using tools like RealSecure and Courtney to detect when you unknowingly become a statistic in these surveys. ... I guess it is accepted practice to scan at will as long as you do not break in?

email@example.com Christopher William Klaus, Internet Security Systems, Inc., Ste. 660, 41 Perimeter Center East, Atlanta, GA 30346 (770)395-0150
It's no secret that PC hardware manufacturers modify machines continuously without changing model numbers, making it very hard for end users to maintain consistent configurations. These changes usually involve swapping one model of peripheral for another or one brand of chip for another, and less commonly changing board layouts. They are made to reduce manufacturing cost, improve reliability, and/or deal with parts availability.

According to the December 1996 issue of Byte (p. 28), at least one large PC vendor recently made an incompatible BIOS change without changing the BIOS revision label displayed at boot time. The problem was discovered because one machine ran Windows NT while an ostensibly identical machine wouldn't. The RISKS of poor configuration management, driven by the industry's push for reduced cost and increased performance, claim another victim.

  [As an aside, software vendors have also taken to making changes
  without changing the version number, in a process known as a "blind
  update".  JJE]

  [It is the next step in the process after the "blind date".  PGN]
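If the displayed revision label cannot be trusted, one hedged workaround is to fingerprint the firmware image itself and compare fingerprints across supposedly identical machines. A Python sketch follows, assuming you can first dump the BIOS image to a file with some vendor utility; the dump step itself is outside the sketch:

  # Sketch: detect silent firmware changes by hashing a dumped BIOS image
  # rather than trusting the revision string shown at boot.
  import hashlib
  import sys

  def fingerprint(path):
      """Return the SHA-256 hex digest of the file at `path`."""
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(65536), b""):
              h.update(chunk)
      return h.hexdigest()

  if __name__ == "__main__":
      # usage: python fingerprint.py bios_dump.bin
      print(fingerprint(sys.argv[1]))

Two machines whose dumps hash differently are not identically configured, whatever the boot screen says.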
This is about a "feature" in WinWord 6 that a colleague stumbled across. The incident shows the full arrogance that MS products exhibit when dealing with situations with ambiguous semantics.

The said colleague's job was to prepare monthly reports on estimated and actual effort for some projects' work packages. For the first few months he wrote these reports using the WinWord installed on his laptop. Recently he switched to using the WinWord installed on a central NT server, using WinCenter to access it remotely from his Sun workstation.

As is common practice with such recurring reports, he opened the previous month's report, intending to change the figures and save it under a new name. The figures were listed in a table, and the sum of each column was calculated via the "Table Formula Sum(above)" function. Before he started changing the numbers, he accidentally pressed F9 ("Update Fields") while the cursor happened to be in the sum field. Can you imagine how big his surprise was when the column sum suddenly CHANGED before his eyes? Remember, he hadn't changed anything! And WITHOUT WARNING! And to add insult to injury, the sum was WRONG!

Avid MS "fans" surely know by now what was wrong: the LANGUAGE settings of the two Windows installations differed! And thus the symbol for the decimal point was `.' on the one machine and `,' on the other. The arrogance? NO WARNING! And the sum function does not even just truncate the numbers; no, it somehow interprets the comma or period as a list separator, because summing "1,4" and "2,3" gives... TADAH! "10"!!! (It used to give "3,7" under the other language settings. See the sketch below for a reconstruction of both interpretations.)

The RISK? I really wonder how many reports are out there with wrong numbers because some poor Dilbert-type employee typed them on one computer and the boss printed them from another computer with different language settings. Maybe someone even got demoted or fired "because s/he reported the wrong numbers"!

Also, WinWord is the only editor I know where you can open a file, close it again, and be asked whether you want to save the changes!! (Another example of built-in arrogance: I tried to import some ASCII-text records into MS Access. Instead of asking about the format, Access silently tried to interpret the text itself and finally reported "1746 errors found"!)

So the final lesson learned is: >> MS documents ARE NOT PORTABLE! << (And another example of non-portability: MS Project stores the workdays-per-week number as a PER-USER setting, not per-project! Exchange a project with a colleague and BINGO! Everything gets rescheduled.) There are surely lots of other examples; maybe others would like to share their "experiences" too.

BTW, I am sure that some of these "features" are or will get fixed. But I am also sure that nobody will receive a bugfix or upgrade for free, as is common practice with other software companies. Instead you will be prompted to "upgrade to Windows97 and WinOffice2000 for a real cheap $999.99 now and your problems will be solved". Sweet arrogance!

My feelings are best reflected by this signature found on the Net: "Microsoft is not the ANSWER. Microsoft is the QUESTION, and the ANSWER is NO!"

Roland

PS: Sorry if this sounds very emotional, but I have already wasted enough precious time with things like these.

Roland.Giersig@aut.alcatel.at (speaking only to, err, for myself) ALCATEL Austria, Scheydgasse 41, A-1210 WIEN. Phone: +43-1-27722-3755
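The "10" result above is easy to reconstruct. The following Python sketch mimics the two plausible interpretations of the cells "1,4" and "2,3"; it is an editorial reconstruction of the behaviour described, not Microsoft's actual field-code logic:

  # Reconstructing both readings of the table cells "1,4" and "2,3".
  from decimal import Decimal

  cells = ["1,4", "2,3"]

  # Comma-as-decimal-separator locale: the intended reading.
  with_decimal_comma = sum(Decimal(c.replace(",", ".")) for c in cells)
  print(with_decimal_comma)  # 3.7 -- the correct column sum

  # Period-as-decimal-separator locale: the comma now looks like a list
  # separator, so every comma-separated token gets summed individually.
  as_list = sum(int(tok) for c in cells for tok in c.split(","))
  print(as_list)             # 10 -- the "TADAH!" result (1+4+2+3)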
> Making Netscape's cookies file a symbolic link to /dev/null is also an
> effective way to disable cookies under UNIX.

There's been a lot of talk recently about disabling cookie logs in Netscape; there's a simpler (undocumented?) way of doing this you might want to mention in a future issue. In the Netscape preferences file there is an option "ACCEPT_COOKIE" that is set to "0" if you want to accept all cookies, and to "1" to prompt the user for every new cookie. Setting it to "2" doesn't prompt the user and in fact disables cookies altogether.

Mark Cox, Technical Director, UK Web ------ http://www.ukweb.com/~mark/
Latest news on the Apache Web Server ------- http://www.apacheweek.com/
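For the record, both workarounds can be applied mechanically. A Python sketch follows; the file locations and the exact ACCEPT_COOKIE line format are assumptions, since they varied across Netscape versions and platforms:

  import os

  # Workaround 1: set ACCEPT_COOKIE to 2 in the preferences file.
  prefs = os.path.expanduser("~/.netscape/preferences")  # assumed location
  with open(prefs) as f:
      lines = [l for l in f.read().splitlines()
               if not l.startswith("ACCEPT_COOKIE")]
  lines.append("ACCEPT_COOKIE:\t2")  # 2 = silently refuse all cookies
  with open(prefs, "w") as f:
      f.write("\n".join(lines) + "\n")

  # Workaround 2 (UNIX): the /dev/null trick from the quoted posting.
  cookies = os.path.expanduser("~/.netscape/cookies")    # assumed location
  if os.path.lexists(cookies):
      os.remove(cookies)
  os.symlink("/dev/null", cookies)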
I got e-mail from _many_ people pointing out:

 o This bug in phf and other sample CGI programs that come with the NCSA httpd distribution has been known since March 1996.

 o It allows many other tricks in addition to grabbing the /etc/passwd file. If you have one of the buggy CGI programs installed, then intruders can use it to do essentially anything that can be done by whatever userid your httpd server runs under.

The degree of tact ranged from courteous to downright rude. Some people implied that I must be a grossly incompetent webmaster not to have fixed this one months ago. But this is an academic research facility, not the CIA -- and even the CIA website did get cracked, though I don't know which hole was exploited in their case! I do try to keep up, but I am not a full-time security specialist; I do many different things, and I'm usually behind on most of them. After two weeks wasted cleaning up after the crackers, I am even farther behind than usual.

So, it seems to me that there are lessons to be learned from this episode on several levels:

 o I personally am more conscious of security issues than I was before this attack. I will spend more time keeping up with security alerts that might affect my site, and I will add scanning my webserver log for suspicious URLs to my list of things to check daily (a sketch of such a scan appears below). Of course, this will mean less time for my real work, and that angers me. Even a supposedly harmless intrusion requires a responsible administrator to spend untold hours checking for damage, looking for backdoors and Trojan horses the intruders might have left behind, etc., etc. The cracker left a note claiming we had nothing to worry about because he didn't damage anything, but in fact he _did_ do a small amount of damage, without apparent malicious intent. And he did a large amount of damage to my confidence in the integrity of my systems!

 o Steps are being taken on an institution-wide level here at Yale to improve the channels for coordinating responses to things like this. When the wizards began checking other Yale servers for this hole, they discovered that our servers in the Medical School are not the only recent victims of phf attacks.

 o Perhaps the Internet-wide channels for propagating alerts need to be refined. I believe part of the reason I missed this hole was the sheer number of alerts, coming from various quarters, that I and other overworked system administrators are expected to read. It is simply not reasonable to expect every system administrator to apply every recommended security patch every time yet another patch is released! There needs to be a mechanism for prioritizing warnings according to the ratio of severity to ease of exploiting each threat. The phf hole is so nasty precisely because anyone reasonably familiar with Unix can learn to exploit it in a matter of minutes. And many crackers are exploiting it right now.

Matthew.Healy@yale.edu  http://paella.med.yale.edu/~healy
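As an illustration of the daily log scan mentioned in the first lesson above, here is a short Python sketch. The log path and the pattern list are assumptions to be adapted locally; the %0a pattern catches the encoded newline that made the phf hole exploitable:

  # Sketch: daily scan for phf-style probe URLs in an NCSA/Apache
  # common-format access log. Extend the list as new advisories arrive.
  import re

  SUSPICIOUS = [
      r"/cgi-bin/phf",   # the buggy example CGI itself
      r"%0a", r"%0d",    # encoded newlines used to inject shell commands
      r"/etc/passwd",    # the classic target
  ]
  pattern = re.compile("|".join(SUSPICIOUS), re.IGNORECASE)

  with open("/var/log/httpd/access_log") as log:  # assumed log location
      for line in log:
          if pattern.search(line):
              print("SUSPICIOUS:", line.rstrip())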
C A L L   F O R   P A P E R S

The Forum of Incident Response and Security Teams (FIRST)
Ninth Annual Computer Security Incident Handling Workshop
"Bridging the Borders - Incident Handling in an International Network"
Marriott Hotel, Bristol, England
Monday 23-Jun-1997 to Friday 27-Jun-1997 (inclusive)

Abstract Submission Deadline: 15-Jan-1997

The complete call for papers can be found at
http://www.first.org/workshops/1997/cfp.html

Contact Information: firstname.lastname@example.org for e-mail; FAX: +1 415 725 9121 (Attention: Stephen Hansen, Subject: FIRST 1997 Wkshp); or via post: Attn: Stephen Hansen, 333 Sweet Hall, Stanford University, Stanford, CA 94305-3090 USA
Please report problems with the web pages to the maintainer.