Excerpted from http://www.theage.com.au/news/2000/12/21/FFX0NEBVXGC.html "Emergency lines went dead for six minutes shortly after 1am [on Wednesday, 20th December] until an emergency generator restored services. ... While no callers rang during the critical period, no one could have sought assistance had there been an emergency. ... In case of power failure, the call centres have a diesel generator. But Mr Bahr [technical representative for the Bureau of Emergency Services Telecommunications] said he was aware of an incident in the past when the back-up had failed."
Dear Customer, Egghead.com has discovered that a hacker has accessed our computer systems, potentially including our customer databases. While there is no indication that any customer information has been compromised, as a precautionary measure, we have taken immediate steps to protect you by contacting the credit card companies with whom we work. They are in the process of alerting card issuers and banks so that they can take the necessary steps to ensure the security of cardholders who may be affected. We wish to underscore that we have taken these steps as precautions. We have no information at this time to suggest that any credit card information has been compromised. We are investigating this possibility, and we are doing everything we can to proactively protect you. If you would like further information, you may wish to contact the issuer of your credit card to determine what steps they are taking. We regret any inconvenience this may cause you. We issued a press release on this matter earlier today. [...] If you have additional questions, please call our customer service team at 1-800-EGGHEAD (344-4323). Respectfully, Jeff Sheahan, President & CEO, Egghead.com, Inc. [The Press Release notes that Egghead has "retained the world's leading computer security experts to conduct a thorough investigation of our security procedures and an analysis of this breach." PGN]
Recently there has been some discussion on BUGTRAQ regarding some companies' attempts to change the way they publish security advisories. Some background:

  http://www.securityportal.com/list-archive/bugtraq/2000/Dec/0036.html
  http://www.securityportal.com/list-archive/bugtraq/2000/Dec/0042.html
  http://www.securityportal.com/list-archive/bugtraq/2000/Dec/0056.html
  http://www.securityportal.com/list-archive/bugtraq/2000/Dec/0054.html
  http://www.securityportal.com/list-archive/bugtraq/2000/Dec/0076.html
  http://www.securityportal.com/list-archive/bugtraq/2000/Dec/0101.html

Basically, it started when Microsoft abruptly switched to a new advisory format in which the notification e-mail included only a cursory description of the problem and a [malformed] URL for the actual report. Today, Elias Levy announced that @Stake wanted to switch to a format with more information in the message, but still requiring a visit to their website for the full advisory.

Some interesting RISKS:

- Access for people with marginal Internet connections, or browsers other than IE/Netscape, is less convenient.

- Information is unavailable if the webserver is down or overloaded, as might happen with an important advisory. It seems counterproductive to put important, time-sensitive material behind a single point of failure, particularly when the decision is to deliberately avoid a free, distributed, fault-tolerant distribution channel.

- It makes it much easier for a vendor to change an advisory without notifying anyone, especially since changed or removed advisories won't be archived in anywhere near as many places as a mailing list such as BUGTRAQ. In addition to covering up bad work, this would also make it easier to remove or tone down past advisories about companies the author is now aligned with.

- It opens the prospect of tailoring content to the reader. This could be as simple (and annoying) as charging for access to some content, or as complex as determining what to show based on where the request came from (e.g., competitors, vendors or journalists). While this would probably be caught for something major, particularly at first, it would not surprise me to find at least subtle tampering happening regularly if this becomes commonplace.

I find it hard to ignore the above concerns, given that the switch provides no benefits of any sort to the reader, let alone enough benefit to outweigh them. The only legitimate advantage of such a policy is that it makes it easier for the author to change the contents of a released advisory. For legitimate purposes, this is unnecessary:

- Updates to current advisories can be published in the same fashion as the originals, ensuring that anyone who received the original will receive any updates.

- It's extremely easy to add a link to the advisory which people could use to check for updates.

And keeping the canonical copy only on the web brings problems of its own:

- It discourages the use of proper change control, since it's easier to quietly update an existing advisory than to release an update.

- It will cause old links to break if the content is ever moved around and nobody installs a redirect for the old URL. (Microsoft is notorious for this, especially with links that break only for visitors using something other than Internet Explorer.)
A brief e-mail has been getting forwarded around our campus which reads:

  Check out breaking news at CNN:
  http://firstname.lastname@example.org/evarady/www/top_story.htm

At first glance, this appears to be a genuine article on CNN, but a quick read reveals that it is a cute joke. Most people who have seen the fake article have immediately assumed that www.cnn.com has been hacked in some manner. Those more familiar with the URL specification, however, will notice that the URL is completely valid, and does not lead to or redirect from any cnn.com computers. No machines have been hacked. Instead, the e-mail just plays with your expectations of what a URL should look like.

The risk here is not a computer one at all, but a social risk that even (or perhaps especially) knowledgeable people will assume something has been hacked when it hasn't been.

An even sneakier URL might be:

  http://www.cnn.com&story=breaking_news@306511916/evarady/www/top_story.htm

For those of you still pondering why that URL works, read the URL spec (RFC 1738) and try the equivalent:

  http://email@example.com/evarady/www/top_story.htm

Richard J. Barbalace <firstname.lastname@example.org>
Recently, a mailing list I'm on forwarded a report of a "hack" of the CNN.com site. Upon looking closely, I found that the CNN site hadn't been hacked at all — it was the *minds* of readers of this hoax "report" that were being hacked! Rather cute, actually, but it exposes what is perhaps a larger RISK, so please bear with me while I set up the story...

An MIT student named Eric Varady took a parody news article from The Onion <URL:http://www.theonion.com/onion3637/bush_horrified.html>, edited the layout to resemble CNN's format, and copied it to his own site <URL:http://salticus-peckhamae.mit.edu/evarady/www/top_story.htm>. (Note that multiple threatened legal actions have since forced him to remove the original content, but an explanation page is still there.) He then passed around a "report of a hack of the CNN site" with a URL [which I *do* hope makes it through the mail-to-HTML scripts at Catless!] of <URL:http://email@example.com/evarady/www/top_story.htm>.

If you look very closely, you'll see that the actual host named by this URL is not "www.cnn.com", but "220.127.116.11" (a.k.a. salticus-peckhamae.mit.edu). That is, for IP-based/Internet URL "schemes" such as HTTP or FTP, the general format defined in RFC 1738 is:

  <scheme>://[<user>[:<password>]@]<host>[:<port>]/<url-path>

The "user" field is very rarely used, and even then is more often seen with FTP than HTTP. But since it contained an at-sign before the first slash, the hoax URL was really <URL:http://18.104.22.168/evarady/www/top_story.htm> with the (ignored) user field of "www.cnn.com&story=breaking_news". Cute, eh?

More serious scams of this sort are possible, given the number of users who (1) have *no* idea what the formal syntax of a URL is, and (2) routinely access the Web through "portals" which often create complicated indirection URLs to aid with logging or tracking to support advertising revenue, e.g.:

  <URL:http://www.foo.bar.com/logger.cgi?http://www.other.place.com/some_article>

The RISK is that users are being bombarded with these monstrosities so often that they've grown used to it, and that they'll fail to recognize when they're being sent someplace they might not really want to go!! (Perhaps when it's not a joke, such as being sent to a porn site while working at a company with a "no tolerance" policy.)
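To make the parsing concrete, here is a minimal Python sketch (standard library only) of how a parser decomposes a URL of this shape. The host values are the sanitized placeholders that appear in the messages above, not the original MIT addresses.

    # Everything before the '@' in the authority part is the (ignored) user field.
    from urllib.parse import urlsplit

    hoax = "http://www.cnn.com&story=breaking_news@306511916/evarady/www/top_story.htm"
    parts = urlsplit(hoax)

    print(parts.username)  # 'www.cnn.com&story=breaking_news'  (the decoy)
    print(parts.hostname)  # '306511916'                        (the real host)
    print(parts.path)      # '/evarady/www/top_story.htm'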
For some time, I have been associated with organizations that maintain e-mail lists for communication with customers. Each customer mailing generates some quantity of e-mail responses to the mailing address or a specified reply-to address. Heuristic filters handle the most frequent types of responses, generating automatic replies or redirecting mail to appropriate addresses. There are, though, always some messages the filters can't adequately handle, so my involvement tends to consist of eyeballing them.

The workload is by no means immense - for every 6,000 outbound messages sent, I manually handle one response. Some are questions the filters didn't catch, which I pipe to various scripts. Some are bounce messages. Some are chain letters - I grep those for From: headers and bounce them to the appropriate administrators; nothing spreads holiday cheer like a corporate policy smackdown. A good many are auto-responses.

Within the set of auto-responses, a significant minority pose non-technical risks. Users who are going to be "away from mail" or "out of the office" for even a single day frequently leave instructions on who should be contacted in their absence, and their responses often include other information that could be considered sensitive. Their expectation is, of course, that they will receive mail from co-workers and colleagues who already know where they work and what they do, and who have some need for the information. However, if they are subscribed to mailing lists, it is quite possible that the information they provide will be seen by completely unrelated users at other organizations.

  "I will be away from [government laboratory] from [departure date] and
  will return on [return date].  If you need to reach someone from the IT
  Security staff, please contact [coworker] at [number] or e-mail to
  [address]."

Congratulations. You've just told me what department you work in and where you work (a combination that might not be the sort of thing you want to go blabbing around), and given me a co-worker's name and direct contact information. The potential for a social-engineering hack is giddying.

This is, of course, a somewhat extreme example. But for each one like this, there are hundreds of others from people in business, academia and government - people who are perfectly willing to send total strangers information about their personal schedules: who they are, what they do, where they do it, when they're leaving, when they're coming back, where they're going, how they can be reached while they're gone, and who to contact instead. Perfectly normal information to give to a co-worker, colleague or neighbor; somewhat risky information to give to strangers, in an era of competitive intelligence, corporate and other espionage, etc.

Workarounds? You could hack your MTA/MUA/MDA to send responses only to certain domains (a minimal sketch of that idea appears below), or omit all personal information from your auto-response. A more balanced approach would involve not re-stating information authorized users already know, and delivering necessary information in a minimal form. Ergo, instead of:

  "I will be away from GovLab from December 22 and will return on
  December 26.  If you need to reach someone from the IT Security staff,
  please contact John Smith at 809-555-1212 or e-mail to
  firstname.lastname@example.org."

send something like this:

  "I am currently away from work.  If you need to reach someone, please
  contact John <jsmith> at 555-1212."
The logic, of course, is that an authorized person already knows where you work, what you do, your e-mail domain, and your area code. Nobody needs to know how long you'll be gone, if there's someone else who can help them.

Dan Birchall - Palolo Valley - Honolulu HI - http://dan.scream.org/
Corporate Holidays 2001 - http://22.214.171.124/articles/262573.htm
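As a minimal sketch of the "only respond to certain domains" workaround mentioned above, the following Python fragment shows the idea. The domain names, contact details, and reply text are hypothetical, and a real deployment would hook into the mail system's existing vacation/filter mechanism (and skip list/bulk mail) rather than run as a standalone script.

    # Decide whether to send an out-of-office reply, and keep it minimal.
    from email.utils import parseaddr
    from typing import Optional

    ALLOWED_DOMAINS = {"govlab.example", "partner.example"}   # hypothetical

    MINIMAL_REPLY = ("I am currently away from work.  If you need to reach "
                     "someone, please contact John <jsmith> at 555-1212.")

    def auto_reply_for(from_header: str) -> Optional[str]:
        """Return a reply body for known correspondents, or None to stay silent."""
        _, addr = parseaddr(from_header)
        domain = addr.rpartition("@")[2].lower()
        if domain in ALLOWED_DOMAINS:
            return MINIMAL_REPLY
        return None   # strangers, mailing lists, and spammers learn nothing

    print(auto_reply_for("Some List <bugtraq@lists.example>"))   # None
    print(auto_reply_for("A Colleague <jdoe@govlab.example>"))   # the minimal reply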
One of Fred Cohen's requirements for an election process to be deemed trustworthy was this:

> 9) Each voting location must be able to have unique vote layouts and
>    candidates to accommodate the wide range of elections that run both
>    simultaneously and sequentially.

One of the aspects of the American election that is very strange to someone used to the British election process is that so many different votes are made on the same ballot paper. If one ballot paper is used for each vote then the counting process can be made much simpler, faster, and more reliable. The ballots can be quickly split into piles per candidate, and then those piles can be counted in parallel. Votes for different issues can also be counted in parallel. Separate "ambiguous" piles can be examined more closely if necessary. This also means that statewide elections can use the same ballot paper across the state without affecting the need for counties or cities to have their own ballots for their own elections.

Tony f.a.n.finch  email@example.com  firstname.lastname@example.org
David Jefferson's well thought out critique of ATM-based voting misses one small but important point. Depending on where you live, it is not necessary to provide any authentication, or even to sign, in order to vote. Until this year, in Virginia all I had to do was state my name and address to the election official, and that was sufficient. Given the number of voters in each precinct (thousands in a presidential election), it's likely that I could have voted several times using a different (valid) name each time. If I knew the names and addresses of people in other precincts, I could therefore vote for them as well. The law changed this year, and now you either have to present some form of picture ID, or you must sign an affidavit. But they don't have a signature to compare to at the voting booth, so in the best case they find out after the fact that *someone* voted illegally (but they can't tell which vote it was). I mention this because all of the discussions about electronic voting (ATM-based, Internet-based, or otherwise) presuppose a requirement for strong authentication. If we're trying to model the paper world, that's not necessarily so. [Recognizing, of course, that there's not enough time to vote 1000 times in the same day in the paper world, under the assumption that I'd have to rotate between precincts to escape detection, but I can certainly vote electronically 1000 times in the same day.] Jeremy
> A related issue is voter authentication.

We've heard this mentioned repeatedly in discussions about online voting. I don't know how voting works in your community, but there's virtually no authentication in my town. I go to the polling place, they ask me my name and address, and they cross it off the voter list; another person does the same thing when I turn in the completed ballot. None of the people there know me, yet they never ask for any proof of identity. A teenager trying to get into a bar at least has to have a fake ID; he wouldn't need that to vote as someone else — all the information he needs is in the phone book.

However, if I tried to vote multiple times in the same precinct, I suspect they would recognize me. So in order to stuff the ballot box, I would have to go to different precincts, which would be quite tedious. With an electronic system, the door is opened to massive fraud by a single individual.

Barry Margolin, email@example.com, Genuity, Burlington, MA
There's only one positive feature to using the ATM network for voting, which is that you can pay bribes to voters right on the spot. Other than that, it's totally impractical. ATMs use common physical formats for cards and for currency, but they don't have common processors, programming environments, network protocols, or user interfaces; the real interoperation is done by the host processors that feed the networks and by common banking standards, not by any interoperation out at the edges. Most ATMs use some variant of SNA, which was relatively appropriate technology at the time ATM networks started evolving, and most are some variant of a PC. I've seen dead MS-DOS messages on out-of-order ATMs; some have run OS/2, some Unix, while others probably have custom or other production operating systems.

Bill Stewart <firstname.lastname@example.org>
>> — at least in comparison to the software industry, which typically
>> has from 6 to 30 errors per line of code.

His (appropriately sardonic) comment was:

> Not to mention the journalism industry, with an error rate of 1 error
> per 6 to 30 bits when reporting technical information...  :)

My (equally sardonic) comment is about the original (hidden) assumption that one can somehow compare chip error rates with software error rates. I was not aware that there was a published conversion factor converting transistors to lines of code. Perhaps I'm using the wrong metric and it's errors per pound of silicon that can be converted somehow to an equivalent in errors per line of source code. In any case, I'll bet if we first converted lines-of-code to pixels-to-display-a-line-of-code, software and hardware error rates would start to look a lot more similar.

...Matt  http://backoff.pr.erau.edu/jaffem
In the mid 1990s, Pitney Bowes developed and demonstrated a system for digitally-signed drivers' licenses. I believe that the system was called VERITAS, but I could be wrong.

The system provided for a 2D barcode on the back of each driver's license. The barcode contained a digitized copy of the driver's photograph, name, address, height, age, etc. The 2D barcode was signed with the digital key of the day, which itself was signed with the system key. I believe that the system key was changed every year.

The company's business plan, I believe, was to basically give away the identity systems to state governments and then to sell verifiers to stores, restaurants, bars, etc. You would slap a person's driver's license down onto the verifier and it would display their photograph and tell you if they were old enough to drink, etc. It would also verify the signature.

The Pitney Bowes system was specifically designed to prevent the break-in-and-steal-it problem. Each morning the systems in the field would call up and get their key-of-the-day, signed by the system key. If a system was reported stolen, it wouldn't get the key, and if fraudulent cards had actually been issued, you could blacklist those cards and distribute the blacklist to the verifiers. You could even use Caller ID as a further check: if the phone number a field system was supposedly calling from didn't match the Caller ID, the system wouldn't issue it a key.

I saw this system at the RSA conference in 1993 or 1994. I was quite impressed. But Pitney Bowes never sold it. I believe that there was a patent infringement problem.
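The two-level structure described here - a long-lived system key endorsing a short-lived key of the day, which in turn signs each card's barcode payload - is easy to sketch. The following is purely illustrative Python, using Ed25519 from the third-party 'cryptography' package as a stand-in for whatever algorithms the actual system used; the barcode fields are hypothetical.

    # Illustrative sketch of the key-of-the-day signing scheme described above.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # Long-lived system key (changed perhaps yearly, per the description).
    system_key = Ed25519PrivateKey.generate()
    system_pub = system_key.public_key()

    # Key of the day: its public half is endorsed by the system key and
    # handed out each morning only to field systems that check out.
    day_key = Ed25519PrivateKey.generate()
    day_pub = day_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    day_endorsement = system_key.sign(day_pub)

    # Issuing station: sign the card's (hypothetical) barcode payload with the day key.
    barcode = b"name=J. Doe|dob=1975-01-01|height=180cm|photo=<digitized image>"
    barcode_sig = day_key.sign(barcode)

    # Verifier: knowing only the system public key, check the whole chain.
    try:
        system_pub.verify(day_endorsement, day_pub)     # is this day key genuine?
        Ed25519PublicKey.from_public_bytes(day_pub).verify(barcode_sig, barcode)
        print("license verifies")
    except InvalidSignature:
        print("reject")
    # A blacklist of cards issued by a stolen system could be distributed
    # to verifiers alongside the daily endorsement.

The point of the daily endorsement is that a stolen issuing system becomes useless the next morning: the central site simply declines to endorse a new key for it.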
I just spoke to Walter Neary at the University of Washington. He confirmed a 9 Dec 2000 report in *The Washington Post* that hackers gained access to confidential medical files, and said it was a good summary of the incident. (Other newspapers and television stations reported on the incident as well.)

But the statement you distributed was issued two days earlier. At that time, Neary said, the university didn't know whether to believe the hackers' claims that they had accessed confidential data. He said *The Washington Post* and other reporters later obtained proof — the records themselves — that shows the hackers did indeed break into the computer. But he still disputes an Internet report, referenced in the statement, which claims that hackers "took control" of the university's computers.

Todd R. Wallack, Business Reporter, San Francisco Chronicle  (415) 764-2815
*The Washington Post*, and a local TV station, obtained the "proof" from me, after the medical center sought to dismiss the incident as a rumor. Though I should hardly have to say it, I confirmed every aspect of this story before breaking it. (Even we "Internet reporters" do that sort of thing.) The hacker took command of large portions of the medical center's internal network. The University of Washington Medical Center later reluctantly acknowledged the accuracy of my report.

  http://www.washingtonpost.com/wp-dyn/articles/A46320-2000Dec8.html
  http://www.nytimes.com/2000/12/08/technology/08HACK.html
  http://www.msnbc.com/news/499856.asp
  http://dailynews.yahoo.com/h/ap/20001208/us/med_center_hacker_3.html
  http://www.komotv.com/news/qtmovie.asp?ID=8157

Kevin L. Poulsen, Editorial Director, SecurityFocus.com, Washington D.C.  (202) 232-5200
In Risks 21.15, "Lynda Ellis (LabMed)" <email@example.com> wrote:

> The following statement is for attribution to Tom Martin, director and
> chief information officer for University of Washington Medical Centers
> Information Systems:
> [[...]]
> we do know for certain that
> no one has ever gained unauthorized entry into our separate and highly
  ====== ====
> confidential patient-care computer systems.

I find this highly implausible. I could perhaps accept a claim that these systems are very secure, but to say that _no one_ has _ever_ gained unauthorized entry strains credibility. How does Mr. Martin know this? More to the point, how could _anyone_ know this? In the real world, all software has bugs in it, and all systems have loopholes. If nothing else, authorized users can be bribed, coerced, or fooled by "social engineering".

> we take extraordinary measures to protect our clinical-based systems
> that go well beyond the high security employed, for example, by most
> community hospitals.  These measures include
> the latest hardware and software, encryption technologies, and strong
  ================================
> host-based security.

Using the "latest" hardware and software does _not_ boost my confidence in "extraordinary" security.

Some of the RISKs here:

* Wildly over-optimistic claims like these suggest that (as usual) management is pretty clueless when it comes to computer security.

* Managers who actually believe the systems are _perfect_ don't have much incentive to improve them, or even examine/audit them too closely...

* Line employees who might actually be very competent are more likely to quit when confronted with pointy-headed bosses.

* And oh yes, the UW statement was right: claiming "our systems have never had unauthorized access" probably _does_ boost the chances of further attacks.

Jonathan Thornburg <firstname.lastname@example.org>
Universitaet Wien / Institut fuer Theoretische Physik
http://www.thp.univie.ac.at/~jthorn/home.html