Encrypt your data and deprive your beneficiaries. *The New York Times* has an interesting article on password protection on a deceased person's computer. What will happen when biometrics become prevalent? "When Tomm Purnell's uncle, Keith Cochran, died last year, Mr. Purnell's mother received two of Mr. Cochran's computers. One of them, a laptop, is password-protected, and even though Mr. Purnell considers himself somewhat of a computer geek, 'the really obvious passwords,' he said, like the names of Mr. Cochran's cats and combinations of his Social Security number, have failed." http://www.nytimes.com/2004/06/03/technology/circuits/03data.html?8dpc For an additional risk, further on we get this gem: "Mr. Purnell said that some of his uncle's files are gone forever. Mr. Cochran stored digital photos on two computers that used the Linux operating system. The machines were given to his brothers, 'who are Windows guys, so they erased and reformatted them,' Mr. Purnell said." A little knowledge is a terrible thing.
Forget about spam -- even your "wanted" e-mail is clogging your in-box now, right? E-mail is "broken," says Eric Hahn, former CTO of Netscape and current CEO of antispam firm Proofpoint. "We need to make metaphoric changes. The [file-folder] metaphor was designed back when we were talking about getting five messages a day." Today, many folks receive 10 to 20 times that number and filing each one just takes too much time. "People hate filing. They hate it in paper. They hate it in e-mail. Could you imagine what it would be like to have to file Web pages just to get back to them?" Hahn suggests that in addition to overhauling the filing function, software developers should find a way to combine instant-messaging software with e-mail software. "Doesn't it seem odd that IM is separate from e-mail? Why are those conversations so fundamentally different?" he asks. Ben Gross, a researcher at the University of Illinois-Urbana-Champaign, says that in addition to incorporating IM, e-mail software developers need to integrate RSS readers into their products, so that users can view updates to a Web page without having to download the whole Web page into a browser. Some e-mail software developers are already experimenting with new approaches: Microsoft's Outlook 2003 and Google's Gmail service include a "group by conversation" feature that enables users to view related e-mails sent to and from a single person. [Wired.com 3 Jun 2004; NewsScan Daily, 3 June 2004] http://www.wired.com/news/technology/0,1282,63692,00.html
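The "group by conversation" feature mentioned above can be sketched briefly. A minimal illustration, assuming threading is done by following reply-chain links (as with the RFC 2822 Message-ID / In-Reply-To headers); the message data and field names here are invented:

```python
# Group a flat mailbox into conversations by walking each message's
# reply chain up to its root. "id" and "in_reply_to" stand in for the
# Message-ID and In-Reply-To headers of a real mail store.
from collections import defaultdict

def group_by_conversation(messages):
    """Map each message to the root of its reply chain."""
    parent = {m["id"]: m.get("in_reply_to") for m in messages}
    def root(mid):
        while parent.get(mid) in parent:  # climb while the parent is a known message
            mid = parent[mid]
        return mid
    threads = defaultdict(list)
    for m in messages:
        threads[root(m["id"])].append(m["subject"])
    return dict(threads)

mail = [
    {"id": "a1", "in_reply_to": None, "subject": "Lunch?"},
    {"id": "a2", "in_reply_to": "a1", "subject": "Re: Lunch?"},
    {"id": "b1", "in_reply_to": None, "subject": "Budget"},
    {"id": "a3", "in_reply_to": "a2", "subject": "Re: Re: Lunch?"},
]
print(group_by_conversation(mail))
```

Real clients also have to handle missing intermediate messages and subject-line fallbacks, but the root-finding idea is the same.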
High labor attrition, poor infrastructure, and a lack of data protection laws could derail India's booming outsourcing industry, says Nandan Nilekani, the CEO of Indian software giant Infosys Technologies. Noting that business process outsourcing (BPO) is based on reputation, Nilekani has urged the industry to deliver quality work: "Every year, about 70,000 jobs are added and the main challenge is how to attract people. The challenge is also how to retain the pool. It's a collective challenge. We require a holistic approach to expand the pool and train people. The question here is how to retain the manpower to deliver quality and value." Analysts say outsourcing labor attrition rates range between 20% and 40% at some companies, while even at top firms the rate averages at least 15%. [*The Age*, 11 Jun 2004; NewsScan Daily, 15 June 2004] http://theage.com.au/articles/2004/06/10/1086749833478.html
Any device that requires an operating system to load and install a custom driver on its say-so without verification or even alerting the user SHOULD be disabled and tossed on the garbage-heap of history. Autorun must not be the default behaviour for any hot-pluggable device, media, class of devices, downloaded content, anything, anywhere. If there is some hardware design specification that requires it, that design should be abandoned immediately, and any device that requires it considered obsolete. Who thought this was a good idea, and why weren't they on their meds? This is of course a much bigger issue than USB. Apple's Internet-enabled disk images, Microsoft's plethora of "active content" mechanisms, anyone's autorun media... anything that isn't specifically requested by someone who has verified authority for it (a locally logged in user, a network administrator, ...) that can't be run in a failsafe sandbox must not be run at all.
In May this year I purchased the new SonyEricsson Z1010 mobile phone here in Sweden, and after replacing my old SIM card with a new one, I can make video calls with it as well. Provided I'm in an area covered by the 3G network, of course. It turns out my particular residential area (Bromma, a suburb of Stockholm) has only spotty coverage at best: if I venture outdoors I might obtain a 3G connection, but indoors all bets are off. The problem is that my Z1010 is optimistic and switches to the 3G network ("Sweden 3G") at every opportunity it gets. In theory, it should gracefully revert to my old 2G provider ("Telia") when outside 3G coverage. What happens instead is that every time I return home from the city, the telephone, having merrily been connected to a good 3G network all day, finds itself in a non-3G area and grudgingly switches back to Telia. But wait, there is a faint 3G signal, so let's switch back to that network. But no, we lost it... You can guess what happens: the phone gets confused, and instead of a connection the display reads "Only SOS calls available". People calling me end up at my answering service. Even if I succeed in bringing the phone back online and dial my own mobile number (a good old GSM network test, BTW), I mostly still end up at my answering service, with no busy signal. Time to go down into the basement (luckily out of 3G reach) and reboot the phone there to get it properly online. When I called Telia customer service the other day to complain, a helpful person told me that to fix this I needed to upgrade the software in the phone, because the current software has some early-version bugs, it turns out (!). The risk here? This stuff has obviously only been tested in a lab, where the 3G transmitter is either on or off. My home, barely in 3G coverage, was obviously not in the specifications...

P.S. Otherwise I like my new phone; I've downloaded some of my DV-camcorder videos onto it, and it makes a good show at the pub.
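The flapping described above is the classic symptom of a network-selection rule with no hysteresis: the phone switches up the moment any 3G signal appears and down the moment it fades. A minimal sketch of the standard fix, with a dead band between the switch-up and switch-down thresholds (all dBm values are invented for illustration, not from any handset spec):

```python
# Network selection with hysteresis: only leave 2G when the 3G signal
# clears a margin ABOVE the level at which we would fall back, so a
# faint signal at the coverage edge cannot cause oscillation.

SWITCH_UP = -85     # dBm: 3G signal needed before leaving 2G (hypothetical)
SWITCH_DOWN = -100  # dBm: 3G signal below which we fall back to 2G (hypothetical)

def select_network(current, signal_3g_dbm):
    if current == "2G" and signal_3g_dbm >= SWITCH_UP:
        return "3G"
    if current == "3G" and signal_3g_dbm < SWITCH_DOWN:
        return "2G"
    return current  # inside the dead band: stay put

# A walk home through the coverage edge. Without the dead band, a rule
# like "use 3G whenever any signal exists" would flap on every sample.
net = "3G"
for sample in [-80, -95, -99, -101, -96, -102, -95]:
    net = select_network(net, sample)
    print(sample, net)
```

With the dead band, the phone drops to 2G once at -101 dBm and then stays there through the faint-signal noise, instead of bouncing into "Only SOS calls available".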
In RISKS-23.39, Barry Steinhardt refers to a GAO report about government use of data mining that mentions Verity K2 Enterprise as one of the programs. I wonder what definition is being used for "data mining" that covers Verity? I ask this as someone who first used Verity software in 1990 and worked for Verity from 1994 to 1997; judging by what I see on Verity's Web site today, the technology hasn't changed much. Fundamentally, the Verity technology that's being used here is precisely equivalent to Google's News Alerts: you specify a search that gets run against a series of documents. When a document matches, you get some kind of feedback. The only difference is that Verity's query language is enormously more powerful and flexible than Google's, with more operators and the ability to weight sub-queries with different strengths. (It's fairly common in heavy-duty Verity queries to end up with queries that have more than fifty terms and operators.) Obviously, Verity technology can add a lot of value to a data mining operation by pre-categorizing data very precisely. But I'm concerned that labeling Verity as data mining software could lead to focusing in the wrong direction, similar to the way it's important to differentiate between viruses, trojan horses, and worms. Aahz (firstname.lastname@example.org) http://www.pythoncraft.com/
Spencer Cheng wrote:

> I have looked long and hard at proof of correctness in a previous job but
> other than safety critical systems, the cost is difficult to justify.

That may be true of some techniques, but not all. We routinely generate proofs of correctness, even for non-critical software. The keys to making proof of correctness practical are:

1. Use a notation that is designed to facilitate proof of correctness. This unfortunately rules out most standard programming languages. Better results are obtained by using languages designed for expressing the software specification and some of the implementation details. The full implementation can be generated in a conventional programming language (we use C++) by a code generator.

2. Use automated reasoning (AR) to generate the proofs. This technology has at last become practical, due to advances in the theory of AR and improvements in processor power. We are getting successful automated proof rates of 95% to 100% depending on the problem domain, before we provide any hints to the prover (normally done by way of extra assertions). On our current commercial project we are getting 99.89%. So nearly all proof failures indicate either incomplete specifications or bugs.

> In addition, proof of correctness is not very useful if the implementation
> changes even slightly.

Using AR, this is not a big problem - just re-run the proof generator. Ditto when the specification changes and the implementation is adjusted to suit.

> A complete and optimal set of black box test cases *IS* the S/W contract
> since it defines the semantics. Any implementation changes that change
> interface behaviour would fail the black box tests.

I don't see how a set of black box test cases can be known to be complete, unless the system has no internal state and the set includes all possible combinations of inputs. Even if the system itself has no state variables (i.e. it just computes an output for some inputs), unless you test with all possible inputs, you don't know that the output will always be correct. The programmer might have "optimised" the calculation for certain values of the input variables (i.e. defined a different path in the program), and the optimisation may generate incorrect results. How would you have devised a "complete and optimal" set of test cases for IEEE double-precision floating point division, such that the Pentium FDIV bug would have been found, other than by testing with all 2^128 possible inputs?

If the system has a complex internal state, it is even harder to devise good test data, unless the software provides an interface to examine the state. Consider e.g. a word processor with an "undo" button that can undo the last 100 changes made to the document since opening it. How do you devise a "complete and optimal" set of test data for correct operation of that button? OTOH it is not difficult to specify formally.

BTW I'm not against testing - until we have provably-correct compilers, linkers, provers, code generators and hardware, testing will still be needed even when correctness proofs are produced.

Dr. David Crocker http://www.eschertech.com Consultancy & contracting for dependable software development
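The exhaustive-testing point above is easy to make concrete with back-of-envelope arithmetic. A sketch, assuming an (optimistic, invented) check rate of one billion divisions per second:

```python
# Why "test all 2^128 inputs" is not a practical answer for
# double-precision division: two 64-bit operands give 2^128 input
# pairs, and even at a hypothetical 10^9 checks/second the run time
# dwarfs the age of the universe.

inputs = 2 ** 128                   # all (dividend, divisor) bit patterns
rate = 10 ** 9                      # checks per second (hypothetical)
seconds_per_year = 60 * 60 * 24 * 365
years = inputs / (rate * seconds_per_year)
print(f"{years:.3e} years")         # on the order of 10^22 years
```

Hence the post's conclusion: for this class of defect, sampling-based black-box testing cannot be "complete", and a formal specification plus proof attacks the problem from the other side.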
While I might share Spencer Cheng's enthusiasm for properly-designed tests, I regret that some claims he seems to be making exceed the known mathematical limits of the efficacy of testing. Basic observations by Littlewood and Strigini (Bayesian) and Butler and Finelli (frequentist) in the 1990s established the limits of testing, at least for systems whose failure rates must be lower than one failure in a million hours of operation. Many safety-critical systems, for example, have requirements for failure rates up to three orders of magnitude lower than this. Those unfamiliar with this literature may find it easily on the WWW. It is conceivable (not necessarily practical or even possible, but conceivable) that requirements on systems which may fail once in ten thousand to one hundred thousand hours could somehow be reduced to a requirement on results from specific tests, as Cheng suggests by his startling phrase "A complete and optimal set of black box test cases *IS* the S/W contract since it defines the semantics." However, for systems whose failure rates are required to be lower than this, such a suggestion regrettably contradicts established science. Peter B. Ladkin, University of Bielefeld, Germany http://www.rvs.uni-bielefeld.de [Miscorrection uncorrected in archives. PGN]
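The flavor of the limits cited above can be sketched with a simple model. Under a constant-failure-rate (exponential) assumption, observing T failure-free test hours only supports a rate bound of about -ln(alpha)/T at confidence 1-alpha; the numbers below are illustrative arithmetic in that spirit, not figures from the Littlewood/Strigini or Butler/Finelli papers themselves:

```python
# How many failure-free test hours does it take to demonstrate a very
# low failure rate? Under an exponential model, the probability of
# seeing zero failures in T hours at true rate lam is exp(-lam * T),
# so ruling out rates above lam at confidence 1-alpha needs
# T > -ln(alpha) / lam.
import math

def max_rate_demonstrated(test_hours, alpha=0.05):
    """Largest failure rate (per hour) consistent, at confidence 1-alpha,
    with observing zero failures in test_hours of testing."""
    return -math.log(alpha) / test_hours

# Demonstrating 1e-9 failures/hour (a typical safety-critical target)
# needs on the order of 3e9 failure-free test hours:
hours_needed = -math.log(0.05) / 1e-9
print(f"{hours_needed:.2e} hours, about {hours_needed / 8766:.0f} years")
```

Which is the point of the post: for ultra-low required failure rates, no feasible amount of testing can establish the requirement, so the "tests ARE the contract" position cannot hold there.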
There are many problems with proof of correctness - perhaps the biggest one being that "correctness" may not be the property of interest to the task at hand. For example, I would like to prove that programs designed to run forever don't stop. And I would like to prove that the green lights will not be on in non-parallel directions at traffic lights. And I would like to prove that my programs properly respond to all error conditions (into known and safe failure modes). None of these have to do with program correctness per se.

> I have become a firm believer over the years in proper black box
> testing since -

Except that there are too many states, so you can't do much of a test. This is a big problem for me in forensics cases. I just wrote a report in such a case in which I would love to have been able to say that some piece of data means some particular thing, but I cannot do so, because I cannot know for certain that that is the only thing the data represents in the particular circumstance without either (1) access to the source code, or (2) the ability to reverse engineer it. The former is protected by copyright and the latter prohibited by the DMCA. So it will remain a question and not an answer for the court system charged with determining the freedom or incarceration of a defendant.

> 1) it is the only way I know to establish that mythical S/W contract that
> people have been talking about for the last 20 years.

Huh?

> 2) [...] Any implementation changes that change interface behaviour would
> fail the black box tests.

Not hardly. There are other behaviors of programs than "interface" behavior, unless you are very careful about the definition of interface. How do I black box test for performance-related covert channels?

> 3) Requirement traceability can be satisfied by tracing the requirement
> to the interface specification and onward to the test cases.

And what interface specifications are those? Most products on the market today don't have such things. And to the extent that they do, they only partially specify real behavior. Can a complete specification be written? I don't think so.

Fred Cohen - http://all.net/ - fc at all.net - fc at unhca.com - 925-454-0171 Fred Cohen & Associates - University of New Haven - Security Posture
After all, this is a method for identifying you, not a method for acquiring correct answers. When I am forced to use something like this (or the ubiquitous "State your mother's maiden name") I make sure to give answers that have no relation to the question asked (even when I'm also asked to provide the question). This leads to the further problem of remembering which nonsense answer goes with which (nonsense) security system, but at least others would have a harder time guessing my answers based on knowledge about me. The only time there is a problem is when some overzealous programmer tries to enforce formatting on the answer (e.g., requiring a valid date in answer to the example "Memorable date" question). Brian Reynolds email@example.com (212)618-0999
The flaw in the poster's reasoning is to answer the 'question' correctly. Why? What you have here is, in effect, another password. The system collecting your 'answers' cannot verify their correctness, so why not use other strings? 'f7u00Hngq' is just as good an answer as 'Boston' to the first question, etc. One should ask, when presented with a question or a blank to fill out:

a: Do I really have to fill this out at all? Why does the asker need to know this information?

b: If (a) is true, then is there a legal requirement for truthfulness?

Wherever possible, don't answer the questions. Otherwise generate random answers and write them down, if you think you will ever need to refer to them again. If your wallet is lost or stolen and contains such gibberish, the association between the answers and where they might be needed is in your brain and hard to deduce. "Your mother's maiden name" need not be a name, and need not have a female association. I have found that 'blue-green algae' or 'discombustable' work equally well.
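The "generate random answers and write them down" advice above is easy to automate. A minimal sketch using Python's standard `secrets` module (the question list is invented for illustration):

```python
# Generate an unguessable "answer" for each security question, treating
# the answers as what they really are: extra passwords. Record the
# output somewhere safe and offline.
import secrets
import string

def random_answer(length=12):
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

questions = ["Mother's maiden name?", "Memorable place?", "Memorable date?"]
answers = {q: random_answer() for q in questions}
for q, a in answers.items():
    print(q, "->", a)
```

As the Reynolds post notes, a site that enforces formatting (e.g. insisting on a valid date) defeats this; in that case the best one can do is a random value of the required shape.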
A daft answer fits a daft question. Your memorable place might be "Ronnie Barker"; your memorable date might be "Paris Hilton" and your mother's maiden name might be "355/113". It's really a matter of how well you can remember these. Carl Ellison has suggested (I think accurately) that events of early childhood are the ones you won't forget. How you make daft answers from that is the problem each user needs to solve in his own way.
Here is some background information to the British ATC slowdown incident on 3 June 2004 reported by Debora Weber-Wulff (Risks 23.31). The Tagesschau report appears to confuse NATS (National Air Traffic Services, the company that runs ATC in Britain) with NAS, the National Airspace System, located at West Drayton, a couple of miles from Heathrow. I believe what happened is that Swanwick went to Manual Mode. That is Stage 2 of a four-stage graceful degradation of air traffic control (see below). Let me explain what that means, and what it entails. Swanwick is a place and a facility. It is not near Heathrow (by English standards of "near"), but on the south coast near Southampton. The London Area Control Centre (LACC) is the main system there, but there are others. I shall refer to LACC below because that is the system involved. LACC controls the London Flight Information Region (FIR), which extends from borders with continental airspace up to the Scottish border. The LACC system is relatively new, having come on-line in January 2002 after a ten-year development (planned: four years; search the Risks archives for "NERC"). NAS is at least 30 years old. It has been rehosted, so I think we can probably guess that the Tagesschau also confused the system with the kit that runs the system. NAS provides the master flight data planning for British airspace. This includes flight plans, predicted tracks and control sector boundary crossings for aircraft in the airspace system. I do not know if the primary and secondary radar data normally used by LACC comes through NAS as well, or whether LACC in normal mode generates its own radar data. In any case, LACC does have a radar data feed from its own direct connection with RadNet, which distributes that data from the radars throughout the ATC system. "Radar data" here means current 2-D position, "squawk" code (a 4-octal-digit temporary ID assigned dynamically by ATC), and aircraft pressure altitude. 
Now, flight planning, and "handoffs" between sector controllers as aircraft cross sector boundaries, using the info on the "flight data strips", is a very important part of air traffic management. Wendy Mackay studied this in Paris with the French controllers, and found that the rituals involved with handing off and passing data strips were crucial to successful operations, which might help to explain why attempts to move to automatic systems for flight data passing and handoffs have often failed (Is Paper Safer? The Role of Paper Flight Strips in Air Traffic Control, ACM TOCHI 6(4):311-40, Dec. 1999). LACC does have a partly automated and predictive system to aid controllers with passing the flight data and handing off (autocoordination). When NAS goes down, most of the predictive data feed goes away. The data on the screen at Swanwick starts to "age", and the colors start to change to indicate how old the data is. Controllers can work with aging data up to a certain point, but then it would become misleading and ultimately dangerous. There has to be a point at which a decision is made to ignore the aging data and perform the flight planning locally in LACC. This point is well-defined. At this point, LACC turns autocoordination off and starts manual coordination, called Manual Mode. In Manual Mode, the controllers at LACC work with their radar data and their voice contact only. They have lost automatic track and handoff prediction, and they have to perform this function explicitly using the old-style handoff routines between sector controllers. This is resource-intensive, as is the switch to Manual Mode itself. When LACC switches to Manual Mode, they also impose flow control to conform with the resources now available. Inter alia, continental controllers with flights whose destination lies within or which are to transit the London FIR and which are not yet airborne are to hold those flights on the ground. 
Traffic already in the air continues, but is controlled to conform with the reduced flow into and in the London FIR (lots of holding!). Manual mode is intense work for the LACC controllers, and it continues until * NAS comes back up (obviously!), and * the traffic flow has reduced to a stable maintainable level, and * the controllers are rested enough to perform the reversion to the NAS flight-planning-data feed (which is itself a non-trivial operation akin in resource consumption to the Manual Mode change), and * somebody decides everybody's up for it. So, no matter how quickly NAS comes back up, once the decision is made to go to Manual Mode, LACC is there for a while. There seems to me to be no obvious way around this feature. The NAS has a failure-recovery strategy which is reasonably effective, considering when it was designed, and many NAS failures are recovered without LACC having to go into Manual Mode. It is moderately loosely-coupled. So there is graceful degradation. First stage: NAS failures are recovered without LACC noticing. Second stage: LACC goes to Manual Mode, which means it works as a typical control center usually does, but has to impose flow control because there is simply too much traffic to work in the traditional way. Third stage: if LACC loses its secondary radar data things get hairier (primary returns, that is, radar bounces off the airplane without transponder data, provide only two-dimensional positional data, and that not as accurately). Fourth stage: if all radar data is lost, controllers revert to voice-only. Fifth stage: there isn't one; voice-only does not fail, on pain of going to bed without its supper. The whole process is quite well layered. So what about replacing NAS? Well, sure, but recall it is a system that works, and has done for 30 years. And recall that attempts to replace old systems with newer ones, such as Swanwick LACC replacing the area control function of LATCC at West Drayton, often end in tears (op. cit.). 
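The graceful-degradation chain described above can be sketched as a simple ordered fallback. A sketch only: the stage names paraphrase the post, and the event names mapping losses to stages are invented labels for the conditions the post describes:

```python
# The four working stages of the degradation described in the post,
# modelled as a one-way fallback chain. Recovery (reverting to the NAS
# feed) is deliberately omitted: as the post explains, it is a separate,
# resource-intensive operation, not a simple reverse transition.

STAGES = [
    "normal",              # stage 1: NAS failures recovered without LACC noticing
    "manual_mode",         # stage 2: local flight planning plus flow control
    "primary_radar_only",  # stage 3: secondary (transponder) data lost
    "voice_only",          # stage 4: all radar data lost
]

def degrade(state, event):
    """Move one stage down the chain on the matching loss event."""
    transitions = {
        ("normal", "nas_down_uncovered"): "manual_mode",
        ("manual_mode", "secondary_radar_lost"): "primary_radar_only",
        ("primary_radar_only", "all_radar_lost"): "voice_only",
    }
    return transitions.get((state, event), state)

state = "normal"
for event in ["nas_down_uncovered", "secondary_radar_lost", "all_radar_lost"]:
    state = degrade(state, event)
print(state)  # voice_only: the "fifth stage" deliberately does not exist
```

The layering the post praises is visible in the structure: each event degrades exactly one stage, and `voice_only` has no transition out, matching "voice-only does not fail".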
So the people responsible for replacing NAS would be wise to proceed vveeeerrrry carefully. I thank Martyn Thomas for catching myriad mistakes in an earlier version of this note. Its accuracy is largely owing to him; any remaining inaccuracy is still mine. He also pointed out to me NATS's statement on the slowdown, at http://www.nats.co.uk/news/news_stories/2004_06_03.html Peter B. Ladkin, Bielefeld, Germany www.rvs.uni-bielefeld.de