"The current extreme cold is causing troubles to the Rhatische Bahn (RhB) [a private railway company in Switzerland, Ff]; the brand-new locomotives can't cope and mainly the service of the Albula-route in Engadin [eastern part of Switzerland, e.g. St.Moritz, Ff] suffers breakdowns. The main switch of the Ge 4/4 III type locomotives gets frozen and the oil supply control software goes out of service. With a heating equipment for the main switch and a new version of the oil supply control software all problems should be resolved." The new version of the software will be most likely delivered with cold resistant bits and bytes preventing them to shrink under low temperature conditions. Another solution could be to equip them with skates so that the bits and bytes can function on ice too. Karol Fruhauf, INFOGEM AG, CH-5401 Switzerland
How should a revision level be interpreted? Here's a quick guide for anyone short of a clue:

  0.1   WE GOT A REALLY GREAT NEW WAY TO DO THINGS !!!
 <0.9   Not ready for prime time.
  0.9   We think it works, but we won't bet our lives on it.
  1.0   Management is on our case; seems like a low risk.
  1.01  Okay, we knew about that.  All known bugs are fixed.
  1.02  Fixes bugs you won't see in 27,000 years, i.e. more than three
        times the age of the universe.
  1.03  Fixes bugs in the bug fixes.
  1.04  All right, this REALLY fixes all known bugs.
  1.05  Fixes bugs introduced in rev 1.04.
  1.1   A new crew hired to write documentation.
  1.11  From now on, no comma after "i.e." or "e.g.".
  1.2   Somebody actually changed a functional feature.
  2.0   New crew hired to write software.  Old crew blamed for bugs.
  2.01  New crew sending out resumes to placement agencies.
  3.0   Re-write the software in another language; go back ten squares.
  ...   return to line 0.1
>Cancelmoose is not a "vigilante", as he does not act alone; there is
>substantial agreement amongst the Usenet administrators that have
>contributed to this debate that these actions are necessary.

Not to be hopelessly pedantic, but as the post was disagreeing with the WSJ's word choice:

  vigilante: a member of a vigilance committee; any person executing
  summary justice in the absence or breakdown of legally constituted
  law enforcement bodies.  Now also, a member of a self-appointed group
  undertaking law enforcement but without legal authority, operating in
  addition to an existing police force to protect property etc. within
  a localized area.  -- New Shorter Oxford Dictionary

It appears to this poster that, in fact, the operator of Cancelmoose -is- a vigilante, executing summary justice on behalf of the vigilance committee consisting of the USENET administrators among whom the 'vote' was taken.

firstname.lastname@example.org
Although I have seen much in the past week about ETS and its electronic testing system, I have seen little in the way of first-hand reports by people who took the test. I am one of those people. Here is my report.

In the fall of 1992, I had the mistaken notion that I wanted to go to graduate school in the fall of 1993. I got the GRE application and discovered that it was too late for me to take the standard written test and still make my application deadlines. It was possible, however, for me to pay an additional fee (under $100, but more than $20) and take the electronic version. This seemed like a godsend, so I willingly parted with my money and awaited my response.

I got a form back from ETS approximately three weeks later. I was to call the local testing center (in Cambridge, a few blocks from MIT) and make an appointment. I did this.

On the appointed day, I showed up at the ETS testing center. They seemed very disorganized. The center was a quarter of the floor of a medium-sized office building. There was a reception desk up front, a waiting area, and rows of offices with windows. Across from the offices was the "testing room" -- a long, narrow room. There were tables along the walls, computers on the tables, chairs, and windows that overlooked the offices. The arrangement, apparently, was meant to make it difficult for people taking the test to see other people's screens, and easy for people to look into the room and see if we were cheating.

I presented two forms of identification, which the receptionist scrutinized. I then waited in the reception area until the appropriate time. Three other people showed up. We were then led to the testing area. I sat down at one machine, and the test began.

I should say here that I found taking the test on the computer to be a much more enjoyable experience than taking the test on paper. The test is self-paced.
You have a little count-down timer in the corner of your screen showing how many minutes and seconds are left in a particular section. You can abort the section at any point and go on to the next one. The test looks much the same as the regular test: diagrams or reading-comprehension paragraphs, with questions. You answered a question by clicking on a bubble -- just as if you were filling it in with a pencil. You could also mark questions. You could switch to an outline view that showed every question, how you had answered it, and whether or not it was marked. You could click directly to any question.

Was the test susceptible to cheating? Absolutely. Quite simply, we were not watched. There was intermittent talking between two of the test takers on the day of my test. I could see the screens of other test-takers. In fact, one of the students seemed to be doing the same questions that I was doing.

ETS said that one of the advantages of the computer test is that the exam will be able to adapt to the student in real time. This will make the exam more precise, allowing it to zero in on the student's exact abilities. In my experience, though, I didn't see any such adapting. (I vaguely recall that ETS said the adapting wasn't being used the year I took the test, but it may be online now.)

Between sections, we were given a short break. At the end of the test, I was given the chance to cancel the entire test. If I didn't want to do that, I could view my result. However, viewing my result made it permanent. I got my score and left. Instead of taking the regular 3 1/2 hours, I was finished in two. As I said, it was a very enjoyable experience.
On Godfrey: My interpretation of the letter was that the complexity of the tax code is what prevents them from fully automating the transcription. It takes a human accountant to go through the financial records and determine what should be copied into the distribution calculation. While I doubt that it's actually impossible to automate this, it's quite conceivable that the complexity has delayed implementation of such measures. Companies have only finite resources with which to automate their procedures.

On Flatau:

> ... It also was quite apparent when I balanced my check
> book at the end of the month.

Isn't that precisely what happened? An accountant screwed up, and when the auditors reviewed the books they discovered the error and corrected it. Frankly, I'm surprised at RISKS people flaming Fidelity for this error. IMHO, this is an example of a company whose procedures *worked*. As we all know, no process is infallible, and checks are necessary to catch the errors. Of course it would be nice if the checks always occurred before the failures became public, but the important thing in this case is that the checks occurred before checks were cut (sorry for the pun).

Barry Margolin, BBN Internet Services Corp.  email@example.com
I see two RISKS: the implication that this mistake happened only because manual steps were involved, and the risk of having infrequent but highly important tasks that require non-standard procedures (like "manual" calculations in a largely computerized environment).

  Kathy Godfrey  firstname.lastname@example.org

Maybe there is also a cultural RISK? Magellan Fund does not expect to lose billions and billions of dollars. Manually entering data that is unexpected or unpleasant or unwelcome surely must be more prone to error. Perhaps a computer version of a Freudian slip? So, procedures for manual entry of data need to take into account the potential emotional/affective content of the data being entered. [What about fraudian slips? PGN]

Floyd Ferguson  email@example.com
Unfortunately, the disclaimer at the end of Mr. Blaze's posting makes it impossible to extract the single paragraph I wished to comment on (another RISK in itself), so I will have to wing it. [Nice observance! PGN]

The RISK a government takes in enacting such laws is that people will try to act within them. To me, one of the most powerful strengths of the USA is that a large portion of the citizenry feel free to point out just how absurd some laws are by insisting on their enforcement -- whether out of "creative anarchy", as here, or out of pure fear of the consequences, as exists in unnamed countries (fortunately, regimes based on fear never seem to last very long, since stagnation is inevitable).

IMHO, unenforceable laws are bad laws, primarily because they degrade respect for all laws (look at speed limits for one example). Of course, many laws are broken simply because they are unknown (does everyone have a leash for their alligator?), obscure, and/or unenforced. Further, where laws are enforced, people quickly learn the right words to say. (When I built a home for my hobby, on my first application I said "garage" and was told that for that size it needed steel doors and a commercial fire-sprinkler system. It immediately became a "garage/workshop" and so came under le$$ re$trictive residential requirements.)

The bottom line is that it takes people like Matt to take the time and trouble to point out the absurdities in order to get them "fixed" -- after all, in the best of worlds, that is what bureaucrats are for. IMHO, doing something like this is worth it, provided you never have to do it again.

Padgett

ps: wonder what the RISK would be of making up an official-appearing document, for example from the "Department of Steganographic Affairs", and using it for the exportation of encoded ducks?
The flap here in Israel regarding cellular phones began when newly inducted soldiers brought the phones to basic training at boot camp. The concern had less to do with security risks than with a socio-economic one: the divide between soldiers who could afford cellular phones (which are ridiculously expensive in Israel and serve as status symbols) and those who couldn't. One of the most important features of the Israeli Army is that it is an army of the people: men and women from the age of 18 serve two (for women) or three (for men) years, and then do reserve duty once a month until the age of 55. The use of cellular phones by new recruits threatened the melting-pot character of the army and introduced additional stress between those who can afford the phones and those who can't. The pizza, on the other hand, is another story....

Michael Dahan, Dept. of Political Science, Hebrew University, Jerusalem, ISRAEL  firstname.lastname@example.org
The problem with cellular telephones in military/emergency-service use is that they work too well! In an emergency, people get rather discouraged when their incredibly expensive radio system is overloaded and unusable, but a simple cell phone works perfectly well. If you are in an emergency situation and are reduced to whistling Morse code into a microphone to try to get your message through an overloaded radio channel, then, if you have a cell phone, you will use it regardless of security. Of course, in some cases the cell phone may be more secure than other means of communication!

David Wadsworth
RISKS-16.72 contains some discussion of the GIF/LZW patent problem, including a suggestion that some other standard (such as JPEG) be defined. Unfortunately, JPEG contains patented (by IBM) compression code. Many compression algorithms, including the simplest form of run-length encoding, are patented. See rtfm.mit.edu:/pub/usenet/news.answers/compression-faq/part[1-3] for details, patent numbers, and excerpts from the mind-numbing legalese in which patents describe inventions. The same FAQ also points out that LZW is patented by both Unisys and IBM, the patent office having failed to notice that both patent applications described the same algorithm. David Winfrey email@example.com Capital PC User Group, Rockville MD
It is my understanding, based on a conversation with Richard Stallman two years ago, that the Unisys patent on LZW covers only compression, not decompression. This is the reason that the GNU gzip program can decompress archives produced by the Unix "compress" program without violating any (known) patents.

This, of course, means that there is a simple and reasonably transparent solution to the current GIF crisis:

1. Adopt a new compression standard for GIF.
2. All newly created GIF files will be in this standard.
3. Continue to support decompression of GIF files compressed with LZW, because that does not violate the Unisys patent.

For a compression algorithm, we could use the simpler algorithm used by programs like PGP (I think it is straight LZ, rather than LZW), or we could use the algorithm gzip uses (which compresses better than LZW, but executes more slowly).

Simson
In RISKS 16.72, Peter Bishop refers to the recently announced Unisys/CompuServe patent restriction and recommends switching over to the Free Software Foundation's GZIP. This may not solve the problem. In the same issue of RISKS, Tim Oren of CompuServe pointed out that the restriction had been imposed entirely by Unisys, and that CompuServe wished as much as the rest of us that it would go away.

I am not sure what compression algorithm ZIP and GZIP use, but I was under the impression that they use LZW, just like GIF. If so, I should imagine Unisys will be leaning on the FSF and PKWare and anyone else they can find who uses LZW in their software. It would be very interesting to see the opinion of someone with a knowledge of patent law on how a patent can be obtained on something that has been as widely published, apparently without notice of restriction, as LZW has been.

John Mainwaring  firstname.lastname@example.org  The usual disclaimers may apply
I believe Tim Oren of CompuServe has made an error in chronology. When the GIF standard was initially developed in 1987, Unisys was not "pursuing" a patent for the LZW algorithm; the patent had already been awarded, in 1985. Garrett P. Nievin
This discussion brings several time-related bugs from my past to mind. One of the more mundane was the October bug. We introduced some new date/time functions into our code sometime in the spring or summer. They all seemed to work fine. When October came around, things started behaving strangely (obvious "core walks" for you UNIX folk). The problem? Months were zero-based (0-11) internally but displayed one-based (1-12), and the code used the zero-based number to calculate the number of digits required for display. A simple coding error. The bigger problem was relying on the normal passage of time instead of unit tests covering all ranges of dates. When the problems first showed up, it had been months since we had worked on the date/time code, so it was not immediately suspected.

A more interesting bug, which shows just how likely time is to be inaccurate if it is not fetched atomically, showed up when we had our product running on a 286-based UNIX (XENIX, I think). I forget the exact details, but we had your basic C-language MIN() macro, which looked something like this:

    #define MIN(x, y) (((x) < (y)) ? (x) : (y))

Simple enough. Until someone did something like this:

    least_time = MIN(time((long *) NULL), old_time);

[The time() function in UNIX returns the number of seconds since the epoch.] This looks innocent enough, but it expands so that there are two time() calls -- one in the comparison and one to produce the value the expression evaluates to. Our expectation was that the result would be either the original x or y. Instead, some cases evaluated to x plus some number of seconds -- the number of seconds we spent swapped out on that pathetic system between the two calls.

Steve Sapovits, Telebase Systems  (610) 293-4724  email@example.com
[I could not possibly use all of the responses received on this subject. I chose just a few, ignoring a few very fine points that would be useful to someone involved in a particular system. Mark Brader wrote a nice note on leap seconds, which, however, seemed gratuitous here. Pete Carah <firstname.lastname@example.org> noted we are now up to 18 accumulated leap seconds. Martin Ewing <email@example.com> warned about nonmonotonic clocks. Several folks had "fixes" for nonatomic clock reads. Paul Robinson's note generated a welter of messages. I recommend we wait until NEXT January, and try again. PGN] Actually, it's even worse than Dave is saying. See, UTC was adopted in 1961, but the leap second system was not adopted until 1972. For at least some of the years in between, when adjustments in UTC were needed, the *rate of time* was changed. That is, instead of extra seconds, there would be *longer* seconds (and proportionately longer milliseconds). So the true conversion of a time stored in milliseconds since some time in 1582 is complicated indeed! Recommended reading: "Greenwich time and the discovery of the longitude" by Derek Howse (1980, Oxford University Press, ISBN 0-19-215948-8). Mark Brader SoftQuad Inc., Toronto firstname.lastname@example.org
Actually, there are TWO bugs. The T$ in line 160 [which should be 150] should be T1$. The program specification was that it should make N calls to DATE$ and 2N calls to TIME$, so these really are bugs, not just alternative interpretations.

E.