On the day of the bombing of the federal building in Oklahoma City, some federal buildings in Boston were evacuated due to bomb scares. At least one of the scares was the result of a prank phone call. The police very quickly arrested an 18-year-old man who allegedly placed the call.
The police have been waiting several years for a new phone-tracing system. Currently they have to call NYNEX (the local phone company) and initiate a trace while a caller is still on the line. The new system, scheduled (hopefully) to be put into place this year, will let the police trace a call automatically.
While preparing documents for trial, NYNEX discovered that a technical operator had transposed a trunk number during the trace, and thus they had traced the call to the wrong phone number (perhaps traced the wrong call to the right phone number?). The accused was released, NYNEX made lots of apologies including offering a full scholarship with no strings attached, and the person who made the actual call remains at large.
Some points to note:
The current tracing procedure requires manual entry of trunk numbers and has a very clear failure mode....
... but NYNEX had enough recorded information to later on determine that a mistake was made. (Well done!)
When the new system is installed, the traces will be entirely automatic. They might be more reliable, but it's unclear if there will be an audit trail in the case of failures. (It sounds like just a software system — if there's a bug in the algorithm, how will the algorithm detect it?) (If there are ways to spoof the system used to identify the phone number, that leads another set of problems.)
There's always the risk of "but the computer says" syndrome.

michael j zehr
[Also noted by Mark Kruse <firstname.lastname@example.org> and Nachum Shacham <email@example.com>, whose news excerpt noted that the mistake was discovered when police requested the necessary paperwork, which showed the wrong number. (In law enforcement matters, computer records generally have to be backed up by original noncomputer records.) PGN]
ComputerWorld 15 Apr 1995 mentions that some early adopters of speech recognition software are starting to develop voice problems. In the article, the vendors minimize the seriousness by suggesting that inexperienced users tend to talk too loudly, and that the problem can be overcome by voice training. However, it occurs to me that my singing instructors have mentioned that interrupting a vocalization is a definite source of stress to the vocal cords; part of voice training consists of learning to keep as even a flow as possible. I suspect that speaking isolated words with distinct pauses between them is a very unnatural kind of voice activity and that we have not heard the last of this problem.

Daniel P. B. Smith firstname.lastname@example.org
Our local paper 'the Nottingham Recorder' has the following item about portable phones:
Mobile phones have been banned from hospitals throughout Britain following a police probe into more than 100 deaths in an intensive care unit at Worksop's Bassettlaw General Hospital.
A Department of Health circular has been sent to every hospital in the country warning: 'The department has received reports of mobile and cellular telephones interfering with the operation of medical devices. Portable, cordless and cellular telephones should not be used close to patient monitoring, infusion or life support equipment because interference may affect their normal operation, with potentially serious patient consequences...
Patients, contractors and other visitors should be discouraged from using such telephones in hospitals.'
The risks seem to be obvious. I also note that hospital staff are apparently to be banned from using these phones, whereas everyone else is only to be `discouraged'!

David Wadsworth email@example.com
Laurence Brothers writes in RISKS-17:08 that some are investigating letting the police use high-powered electro-magnetic pulses to crash a getaway car's computer chip and bring it to a halt.
What if the EMP destroyed those incriminating computer records stored on the laptop on the front seat? Or crashed the computer chip running that crook's pacemaker? Every RISKS reader should have EMPathy for this.
[Pacemaker risks also noted by robert_rose@VNET.IBM.COM, firstname.lastname@example.org (justin wells), Mike Haertel <email@example.com>, and tada@MIT.EDU (michael j zehr), as well as firstname.lastname@example.org (David Alexander) — who noted hearing aids, electronic drug-infusing devices, electronic locking devices, alarm systems, security cameras, radios, etc., and the rampant opportunities for malicious misuse. PGN]
There has been some publicity recently here in the UK about an attempted fraud against the "Instant Win" version of our National Lottery. Whilst the current alleged fraud is very crude and was likely to be spotted, it does appear to show a rather classic case of a computerised security checking system designed for *verification* that can be used to *obtain* information as well as simply verifying it.
The "Instant Win" lottery makes use of scratch cards, purchased from retail outlets, which when the foil covering is scratched off reveal whether the card is a winner and if so for how much.
For smaller amounts the winners can collect their winnings immediately from the retailer. (Larger amounts have to be claimed from the lottery company directly).
In order to prevent fraud, as well as the winning amount the card also has on it (again concealed under the foil) a security code number which can be used to verify a winning card is genuine.
This is done by the retailer entering the security number into a computerised terminal supplied by the lottery company, which apparently displays not only whether the ticket is a winner but also the amount of the win.
The current alleged scam is said to work by the retailer scratching off only the foil above the security number and then using his terminal to see if the card was a big winner. If not, he would attempt to sell it to an unsuspecting punter and hope they didn't spot the small bit already scraped off!
Now obviously the current scam is rather crude and not very likely to succeed as it only requires one suspicious punter to tip off the lottery company.
But it does appear to demonstrate a classic flaw in a security system designed just for verification: rather than requiring input of both the code number and the winning amount and then simply giving a Good/Bad response, it actually gives out information in response to only one input!
Whether or not there are any practical ways of exploiting this flaw, it does seem surprising that a lottery system, surely something that gives a high priority to effective security, should have an obvious flaw like this.
After all, it is a close parallel to the classic flaw in a computer login system that would tell you that a username was invalid before asking for a password, hence allowing a hacker to identify valid usernames. Surely no-one would dream of implementing such a vulnerable system these days...

Mike
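As a hypothetical sketch (the card codes, amounts, and function names below are all invented for illustration), the difference between the two interface designs comes down to this: the flawed terminal answers a one-input query with the prize amount, acting as an oracle, while a verification-only design takes both the code and the claimed amount and returns nothing but yes/no.

```python
# Hypothetical sketch of the two terminal designs; the card database,
# security codes, and prize amounts are invented for illustration.

# Secret mapping from security code to prize, held by the lottery company.
PRIZES = {"A1B2C3": 50_000, "D4E5F6": 10, "G7H8I9": 0}

def leaky_check(code):
    """Flawed design: one input in, prize amount out -- an oracle."""
    return PRIZES.get(code)

def verify_only(code, claimed_amount):
    """Verification-only design: both inputs required, Good/Bad response."""
    return PRIZES.get(code) == claimed_amount

# A dishonest retailer can mine leaky_check for big winners...
assert leaky_check("A1B2C3") == 50_000
# ...but verify_only reveals nothing beyond confirming a claim already made.
assert verify_only("A1B2C3", 50_000) is True
assert verify_only("A1B2C3", 10) is False
```

Note that even the two-input design leaks one bit per query, so rate-limiting or logging of repeated failed verifications would still matter in practice.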
In "Capital IDEAS" (Vol. 3, No. 6, April 1995) a publication of the National Taxpayers Union Foundation, there is an article on page 4 entitled "Outrage! of the Month" which is actually composed of three outrages. The third outrage relates to programming computer dates. This subject has been discussed on RISKS before so I'll just give the text of the outrage.
Stan Niles, PhD 505-678-3834 email@example.com
The federal bureaucracy's computers are about to be dragged kicking and screaming into the 21st century.
It seems the original computer program designers, many of whom are now dead or retired, never gave much thought to allowing government software to read dates beyond December 31, 1999. Computers could mistakenly think, for example, that a date entered as "4/15/00" meant April 15, 1900, not the year 2000. Massive accounting errors could therefore become the norm, such as calculating benefit checks based on 100 years of interest instead of just one year. Data Dimensions, a computer consulting firm, estimates that "millennium conversion" could cost the federal government $75 Billion in equipment and labor to implement. A typical federal agency will need to modify up to 100 "applications" (computer programs that use dates), at a labor expenditure of up to 60,000 people-days.
So far, the Social Security Administration (SSA) is the only agency to begin the task of "millennium conversion," which is expected to take SSA seven years. Do you want out of the government's costly time-warp?
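The two-digit-year ambiguity the article describes is commonly handled by converting to four digits through a "pivot window"; the sketch below is my illustration of the technique, and the pivot value of 50 is an arbitrary choice, not anything from the article.

```python
def expand_year(two_digit_year, pivot=50):
    """Map a two-digit year to four digits using a pivot window:
    values below the pivot are taken as 20xx, the rest as 19xx.
    The pivot of 50 is an illustrative choice."""
    if two_digit_year < pivot:
        return 2000 + two_digit_year
    return 1900 + two_digit_year

# "4/15/00" is April 15, 2000, not 1900, under this window...
assert expand_year(0) == 2000
# ...while "7/4/76" stays in the twentieth century.
assert expand_year(76) == 1976
```

Windowing only postpones the problem, of course; records spanning more than a century still need unambiguous four-digit years.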
At least in the case of document retrieval, full-text indexing with stop words, word stemming, a semi-automatically generated thesaurus, relevance scoring, and relevance feedback has been shown to outperform the best manually indexed documents in retrieval accuracy and completeness. This result goes back to Gerard Salton's 1971 paper, "A New Comparison Between Conventional Indexing (MEDLARS) and Automatic Text Processing (SMART)" (CORNELLCS:TR71-115, available at <URL:http://cs-tr.cs.cornell.edu:80/TR/CORNELLCS:TR71-115>). If you do not have the disk space for full-text indexes, you are at the mercy of the indexers (whether human or computer)...

Mark Fisher Thomson Consumer Electronics
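As a toy illustration of the automatic approach (my own sketch, not the SMART system's actual algorithms), a full-text index can be as simple as an inverted file built after stop-word removal and crude suffix stripping:

```python
# Toy inverted index with stop-word removal and a naive stemmer;
# a sketch of the general idea only, far simpler than real systems.

STOP_WORDS = {"the", "a", "of", "and", "in", "to"}

def stem(word):
    # Naive suffix stripping; real stemmers (e.g. Porter's) are subtler.
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def build_index(docs):
    """Map each stemmed, non-stop-word term to the set of documents containing it."""
    index = {}
    for doc_id, text in docs.items():
        for word in text.lower().split():
            term = stem(word)
            if term not in STOP_WORDS:
                index.setdefault(term, set()).add(doc_id)
    return index

docs = {1: "indexing of medical documents", 2: "automatic text processing"}
index = build_index(docs)
assert index["index"] == {1}   # "indexing" stems to "index"
assert index["process"] == {2}
```

A query is then just a set intersection over the terms' posting sets, which is what makes full-text retrieval cheap once the disk space for the index is paid for.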
In comp.risks you write:
> [Hmm! According to you, it comes at 1/1/01 rather than 1/1/00.
> I wonder who agrees with that! PGN]
Who am I to argue with astronomers? They work in mathematics and FORTRAN, so counting starts with one. Therefore the first day of the first century is 1/1/00000001. C was invented later, and never really accepted by astronomers. We just moved things closer by starting at 1901.

Rob Horn firstname.lastname@example.org
[The customary convention was also commented on by email@example.com (John Sager), firstname.lastname@example.org (Paul Menage), email@example.com (John Harper), firstname.lastname@example.org (William M. Bickham), "david (d.p.) woodman" <email@example.com>, firstname.lastname@example.org (justin wells), and Greg Lindahl <email@example.com>, an astronomer who noted "Astronomers get to go to 2 sets of turn-of-the-century parties... you nay-sayers only get to go to one." I'll be at the former, when most of the computer-related risks are likely to begin. PGN]
Oh, yes, I certainly agree that the astronomers and ephemeris/almanac folks like 1/1/01 as the century start. However, because there was no year ZERO, that does not scale backwards. The first century BC clearly began on 1/1/-100, and the first millennium BC on 1/1/-1000. The only SANE way to handle this is to provide a 99-year first century; just as we have leap-years and leap-seconds, we could have a backwards leap-century. Indeed, it is just as well there were no computers in virtual-year 0000, or the religious wars would have been proven recursively unsolvable.

PGN
> We chose "second of century", using a double precision floating point
> representation. Analysis showed that this would preserve millisecond
> accuracy for the span of interest.
Sigh. There is more to time calculations than just understanding time, and there is more to numerical analysis than just the range of the mantissa.
I'll try this once more, keeping it brief. This project wouldn't have used IEEE format 20 years ago, but let's proceed with the analysis under modern assumptions. The IEEE double-precision floating-point representation provides 53 bits of significance in the mantissa. Using the approximation that 2^10 = 10^3, this can be seen to allow a range of about 8x10^15 in the mantissa, before bits start getting dropped. Since there are about 3x10^7 seconds in a year, or about 10^8 every 3 years, one can represent about 8x16x3 = 384 years to millisecond precision without violating that range, right?
Wrong. This is true only if you represent time as integer milliseconds. Since the representation used seconds, the milliseconds were represented fractionally. Only eight millisecond values can be represented exactly in a binary floating-point system: the multiples of 125 (0.125 = 1/8, 0.250 = 1/4, and so on). ANYTHING ELSE is an approximation, and the errors compound if your calculations involve any addition and subtraction of times.
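One can check which millisecond fractions of a second survive conversion to a binary double unchanged; the snippet below is my illustration, not part of the original analysis.

```python
# Which values of ms/1000 are exactly representable as a binary double?
from fractions import Fraction

# Fraction(x) applied to a float recovers the exact binary value the
# double actually holds, so equality here means exact representability.
exact = [ms for ms in range(1000)
         if Fraction(ms, 1000) == Fraction(ms / 1000.0)]

# Only multiples of 125 survive: 1000 = 2^3 * 5^3, so the reduced
# denominator of ms/1000 is a power of two exactly when 125 divides ms.
assert exact == list(range(0, 1000, 125))
```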
> Since we usually were satisfied with one minute accuracy this
> seemed sufficient.
It sounds like this project involved some sort of modeling of the physical world. In such cases, the loss of a few bits of accuracy may not matter, although I still think programmers ought to understand numerical analysis better than most do. But remember that the original discussion of time representation arose in the context of measuring time in computer and financial applications, where these bits can matter.
> There are a few applications that need better than millisecond precision,
> but for most of the worlds applications double precision floating point
> will provide enough precision for the next few millennia. (A simple test
> for those who are unsure about their needs. Do you compensate for the
> variations in the rate of the Earth's rotation? If not, you probably don't
> need millisecond accuracy.)
This is both a broad and a biased statement. Here's a simple response to the simple test: I'm not dealing with astronomical phenomena, and so I couldn't care less about variations in the Earth's rotation. What I care about is whether I can query the system time, do something, query the time again, and subtract to get an accurate elapsed time. I care about not only milliseconds, but microseconds (and in a few years, nanoseconds).
Now it's true that I generally deal with time in a small range. Unfortunately, the system time is represented in terms of a relatively long base interval. So I have to be able to take two times measured in years, subtract them, and get microsecond precision in my results. If the times are represented as integer microseconds, I can be sure that everything will "balance," as the accountants say. If a sloppy floating-point representation insists on giving it to me as integer and fractional seconds, things won't add up, and my papers may not get published.
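To make the "balance" point concrete, here is a small illustration (the timestamp values are invented, not from the original discussion) of subtracting two epoch-scale times held as double-precision seconds versus integer microseconds:

```python
# Two events one microsecond apart, ~25 years after an epoch
# (illustrative values only).

# As double-precision seconds: at a magnitude of ~8e8 seconds, the spacing
# between adjacent doubles is about 1.2e-7 s, coarser than a microsecond.
t1_sec = 800_000_000.000001
t2_sec = 800_000_000.000002
diff = t2_sec - t1_sec
assert diff != 1e-6            # the microsecond interval did not survive

# As integer microseconds: subtraction is exact, and everything "balances".
t1_us = 800_000_000_000_001
t2_us = 800_000_000_000_002
assert t2_us - t1_us == 1
```

The integer representation trades range for exactness, which is the right trade when elapsed times must reconcile.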
> There could have been some round-off issues, but we
> rarely did any arithmetic other than addition or subtraction of two
> times, where millisecond accuracy is maintained.
But addition and subtraction are precisely the places where numerical errors are introduced! Millisecond accuracy was *not* maintained in this situation. Since you only cared about minutes, this hardly mattered, but it's still an incorrect statement.
> A word of caution, double precision floating point is suitable for an
> internal representation of UTC, or "absolute" time.
It's only appropriate when you're doing modeling of the physical world, where you don't care about losing a few bits of accuracy in the fraction because your measurements aren't that good anyway. Double precision is a *terrible* way to model elapsed time inside a computer system. It has nothing to do with the nature of time, and everything to do with numerical analysis.
Geoff Kuenning firstname.lastname@example.org geoff@ITcorp.com
Of course, we have here the canonical example of PR people (in this case the "Resource Persons" who provided this information) who really don't understand what they are saying when talking about computers.
What company, for example, is "Windows 95", and what products does it develop?
Add to that the fact that products developed by Clark Development (makers of PC Board) and Mustang Software (makers of Wildcat) are *legitimately* distributable on a try-before-you-buy basis as shareware, and you have the canonical nightmare of BBS operators being forced to prove to the computer illiterate judicial process that they were perfectly entitled to make copies of such files available for download.
In my experience of BBSing, I also have to suspect that some of the "products" developed by Borland, Microsoft, IBM, Novell, et al., may well have been freely distributable service fixes. All of these companies have been known to make such patches widely available for the benefit of their customers. Take one service pack, accidentally label it with the name of the product that it is a patch to, and there's an immediate source of confusion even to people who know what they are doing ...
On a more light-hearted note :
Why should access to information on how to successfully commit suicide be made unavailable to software pirates? Surely those interested in stamping out piracy would applaud such a move?
Also, who would trust instructions on how to "succeed" at suicide to be reliable, when it is patently obvious that the author lived long enough afterwards to write up said instructions...?
Have no fear, other research groups are working on this problem! The Library 2000 group at the MIT Lab for Computer Science is part of a five-university project working on digital libraries. One of our highest priorities is replication. Data stored in a digital library system will be replicated, not only locally but in a geographically diverse way, to protect against a wide-area disaster. The replication scheme itself is transparent: if a site goes down, other sites replace it automatically.
In addition, we are working on long-term storage. This means persistent storage as well as continually moving data to current media (as opposed to leaving it on a PDP-10 DECtape). However, the only medium with reliably proven persistence of 50+ years is film. So a possibility is to microfilm the bit patterns of digital documents as an archive mechanism. For more information on our project and some interesting papers on fault-tolerance and replication, please refer to our WWW site: <URL:http://ltt-www.lcs.mit.edu/ltt-www>.

Andrew Kass, MIT Laboratory for Computer Science; Library 2000
> ... It is impossible to get out of the system. ...
My university recently installed a library system with a similar feature. Here, you need a password... to log out of the system!
The password, of course, is included in the documentation handed out to users, but it is easy for casual users to be trapped by the system.
Me? I just use telnet and ^]q, thanks.

Patricio Poblete
>Date: Tue, 18 Apr 1995 15:19:15 -0400
>From: David Karr <karr@CS.Cornell.EDU>
>Subject: Computer-controlled electrocution (RISKS-17.06)
>People seem upset by the use of a computer to control an electric chair...
Some don't care. But in this age of "software reuse" and modular hardware & software components, there is a real risk here. It's the risk of having one's work put to uses of which one is ashamed, or would be if one knew about them. This risk exists in the development of any modular component: you don't know what the next user is going to hook it up to (a bomb, a power grid, or an incubator). There are ways to mitigate this risk, such as working on components so specialized that such use would be infeasible, but this only goes so far. A certain engineer I know of tells me of the irony of being caught by a radar speed trap after having invented a key component of the radar guns being used - it's not always the "bad guys" who make unanticipated use of the technology, and it's not always the "good guys" who invent the item being used in an unanticipated context. So there are lots of opportunities to "suffer" in this way.
More to the "RISKS" point, though, the components any given engineer or inventor designs may well be coopted for other uses to which they >seem< suited, but actually are not - trying to adapt "fail-safe" control technologies to become "fail-deadly" (when "deadly" is the intent of the novel component context) brings up an entirely new face of the re-engineering complexity debate. Particularly when, as in this case, the lethality of the resulting system is intended to be highly directed, but there are people (other than the intended victim) involved in the use of the equipment. It would be hard to feel good about a fail-safe mode designed into one's component if the result was that the executioner fries instead of the convict...
The previous post also tried to point out that eliminating capital punishment (which would make such machinery a moot issue) is a debate not germane to RISKS. While I tend to agree with this in the specific case, I wanted to point out that (political/social/other) systems can be designed in ways that encourage or discourage the implementation of RISKy support structures, but this notion is rarely considered in guiding the development of social policy. Another RISK of ignoring RISKS?

John R. Lupien email@example.com