Forum on Risks to the Public in Computers and Related Systems
ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator
Volume 22: Issue 7
Saturday 18 May 2002
Contents
Apple Computer's hidden spam filtering (Derek K. Miller)
Apple: break your new PC with a copy-protected CD, it's not our fault (Charles Arthur via Dave Farber)
Shipping the Big Iron: a computer-related risk! (Mike Hogsett)
UK govt wants to make "e-filing" compulsory for taxes (David Cantrell)
Verisign doesn't encrypt credit-card info (Daniel Norton)
Making a list, checking it never (Adam Shostack)
Re: The Console Buffer Knows... (Dick Mills)
Re: GNU is not UNIX (Theodore Ts'o, Dimitri Maziuk)
More on Klez (Bob Morrell, Paul Mech)
Info on RISKS (comp.risks)
Apple Computer's hidden spam filtering
<"Derek K. Miller" <dkmiller@pobox.com>>
Thu, 9 May 2002 10:46:54 -0700

According to a recent report at the Mac-focused news site MacInTouch, Apple
Computer places surreptitious spam filters on the free POP3/IMAP mac.com
e-mail addresses it provides to Mac OS computer users.
  <http://www.macintouch.com/applemailfiltering.html>
Apple does not advertise its filtering, does not reveal how it works, and
provides no way to turn it off or adjust its parameters.

The risk is clear: spam filtering is far from perfect, and Apple's is no
exception.  It both lets through actual spam and rejects some legitimate
mail, including (so far) a number of postings from e-mail lists that mac.com
users have explicitly subscribed to.  Worse, the messages are bounced back to
the sender, but the recipient has no way of knowing anything is missing
unless he or she is expecting something in particular.  The only way the Mac
user community found out about the filtering was when list administrators
whose messages had been bounced contacted Apple to find out what was wrong.

The problem is particularly acute for those who use mac.com as their primary
or only e-mail address, since there is rarely any way for mistakenly blocked
"spammers" to contact them to let them know anything's wrong.  Users have no
way of knowing how much mail they may have missed.

The approach is also puzzling, since spam protection is usually a benefit
ISPs and e-mail providers like to promote to their users.  And in most
instances they provide the option to deactivate or adjust it too.

Derek K. Miller - dkmiller@pobox.com, Writer, Editor, Web Guy, Drummer, Dad
Vancouver, BC, Canada | http://www.penmachine.com
Apple: break your new PC with a copy-protected CD, it's not our fault
<"Charles Arthur, The Independent" <carthur@independent.co.uk>>
Fri, 10 May 2002 15:44:02 +0100
[From Dave Farber's IP,
http://www.interesting-people.org/archives/interesting-people/]
An interesting little endnote in
http://kbase.info.apple.com/cgi-bin/WebObjects/kbase.woa/wa/query
?searchMode=Expert&type=id&val=KC.106882
(Apple tech note 106882)
which is mostly about how to get out one of those copy-protected CDs, e.g.,
Shakira's "Laundry Service" if you put it in your Mac and the thing goes
gray screen on you.
Lots of stuff about poke this and prod that. And at the end, a note which I
translate to mean "Hey, nothing to do with us, you did the equivalent of
sticking a fork in the toaster":
"CD audio discs that incorporate copyright protection technologies do not
adhere to published Compact Disc standards. Apple designs its CD drives to
support media that conforms to such standards. Apple computers are not
designed to support copyright protected media that do not conform to such
standards. Therefore, any attempt to use non standard discs with Apple CD
drives will be considered a misapplication of the product. Under the terms
of Apple's One-Year Limited Warranty, AppleCare Protection Plan, or other
AppleCare agreement any misapplication of the product is excluded from
Apple's repair coverage. Because the Apple product is functioning correctly
according to its design specifications, any fee assessed by an Apple
Authorized Service Provider or Apple for repair service will not be Apple's
responsibility."
Interesting. So whose fault is it? The user's, one must surmise. But I
foresee some more sticky labels - perhaps ADMIN ADVISORY - INCLUDES
EXPLICIT WARRANTY BREAKING DATA
Shipping the Big Iron: a computer-related risk!
<Mike Hogsett <hogsett@csl.sri.com>>
Fri, 10 May 2002 15:04:17 -0700

We recently arranged with Sun for them to lend us one of their larger
systems, a Sun Fire 4800.  The machine has 208 Gbytes RAM & 12 CPUs.  Not a
cheap machine.  It is mounted within its own 72" tall cabinet.  It is
shipped in a wood crate which is approximately 3' wide by 4' deep by 8'
tall.  Gross weight is about 900 pounds.

Since their warehouse is just across the San Francisco Bay from us, they
contracted with a local carrier to ship it to us.  The machine was picked up
from their warehouse, placed into the truck, and arrived at our receiving
department a few hours later.  When the driver and our receiving personnel
opened the trailer door, the crate was lying on its side, although it had
been upright when it left the warehouse.  The driver stated that he had
heard a loud bang after making a turn and had thought he may have blown a
tire.

On the crate there were several shock sensors and tilt sensors, only one of
which had tripped (the one which was face up when the crate was on its
side).  There were also instructions telling us what to do if these sensors
had been tripped: accept the shipment, but inspect for damage and call the
carrier.  We did accept the shipment but did not open the crate to inspect
for damage.  We contacted our representative at Sun for advice.  Our
representative is having a replacement shipped to us, and the unit which is
here now will be picked up and sent back.

I was quite surprised that the crate was not strapped in and tied down tight
given how narrow, tall, and heavy it was, not to mention the value of its
contents.

Michael Hogsett
UK govt wants to make "e-filing" compulsory for taxes
<David Cantrell <david@cantrell.org.uk>>
Fri, 10 May 2002 18:38:14 +0100

This is taken from *Metro*, a [...] free tabloid distributed on trains and
tubes in the London area.  The same story appears in *The Register* and
elsewhere:

  E-mail taxman or face UKP3,000 fine, by Sarah Getty

  Companies and individuals will be fined up to UKP3000 if they fail to file
  tax figures via the Internet, it emerged yesterday.  Measures proposed in
  the government's new Finance Bill mean that so-called 'e-filing' of
  payroll information will be phased in for larger businesses in the next
  few years.  By 2010, all employers will be required to submit tax forms
  online.  But accountants fear elderly and disabled people who employ
  carers, or people who use nannies, will be caught up in the legislation if
  they pay through a payroll system.  ...  He also objected to the way the
  Bill is framed, which means little parliamentary scrutiny will be required
  if the Inland Revenue or Treasury wants to extend the powers of the
  Bill...

The RISKS are obvious, both in the accessibility of this online filing
system and in the growing trend toward legislation by regulation, which
allows ministers to bypass parliament.

David Cantrell  http://www.cantrell.org.uk/david
Verisign doesn't encrypt credit-card info
<Daniel Norton <Daniel@DanielNorton.net>>
Thu, 09 May 2002 07:45:16 -0400

Their slogan is "The Value of Trust", but why trust an organization that
doesn't protect credit-card data when you sign up for a "Code Signing
Digital ID"?

The actual URL is too long for inclusion here, but it's the link at the end
of this page that reads "Code Signing Digital ID for Microsoft Authenticode"
(above where it reads "US $400.00 annually"):
  http://digitalid.verisign.com/developer/ms_pick.htm

Fill out the form and include your credit-card number below where it reads
"All enrollment and credit-card information is transmitted securely using
the Secure Sockets Layer (SSL) protocol and VeriSign Server IDs."  But also
note the absence of any "secure" indicator on your Web browser.  Press
"Accept" and all form data is submitted in plain text to Verisign.  ::slaps
forehead::

The RISKS lesson?  Don't trust anyone just because they say "Trust me."
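One crude way to vet such a page before typing in a card number is to look
at where its form actually posts.  A sketch (it assumes curl is available;
ENROLLMENT-FORM-URL stands in for the long URL mentioned above):

  # Fetch the page containing the enrollment form and inspect the form tag.
  # An action beginning with http:// rather than https:// means the card
  # number travels in the clear, whatever the surrounding text claims.
  curl -s 'ENROLLMENT-FORM-URL' | grep -i '<form'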
Making a list, checking it never
<Adam Shostack <adam@zeroknowledge.com>>
Fri, 10 May 2002 11:48:31 -0400

Risks of mis-identification, arrest, etc., from law-enforcement databases
are well documented.  The new, secret terrorism databases are starting to
repeat these mistakes, as documented in a *New Yorker* article on a
70-year-old black woman (Johnnie Thomas) who is in a set of FBI databases
because "John Thomas Christopher" was an alias used by Christian Michael
Longo, once on the FBI's top-ten most-wanted list.  Unfortunately for
Johnnie Thomas, even the fact that Mr. Longo is in jail in Oregon hasn't
removed her name from the computers, and the *New Yorker* article documents
her (so far unsuccessful) attempts to clear her name.
  http://www.newyorker.com/talk/content/?020513ta_talk_mcnamer

The application of fair information practices (allowing people to know what
data is stored about them, to access and correct records, etc.) is often
brushed aside by law-enforcement claims that the data must remain secret.

Does anyone know if these databases are part of the Expanded Computer
Assisted Passenger Screening Program, or something else?
Re: The Console Buffer Knows... (Bergman, RISKS-22.06)
<Dick Mills <dmills@mybizz.net>>
Thu, 9 May 2002 16:14:29 -0400

> However, I think that there's a RISK when someone uses an apparently
> "dumb" device with no intrinsic "history" to connect to another device
> (the switch) with no command history, and yet there's a record of every
> screen preserved weeks later.

True; but when we examine the totality of the RISKS archives and
Mr. Neumann's book, it becomes clear that the security pitfalls in computer
use are so great in number that even the most experienced and vigilant among
us can't avoid making fools of ourselves fairly often.

My conclusion is that meaningful security comes only with the military
approach of physically limiting access to computers and networks of
computers containing secure information.  Everything short of that has to be
considered a half measure and evidence that the information owner isn't
really serious about security.

For the vast majority of the computing community that is not willing to take
measures as extreme as the military does, the implication is that we must
design our systems and procedures and set our expectations on the
presumption that our work is not secure and can never be secure.  At least
in my humble opinion.
Re: GNU is not UNIX (Maziuk, RISKS-22.06)
<"Theodore Ts'o" <tytso@mit.edu>>
Thu, 9 May 2002 12:22:59 -0400
Dimitri's rant contains a number of inaccuracies as well as misconceptions
or myths. [It also had a typo in the subject line that has been fixed in
the archives for search purposes. It also provoked a lot of responses, some
of which are interspersed by me within Ted's comments here. PGN]
First of all, with regards to Stallman's rants about "LiGNUx", this is a
very long and often debated point. To claim that it has a good technical
reason is very much stretching the argument. At its base it is
fundamentally a social/political question, and it's all about the FSF trying
to take credit for a very popular OS movement. Stallman himself freely
admits that the reason why he tried to rename the operating system was to
try to draw Linux Users (many of whom are primarily pragmatists who use it
because it's best for what they want to do, and not for any kind of
political reason) into embracing his particular political agenda with
respect to Copyleft and the FSF's definition of "Free Software".
Dimitri claims that "Linux" has lots of GNU software in it, when in point of
fact, even the most highly inflated figure claimed by the FSF is 28% of a
typical Linux distribution, and for some distributions (for example, SuSE)
the percentage of FSF code compared to other Open Source code is far, far
less. There are of course many other arguments and counter-arguments ---
including the FSF's contention that the XFree86 code should be counted as
GNU software because the FSF would have included it in their vision of a GNU
system --- never mind the fact that XFree86 wasn't written by the FSF and
isn't even distributed under the GPL. ("Any code is GNU if I say it is!")
However, the arguments start losing technical and logical and even
rhetorical integrity quite quickly, and I won't waste RISKS readers'
time with any more of it.
(It also should be noted that while Dimitri claims that BSD is not GNU, *BSD
systems also use the GNU toolchain, and many other pieces of GNU software;
so where's the clear technical distinction between when something is GNU and
when something isn't?)
Secondly, with respect to the Linux killall command, it should be noted,
first of all, that contrary to Dimitri's assertion, IRIX's killall does allow
the "killall -HUP inetd" construction, as do NetBSD, OpenBSD, and Linux. So
to say that this is unique to Linux is quite incorrect. Furthermore, Irix,
Linux, and OpenBSD killall implementations are all independent, and do not
derive from the same source; the authors of each of them consciously made
the same design decision. "Killall" as a program which kills all processes
and which does not take an argument is actually a System V'ism; it was never
in BSD 4.3 or Version 7 Unix. [Dik Winter and Anton Ertl noted that Irix
killall is also able to accept a process name and thus behaves like Linux
killall. Anton Ertl and Toby Cabot both noted that Linux's killall is not
GNU, it is from Werner Almesberger's psmisc package (and that is specific to
the Linux /proc filesystem). PGN]
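For concreteness, a side-by-side sketch of the two behaviors (illustrative
only; do not try the System V lines on a machine you care about):

  # Linux, *BSD, IRIX: killall takes a process name and signals only
  # the matching processes.
  killall -HUP inetd     # sends SIGHUP to every process named inetd

  # Solaris and other System V descendants: killall is a shutdown aid
  # that signals essentially every active process, and the version
  # discussed above does not reject the extra operand.
  killall                # kills (nearly) everything -- the intended use
  killall -HUP inetd     # not what the typist meant; see below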
> Back in 1975 professionals designed an OS called Unix. Being professionals,
> they realised the need for certain design principles. Such as splitting a
> task into a number of smaller subtasks and designing a separate tool to
> handle each subtask (that does one thing, and does it well) [0].
[Kragen Sitaker noted that Unix began in 1969, not 1975. True. Ken
Thompson was on our Bell Labs Multics team until 1969, and Oleg Broytman
noted that Unix was designed *for* professionals (and primarily for their
own use!). PGN]
This design principle is oft quoted, but rarely in its entirety. The
*other* half of the Unix design principle was to observe what combinations
of commands were frequently used, and optimize those. So for example,
although you could search files looking for a particular pattern by using
/bin/ed's "g/<re>/p" command, it was used often enough that it was
worthwhile for the Bell Labs "professionals" to create the command "grep".
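A rough illustration of that equivalence (assuming a POSIX-style ed with the
-s flag to suppress diagnostics; "pattern" and "somefile" are placeholders):

  # The old idiom: drive ed from the command line and use its
  # g/re/p command to print matching lines.
  printf 'g/pattern/p\nq\n' | ed -s somefile

  # The combination was common enough to become a command of its own.
  grep 'pattern' somefile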
If the first part of the Unix minimalist design principle were all there
was to it, there would be no need for the grep command. Heck, there would
be no need for the System V "killall" command --- it would be possible to
implement it via smaller sub-components, so by what right should it exist?
Indeed, the System V "killall" command is used so rarely (only by the
shutdown scripts) that, arguably, it shouldn't exist at all, even using the
argument which justifies the existence of grep. It's only used in one
instance, so there's no justification for optimizing it or packaging it as a
stand-alone command.
In addition, the reason people get into trouble when trying to run
"killall -HUP inetd" on a Solaris machine is that the Solaris version
doesn't check the arguments it was passed; a well-designed and robust
program should check the validity of its arguments, and fail if they are not
what was expected --- and this includes the presence of arguments when none
are supported.
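As a sketch of what such checking could look like (a hypothetical wrapper,
not anything Sun ships; it assumes the real command lives in
/usr/sbin/killall and takes at most an optional signal name):

  #!/bin/sh
  # killall-wrapper: refuse to run the System V killall with operands it
  # was never designed to interpret, instead of silently ignoring them.
  case $# in
    0) exec /usr/sbin/killall ;;        # documented use: kill everything
    1) exec /usr/sbin/killall "$1" ;;   # documented use: optional signal
    *) echo "usage: killall [signal]" >&2
       exit 2 ;;
  esac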
For this reason, since the System V killall is such a useless,
unjustifiable, badly implemented botch of an idea --- let me defend the
honor of the "1975 professionals" whom Dimitri invoked by stating that
killall was not their fault. It was the fault of the people at AT&T who,
among other things, gave us abominations such as System V streams. [To
amplify that, Peter van Hooft notes that there was no killall in Version 7,
and Anton Ertl did similarly. PGN]
So to the extent that the Linux, *BSD, and Irix maintainers all decided to
ignore this "prior art" in favor of something much more useful, this was a
Good Thing. After all, "killall -HUP inetd" is useful and common enough that
it's worth being able to do this rather than forcing the system
administrator to type:
kill -HUP `ps ax | grep inetd | awk '{print $1}'`
Don't you think?
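(For what it's worth, systems that ship pgrep/pkill -- Solaris 7 and later
among them -- offer a middle ground that avoids both the System V land mine
and the pipeline above:

  # pkill matches processes by name and signals them, so this does
  # roughly what the Linux/IRIX "killall -HUP inetd" does:
  pkill -HUP inetd

The command names here are real, but treat the portability claim as a rough
guide rather than a guarantee.)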
> As software becomes more complex, it requires more sophisticated build
> tools. More and more open source software is being developed using GNU
> compilers and build tools, and it is becoming dependent on them. The result?
> -- While portability at the level of each compilation unit is still
> maintained, the whole thing is not portable anymore. It fails to build on
> non-GNU systems [2].
This is an old problem, and it's not unique to GNU or Linux --- remember
back in the days of BSD 4.3, and Henry Spencer's 10th Commandment from his
"Ten Commandments for C Programmers"?
"10. Thou shalt forswear, renounce, and abjure the vile heresy which
claimeth that ``All the world's a VAX'', and have no commerce with the
benighted heathens who cling to this barbarous belief, that the days of
thy program may be long even though the days of thy current machine be
short."
The annotated version http://www.lysator.liu.se/c/ten-commandments.html
points out that an updated version of this heresy would be "All the world's
a Sun". In the years before Linux became ascendant, I remember having to
work with lots of programs available on the Internet which only worked on
Solaris machines, and didn't compile or run correctly on Ultrix or OSF/1
machines. This is not a new issue; your claim that this is somehow unique
to GNU/Linux systems is neither fair nor historically accurate.
The main problem is that most of the bug-fixing takes place on whichever
platforms are most easily available at the time. Back in
the day when the development community had plenty of VAX systems running BSD
4.3, guess what most programs were optimized for? And back when most people
with Internet access were running SunOS, guess what most programs were most
likely to run under?
Sun is actually partly at fault here, at least with respect to the problem
you stated of many programs assuming the use of GCC and the GNU toolchain.
Back when Sun stupidly started charging for their C compiler as a separate,
optional package, many people switched to using the GNU toolchain and never
looked back. Given that GCC is available for Solaris, the incentive to make
packages portable to the native toolchain is much, much smaller. (But hey,
if it's so important
to you, the whole point of Open Source software is that you can fix the
problem yourself, and contribute the changes back to the maintainer.)
>And since GNU compiler and build tools are unable to produce 64-bit
>code on Solaris, the libraries, and all software that uses them must
>be built as 32-bit binaries. Now, why did I pay for that 64-bit
>hardware, again?
Actually, that last is a very good question. Unless you needed the address
space the 64-bit provides, in most cases there is no advantage to compiling
programs to run in 64-bit mode. Indeed, to the extent that the pointers and
integers take up more space, programs can actually run more slowly and take
up more memory unnecessarily when you compile them as 64-bit binaries.
Given that most of the Open Source programs you want to run probably don't
need the 64-bit address space (very few programs, other than scientific
computation and database engines do), the fact that you're compiling GNOME
or Gnumeric in 32-bit mode is probably the right thing; it's certainly not a
problem. [Anthony W. Youngman notes that Linux ran on SPARC in full 64-bit
mode long before Solaris did. PGN]
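If you want to see what is actually running in 64-bit mode on a Solaris 7 or
8 box, a quick check (a sketch using the standard isainfo and file commands;
the gnumeric path below is just an example):

  # Word size of the running kernel's native instruction set: 32 or 64.
  isainfo -b

  # Whether a particular binary was built 32-bit or 64-bit shows up in
  # the ELF class reported by file(1).
  file /usr/local/bin/gnumeric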
> GNU project in particular did a great service to software community by
> promoting and popularizing free software. It also did a great disservice by
> turning the whole thing into a political issue, and pretty much ignoring the
> need for competence and expertise on the part of software developers.
> Instead of sound software engineering, we now have "Free Speech"
> flag-waving [3].
First of all, many members of the Linux development community don't pay much
attention to the FSF's political agenda. (This is the whole reason for the
"Open Source" vs. "Free Software" terminological debate, which in turn
inspired Stallman to try to claim Linux as his own by trying to rename it to
"LiGNUx"; as I said, this is not a technical issue, but a political one ---
and many, many people in the Linux world do not subscribe to the FSF
political agenda.)
Secondly, it's very much unfair to make such a huge generalization; many
open source software packages are written or maintained by professionals,
and it shows. Some of it isn't. In general, software which works gets
used. Software which doesn't, doesn't get used. People don't choose to use
Linux because of the FSF's political agenda; they use Linux (or FreeBSD, or
MacOSX) because it does what they need it to do; because it crashes less
often than Windows; because they are able to "look under the hood" and fix
things themselves if necessary, instead of being at the mercy of a vendor
who might or might not deign to fix a bug which is critically affecting
their business.
Open Source software may not be perfect, but then again, no commercially
available operating system which is available at reasonable prices is
perfect, or even close to perfect. The stories in RISKS have proven this
over and over again.
[RISKS received numerous counter-rants (NOTE: ``counterrants'' might seem
like count-errants, as in knight-errants), including admonitions to your
moderator for running Dimitri's rant in the first place. But the mere
existence of his rant suggests that some education is needed, and
hopefully Ted's message has provided some. I have interspersed a few
of the comments received as they apply above. PGN]
Re: GNU is not UNIX (Ts'o, RISKS-22.07)
<Dimitri Maziuk <dmaziuk@yola.bmrb.wisc.edu>>
Thu, 9 May 2002 13:06:18 -0500

  [Dimitri's own counter-counter-rant is excerpted here by PGN.]

Most software licenses that I bothered to read expressly disclaim "any
warranty as to suitability or fitness for any purpose".  To me, that spells
"crap".  Commercial developers are at least [implicitly] honest: they're
here to make money.  If they can make money from crap, more power to them.
When crap is being pushed as the best thing since sliced bread because it's
"Free as in Beer" (or "Speech", or whatever) (read: "it makes some nerd with
zero social skills feel good about himself"), well, forgive me if all I feel
about that is disdain.

I'm a great believer in Open Source, Public Science, and sharing of ideas.
But you know what? -- my job is to keep software up and running.  I don't
care if it pays someone's bills, or if it suits someone's political agenda.
All I care about is that it works.  And in my experience software that works
is software written by clueful developers -- professionals, experts -- be
that Wietse Venema, Linus Torvalds, or the Borland language design team.

None of which has anything to do with the fact that GNU is Not Unix, and as
more people come to grown-up Unices from a GNU background, screw-ups like
the one with killall will become more and more common.

PS. As for 64-bit hardware, here's a hint: if you have SPARC Solaris 7 or 8,
run this script:

  #!/bin/bash
  if [ $((2048*1024*1024)) -lt 0 ] ; then
    echo "Your bash has Alzheimer's"
  fi

and see what it says.  And if you have any shell scripts that, e.g., monitor
RAM or disk usage, better go check them really carefully.
More on Klez (Re: Slade, RISKS-22.05)
<"Bob Morrell" <bmorrell@wfubmc.edu>>
Thu, 9 May 2002 12:18:12 -0400

Rob Slade's comments on Klez were a useful summary of the broader aspects of
this recent worm.  I agree that the unusual lack of publicity on this worm
is puzzling and problematic.  However, the much more disruptive aspect of
this worm, which Mr. Slade mentioned, has been Klez's penchant for sending
e-mail in other people's names.  Person A gets the virus, and his computer
sends an infected e-mail to person B in the name of person C.  At this point
several things can happen, all of which cost the users (and their network
admins) oodles of time.

The most common is that person B sends an angry e-mail to person C (whom
they often do not know) or, worse, to person C's business/domain.
Non-computer admins want to know what person C is doing (and their questions
are more pointed when Klez uses a sexually suggestive subject header).  They
look suspiciously at C, and at C's LAN admin, who had certified that C's
computer was patched and had adequate virus protection.  Explaining the
complexities of this worm to less-than-computer-literate admins often takes
two or three attempts, and even then I think some of them still think they
should ding someone.

Or person B has a server-based e-mail virus scanner and sends a notification
of failure to deliver to C, who flips out, believing their computer is
infected.  Again, the complexities of this worm are hard to communicate,
much time is wasted trying to explain, and all the assurances you have given
them about how up to date and secure their computer is (and why it is worth
all the time and effort you put into antiviral software and patches) suffer
a credibility hit.  User C may even try to contact B's domain seeking an
explanation (and more time is wasted on all sides).

This is, in effect, a new form of identity theft, and the time wasted in
orientation (what is going on?) and in repairing perceptions and reputations
can be substantial.  The risk?  Too many e-mail users still believe that
'from' header, unaware how easy it is to fake.  As Klez forces them to
understand this, they almost certainly will over-react, which ultimately
will undermine the efforts to make digital signatures and online validation
more common.

Bob Morrell, Cancer Center, http://home.triad.rr.com/bmorrell/
Re: More on Klez (Re: Slade, RISKS-22.05)
<Bitwolf Programming <mp0283@bitwolf.net>>
Thu, 09 May 2002 01:47:17 -0400

I have a minor addition to Rob Slade's observations and comments.  Though
not vulnerable myself to Klez, I've had to deal with infections on a number
of e-mail lists.  In my experience, the Return-Path header generally
contains the infected person's address, or enough of a clue that you can
narrow down the list member [0] who _might_ be infected.  Given that this is
malware, there is a RISK in relying on it, but for the moment it does
provide clues useful in identifying and alerting folks with infected
machines.

Paul Mech

[0] Many folks on an e-mail list will have other people from the list
bookmarked.  Thus some of the malware's forgeries will fool some lists into
accepting its spawn.
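  [A minimal way to pull that header out of a message, assuming the raw
  message with full headers has been saved to a file (the filename here is
  just a placeholder):

    # Print the Return-Path line from a saved Klez message; as noted above,
    # it usually points much closer to the actually infected account than
    # the forged From: header does.
    grep -i '^Return-Path:' saved-message.txt
  ]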
