The RISKS Digest
Volume 17 Issue 68

Tuesday, 30th January 1996

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Extremely undesirable errors
Jordin Kare
20 bits, 40 bits, 60 bits, $10
A. Padgett Peterson
Re: Cost to crack Netscape Security falls...
Larry Kilgallen
Peter Kaiser
Peter Curran
Simulation vs. Live Action (Japanese plane incident)
John Bredehoft
Re: Unintended missile launches
Thomas L Martin
Robin Kenny
Re: Security hole in SSH 1.2.0
Bear R Giles
Mike Alexander
Re: WebCard Visa: It's everywhere you (don't) want to be?
Li Gong
Computer Control and Human Error (Kletz+Chung+Broomfield+Shen-Orr)
C. Shen-Orr
Re: "Learning Machines" and "Learning People"
Michael J Zehr
Legacy security holes
Ted Russ
ABRIDGED info on RISKS (comp.risks)

Extremely undesirable errors

Jordin Kare <jtk@s1.gov>
Fri, 26 Jan 96 14:27:06 PST

The recent discussion of modems hanging up on "+++" made me think of the following:

Q: What is the worst error message the pilot of an F-14 flying over the Persian Gulf can receive?

A: NO CARRIER

Jordin Kare
[Cute. I would also hate to see a sequence of messages on a fly-by-wire cockpit screen something like this: "Garbage collection. Undefined error at location 75421. Dumped core. Rebooting, please wait. Flight-data loss likely." PGN]

20 bits, 40 bits, 60 bits, $10

A. Padgett Peterson <padgett@tccslr.dnet.mmc.com>
Thu, 25 Jan 96 22:22:43 -0500

Subject: Single computer breaks 40-bit RC4 in under 8 days (Kocher, 17.67)

>On a typical PC, there are just too many other security weaknesses
>for there to be much practical difference between 3DES and 40-bit RC4.

Paul and I have different worldviews, and this is one example. "Other security weaknesses" usually revolve around theft, compromise, and "buying an employee" - the last is often the cheapest (social engineering) but is traceable. The nice thing about brute force is that it is essentially passive: the target never needs to have a clue that it happened.

Given that, I apply a quanta rule to crypto: these days I use DES as "strong enough", since the state of the art for cracking 56 bits is measured in years, not hours.

OTOH, with the right equipment, RC4-40 will usually succumb in either 3 hours or $500, whichever comes first - down in the noise as far as cost goes, and wholly passive ("other" methods are active, or at least local). So clearly 40 bits is not "Good Enough"(C), while at 56 bits the other methods of attack become the easier ones. The real economic crossover is somewhere in the middle - probably around a week, or 46 bits/$30,000 - but with 56 easy and available, why bother with less (it would cost *more*)?
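As a back-of-the-envelope sketch of that scaling (the assumption being that each extra key bit simply doubles the search, anchored to the $500/3-hour RC4-40 figure above; compile with -lm):

  /* Brute-force scaling sketch: each extra key bit doubles the cost.
     The RC4-40 anchor point ($500 or 3 hours) is taken from the text;
     everything else is extrapolation, not measurement. */
  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      const double base_cost  = 500.0;  /* dollars, for 40 bits */
      const double base_hours = 3.0;    /* hours,   for 40 bits */

      for (int bits = 40; bits <= 64; bits += 8) {
          double factor = pow(2.0, bits - 40);
          printf("%2d bits: ~$%.0f or ~%.0f hours\n",
                 bits, base_cost * factor, base_hours * factor);
      }
      return 0;
  }

The point of the exercise: 56 bits costs about 65,000 times what 40 bits does, which is exactly the gap between "noise" and "years".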

So while it is possible that in theory, there is little difference between 3DES and RC4-40, in the real world of cost/reward, I suspect that there
is a gap that would take many doubloons to cross.

Padgett
ps: Personally, I like 64 bits. It's a chunk that fits nicely into 32-bit registers. Examined from a CISC cycle-cost perspective, why bother with less?


Re: Cost to crack Netscape Security falls... (Curran, RISKS-17.67)

"Larry Kilgallen, LJK Software" <KILGALLEN@Eisner.DECUS.Org>
Fri, 26 Jan 1996 10:37:13 -0500 (EST)

>There is a simple solution to the ITAR problem - develop the software in a
>location not subject to US export laws (i.e. almost anywhere else in the world).

While that may be a viable approach for companies in those other countries, it is not relevant to Netscape. As a US corporation, Netscape would most likely be prosecuted under ITAR if it had such software developed by a subsidiary or a contractor in some other country, even if the results never touched US territory. This government attitude even extends to not allowing US companies to add convenient hooks for the provision of crypto once a product is delivered overseas.

While the USA has no monopoly on software development expertise, it is _all_ of the software that must be developed elsewhere, not just the crypto part. Since Netscape has such a hold on the browser market, it would appear that the US _does_ have a monopoly on "something" (marketing?) to achieve such market penetration despite ITAR limitations - or perhaps the old adage is true: "Security does not sell products".

Larry Kilgallen

Re: Cost to crack Netscape Security falls... (Curran, RISKS-17.67)

Peter Kaiser <kaiser@heron.enet.dec.com>
Fri, 26 Jan 96 08:20:11 MET

Not so simple. It must be developed not only outside the USA, but by a non-US company and in a locale that permits unrestricted export of the product.

Under those conditions it's more difficult for the that company's profit to accrue to Netscape (or whomever), because the USA authorities pay attention to whether there's a real separation of entities. For instance, my company is incorporated separately in most countries, but because we have a US parent we are everywhere considered a US corporation for these purposes; and in my experience, the authorities do pay attention to this. So there's nowhere on earth that we can develop crypto for unrestricted export.

See Bert-Jaap Koops's "Crypto Law Survey".

Pete kaiser@acm.org +33 92.95.62.97 FAX +33 92.95.50.50

Re: Cost to crack Netscape Security falls... (Kaiser, RISKS-17.68)

Peter Curran <pcurran@inforamp.net>
Fri, 26 Jan 1996 11:55:14 -0500

I am certainly not an expert on this topic, but...

This is, of course, the American view of the situation. Most other countries find this attempt at extra-territorial application of US law extremely offensive, and illegal besides. A company incorporated in any country is subject to the laws of that country, not US law, regardless of ownership. US laws claiming otherwise are generally regarded as null and void outside the US. In Canada, for example, I believe we have a law making it explicitly illegal for subsidiaries in Canada to obey US law when it conflicts with Canadian law. This normally arises in relation to the US embargo of Cuba, but it applies in other areas.

Of course, in reality this can simply create an unresolvable conflict for the companies - subject to one penalty for acting, and another for not. However, in regard to the ITAR rules, I think it is pretty clear that a company could contract out the development of a piece of software to an offshore company, and the result would not be subject to US laws. Obviously there are complications involved in doing this, but if a company is seriously trying to maintain a global leadership role in data communications services, and seriously trying to provide acceptable security to its customers, something along this line is necessary.

Peter Curran pcurran@inforamp.net

Re: Unintended missile launches (Shafer, RISKS-17.67)

Thomas L Martin <tlm@GLOBE.EDRC.CMU.EDU>
Fri, 26 Jan 96 15:26:26 EST

This sounds like an incident a high school teacher of mine talked about several times. During Vietnam, on the aircraft carrier U.S.S. Forrestal, someone forgot to raise the heat shields behind a departing jet. When the jet fired up its engines to take off, a missile locked onto the heat and launched. (I don't remember whether this missile was on another jet or still being loaded.) The initial explosion led to several other missiles going off and a huge fire. My high school teacher worked in the psych ward of a Navy hospital on Guam, and treated a number of sailors involved.

Sorry I'm so sketchy--it's been a number of years since I heard the story.

Tom Martin, Carnegie Mellon University, 5000 Forbes Ave., Hamburg Hall/EDRC, Pittsburgh, PA 15213 (412) 268 6480 tlm@globe.edrc.cmu.edu

Re: Unintended missile launches

Robin Kenny <robink@aus.hp.com>
Tue, 30 Jan 96 17:30:06 EDT

In RISKS-17.67, Mary Shafer commented on unintentional missile launches being common. An ex-RAAF technician once related to me how he was atop a tower at an airfield when two Sidewinder missiles loosed and shot past either side of it... he believes the tower being wood is what saved him. I wish I could have seen this, and heard him afterwards, as he was quite an articulate gentleman.

robink@aus.hp.com Melbourne, Australia UTC +10 hours

Simulation vs. Live Action (Japanese plane incident) (Ishikawa, RISKS-17.65)

John Bredehoft <johnb@bird.Printrak.Com>
Wed, 24 Jan 1996 13:48:38 +0800

In RISKS-17.65, Chiaki Ishikawa described an incident in which a Japanese air force fighter plane, armed with live missiles, shot down another plane during a training exercise. According to the information provided, a message appeared on the firearm status screen which should only appear when the master firearm switch is activated. Some officials wondered why the pilot didn't realize that something was wrong when this particular message appeared.

This portion of the story raises an interesting question: if the plane had been functioning "correctly," then a training exercise (in which the "missiles activated" message would NOT appear) would differ from a live combat situation (in which this message WOULD appear).

I was wondering if this difference between training and live action would be a risk in itself (how would a pilot in combat react to a message that had never appeared during training?).

Or would the "fix" (i.e., displaying a "missiles activated" message when no missiles are actually activated) be riskier?

I'm sure that RISKS readers have faced simulation vs. actual issues before; I'd be interested in your thoughts.

John E. Bredehoft, Proposals Department, Printrak International Inc.
johnb@printrak.com 72604.2235@compuserve.com

Re: Security hole in SSH 1.2.0 (Alexander, RISKS-17.67)

Bear R Giles <bear@indra.com>
Fri, 26 Jan 1996 19:00:26 -0700 (MST)

>... Unix security bugs that result from the fact that Unix associates all
>access controls with users and has no way to assign access rights to a
>program independent of the user running the program.

That's not true at all. Unix can do this; it just requires a bit of work:

  1. Create an /etc/passwd entry for the program. You can set the password and shell fields so that logins are impossible.
  2. Change the ownership of the program to this ID, and set the program's "set uid" bit.
I know this works because I have a slew of realtime ingest programs which all run as a non-existent user. The human users get access to the data through a shared "group" id... but we have only read access to most of that data.

It becomes more complex if your program requires root permissions. NCSA httpd, for example, is setuid to root but generally runs as a specified user-id - and it's highly recommended that this user-id be for a nonexistent user with no privileges. For many systems, that's nobody (-1) and nogroup (-1). It only uses the special privileges to grab a privileged socket number.
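A minimal sketch of that pattern - take the privileged socket while still root, then drop to an unprivileged id - with the port and uid/gid values illustrative and error handling abbreviated:

  /* Sketch: grab a privileged port as root, then permanently drop to
     an unprivileged user before serving requests.  The uid/gid 65534
     ("nobody" on many systems) and port 80 are illustrative. */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <netinet/in.h>

  int main(void)
  {
      struct sockaddr_in addr;
      int s = socket(AF_INET, SOCK_STREAM, 0);

      memset(&addr, 0, sizeof addr);
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_ANY);
      addr.sin_port = htons(80);              /* privileged: needs root */

      if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
          perror("bind");
          return 1;
      }

      /* Root no longer needed; setgid() must come first, while we can
         still change groups, and setuid() away from root is permanent. */
      if (setgid(65534) < 0 || setuid(65534) < 0) {
          perror("drop privileges");
          return 1;
      }

      /* ... listen(), accept(), and serve as the unprivileged user ... */
      return 0;
  }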

>and access to files and other system resources can be granted to the
>program (or a combination of a program and a user)

You could fake the latter in Unix, with some work. You would need a "group" for each (user, program) pair, and a small program with root permissions. After it determined the user-id of the person who invoked the program it could change the user and group id to "nobody" and "user-program" (in a manner to permanently lose root permissions!) and then exec() the actual program.
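A minimal sketch of such a wrapper, with every name and path hypothetical:

  /* Sketch of the wrapper described above: a small setuid-root program
     that picks the per-(user,program) group, permanently gives up root,
     and exec()s the real program.  "myprog" and its path are made up. */
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/types.h>
  #include <pwd.h>
  #include <grp.h>

  int main(void)
  {
      uid_t invoker = getuid();           /* who actually ran us */
      struct passwd *pw = getpwuid(invoker);
      struct passwd *nobody = getpwnam("nobody");
      struct group *gr;
      char grpname[64];

      if (pw == NULL || nobody == NULL)
          return 1;

      /* Hypothetical convention: one group per (user, program) pair. */
      snprintf(grpname, sizeof grpname, "%s-myprog", pw->pw_name);
      if ((gr = getgrnam(grpname)) == NULL)
          return 1;

      /* Order matters: set the group while still root; then setuid()
         drops real, effective, and saved ids, so root is gone for good. */
      if (setgid(gr->gr_gid) < 0 || setuid(nobody->pw_uid) < 0)
          return 1;

      execl("/usr/local/libexec/myprog.real", "myprog", (char *)0);
      perror("exec");                     /* reached only on failure */
      return 1;
  }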

Does Unix require too much work to achieve this goal? That's a hard question to answer; allowing a more direct way to achieve these objectives also increases the risk of undetected errors in the privilege logic. There's a lot to be said for handling one case well instead of spreading your effort over three or four different approaches.

Bear Giles bear@indra.com

Re: WebCard Visa: It's everywhere you (don't) want to be?

Li Gong <gong@csl.sri.com>
Fri, 26 Jan 1996 12:24:13 -0800 (PST)

In RISKS-17.67, Doug Claar <dclaar@hprtnyc.ptp.hp.com> mentioned the "WebCard Visa", which will allow users to access their account statements via the Internet. Wells Fargo Bank, another major bank in the SF Bay Area, has started offering online access to account information, not through the old dial-up service, but via the Internet. From the marketing brochure, it is unclear whether the bank would or could keep my records separate from those of customers who have signed up to use the service. If not, my records would be exposed to the net whether I want it or not, and would be vulnerable to holes/bugs in web software.

Li GONG, SRI International, URL http://www.csl.sri.com/~gong/

Re: Security hole in SSH 1.2.0 (Giles, RISKS-17.68)

Mike Alexander <mta@umich.edu>
Sat, 27 Jan 1996 00:32:48 -0500

Sure that works, but those users are non-existent only in your mind, not in Unix's view. If there is a bug in one of the programs that uses this technique that lets a person get control of the process while it is running, then that person is using that userID whether or not you think the user exists. Of course the person will only have whatever access the supposedly non-existent user has, which is better than root access, but still.... The advantage of program keys in this context is that if the program crashes, the user doesn't inherit the access the program had, only whatever access they are supposed to have given the userID they are using (and whatever other program they happen to run). This is quite different.

It's certainly possible to write secure software in Unix, but it's not trivial. Too much depends on each program (sendmail, ftpd, popd, etc., etc.) being well written and well debugged. Putting this facility in the kernel and doing it well makes it much, much easier to write secure server programs. People with only limited programming ability have written server-style programs for MTS which have been at least as secure as the typical Unix server - even in Fortran, although I wouldn't recommend that. :-)

Please don't take me wrong and start yet another operating system war. Unix is fine, but it doesn't necessarily have the only answer to all the world's problems. The same could be said for many operating systems, certainly for MTS. I'm just trying to point out that there is more than one way to solve this problem, and looking at other ways may teach us something.

Mike Alexander 1309 Gardner, Ann Arbor, MI, USA University of Michigan
+1-313-663-1759 mta@umich.edu America Online: MAlexander

Computer Control and Human Error (Kletz+Chung+Broomfield+Shen-Orr)

"C. Shen-Orr" <shenorr@actcom.co.il>
Sun, 28 Jan 1996 00:27:55 +0200 (EET)

Computer Control and Human Error, by Trevor Kletz, with Paul Chung, Eamon Broomfield and Chaim Shen-Orr. Institution of Chemical Engineers (IChemE) (UK) - ISBN 0 85295 362 3; Gulf Publishing (US) - ISBN 0 88415 269 3. July 1995.

Back-cover blurb: Whether you design computer-controlled systems, work with them or merely write the software for them, this is the book for you. It presents computers and error in everyday language, making them accessible to non-specialists and specialists alike. Trevor Kletz takes you through a range of incidents and accidents which occurred on computer-controlled plants and equipment. He is a master at bringing out the lessons. Researchers Paul Chung and Eamon Broomfield survey experience to date with hazard and operability studies (Hazops) on computers - an attempt to spot and correct errors before they threaten the safety of the controlled systems. And Chaim Shen-Orr, who directs safety programmes for RAFAEL in Israel, explains why computer-controlled systems can fail. He adds advice for system designers and managers designing or maintaining safety-sensitive installations. Human error is generally to blame - either in interaction with the computer or in writing the software which it uses. A compelling read!

C. Shen-Orr, Sc.D., 17 Lionel Watson St., Haifa, Israel 34751
Tel: +972 (0)4 834-4831 Fax: +972 (0)4 834-4831 shenorr@actcom.co.il

Re: "Learning Machines" and "Learning People"

<tada@MIT.EDU>
Mon, 29 Jan 96 17:29:13 -0500

... or "When is a spam a real risk?"

When I first read the message, I wondered why it was included in RISKS. Then I thought about the kind of risk it would be if someone actually *did* develop a system as described.

Suppose the device works, and it can program your mind to say "Me llamo Miguel" when you're introducing yourself to someone speaking Spanish. But what if there were a transmission error while you were connected to the machine and it actually programmed your brain to stop your heart when you sneezed!

I'd be more leery of using such a device if I thought it worked than if I was sure it didn't. :)

-michael j zehr
[That is sort of why it got through my sieve in the first place. TNX. PGN]

Legacy security holes

Ted Russ <ted@vale.faroc.com.au>
Tue, 30 Jan 1996 00:20:45 -0800

Here is something which will make some people shudder, and others break out in a cold sweat - a transcription (with all identifiable references changed to protect the parties involved) of a memo which I have been given:

(A brief note first: This company has just recently consolidated all their various systems, and incorporated a WAN connection to an affiliated company which is interstate, and whose WAN is connected to Internet via a firewall.)

[This is a very old problem. However, because of the feeding frenzy of everyone getting on the Internet, it seems worthy of inclusion. PGN]
--------------------------------------------------------

Brief Description:
On Monday the 29th, I discovered a security hole in the new Computer Services system, made the sysadm aware of it, and then helped him to plug it. The potential for almost unlimited access to the system was there and, as shall be seen, there is probably no way to tell whether someone has already taken advantage of the insecurity.

Details:
I used a quite legitimate program which was installed on my PC by Computer Services as part of the new networking packages. It allowed me, under the sysadm's supervision, to copy a file to my PC from the IBM machine.

The program is called FTP (File Transfer Protocol) and is commonly used on the Internet to move files around any TCP/IP network, such as the one we are running. Most people who have used the Internet would be familiar with its operation.

FTP requires a server program, called the FTP daemon, to run on the host machine. Such an FTP daemon was running on the IBM, and it was by de-activating this daemon that the sysadm and I subsequently closed the security breach.

FTP daemons can be set to log all transactions and to restrict login access; however, because there had never been a requirement for such measures before, the daemon on the IBM machine had not been set to logging mode, nor restricted in any way.

Implications:
Since the security of the IBM machine against network programs had never been a concern, it was possible to read almost every file on the IBM system, and probably to write to or overwrite about half of those.

Since FTP is a well-known program, and since it has been installed on every PC which is connected to our network, around 20% of personnel on the premises could have exploited this weakness to copy sensitive data to their PC, and then read it or copy it to a floppy.

Since our TCP/IP network is also connected to XXXX's WAN, anyone on that system could also have exploited this weakness in a similar fashion. Copies of sensitive material could have ended up in the wrong hands, details could have been changed and then written back onto the IBM, etc.

It is necessary to have a username and password in order to log onto an FTP server successfully; however, there are several commonly used account names, such as "sysadm", with which most users (and certainly ALL would-be system breakers) are familiar, leaving only the password to be found out in order to gain access.

One of the first targets for system breakers then becomes the password file: once they have copied it, they can attack it offline at their leisure - guessing candidate passwords and comparing the results against the stored hashes - and thus obtain the root password.
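For illustration, the classic offline attack is nothing more than guess-and-compare against the salted hashes; a minimal sketch (the hash and the word list are made up, and crypt() may need -lcrypt when linking):

  /* Sketch of offline password guessing against crypt(3) hashes.
     The stolen hash and the word list are invented for illustration. */
  #define _XOPEN_SOURCE   /* exposes crypt() in <unistd.h> on many systems */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /* In a classic /etc/passwd entry, the first two characters of the
     hash field are the salt. */
  static int try_guess(const char *hash, const char *guess)
  {
      char salt[3] = { hash[0], hash[1], '\0' };
      char *result = crypt(guess, salt);
      return result != NULL && strcmp(result, hash) == 0;
  }

  int main(void)
  {
      const char *stolen_hash = "ab01FAX.bQRSU";   /* made-up example */
      const char *wordlist[] = { "password", "sysadm", "letmein", NULL };

      for (int i = 0; wordlist[i] != NULL; i++)
          if (try_guess(stolen_hash, wordlist[i]))
              printf("password found: %s\n", wordlist[i]);
      return 0;
  }

Note that none of this touches the target machine: the guessing happens on the attacker's own hardware, which is why even a logging FTP daemon would never see it.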

And since the FTP server on the IBM was not logging operations, it would not be immediately obvious that someone had been logging on and trying random passwords in their tens of thousands in an effort to break into the system.

Lastly, because the server was not logging, it is very possible that someone has already exploited this weakness and we would have no trace of their activities....

Other Thoughts:
The IBM and Novell systems have been linked together, and linked to the Network LAN, for around a month, I believe. It took me less than three minutes to realise that here was a potential system risk, find out the necessary IP address, and prove that there was a risk.

Since I run a similar but much more exposed system, I adhere to an accepted code of conduct, and informed the sysadm immediately on discovering the breach. Another user, perhaps one dissatisfied with the Company or tempted by offers and promises of an outside party, might not have been so considerate of the Company's interests . . .

[Luckily, this case ended with a general tightening of security and no damage. (To date, anyhow...) Ted Russ]
