Since the beginning of October, San Jose, California, has issued no garbage bills to its 186,000 homes and 3,765 apartments, because of "faulty procedures in saving backup records." The city is spending $360,000 to rectify the situation, which will take another month. [Source: *San Jose Mercury News*, 13 Nov 1996, courtesy of Babak Taheri.] Garbage collections continue, while revenue collections do not. GOGIgigo = Garbage Out, Garbage In, garbage in, garbage out (with respect to real and virtual garbage, respectively).
Some Web shoppers have recently had their worst fears about electronic commerce confirmed: the credit-card information they trustingly typed in was accessible to anyone with a simple Web browser. The affected sites had improperly installed SoftCart, a transaction-handling program made by Mercantec Inc. "Our standard documentation clearly explains how to avoid these security break-ins," says Mercantec's president. The problem was attributed to human error: inexperienced installers left completed order forms in directories accessible to Web browsers. Vendors affected by the glitch say they've taken steps to remedy the situation. (*Wall Street Journal*, 8 Nov 1996, B6)
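The underlying mistake is easy to state concretely: a file written beneath the Web server's document root is typically one URL away from being public. A minimal sketch of the test involved (Python; the paths are hypothetical, not Mercantec's actual layout):

    # Sketch of the misconfiguration class described above; paths hypothetical.
    import os

    DOCROOT = "/var/www/htdocs"        # everything below here is URL-fetchable

    def is_web_accessible(path):
        # True if the path lies inside the Web server's document root.
        return os.path.commonpath([DOCROOT, os.path.abspath(path)]) == DOCROOT

    print(is_web_accessible("/var/www/htdocs/orders/order-42.txt"))  # True: exposed
    print(is_web_accessible("/var/spool/orders/order-42.txt"))       # False: private

Completed order forms belong in the second kind of directory; the affected sites had them in the first.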
Many researchers have noted security flaws in existing Java implementations, as well as fundamental weaknesses in Java's security model. Examples of the former include attacks that confuse Java's type system, ultimately allowing applets to execute arbitrary code with the full permission of the user invoking the browser; examples of the latter include the lack of audit trails and Java's single-line-of-defense strategy. Dean, Felten, and Wallach's paper "Java Security: From HotJava to Netscape and Beyond" brought most of these issues to light, sending shock waves throughout the computing community. (See http://www.cs.princeton.edu/sip.)

Until now, users and system designers have been content to consider these problems transient, confident that bugs will be mended quickly enough to limit any damage. Netscape, for instance, has been admirably quick in responding to the most serious problems. However, the giant installed base of Java-enabled browsers---each inviting an adversary to determine the browser's actions---gives reason to suspect some kind of fallout even in "secure" implementations of Java.

Our paper, available at http://www.cs.bu.edu/techreports/96-026-java-firewalls.ps.Z, describes attacks on firewalls that can be launched from legal Java applets. In certain firewall environments, a Java applet running in a browser behind the firewall can cause the firewall to allow incoming telnet (or other TCP) connections to that host. In some cases, the applet can even use the firewall to access arbitrary hosts supposedly protected by it. The weaknesses exploited by these attacks lie neither in the Java implementation nor in the firewall as such, but rather in the composition of the two---and in the security model that results when browsers give adversaries such ready access to "protected" hosts.

Our paper also describes methods for preventing applets from crossing a firewall, which is one way to prevent such attacks (one such method is sketched after this note). In any case, we strongly recommend that managers of firewalled sites containing Java-enabled browsers take a good look at the issues involved and make appropriate policy decisions.

David Martin <firstname.lastname@example.org>, Computer Science, Boston University
Sivaramakrishnan Rajagopalan <email@example.com>, Bellcore
Avi Rubin <firstname.lastname@example.org>, Bellcore
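One applet-blocking method is simple to sketch: every compiled Java class file begins with the four-byte magic number 0xCAFEBABE, so a filtering proxy can refuse to relay any response body that starts with those bytes. A minimal illustration (Python; this is a general strategy from the applet-blocking literature, not necessarily the paper's exact mechanism, and such screening is imperfect, since applets can arrive by other routes):

    # Sketch: screening transferred files for the Java class-file magic number.
    JAVA_CLASS_MAGIC = b"\xca\xfe\xba\xbe"   # first 4 bytes of every .class file

    def looks_like_java_class(body):
        # A firewall proxy could decline to relay any HTTP response body
        # that begins with the class-file magic number.
        return body[:4] == JAVA_CLASS_MAGIC

    print(looks_like_java_class(b"\xca\xfe\xba\xbe\x00\x03\x00\x2d"))        # True
    print(looks_like_java_class(b"<html><body>harmless page</body></html>"))  # False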
There has been a great deal of talk about how ActiveX controls can be written to do malicious things on the Internet. However, what has not been recognized is that even standard ActiveX controls can be made to do malicious things via HTML and VBScript. Here are two simple examples of "good" ActiveX controls being made to do "bad" things:

The computer-crashing URL: file:///aux

If Microsoft's ActiveMovie control is told to play a movie from the URL file:///aux, Internet Explorer goes into an infinite loop under Windows 95. (AUX is a reserved MS-DOS device name, not an ordinary file.) Attempting to shut down Internet Explorer by doing an "End Task" will more often than not crash Windows 95. This bug can be exploited by the "bad guys" to create HTML pages that crash people's computers when the pages are downloaded from a Web site.

VBScript and ActiveX combo disk crasher

Even more worrisome are ActiveX controls that contain methods (i.e., function calls) that write files to disk. These methods can be used by a simple VBScript program to overwrite key system files such as AUTOEXEC.BAT, CONFIG.SYS, REG.DAT, etc. The damage is done simply by viewing an HTML page that contains the ActiveX control and the malicious VBScript code.

I know of at least three commercially available ActiveX controls that have methods that will save files to disk. Any of these controls, I believe, can be exploited to build a disk-crashing HTML page. At least two of them have valid Authenticode digital signatures, so they can be automatically downloaded and executed even with the highest security settings in Internet Explorer 3.

The big question in my mind is what can be done about solving these sorts of ActiveX security problems.

Richard Smith
Well, one of our old-favorite sources of RISKS quotes has done it again!

  Henry Petroski
  Invention by Design: How Engineers Get from Thought to Thing
  Harvard University Press, Cambridge, Massachusetts
  November 1996, 288 pages, ISBN 0-674-46367-6

This book explores the underlying essence of engineering: not so much what makes particular products tick (or not tick), but rather why the process of engineering design can evolve so successfully (or unsuccessfully, as the case may be). The chapters range widely over paper clips, pencils, zippers, aluminum cans, faxes and networks, planes and computers, water and society, bridges and politics, and finally buildings and systems.

It has long been my contention that those of us involved in developing and using computer systems have much to learn from the more traditional engineering disciplines. Henry himself modestly eschews analysis of computer-system developments and computer engineering, perhaps because there are fundamental differences between his disciplines and our "discipline" (or lack thereof). He typically leaves it to us to bridge the gap. However, this book provides an excellent step in that direction.
Robert I. Eachus <email@example.com> wrote in RISKS-18.60: "Actually every Ada compiler supports arbitrary precision arithmetic--at compile time. Static integer constants are required to be evaluated without overflow, and the easiest way to guarantee this is to evaluate them using an arbitrary precision arithmetic package."

I had an integer constant evaluate *with* overflow during this past week. Whilst updating a DNS record, I accidentally set the serial number to 19961100808, when it should have been 1996110808, corresponding to the four-digit year, two-digit month, two-digit day, and two-digit hour of the day. When I checked the record from the name daemon using nslookup (which is essentially viewing a run-time interpretation of the DNS record), the serial number looked nothing like what was entered. It had been evaluated, had overflowed, and the truncated result was being used, without any complaint from the name daemon when it reloaded the DNS record (the arithmetic is sketched below).

As DNS serial numbers must always increase for updates to propagate, and I wanted something that wouldn't overflow, I ended up using 2996111117 as the new serial number. As an accidental increase of a DNS serial number cannot be easily rectified, I believe that the name daemon should have refused to load a DNS record with a serial number that would cause an overflow.

Arthur Marsh, tel +61-8-8370-2365, fax +61-8-8223-5082, firstname.lastname@example.org
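The arithmetic is easy to check. SOA serial numbers are 32-bit unsigned values (RFC 1035), so if the name daemon reduced the over-long serial modulo 2^32 (a plausible reading of the behaviour described, though the report doesn't say exactly how the value was mangled), the typo would produce the following. A sketch in Python:

    # Sketch: what an 11-digit serial becomes in a 32-bit unsigned field.
    MODULUS = 2 ** 32              # SOA serials are 32-bit (RFC 1035)

    typo     = 19961100808         # the serial as accidentally entered
    intended = 1996110808          # YYYYMMDDHH, as intended
    stored   = typo % MODULUS      # what a mod-2**32 implementation would keep

    print(stored)                  # 2781231624 -- nothing like what was typed
    print(intended < stored)       # True: date-based serials now compare "older"
    print(2996111117 > stored)     # True: the chosen workaround still increases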
As I was looking at Jerry Leichter's article about "-32768 and strong typing", I was surprised to notice that it appeared under the line:

  Date: Thu, 7 Nov 96 00:33:42 EDT

As North American readers know, daylight saving time in the USA (where Jerry's site is located) ended for the year on October 27. So should we take this as "Wed, 6 Nov 96 23:33:42 EST", or "Thu, 7 Nov 96 00:33:42 EST", or something else? (The first reading is checked below.) And is there an interesting reason why it was wrong?

Mark Brader, email@example.com, SoftQuad Inc., Toronto
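Taking the header at face value (EDT is UTC-4; EST, the offset actually in force on 7 November, is UTC-5) reproduces Mark's first alternative. A quick check, as a Python sketch:

    from datetime import datetime, timedelta, timezone

    EDT = timezone(timedelta(hours=-4), "EDT")  # offset the header claims
    EST = timezone(timedelta(hours=-5), "EST")  # offset actually in force

    stamp = datetime(1996, 11, 7, 0, 33, 42, tzinfo=EDT)
    print(stamp.astimezone(EST))  # 1996-11-06 23:33:42-05:00, i.e., Wed 6 Nov EST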
[This version completely supersedes the earlier draft that inadvertently appeared in RISKS-18.59. PGN]

WHY CRYPTOGRAPHY IS HARDER THAN IT LOOKS
Bruce Schneier, Counterpane Systems

............................................................................
Copyright Nov 1996 by Bruce Schneier. All rights reserved. Permission is given to distribute this essay, providing that it is distributed in its entirety (including this copyright notice). For more information on Counterpane Systems's cryptography and security consulting, see http://www.counterpane.com.

[The intent of this notice is stronger than the default RISKS policy: This piece may be freely reproduced in its entirety, but only in its entirety without modification, including this banner and Bruce's final line of identifying information. By RISKS policy, such reproduced copies should also bear the relevant RISKS masthead identification: Risks-Forum Digest (comp.risks), Friday 15 November 1996, Volume 18, Issue 61.]
...........................................................................

From e-mail to cellular communications, from secure Web access to digital cash, cryptography is an essential part of today's information systems. Cryptography helps provide accountability, fairness, accuracy, and confidentiality. It can prevent fraud in electronic commerce and assure the validity of financial transactions. It can prove your identity or protect your anonymity. It can keep vandals from altering your Web page and prevent industrial competitors from reading your confidential documents. And in the future, as commerce and communications continue to move to computer networks, cryptography will become more and more vital.

But the cryptography now on the market doesn't provide the level of security it advertises. Most systems are not designed and implemented in concert with cryptographers, but by engineers who thought of cryptography as just another component. It's not. You can't make systems secure by tacking on cryptography as an afterthought. You have to know what you are doing every step of the way, from conception through installation.

Billions of dollars are spent on computer security, and most of it is wasted on insecure products. After all, weak cryptography looks the same on the shelf as strong cryptography. Two e-mail encryption products may have almost the same user interface, yet one is secure while the other permits eavesdropping. A comparison chart may suggest that two programs have similar features, although one has gaping security holes that the other doesn't. An experienced cryptographer can tell the difference. So can a thief.

Present-day computer security is a house of cards; it may stand for now, but it can't last. Many insecure products have not yet been broken because they are still in their infancy. But when these products are widely used, they will become tempting targets for criminals. The press will publicize the attacks, undermining public confidence in these systems. Ultimately, products will win or lose in the marketplace depending on the strength of their security.

THREATS TO COMPUTER SYSTEMS

Every form of commerce ever invented has been subject to fraud, from rigged scales in a farmers' market to counterfeit currency to phony invoices. Electronic commerce schemes will also face fraud, through forgery, misrepresentation, denial of service, and cheating. In fact, computerization makes the risks even greater, by allowing attacks that are impossible against non-automated systems.
A thief can make a living skimming a penny from every Visa cardholder. You can't walk the streets wearing a mask of someone else's face, but in the digital world it is easy to impersonate others. Only strong cryptography can protect against these attacks.

Privacy violations are another threat. Some attacks on privacy are targeted: a member of the press tries to read a public figure's e-mail, or a company tries to intercept a competitor's communications. Others are broad data-harvesting attacks, searching a sea of data for interesting information: a list of rich widows, AZT users, or people who view a particular Web page.

Electronic vandalism is an increasingly serious problem. Computer vandals have already graffitied the CIA's Web page, mail-bombed Internet providers, and canceled thousands of newsgroup messages. And of course, vandals and thieves routinely break into networked computer systems. When security safeguards aren't adequate, trespassers run little risk of getting caught.

Attackers don't follow rules; they cheat. They can attack a system using techniques the designers never thought of. Art thieves have burgled homes by cutting through the walls with a chain saw. Home security systems, no matter how expensive and sophisticated, won't stand a chance against this attack. Computer thieves come through the walls too. They steal technical data, bribe insiders, modify software, and collude. They take advantage of technologies newer than the system, and even invent new mathematics to attack the system with.

The odds favor the attacker. Bad guys have more to gain by examining a system than good guys. Defenders have to protect against every possible vulnerability, but an attacker only has to find one security flaw to compromise the whole system.

WHAT CRYPTOGRAPHY CAN AND CAN'T DO

No one can guarantee 100% security. But we can work toward 100% risk acceptance. Fraud exists in current commerce systems: cash can be counterfeited, checks altered, credit card numbers stolen. Yet these systems are still successful because the benefits and conveniences outweigh the losses. Privacy systems — wall safes, door locks, curtains — are not perfect, but they're often good enough. A good cryptographic system strikes a balance between what is possible and what is acceptable.

Strong cryptography can withstand targeted attacks up to a point — the point at which it becomes easier to get the information some other way. A computer encryption program, no matter how good, will not prevent an attacker from going through someone's garbage. But it can prevent data-harvesting attacks absolutely; no attacker can go through enough trash to find every AZT user in the country. And it can protect communications against non-invasive attacks: it's one thing to tap a phone line from the safety of the telephone central office, but quite another to break into someone's house to install a bug.

The good news about cryptography is that we already have the algorithms and protocols we need to secure our systems. The bad news is that that was the easy part; implementing the protocols successfully requires considerable expertise. The areas of security that interact with people — key management, human/computer interface security, access control — often defy analysis. And the disciplines of public-key infrastructure, software security, computer security, network security, and tamper-resistant hardware design are very poorly understood. Companies often get the easy part wrong, and implement insecure algorithms and protocols.
But even so, practical cryptography is rarely broken through the mathematics; other parts of systems are much easier to break. The best protocol ever invented can fall to an easy attack if no one pays attention to the more complex and subtle implementation issues. Netscape's security fell to a bug in the random-number generator. Flaws can be anywhere: the threat model, the system design, the software or hardware implementation, the system management. Security is a chain, and a single weak link can break the entire system. Fatal bugs may be far removed from the security portion of the software; a design decision that has nothing to do with security can nonetheless create a security flaw.

Once you find a security flaw, you can fix it. But finding the flaws in a product can be incredibly difficult. Security is different from any other design requirement, because functionality does not equal quality. If a word processor prints successfully, you know that the print function works. Security is different; just because a safe recognizes the correct combination does not mean that its contents are secure from a safecracker. No amount of general beta testing will reveal a security flaw, and there's no test possible that can prove the absence of flaws.

THREAT MODELS

A good design starts with a threat model: what the system is designed to protect, from whom, and for how long. The threat model must take the entire system into account — not just the data to be protected, but the people who will use the system and how they will use it. What motivates the attackers? Must attacks be prevented, or can they just be detected? If the worst happens and one of the fundamental security assumptions of a system is broken, what kind of disaster recovery is possible? The answers to these questions can't be standardized; they're different for every system. Too often, designers don't take the time to build accurate threat models or analyze the real risks.

Threat models allow both product designers and consumers to determine what security measures they need. Does it make sense to encrypt your hard drive if you don't put your files in a safe? How can someone inside the company defraud the commerce system? How much would it cost to defeat the tamper-resistance on the smart card? You can't design a secure system unless you understand what it has to be secure against.

SYSTEM DESIGN

Design work is the mainstay of the science of cryptography, and it is very specialized. Cryptography blends several areas of mathematics: number theory, complexity theory, information theory, probability theory, abstract algebra, and formal analysis, among others. Few can do the science properly, and a little knowledge is a dangerous thing: inexperienced cryptographers almost always design flawed systems. Good cryptographers know that nothing substitutes for extensive peer review and years of analysis. Quality systems use published and well-understood algorithms and protocols; using unpublished or unproven elements in a design is risky at best.

Cryptographic system design is also an art. A designer must strike a balance between security and accessibility, anonymity and accountability, privacy and availability. Science alone cannot prove security; only experience, and the intuition born of experience, can help the cryptographer design secure systems and find flaws in existing designs.

IMPLEMENTATION

There is an enormous difference between a mathematical algorithm and its concrete implementation in hardware or software.
Cryptographic system designs are fragile. Just because a protocol is logically secure doesn't mean it will stay secure when a designer starts defining message structures and passing bits around. Close isn't close enough; these systems must be implemented exactly, perfectly, or they will fail. A poorly designed user interface can make a hard-drive encryption program completely insecure. A false reliance on tamper-resistant hardware can render an electronic commerce system all but useless. Since these mistakes aren't apparent in testing, they end up in finished products.

Many flaws in implementation cannot be studied in the scientific literature because they are not technically interesting. That's why they crop up in product after product. Under pressure from budgets and deadlines, implementers use bad random-number generators, don't check properly for error conditions, and leave secret information in swap files. The only way to learn how to prevent these flaws is to make and break systems, again and again.
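As one concrete illustration of the first of those flaws, a minimal hypothetical sketch (Python; not any vendor's actual code): a key drawn from a clock-seeded generator can be reproduced by anyone who can guess the time of the connection, no matter how strong the cipher that uses it.

    # Hypothetical sketch of the weak-seeding flaw; not any product's real code.
    import random
    import secrets
    import time

    def weak_session_key():
        # Seeding with the wall clock: an attacker who can guess the
        # connection time to within a few seconds can regenerate the key.
        rng = random.Random(int(time.time()))
        return rng.getrandbits(128)

    def better_session_key():
        # A cryptographically secure source leaves no seed to guess.
        return secrets.randbits(128)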
CRYPTOGRAPHY FOR PEOPLE

In the end, many security systems are broken by the people who use them. Most fraud against commerce systems is perpetrated by insiders. Honest users cause problems because they usually don't care about security. They want simplicity, convenience, and compatibility with existing (insecure) systems. They choose bad passwords, write them down, give friends and relatives their private keys, leave computers logged in, and so on. It's hard to sell door locks to people who don't want to be bothered with keys. A well-designed system must take people into account.

Often the hardest part of cryptography is getting people to use it. It's hard to convince consumers that their financial privacy is important when they are willing to leave a detailed purchase record in exchange for one thousandth of a free trip to Hawaii. It's hard to build a system that provides strong authentication on top of systems that can be penetrated by knowing someone's mother's maiden name. Security is routinely bypassed by store clerks, senior executives, and anyone else who just needs to get the job done. Only when cryptography is designed with careful consideration of users' needs, and then smoothly integrated, can it protect their systems, resources, and data.

THE STATE OF SECURITY

Right now, users have no good way of comparing secure systems. Computer magazines compare security products by listing their features, not by evaluating their security. Marketing literature makes claims that are just not true; a competing product that is more secure and more expensive will only fare worse in the market. People rely on the government to look out for their safety and security in areas where they lack the knowledge to make evaluations — food packaging, aviation, medicine. But for cryptography, the U.S. government is doing just the opposite.

When an airplane crashes, there are inquiries, analyses, and reports. Information is widely disseminated, and everyone learns from the failure. You can read a complete record of airline accidents from the beginning of commercial aviation. When a bank's electronic commerce system is breached and defrauded, it's usually covered up. If it does make the newspapers, details are omitted. No one analyzes the attack; no one learns from the mistake. The bank tries to patch things in secret, hoping that the public won't lose confidence in a system that deserves no confidence. In the long run, secrecy paves the way for more serious breaches.

Laws are no substitute for engineering. The U.S. cellular phone industry has lobbied for protective laws, instead of spending the money to fix what should have been designed correctly the first time. It's no longer good enough to install security patches in response to attacks. Computer systems move too quickly; a security flaw can be described on the Internet and exploited by thousands. Today's systems must anticipate future attacks. Any comprehensive system — whether for authenticated communications, secure data storage, or electronic commerce — is likely to remain in use for five years or more. It must be able to withstand the future: smarter attackers, more computational power, and greater incentives to subvert a widespread system. There won't be time to upgrade it in the field.

History has taught us: never underestimate the amount of money, time, and effort someone will expend to thwart a security system. It's always better to assume the worst. Assume your adversaries are better than they are. Assume science and technology will soon be able to do things they cannot yet. Give yourself a margin for error. Give yourself more security than you need today. When the unexpected happens, you'll be glad you did.

Bruce Schneier, firstname.lastname@example.org, http://www.counterpane.com
In *EE Product News* (a free advertising journal for the electronics industry), October 1996, p. 13, the following was highlighted as a "Semiconductor of the Month" (I have emphasized certain portions with *...*):

> XL103 CryptChip is claimed as the industry's first real-time encryption/
> decryption chip. *It protects data streams in applications as diverse as
> the Internet, modems, cellular telephones, pagers, and TV set-top decoders*.
> The easy-to-use IC requires no external components and *protects data
> without the burden of learning cryptography*. Users also need not write
> complicated and difficult-to-maintain software.
> The chip's algorithm is a protected, hard-wired circuit that's *more secure
> than software because it can't be read or copied, preventing reverse
> engineering*.
> The chip encrypts (or decrypts) data at a rate of 6.5 bits/ms. *With the
> ability to hold eight different 64-bit keys in internal EEPROM, the device
> can handle data from eight different secure systems. In lots of 1000,
> pricing is $0.94 each* in an 8-pin plastic DIP and $0.97 in an SOIC.

How many RISKS are there in this single advertising blurb?

1. Assuming that one encryption method is suitable for any data type.
2. Designing the part to be used by an engineer who has no understanding of how (or whether) it works.
3. Assuming that a hardware device is inherently more secure than a software algorithm.
4. Offering the potential attacker the ability to use 8 keys at a time for generating ciphertext, and selling him the part for a buck... (See also the arithmetic check below.)

--Gene S. Berkowitz
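One more number in the blurb invites a sanity check: taken at face value, the quoted rate of 6.5 bits/ms is only about 6.5 kbit/s, well below even the modem speeds of the day that the chip is supposed to protect. The arithmetic, using the blurb's own figure:

    # Sanity check on the blurb's own throughput figure.
    bits_per_ms = 6.5
    bits_per_s  = bits_per_ms * 1000   # 6500.0 bits/s, i.e., 6.5 kbit/s
    print(bits_per_s)
    # For comparison: a 14.4-kbit/s modem moves data more than twice as fast.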