The RISKS Digest
Volume 12 Issue 46

Thursday, 10th October 1991

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents

Encryption Exportability (Clark Weissman, from ``Inside Risks'')
Security Criteria, Evaluation and the International Environment (Steve Lipner)
Info on RISKS (comp.risks)

Encryption Exportability, by Clark Weissman

"Peter G. Neumann" <neumann@csl.sri.com>
Thu, 10 Oct 91 08:01:02 PDT
``Inside Risks'', Comm. ACM, vol 34, no 10, October 1991, page 162

               A NATIONAL DEBATE ON ENCRYPTION EXPORTABILITY
                 Clark Weissman (clark@NISD.CAM.UNISYS.COM)

Traditionally, cryptography has been an exclusively military technology
controlled by the National Security Agency (NSA).  Therefore, the
U.S. International Traffic in Arms Regulations (ITAR) require licenses for all
export of modern cryptographic methods.  Licenses for some methods, such as
the Data Encryption Standard (DES), are easily obtained for export to
Coordinating Committee for Multilateral Export Controls (COCOM) countries, but
not to Soviet bloc countries or most third-world nations.

The recent National Research Council (NRC) report, ``Computers at Risk'' [1],
describes advances in computer-security uses of cryptography beyond the
traditional COMSEC (communications security) application of secure text
encoding.  These new uses permit business to be conducted over the network:
identification and authentication ``signatures,'' permission credentials,
transaction registration and notarizing by third parties, unforgeable
integrity checksums, ``indelible'' date/time stamps, non-repudiation of
message receipt, and electronic money.  They make cryptography a ``dual use''
technology for both civilian and military users.  Encryption is used in the
civil sector for international banking, electronic information exchange,
electronic mail, machine safety, and internetwork commerce.  It is the
separation of secrecy encryption from authentication encryption that underlies
the dual-use argument; this separation was made explicit in public key
cryptosystems.
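
To make the separation concrete, here is a minimal sketch in modern Python
(using the third-party "cryptography" package; the message, key names, and
key sizes are illustrative assumptions, not anything from the column).  The
same public-key mathematics yields an authentication service (signing) and a
distinct secrecy service (encryption), and the two can use separate keys:

    # Illustrative sketch: public-key cryptography separates
    # authentication (signatures) from secrecy (encryption).
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    message = b"electronic funds transfer order"

    # One key pair for authentication, a separate one for secrecy.
    sign_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    enc_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Authentication: the private key signs; anyone holding the public
    # key can verify (verify() raises InvalidSignature on failure).
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = sign_key.sign(message, pss, hashes.SHA256())
    sign_key.public_key().verify(signature, message, pss, hashes.SHA256())

    # Secrecy: the public key encrypts; only the private key decrypts.
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    ciphertext = enc_key.public_key().encrypt(message, oaep)
    assert enc_key.decrypt(ciphertext, oaep) == message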

Industry needs cryptography for vitality and growth, and that need is
international in scope: common encryption algorithms, encryption applications,
and key management and distribution methods must work irrespective of national
boundaries.  However, most governments have policies that restrict public
access to encryption services in telecommunications.  U.S. export controls
constrain domestic and international growth of encryption services.  Our
international trading partners have less severe export restrictions.

The U.S. finds itself in a dilemma: prohibit cryptographic applications and
harm our economic growth and competitiveness in the expanding world internet
products and services industries, or permit cryptographic export and
potentially weaken military security by providing new encryption capabilities
to our adversaries.  In both cases U.S. national security is at issue, as
noted in two other NRC studies [2,3].  These reports define the agenda and
technical foundations for a major encryption policy debate, while there is
still time to influence the marketplace.  At worst, we risk diminution of the
U.S. role in the advancing world market for telecommunications; at best, we
risk the lost opportunity to lead the international democratic societies in
establishing standard, high-quality, private telecommunication services
worldwide.

The debate is international in applicability.  However, U.S. policy on
encryption appears the most severe, so I urge that a U.S. national debate
begin the dialogue, starting with some questions.  Do we gain more by
strengthening our commercial competitiveness and products, upon which the
military is increasingly dependent, than we lose by permitting international
commonality in cryptographic services, which may weaken military capabilities?
Who should debate: Congress, DoD, NSA, NIST, the National Security Council,
public and private agencies, industry?  Can national-security issues be given
a fair hearing if the technical and political facts are classified?  Will
public confidence be raised or weakened by such debate?  The proposed Senate
Bill S.266 would have required that U.S. cryptographic equipment include
government ``trapdoors,'' a provision that lessened public confidence.
Earlier fears of weakness in DES have diminished because continuing study and
dialogue suggest that DES is free of trapdoors [4].  One vendor's practical
solution for a product export license used strong encryption for
authentication and weak encryption for secrecy.  Is this an acceptable
compromise, a way out of the dual-use dilemma?
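
To make that vendor compromise concrete, here is a deliberately simplified
sketch of my own, not the vendor's actual design: authentication would use a
full-strength signature exactly as in the previous sketch, while secrecy is
limited to a 40-bit key, the kind of export-grade ceiling of the era.  A toy
repeating-key XOR stands in for a real cipher; the short key length, not the
cipher, is the point:

    # Illustrative only: "strong authentication, weak secrecy."
    import os

    message = b"ship 400 units to Rotterdam"

    weak_key = os.urandom(5)             # 40 bits of secrecy, no more
    stream = (weak_key * (len(message) // 5 + 1))[:len(message)]
    ciphertext = bytes(m ^ k for m, k in zip(message, stream))
    # A 40-bit key space (2**40 keys) is small enough to search by
    # brute force, so secrecy stays weak while the signature stays strong.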

Western democracies have been strengthened by debate on significant issues
of public policy.  Encryption policy should likewise be debated in the era of
a new world order.

REFERENCES:

1. National Research Council (NRC), 1991.  Computers at Risk: Safe Computing In
the Information Age.  Computer Science and Telecommunications Board (CSTB),
National Academy Press, Washington, D.C.

2. NRC, 1991.  Finding Common Ground: U.S. Export Controls in a Changed Global
Environment.  CSTB...

3. NRC, 1988. Global Trends in Computer Technology and Their Impact on Export
Control.  CSTB...

4. Denning, D.E. The Data Encryption Standard: Fifteen Years of Public Scrutiny.
Dist. Lecture, 6th Ann. Comp. Security Appl. Conf., IEEE Comp. Soc. Press 1990,
pp. x-xv.

Clark Weissman is Director of Secure Networks for Unisys Defense Systems, Inc.
His career has included advances in security penetration analysis, virtual
machine operating systems, DBMSs, and networks, on such projects as KVM,
BLACKER, and DNSIX LANs.  He has served on many industry, government, and
professional security panels.


Security Criteria, Evaluation and the International Environment

Steven B. Lipner <lipner@ultra.enet.dec.com>
Wed, 9 Oct 91 06:23:57 PDT
The following message contains the text of a paper that I gave as the keynote
address at IFIP-SEC '91, the annual conference of IFIP TC11 (Security and
Privacy), held in Brighton, England, in May 1991.

The paper addresses the status and prospects of the "trusted systems"
evaluation process in the US, and its relationship to evaluation process
developments in Europe and elsewhere.  Briefly, I conclude that the current
process in the US is not really serving the needs of vendors, users, or
security authorities, and that the European ITSEC is not much of an
improvement.  I also give some suggestions for an improved process, probably
not as clearly articulated as if I were rewriting the paper today.

At last week's National Computer Security Conference, I ran into quite a few
people who seemed interested in the paper but who hadn't seen it.  The
conference proceedings are available, but perhaps a little obscure.
I think that it would be of interest to the RISKS audience.

       = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

             Criteria, Evaluation and the International Environment:
                     where have we been, where are we going?

                                 Steven B. Lipner
                          Digital Equipment Corporation
                          Littleton, Massachusetts, USA

INTRODUCTION

     This paper presents a few observations on trusted system evaluation
criteria.  It begins with a summary of the history of the U.S. criteria and the
current state of the European criteria.  It then discusses the impact of
criteria on the vendors and users of commercial computer systems.  After
suggesting some guidelines for the developers of new criteria, it goes on to
suggest a new direction that may better serve the purposes of vendor and user
communities, though at the price of abandoning some long-held beliefs.

     I should stress at the outset that this paper deals only with commercial
computer systems and commercial applications (and with civil government
computer applications that are indistinguishable from commercial ones).
Defense and national security applications have their own unique attributes --
particularly the need to deal with labelled or classified information.  After
more than ten years of looking, I am convinced that the unique attributes
having to do with labelled information are not required in the commercial and
civil sector.

     I would also offer the caveat that this paper reflects about twenty years
of experience in computer security, the last ten gained while working in the
employ of a commercial computer manufacturer.  While it reflects early
experience as a defense security researcher and many discussions with security
evaluators, researchers, criteria developers, and most of all, would-be users
of secure systems, it is clearly written from a vendor perspective and should
be read in that light.


THE EVOLUTION OF EVALUATION

     About ten years ago, the United States Department of Defense issued the
directive establishing the Department of Defense Computer Security Center.  The
primary role of the Center was to establish and operate a program that would
evaluate the security properties of commercial computer vendors' products.  The
theory underlying the Center was that commercial and defense users of computer
security products had common needs, and that by evaluating commercial products,
the Center would improve the security available both to the defense and
commercial users.

     The Center began its task by drafting, coordinating, and publishing the
Trusted Computer System Evaluation Criteria (TCSEC, or Orange Book).  The TCSEC
was published in 1983, and specifies a range of security evaluation classes
that apply to operating systems.  By 1986 the Center — now renamed the
National Computer Security Center (NCSC) and with scope encompassing the entire
U.S. Federal Government — had evaluated a handful of commercial operating
systems.  As we meet, the NCSC's charter has been reduced to the needs of U.S.
government agencies that process defense classified information, and a few
dozen operating systems have been evaluated.

     While the NCSC has undergone its decade-long evolution, the rest of the
world has not stood still.  U.S. legislation has given the U.S. National
Institute of Standards and Technology (NIST) the dominant role in setting
security standards for civil government and, by implication, the private
sector.  NIST has not been granted resources consistent with this
responsibility.  Recently NIST and NCSC have stated that they will work
together "to create a new federal computer security criteria document" that can
be applied across the entire U.S. government, including civil and defense
sectors.

     A potentially more significant development is that the governments of the
U.K., Germany, France and the Netherlands have begun joint development of the
Information Technology Security Evaluation Criteria (ITSEC).  The European
developers of the ITSEC have begun trial use of the draft, and the European
Community has begun the process of establishing EC-wide security criteria based
on the ITSEC.  The governments of Canada, Australia, and Sweden have also
expressed, to varying levels, the intent to develop their own information
security criteria.  NIST is representing the U.S. government in discussions
with the ITSEC developers and the EC on common evaluation criteria and
processes.


LIVING WITH THE TCSEC — THE VENDOR'S VIEW

     Almost every U.S. manufacturer of computer systems has completed at least
one evaluation of a Class C2 operating system.  C2 systems are the workhorses
of commercial computing.  They incorporate user identification and
authentication mechanisms, auditing, discretionary access controls, and
controls over storage residues.  They are tested to ensure that the controls
work "as advertised".  They are thus well-suited to the vast majority of
commercial multiuser applications.  While the TCSEC is widely condemned as
applicable only to protection of data from unauthorized disclosure, a C2
operating system provides basic mechanisms that can be used to enforce a level
of data integrity as well.  Some users require features beyond those required
by Class C2 (such as more controls over the passwords used for authentication)
and these are frequently added by vendors.  Other users DO NOT require some of
the features mandated by Class C2, but those unnecessary features can be
"turned off" by system administrators.  On the whole, C2 is a good common
demoninator.
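
     A rough sketch, in modern Python and with hypothetical paths and policy
rather than any vendor's actual code, of two of those mechanisms: the
discretionary permission bits are consulted before access is granted, and
every decision is written to an audit trail.

    # Illustrative sketch of two C2 mechanisms: discretionary access
    # control and auditing.  Paths and policy are hypothetical.
    import os
    import stat
    import time

    AUDIT_LOG = "/tmp/audit.log"    # stand-in for a protected audit trail

    def dac_allows_read(path, uid, gids):
        """Consult owner/group/other permission bits, the discretionary
        access control that a C2 evaluation tests."""
        st = os.stat(path)
        if st.st_uid == uid:
            return bool(st.st_mode & stat.S_IRUSR)
        if st.st_gid in gids:
            return bool(st.st_mode & stat.S_IRGRP)
        return bool(st.st_mode & stat.S_IROTH)

    def audited_read(path, uid, gids):
        allowed = dac_allows_read(path, uid, gids)
        with open(AUDIT_LOG, "a") as log:    # C2 also requires auditing
            log.write("%f uid=%d read %s %s\n" % (
                time.time(), uid, path, "granted" if allowed else "DENIED"))
        if not allowed:
            raise PermissionError(path)
        with open(path, "rb") as f:
            return f.read()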

     The "goodness" of C2 systems, however is marred by two deficiencies that
are especially visible to the vendor: First, no one can tell what a C2 system
is; and second, by the time a C2 system is developed and evaluated, it is
obsolete.  I will spend the next few paragraphs amplifying on these
deficiencies.

     At first blush, it appears easy to tell what is in a C2 system — I
summarized such a system two paragraphs above.  An early senior manager of the
NCSC once observed to me that developers of systems up through class B1 should
almost be able to self-evaluate — to tell whether they had met the
requirements by comparing their system to the criteria.  Unfortunately, any
development manager who has taken a system through the NCSC process knows
better.  Developers and "evaluators" go through seemingly endless disputes:

 o  Is interprocess communication an "object" requiring access
    controls and auditing?

 o  How is it that self/group/public access controls for one
    system meet the C2 criteria while those for another do not?

 o  Is the system's internal design documentation (not
    required at all by the Orange Book) adequate so that the
    "evaluators" can tell that all of the relevant tests have
    been developed and objects controlled?

The answers to such questions are not found in the TCSEC.  They are defined in
a set of "interpretations" of the TCSEC maintained by the "evaluators".
Because the interpretation process is often triggered by proprietary aspects of
vendors' products, the total set of "interpretations" is not visible outside of
the NCSC.  As new vendor issues arise, however, new "interpretations" are added
and cumulatively imposed on future evaluations of all vendors' products.  Since
every system is different, there are plentiful issues that require new
"interpretations".  U.S. vendors refer to this phenomenon as "criteria creep".
It has the effect that a 1986 C2 system is NOT the same as a 1990 C2 system,
and that no one can tell what a 1992 C2 system will be.

     The problem of obsolescence results from the evaluation process.  The
vendor negotiates with the NCSC what C2 means for his system, and eventually
gets all of the required features, security tests, and design documentation
incorporated in a version.  When the version goes to customer field test, the
NCSC begins its formal evaluation process — a final review of the
documentation, running independent tests and so forth.  The NCSC process takes
time, and it is likely that by the conclusion of that process, the vendor is
shipping the version that succeeds the one under evaluation.  Pressures on
vendors are to get releases out more often, while the pressures on the NCSC are
to do a more thorough job of evaluating.  Hence, obsolete evaluated versions.

     The NCSC has developed a "Rating Maintenance Program" or RAMP that is
intended to allow vendors to self-evaluate future versions subject to NCSC
review and audit once one version has been evaluated by the NCSC.  This process
requires the vendor to apply the Class B2 requirements for configuration
management (as "interpreted") to systems in classes C2 and B1.  These
requirements impose on the vendor's development process an overlay of
paperwork, checking, bureaucracy and mistrust.  For those of you familiar with
the U.S. Defense "procurement scandals", it is the paperwork, checking,
bureaucracy and mistrust associated with configuration management over all
aspects of a development process that makes for $800 hammers and the like.
There is considerable resistance to RAMP as specified in the U.S. vendor
community.


LIVING WITH THE TCSEC — THE USER'S VIEW

REAL ENVIRONMENTS

     The TCSEC states security requirements for multi-user computer operating
systems — in effect, for monolithic time-sharing systems.  My employer is
putting substantial resources into the development of an updated version of its
proprietary operating system that will meet the Orange Book criteria for Class
C2 and B1.  I can, however, state with high confidence that no user will
operate that system in its evaluated configuration.  The reason for my
confidence is that TCSEC evaluations exclude general networking facilities,
while essentially all "real" computers are installed in networks.  Users will
buy the system, install it, and set the controls as best they can.  We will
document our judgment on the security of systems in networks, in the part of
the system security manual that is "outside" the C2 evaluation.

     The NCSC has issued a "trusted network" interpretation (TNI) of the Orange
Book.  It sets forth criteria for networks of systems designed, built,
installed and managed as though they were a single time-sharing system.
However, real users proceed by buying a computer, hooking it up, using it, then
adding a second computer, wiring it up to the first, adding a third and so on.
At some point, they connect to the Internet, to a world-wide public network or
both.  The TNI is not comprehensible, much less helpful, to such users.

REAL APPLICATIONS

     A second concern with the TCSEC deals with applications.  When I log into
the NCSC's system (through a network!) I use it in a "B2" way.  I have an
account, I send and receive mail, I access files in various sensitivity
classes.  However, many computer systems are dominated by large data bases and
large applications that work "across" users.  The users may never see the files
and application programs at the base of the Orange Book paradigm.  Furthermore,
the sharing of information may be controlled by data base systems or
applications — not just the operating system.  In such a configuration —
common in the commercial world — one installs the application with "privilege"
to override the operating system controls, and the evaluated product becomes
irrelevant.  The hard-earned NCSC evaluation is invalidated by the addition of
privileged ("trusted") software that was not part of the configuration
evaluated by the NCSC.
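
     The mechanism is easy to sketch.  In the hypothetical scenario below
(mine, not the paper's; the file name and the policy are invented), the
privileged application opens the database file no matter what the protection
bits say, and then enforces its own access rules, rules the evaluators never
examined:

    # Illustrative sketch: a privileged application supplants the
    # evaluated OS controls.  File name and policy are hypothetical.
    DB_FILE = "/var/db/payroll.dat"     # owned by the DBMS, mode 0600

    def app_mediated_read(requesting_user):
        """Running with privilege, the DBMS can read the file regardless
        of its mode bits; access is then decided by the application's
        OWN rules, which were never part of the NCSC evaluation."""
        with open(DB_FILE, "rb") as f:  # succeeds because of privilege
            data = f.read()
        if requesting_user not in ("payroll_clerk", "auditor"):
            raise PermissionError("denied by the application, not the OS")
        return data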

     The NCSC is developing a "data base interpretation" of the TCSEC.  Time
will tell whether it meets the needs of real secure data base systems.  It is
clear in any case that end users and software houses need more guidance on the
development of secure applications than criteria have yet provided.

REAL SYSTEM MANAGEMENT

     In my work, I have occasionally encountered a user who has experienced a
security penetration despite using an evaluated system.  The common denominator
among such incidents is system configuration and management: the user attempts
to install the secure system in his environment, and "gets something wrong"
that allows a hostile party to go where he shouldn't be.  The trusted system
evaluation is irrelevant because the user is doing something he should not —
operating in a network or running an application — to get the job done.

     Vendors try to document real-world ways of using systems securely.  They
offer tools to help users with this task.  When an unexpected problem arises,
the vendors learn from it and update their tools and documentation.  Real
systems used in real applications are complex, and evaluations are not a
substitute for experience.

     The U.S. evaluation process (the only one where we have a rich experience
over time) may actually have evolved into an obstacle to security in the area
of system management.  As the U.S. evaluators insist that more of the objects
(containers that hold information) in a computer system be subject to more
controls, the security management documents get thicker and thicker.
Developers, writers, and system managers are faced with the challenge of
designing, documenting, and finding secure ways of using the systems in the
face of a forest of controls and auditing options of interest only to the
evaluators.


ARE THERE ANY BENEFITS?

     The paragraphs above paint a fairly bleak picture of the NCSC and Orange
Book.  The question naturally arises: "why bother?"  Why do vendors continue to
have systems evaluated, and what is the benefit to users?

     The first answer to the question "why bother" is that even in the world of
distributed networks and real applications, operating system security is often
fundamental to system security.  A C2 operating system does have a cohesive set
of controls on which a user can build.  For a few applications, one installs
the system, configures the controls, and goes.  More often one must do design,
integration, and adaptation.  Regardless, the C2 controls are a useful
foundation.

     The second answer is that many public procurements have mandated evaluated
systems, and more are likely to do so in the future.  Thus vendors have little
choice but to develop (at least C2 and B1) evaluated systems.  The fact that
the NCSC has thus dominated the domain of discourse about secure systems
bespeaks a significant accomplishment, though potentially at the price of
foreclosing other options.  For some organizations, if the answer can't be
expressed as an evaluated system, it can't be expressed — although the real
world is usually much more complex than the domain of the TCSEC.

     The third answer is that the NCSC process has been fair in a competitive
sense.  If the criteria change over time, they are applied fairly at any point
in time.  The vendors know that they will get an unbiased (though painful)
evaluation, while users know that they can use the evaluation class as part of
a fair competitive procurement.

     These three attributes — basic security, a wide base of application, and
a fair process — bespeak significant accomplishments.  Future criteria writers
and evaluators should strive to do as well.


EVOLVING EVALUATION

     When I began to draft this paper, I thought briefly of including some
high-level observations on the ITSEC and Canadian CTCPEC.  I decided not to do
so, because I felt I would be shooting at a "moving target" — by the time I
delivered the paper, the criteria in question would likely have been revised
and my comments would have become moot.  I will try instead to offer a few
"timeless" guidelines based largely on real-world experience with the ITSEC and
NCSC.


THE WORLD MARKET

     My first comment, then, is that the computer industry is global and
evaluations should likewise be global.  The NCSC probably contributed in some
measure to the development of the ITSEC by excluding non-US vendors from TCSEC
evaluations.  Happily, the European evaluators have not chosen to reciprocate
by excluding U.S. vendors.  However, the development of a set of criteria
different from the TCSEC would appear to impose a sufficient obstacle —
especially if ITSEC evaluations are as costly and painful for the vendor as
those under the TCSEC.

     Some have proposed a "feature mapping" scheme that would compare criteria
by breaking them down into their finest elements.  This scheme is likely to be
time-consuming and ineffective, if it is feasible at all.  A more sensible
approach is for criteria developers to agree — if not on criteria, at least on
those classes that are comparable.  It should not be necessary for a product to
undergo more than one evaluation worldwide — at least at classes up through B1
that are not used to protect the most sensitive defense information and that
are most interesting commercially.


AMBIGUITY

     The discussion above of the TCSEC and NCSC made it clear that the
descriptions in the TCSEC are not sufficiently explicit.  I acknowledge to my
chagrin that I was a reviewer of the TCSEC drafts and was as stunned as anyone
when "interpretations" started to roll in.  My surprise was all the greater
since the TCSEC was subject to extensive public review and comment during
development, while the "interpretation" process is almost completely conducted
behind closed doors.

     If criteria or standards are intended as mandatory guidance for
procurements, they should be very explicit about what features are required,
where they must be applied, and what assurances must be provided.  The first
draft ITSEC was relatively precise about assurance of correctness (though see
the next section), but the effectiveness and functionality criteria in the
ITSEC, and the functionality criteria in the second draft CTCPEC, share the
TCSEC's significant ambiguity.  Experience teaches us that we must do better.

     One might ask, "why set feature requirements at all?"  The ITSEC answered
this question by allowing the sponsor to specify arbitrary security features.
The answer to this question goes back to one of the benefits of the TCSEC — a
fair and competitive process.  If vendors may or must go off completely on
their own in selecting security feature sets, competitive procurement will
likely suffer as procuring organizations find themselves unable to find any set
of security features common to two or more competing vendors.  Instead,
criteria should be very explicit about the core set of security features
required, and allow vendors to "add value" by offering additional security
features and functions.


ADAPTABLE PROCESS

     In today's computer industry, there is immense pressure to deliver
products faster, with more features and better performance.  This pressure is
at odds with the sorts of rigid development processes proposed in the first
draft ITSEC and the NCSC's RAMP.

     Development processes for secure commercial products should be consistent
with the real commercial development environment.  They should not attempt to
make computer systems into $800 hammers, nor should they impose an atmosphere
of mistrust on the development process.

     This is not to say that vendors should be allowed to "get away with
anything".  They should not.  But evaluation processes should take into account
differences among vendors, the need to repair flaws, and the likely
impossibility of preventing them totally.  They should also allow for process
improvement — a key ingredient in the quest for improved product quality that
will yield better security.


STABILITY

     Many of the difficulties with the TCSEC result from the fact that it was
not tested until after it had been promulgated.  Future criteria should be used
on real (not toy) systems in substantially final form before they are made
authoritative.


NEW DIRECTIONS

     The suggestions above can guide the development of more useful criteria
for the evaluation of secure operating systems.  Diligently applied, they might
reduce the cost and increase the timeliness of developing secure operating
system products.  They do not, however, "solve the problem" of computer
security.

     Documents such as criteria or standards that are to meet the needs of
users and of the custodians of data that require protection must support the
development and installation of real systems.  This is a daunting challenge.
What can we say about heterogeneous networks?  about data bases?  about
real-time systems and commercial applications?  Stories abound in the United
States of officials from the NCSC visiting banks, offering them copies of the
TCSEC and saying "this is the answer to your security problems".  Needless to
say, no banker believed that assertion once he had examined the TCSEC, though
most banks DO use systems that have been evaluated in Class C2.

     Criteria for secure time-sharing systems will not "make it" in the
nineties, but it is not clear that we know enough to write evaluation criteria
for networks, data bases or applications.  The ITSEC specifies measures for
assurance and allows arbitrary functionality; the total composition of the
secure system can be up to the end user.  However, it seems that few end users
would be rich or sophisticated enough to apply the costly ITSEC assurance
measures to a unique application system.

     What then to do?  I suggest that we should stabilize criteria as a way of
evaluating operating system security, and concentrate on removing the blatant
silliness and unpredictability that have crept into the NCSC process.  We do
not know enough to have criteria for everything, and we shouldn't try.

     Instead, write guidelines for products and practices "outside" the
operating system that embody what we do know and think we know.  Offer those
guidelines to users with proper humility, try them out, and revise them often.
Work with users who have the real problem of combining evaluated operating
systems with unevaluated applications, data bases, and networks and see if we
can develop suggested techniques and guidelines to apply as needed.  Identify
useful features and document their attributes in clear language that can be
used for competitive procurements.

     Each user will ultimately select features, products, and custom
development to meet his own needs.  The most that common standards can do is to
identify often-needed sets of products or features and suggest, as application
notes, ways of configuring and applying them in real-world situations.  This
latter sort of guidance will give users help in the all-important area of
configuring and managing the products that do meet evaluation criteria.  If we
listen to real experience, in time the guidelines may improve.  When the rate
of needed revision slows to the point of "stability", we can think about
standards.  It may even be that there will be additional areas of application
for criteria and evaluation, though I for one am not convinced.


IN CLOSING

     There is an old saying from the American West that goes "You can tell the
pioneers; they're the ones with the arrows in their backs".  The NCSC went
first with security evaluation criteria.  They have made mistakes, but they
have also changed the way the world does — and thinks about — computer
security.

     It is up to us all now to recognize that evaluation and criteria, at least
for the moment, are limited to operating system products.  Rather than stretch
the paradigm where it has no business going, we should concentrate on
establishing stable and economical operating system evaluation processes, but
put the major focus of our efforts on more broadly applicable guidelines that
help users develop or select cost-effective security measures.
