CRYPTO-GRAM, March 15, 2000

From: Bruce Schneier <schneier@counterpane.com>
Date: Wed, 15 Mar 2000 15:26:23 -0600




                  CRYPTO-GRAM

                March 15, 2000

               by Bruce Schneier
                Founder and CTO
       Counterpane Internet Security, Inc.
            schneier@counterpane.com
           http://www.counterpane.com


A free monthly newsletter providing summaries, analyses, insights, and 
commentaries on computer security and cryptography.

Back issues are available at http://www.counterpane.com.  To subscribe or 
unsubscribe, see below.


Copyright (c) 2000 by Counterpane Internet Security, Inc.


** *** ***** ******* *********** *************

In this issue:
      Kerberos and Windows 2000
      Counterpane -- Featured Research
      News
      AES News
      Counterpane Internet Security News
      Software as a Burglary Tool
      The Doghouse:  The Virginia Legislature
      Software Complexity and Security
      Comments from Readers


** *** ***** ******* *********** *************

          Kerberos and Windows 2000



Kerberos is a symmetric-key authentication scheme.  It was developed at MIT 
as part of Project Athena in the 1980s -- the protocol was published 
in October 1988 -- and has been implemented on various flavors of 
UNIX.  The current version is Kerberos Version 5, which corrected some 
security vulnerabilities in Version 4.  It's never taken over the 
authentication world, but it is used in many networks.  These days, the 
Internet Engineering Task Force (IETF) controls the specification for Kerberos.

Kerberos is a client-server authentication protocol.  (_Applied 
Cryptography_ goes into the protocol in detail.)  For the purposes of this 
article, remember that there is a secure Kerberos server on a 
network.  Clients log into the Kerberos server and get secure 
"tickets."  The clients can use these tickets to log onto other servers on 
the network: file servers, databases, etc.
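
To make this concrete, here is a minimal sketch of a Kerberos-style ticket 
exchange in Python.  This is my illustration, not MIT's code: the names are 
invented, and Fernet symmetric encryption (from the "cryptography" package) 
stands in for Kerberos's own algorithms.  Real Kerberos layers 
ticket-granting tickets, timestamps, nonces, and replay protection on top 
of this skeleton.

    # Illustrative sketch only; invented names, simplified protocol.
    import json
    from cryptography.fernet import Fernet

    # The Kerberos server shares a long-term secret key with every
    # principal on the network.
    keys = {"alice": Fernet.generate_key(),
            "fileserver": Fernet.generate_key()}

    def kdc_issue_ticket(client, service):
        # The server invents a fresh session key and seals it twice: once
        # for the service (the "ticket"), once for the client (the reply).
        session_key = Fernet.generate_key().decode()
        ticket = Fernet(keys[service]).encrypt(
            json.dumps({"client": client, "key": session_key}).encode())
        reply = Fernet(keys[client]).encrypt(
            json.dumps({"service": service, "key": session_key}).encode())
        return ticket, reply

    def service_accept(service, ticket):
        # The service unseals the ticket with its own long-term key; it
        # never has to contact the Kerberos server itself.
        return json.loads(Fernet(keys[service]).decrypt(ticket))

    ticket, reply = kdc_issue_ticket("alice", "fileserver")
    print(service_accept("fileserver", ticket))  # {'client': 'alice', ...}

The point of the design is that a client shares a long-term secret only 
with the Kerberos server, yet can use the tickets it is issued to 
authenticate to any other server on the network.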

Kerberos is now part of Microsoft Windows 2000, sort of.  The issue is that 
Microsoft has made changes to the protocol that render it noninteroperable 
with the Kerberos standard and with any products that implement Kerberos 
correctly.

Specifically, the incompatibility has to do with something called the 
"authorization data" field in the Kerberos messages.  All major Kerberos 
implementations leave the field blank.  The new Microsoft implementation 
does not; it uses the field to exchange access privileges between the 
Kerberos server and the client.
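
For reference, here is the rough shape of the encrypted part of a Kerberos 
ticket as specified in RFC 1510, written as a schematic with illustrative 
values.  The last field is the one at issue:

    # Schematic of an RFC 1510 encrypted ticket part; values illustrative.
    enc_ticket_part = {
        "flags": [],
        "key": "<fresh session key>",
        "crealm": "EXAMPLE.COM",       # client's realm
        "cname": "alice",              # client's name
        "transited": [],
        "authtime": "20000315152623Z",
        "endtime": "20000316012623Z",
        "authorization-data": [],      # left empty by MIT-style
                                       # implementations; Windows 2000
                                       # fills it with undocumented
                                       # access-privilege data
    }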

There are two ways to look at this:

o   Since the field has no specified use in the protocol (and no one else 
uses it), the fact that Microsoft is using the field is harmless.

o   Because Microsoft is refusing to publish details about its proprietary 
use of the field, they are harming interoperability and 
standardization.  Other Kerberos vendors cannot directly support Windows 
2000 clients.

Even worse, Microsoft bypassed the IETF in this process (there's a 
procedure you're supposed to follow if you want to enhance, deviate from, or 
modify an IETF standard).

On the surface, these are just nasty business practices.  If you're a company 
that has invested in a UNIX-based Kerberos authentication system and you 
want to support Windows 2000 desktops, your only real option is to buy a 
Windows 2000 Kerberos server and pay for the integration.  I'm sure this is 
what Microsoft wants.

My worry is more about the security.  Protocols are very fragile; we've 
learned that time and time again.  You can't just make changes to a 
security protocol and assume the changed protocol will be 
secure.  Microsoft has taken the Kerberos protocol -- a published protocol 
that has gone through over a decade of peer review -- and has made changes 
in it that affect security.  Even worse, they have made those changes in 
secret and have not released the details to the world.

Don't be fooled.  The Kerberos in Windows 2000 is not Kerberos.  It does 
not conform to the Kerberos standard.  It is Kerberos-like, but we don't 
know how secure it is.

Kerberos Web page:
<http://www.isi.edu/gost/gost-group/products/kerberos/>

IETF Specification:
<ftp://ftp.isi.edu/in-notes/rfc1510.txt>
<ftp://athena-dist.mit.edu/pub/kerberos/doc/techplan.txt>

Microsoft Kerberos information:
Windows 2000 Kerberos Authentication white paper --
<http://www.microsoft.com/windows2000/library/howitworks/security/kerberos.asp>
Introduction to Windows 2000 Security Services --
<http://www.microsoft.com/WINDOWS2000/guide/server/features/secintro.asp>
Guide to Kerberos Interoperability --
<http://www.microsoft.com/windows2000/library/planning/security/kerbsteps.asp>
Article by David Chappell about Kerberos and Windows 2000 --
<http://www.microsoft.com/msj/defaulttop.asp?page=/msj/0899/kerberos/kerberostop.htm>


** *** ***** ******* *********** *************

       Counterpane -- Featured Research



"A Performance Comparison of the Five AES Finalists"

B. Schneier and D. Whiting, Third AES Candidate Conference, 2000, to appear.

In 1997, NIST announced a program to develop and choose an Advanced 
Encryption Standard (AES) to replace the aging Data Encryption Standard 
(DES).  NIST chose five finalists in 1999.  We compare the performance of 
the five AES finalists on a variety of common software platforms: current 
32-bit CPUs (both large microprocessors and smaller, smart card and 
embedded microprocessors) and high-end 64-bit CPUs.  Our intent is to show 
roughly how the algorithms' speeds compare across a variety of CPUs.  Then, 
we give the maximum rounds cryptanalyzed for each of the algorithms, and 
re-examine all the performance numbers for these variants. We then compare 
the algorithms again, using the minimal secure variants as a way to more 
fairly align the security of the five algorithms.

<http://www.counterpane.com/aes-comparison.html>


** *** ***** ******* *********** *************

                     News



More commentary on the ethics of publicizing vulnerabilities:
<http://boardwatch.internet.com/mag/99/oct/bwm62.html>
<http://cgi.zdnet.com/slink?22157:8469234>

An opinion on DDoS attacks and the CD Universe fiasco:
<http://www.osopinion.com/Opinions/GaryMurphy/GaryMurphy7.html>

There's a new DSS standard:
Text --
<http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=2000_register&docid=00-3450-filed>
PDF --
<http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=2000_register&docid=00-3450-filed.pdf>

BAIT, DIRT, and other law-enforcement hacker tools.  Some of the PR fluff 
sounds too good to be true.
<http://www.codexdatasystems.com/>

H&R Block insecurity:
<http://news.cnet.com/news/0-1005-200-1550948.html?tag=st.ne.1002.tgif?st.nefd.gif.d>

The worst security product is the one that isn't used.  Here are the 
results of a PGP usability study.  Most people can't figure out how to use 
it.  Some sent e-mail out unencrypted, believing it was secure.
<http://www.wired.com/news/news/business/story/21484.html>
<http://www.cs.cmu.edu/~alma/johnny.pdf>

Novell published a "security flaw in MS Active Directory Services" the day 
before the MS launch of Windows 2000.  Microsoft published a response 
shortly thereafter.  Both documents are full of marketing spin.  Russ 
Cooper has written an objective description of the non-issue:
<http://ntbugtraq.ntadvice.com/NDSvsADS-01.asp>

Good security software:  a command-line tool for statically scanning C and
C++ source code for security vulnerabilities.  It's called ITS4.
<http://www.rstcorp.com/its4/>

Mixter's paper on cracking:
<http://mixter.void.ru/crack.txt>

Excellent essay on the difference between hackers and vandals:
<http://www.villagevoice.com/issues/0007/thieme.shtml>

Commentaries on distributed denial-of-service attacks:
<http://www.pbs.org/cringely/pulpit/pulpit20000217.html>
<http://www.thenation.com/issue/000313/0313klein.shtml>

Usernames and passwords for sale:
<http://www.wired.com/news/politics/0,1283,34515,00.html?tw=wn20000224>

Sony PlayStation 2 is being held up for export (from Japan) due to crypto 
in the system:
<http://www.theregister.co.uk/000302-000026.html>

Navajo code-talking GI Joe doll:
<http://www.gijoe.com/lnavajo_code_talker.html>

More speculation about Echelon:
<http://www.zdnet.com/enterprise/stories/security/news/0,7922,2455560,00.html>
<http://www.wired.com/news/politics/0,1283,34932,00.html>

Interesting use of a honey pot by the San Diego Supercomputer Center (or, 
SDSC Hacks the Hackers):
<http://security.sdsc.edu/incidents/worm.2000.01.18.shtml>


** *** ***** ******* *********** *************

                    AES News



The big AES news is the week of 10-14 April, 2000, in New York.  Monday, 
Tuesday, and Wednesday are the 7th Fast Software Encryption workshop (FSE 
2000).  Thursday and Friday are the 3rd AES Candidate Conference 
(AES3).  Both are in the New York Hilton and Towers.  FSE 2000 will have 
several excellent papers on the AES candidates (new attacks on MARS, RC6, 
Rijndael, and Serpent), and AES3 will have nothing but.  The papers for FSE 
2000 have been selected, and are listed on the Web site.  The papers for 
AES3 have not been announced yet.  (The submission deadline for both 
conferences is long past.)

Come, be a part of cryptography history.  It'll be fun.

FSE 2000:
<http://www.counterpane.com/fse.html>

AES3:
<http://csrc.nist.gov/encryption/aes/round2/conf3/aes3conf.htm>


** *** ***** ******* *********** *************

      Counterpane Internet Security News



Bruce Schneier was interviewed in Business Week:
<http://www.businessweek.com/2000/00_10/b3671089.htm>


** *** ***** ******* *********** *************


          Software as a Burglary Tool



This is a weird one.  Two people in Minneapolis who allegedly stole 
information from their employers were charged with the possession of a 
"burglary tool" -- L0phtcrack, the program that automatically breaks 
Windows passwords.

The ramifications of this are unclear.  There are some burglary tools that 
you can't carry unless you are a licensed professional (certain lockpicking 
tools, for example); just having them is illegal.  But screwdrivers and 
bolt cutters can also be burglary tools if they are used with the intent to 
commit a crime.

What it means to me is that the law is getting serious about this.

<http://www.channel4000.com/news/stories/news-20000217-164727.html?&_ref=1005006010>


** *** ***** ******* *********** *************

    The Doghouse:  The Virginia Legislature



They recently passed the Uniform Computer Information Transactions Act 
(UCITA).  It's deeply disturbing.  It could be subtitled "The Software 
Industry Wish List" for the amount of control (and absence of 
accountability) it gives UNDER LAW to software distributors.

Under the UCITA, Microsoft not only wouldn't have to fix any of the 63,000 
Windows 2000 bugs, it wouldn't even have to tell you that any of them 
existed.  It could also disable the OS of anyone it wants, for essentially 
any reason (e.g., failing to abide by the license terms, which forbid any 
public mention of apparent bugs in the software).

The governor has not signed the bill into law yet, but he is expected to.

<http://www.lawnewsnetwork.com/practice/techlaw/news/A16380-2000Feb16.html>
<http://www4.zdnet.com:80/intweek/stories/news/0,4164,2436874,00.html>
<http://www.computerworld.com/home/print.nsf/CWFlash/000215ECDA>
<http://www.cnn.com/2000/TECH/computing/03/07/ucita.idg/index.html>


** *** ***** ******* *********** *************

        Software Complexity and Security



The future of digital systems is complexity, and complexity is the worst 
enemy of security.

Digital technology has been an unending series of innovations, unintended 
consequences, and surprises, and there's no reason to believe that will 
stop anytime soon.  But there is one thing that has held constant through 
it all, and it's that digital systems have gotten more complicated.

We've seen it over the past several years.  Microprocessors have gotten 
more complex.  Operating systems have gotten more complex.  Computers have 
gotten more complex.  Networks have gotten more complex.  Individual 
networks have combined, further increasing the complexity.  I've said it 
before, but it's worth repeating:  The Internet is probably the most 
complex machine mankind has ever built.  And it's not getting any simpler 
anytime soon.

As a consumer, I think this complexity is great.  There are more choices, 
more options, more things I can do.  As a security professional, I think 
it's terrifying.  Complexity is the worst enemy of security.  This has been 
true since the beginning of computers, and is likely to be true for the 
foreseeable future.  And as cyberspace continues to get more complex, it 
will continue to get less secure.  There are several reasons why this is true.

The first reason is the number of security bugs.  All software contains 
bugs.  And as the complexity of the software goes up, the number of bugs 
goes up.  And a percentage of these bugs will affect security.

The second reason is the modularity of complex systems.  Complex systems 
are necessarily modular; there's no other way to handle the complexity than 
by breaking it up into manageable pieces.  We could never have made the 
Internet as complex and interesting as it is today without modularity.  But 
increased modularity means increased security flaws, because security often 
fails where two modules interact.

We've already seen examples of this as everything becomes 
Internet-aware.  For years we knew that Internet applications like sendmail 
and rlogin had to be secure, but the recent epidemic of macro viruses shows 
that Microsoft Word and Excel need to be secure.  Java applets not only 
need to be secure for the uses they are intended, they also need to be 
secure for any other use an attacker might think of.  Photocopiers, 
maintenance ports on routers, mass storage units: these can all be made 
Internet-aware, with the associated security risks.  Rogue printer drivers 
can compromise Windows NT.  Malicious e-mail attachments can tunnel through 
firewalls.  Convenience features in Microsoft Outlook can compromise security.

The third reason is the increased testing requirements for complex 
systems.  I've talked elsewhere about security and failure testing.  The 
only reasonable way to test the security of a system is to perform security 
evaluations on it.  However, the more complex the system is, the harder a 
security evaluation becomes.  A more complex system will have more 
security-related errors in the specification, design, and 
implementation.  And unfortunately, the number of errors and the difficulty 
of evaluation does not grow in step with the complexity, but in fact grows 
much faster.

For the sake of simplicity, let's assume the system has ten different 
settings, each with two possible choices.  Then there are 45 different 
pairs of choices that could interact in unexpected ways, and 1024 different 
configurations altogether.  Each possible interaction can lead to a 
security weakness, and must be explicitly tested.  Now, assume that the 
system has twenty different settings.  This means 190 different pairs of 
choices, and about a million different configurations.  Thirty different 
settings means 435 different pairs and a billion different 
configurations.  Even slight increases in the complexity of systems mean an 
explosion in the number of different configurations . . . any one of which 
could hide a security weakness.
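
The arithmetic is easy to check mechanically; here is a quick sketch in 
modern Python (n settings with two choices each give n(n-1)/2 pairs and 
2^n configurations):

    # n settings, two choices each: n*(n-1)/2 pairwise interactions,
    # 2**n total configurations.
    from math import comb

    for n in (10, 20, 30):
        print(f"{n} settings: {comb(n, 2)} pairs, {2**n:,} configurations")

    # 10 settings: 45 pairs, 1,024 configurations
    # 20 settings: 190 pairs, 1,048,576 configurations
    # 30 settings: 435 pairs, 1,073,741,824 configurations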

The increased number of possible interactions creates more work during the 
security evaluation.  For a system with a moderate number of options, 
checking all the two-option interactions becomes a huge amount of 
work.  Checking every possible configuration is effectively 
impossible.  Thus the difficulty of performing security evaluations also 
grows very rapidly with increasing complexity.  The combination of 
additional (potential) weaknesses and a more difficult security analysis 
unavoidably results in insecure systems.

The fourth reason is that the more complex a system is, the harder it is to 
understand.  There are all sorts of vulnerability points -- human-computer 
interface, system interactions -- that become much larger when you can't 
keep the entire system in your head.

The fifth (and final) reason is the difficulty of analysis.  The more 
complex a system is, the harder it is to perform a security analysis of 
it.  Everything is more complicated: the specification, the design, 
the implementation, the use.  And as we've seen again and again, everything 
is relevant to security analysis.

A more complex system loses on all fronts.  It contains more weaknesses to 
start with, its modularity exacerbates those weaknesses, it's harder to 
test, it's harder to understand, and it's harder to analyze.

It gets worse:  This increase in the number of security weaknesses 
interacts destructively with the weakest-link property of security: the 
security of the overall system is limited by the security of its weakest 
link.  Any single weakness can destroy the security of the entire system.
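
To put rough numbers on that interaction -- a toy model with assumed 
figures, not data from any real system -- suppose each component 
independently harbors an exploitable flaw with probability 1%.  Since any 
single flaw compromises the whole system, the odds climb quickly with size:

    # Toy model, assumed numbers: each component has an independent 1%
    # chance of an exploitable flaw; any one flaw breaks the system.
    p_flaw = 0.01
    for n in (10, 100, 1000):
        p_compromise = 1 - (1 - p_flaw) ** n
        print(f"{n:4d} components: {p_compromise:.0%} chance of compromise")

    #   10 components: 10% chance of compromise
    #  100 components: 63% chance of compromise
    # 1000 components: 100% chance of compromise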

Real systems show no signs of becoming less complex.  In fact, they are 
becoming more complex faster and faster.  Microsoft Windows is a poster 
child for this trend to complexity.  Windows 3.1, released in 1992, had 3 
million lines of code; Windows 95 has 15 million and Windows 98 has 18 
million.  The original Windows NT (released in 1993) had 4 million lines of code; 
NT 4.0 (1996) has 16.5 million.  In 1998, Windows NT 5.0 was estimated to 
have 20 million lines of code; by the time it was renamed Windows 2000 (in 
1999) it had between 35 million and 60 million lines of code, depending on 
who you believe.  (As points of comparison, Solaris has held pretty stable 
at about 7 to 8 million lines of code for the last few releases, and Linux, 
even with the addition of the X Window System and Apache, is still under 5 million 
lines of code.)

The size of Windows 2000 is absolutely amazing, and it will have more 
security bugs than Windows NT 4.0 and Windows 98 combined.  In its defense, 
Microsoft has claimed that it spent 500 person-years to make Windows 2000 
reliable.  I only reprint this number because it will serve to illustrate 
how inadequate 500 person-years is.

The networks of the future, necessarily more complex, will be less 
secure.  The technology industry is driven by demand for features, for 
options, for speed.  There are no standards for quality or security, and 
there is no liability for insecure software.  Hence, there is no economic 
incentive to create high quality.  Instead, there is an economic incentive 
to create the lowest quality the market will bear.  And unless customers 
demand higher quality and better security, this will never change.

I see two alternatives.  The first is to recognize that the digital world 
will be one of ever-expanding features and options, of ever-faster product 
releases, of ever-increasing complexity, and of ever-decreasing 
security.  This is the world we have today, and we can decide to embrace it 
knowingly.

The other choice is to slow down, to simplify, and to try to add 
security.  Customers won't demand this -- the issues are too complex for 
them to understand -- so a consumer advocacy group is required.  I can 
easily imagine an FDA-like organization for the Internet, but the FDA can 
take a decade to approve a new prescription drug for sale, so this solution 
might not be economically viable.

I repeat: complexity is the worst enemy of security.  Secure systems should 
be cut to the bone and made as simple as possible.  There is no substitute 
for simplicity.

Unfortunately, simplicity goes against everything our digital future stands 
for.


** *** ***** ******* *********** *************

              Comments from Readers



From: Shawn Hernan <svh@cert.org>
Subject: Full Disclosure

I was intrigued by your recent series of editorials in Crypto-Gram 
regarding full-disclosure, and especially, CERT.  I am writing to respond 
to the article.

Some of your criticisms of CERT are valid, and I agree with them; but I 
wanted to point out a couple of things that you may not realize about our 
current practices.

When deciding what to publish and when, we use a variety of different criteria.

First, whatever we publish has to be *true* -- we go to great lengths to 
validate and verify everything we say in an advisory, and you can imagine 
some of the arguments that ensue over what is "true."

Second, as a rule of thumb, our advisories are generally about very serious 
problems.  We have a formal metric that we use to attempt to put 
vulnerabilities on a linear scale of "severity" and we use that as a 
first-order estimate of the gravity of the problem, and use our experience 
as the final judge.  Generally, the problems issued in advisories are in 
the 90th percentile of this scale (internally called the "threat metric").

Third, although it may have been true in the past, it has never been the 
case in my time here (about 4 years now) that our publication schedule was 
dependent on all (or even any) of the fixes being available.  We certainly 
prefer to have fixes available at publication time, but if we discover that 
a vulnerability is being exploited we will publish regardless of the 
availability of any fixes or patches.  My team (the vulnerability handling 
team) works very closely on a daily basis with the incident response team 
to understand if a vulnerability is being exploited.

Given all that, I am trying to find responsible, practical ways to publish 
more information about vulnerabilities in a variety of forms.  We are a 
relatively small organization, and I'm not willing to sacrifice truth for 
expediency.


From: Ryan Russell <ryan@securityfocus.com>
Subject: Distributing Exploits

You're still not totally consistent in what you say:

  >Third, I believe that it is irresponsible, and possibly
  >criminal, to distribute exploits.

You've already acknowledged that that's what it takes to get action.

  >Reverse-engineering security systems, discovering
  >vulnerabilities, and writing research papers about them
  >benefits research; it makes us smarter at designing secure
  >systems. Distributing exploits just make us more vulnerable.

You acknowledge your behavior being inconsistent with your words, which is 
neither here nor there.  It not only often takes an exploit, but it takes a 
press release sometimes.  Thievco released an "exploit" to decode Netscape 
passwords a year and a half ago.  Netscape did nothing.  RST Corp. did the 
same, with a press release.  That got Netscape's attention.

  >For example, Mixter is a German hacker who wrote the
  >Tribal Flood Network tool used in some of the distributed
  >denial-of-service attacks. I believe he has a lot to answer
  >for. His attack tool served no good.

Not true.  Were it not for him, we'd probably be looking at mystery tools 
that were being used that we didn't have the source for, and couldn't as 
easily analyze.  Mixter has combated much FUD by showing us exactly the 
type of thing that can be used, so that the reporters couldn't run off and 
tell the public that the evil hackers have superweapons the security 
experts know nothing about.

  >It enabled criminals and cost a lot of companies a lot of
  >money. Its existence makes networks less secure.

As you say, like any tool, it enables both good and bad guys.  As you've 
pointed out, the security problem was already there, the tools just 
highlight it.

Let me speak to the subtext of your rant against Mixter.  Some people think 
Mixter may deserve some punishment.  I don't, but I can see some of the 
logic.  Really, I think if anyone deserves punishment, it's the guys who 
used the tool.

Did Mixter and even the attackers actually do anything in the spirit of 
full disclosure?  Yes.

We've been complaining for years about the spoofing problem, and expecting 
ISPs to do filtering.  Nothing has happened.  Mixter put out his 
tool.  Some meetings to discuss DDoS happened.  No actual change to 
behavior, but there was some amount of advanced planning, which was good 
preparation.  Finally, some person (yes, criminal) put their neck on the 
line and actually used them.  They didn't take down the security sites to 
make them look bad.  They didn't go after the government.  They went after 
e-commerce, which I have to assume was designed for maximum reaction.

I think we'll get some action now.


From: Brian Bartholomew <bb@wv.com>
Subject: Publishing exploits

  > Second, I believe in giving the vendor advance notice.  CERT took
  > this to an extreme, sometimes giving the vendor years to fix the
  > problem.  I'd like to see the researcher tell the vendor that he
  > will publish the vulnerability in a month, or three weeks (no fair
  > giving the vendor just seven days to fix the problem).  Hopefully
  > the vulnerability announcement can occur at the same time as the
  > patch announcement.  This benefits everybody.

Whatever CERT's motivations were, they had the effect of increasing user 
trust (because a new sheriff is in town) while decreasing trustability 
(because they sat on vulnerabilities users handed off to them).  This is 
backwards, in two places.

I prefer the following approach: announce existence of vulnerability and 
promise a kiddy script in a month; wait a month for vendor to react; 
publish kiddy script.

  > Publishing vulnerabilities in critical systems that cannot be easily
  > fixed and whose exploitation will cause serious harm (e.g., the air
  > traffic control system) is bad.

Publishing is *very important* in these cases so the stakeholders know to 
reduce their trust in these systems.  If air traffic control is vulnerable, 
tell me so I can stop taking airplanes!

A non-life-safety version of this problem was the publishing of a script 
that gave an existing process root privileges using the memory debugger 
abilities of the console monitor ("L1-A") of a Sun.  This debugger could be 
disabled, but nobody did because it disabled the software reset 
button.  This reported vulnerability allowed users to adjust their trust of 
the security of root sharply downward, corresponding more closely to the 
actual security of it in practice.

  > Third, I believe that it is irresponsible, and possibly criminal, to
  > distribute exploits.

This is gun control: "Don't punish murder, ban the gun instead!  Exploits 
are an evil instrumentality!  Exploits help a good boy go bad!"  The right 
answer is: Humans are held responsible for their behavior.  Guns, bricks, 
and exploits are just tools.


From: Greg Guerin <glguerin@amug.org>
Subject: publicity attack loops?

I have to admit that I was chuckling all the way through the 
Fernandes/Cryptonym letter in the Feb 2000 Crypto-Gram.  Especially when at 
the end he wraps himself in the mantle of professional integrity.  I've 
already written two essays on the Fernandes discovery and his downloadable 
"repair" ZIP:
    <http://amug.org/~glguerin/opinion/win-nsa-key.html>
    <http://amug.org/~glguerin/opinion/crypto-repair-kit.html>

Though neither one is about Fernandes's professional integrity, per se, 
they do make a number of points about specific practices.  To summarize the 
points (see the essays for the full explanation):

    1) the ZIP held 2 EXE's, 2 DLL's, and 1 source file.
    2) the downloadable ZIP had no digital signature.
    3) nothing within the ZIP had a separate digital signature.
    4) Fernandes's PGP key had no introducers at all.
    5) no pointers to others who could vouch for points 2-4.
    6) source was not compilable as supplied (missing header).

Point 6 is only a little important because it means the EXE's must be 
trusted as given.  But there was only one source file anyway, so you're 
already trusting the other EXE completely.  And both DLL's must be trusted 
completely.  Risk-wise, 75% blind trust is virtually identical to 100% 
blind trust, so it's not all that useful a distinction.  It's like choosing 
whether to kill something 3 times over or 4 times -- correctly killing it 
once suffices.

Note that at no point does "professional integrity" come into this, only 
"professional practice".  I'm not disputing INTENT (integrity), I'm only 
describing OUTCOME (practice).  Spotless integrity and intent cannot long 
survive avoidable errors in practice.  By observing practices an observer 
might infer skill, integrity, or both, or neither.  Those judgements, and 
the trustworthiness criteria underlying them, are left completely to the
particular observer.  All I can say is what I would infer from my 
observations, and why.  You should draw your own conclusions, since my 
criteria for trustworthiness may differ from yours.  But you should also 
invest in understanding why you came to those conclusions -- flaws in the 
process can lead you astray.


From: "Rolf Oppliger" <rolf.oppliger@esecurity.ch>
Subject: Distributed Denial-of-Service Attacks

First of all, I'd like to congratulate you for your description and 
analysis of distributed denial-of-service (DDoS) attacks in the February 
issue of your Crypto-Gram newsletter.  I fully agree with most of your 
statements, including your pessimistic view that all existing approaches to 
address the problem are unsatisfactory in one way or another.

In your article, however, you also argue that "in the long term, 
out-of-band signaling is the only way to deal with many of the 
vulnerabilities of the Internet, DDS attacks among them."  I don't agree 
with this statement.  Any out-of-band signaling channel can also be 
subjected to DoS and DDoS attacks.  I believe that the reason telephone 
networks are not subjected to large-scale DoS and DDoS attacks is that 
they charge and bill for usage, not that they use out-of-band signaling 
(out-of-band signaling has many advantages in other areas).  Trying to 
establish a huge quantity of connections in a telephone 
network is simply too expensive ... I think that the lesson learnt from 
telephone networks is that packet-based charging and billing -- combined 
with adequate security mechanisms -- may be a viable solution to protect 
against large-scale DoS and DDoS attacks on the Internet (rather than 
out-of-band signaling). However, packet-based charging and billing also has 
many disadvantages, including, for example, a huge administration 
overhead.  Consequently, I guess that packet-based charging and billing 
will not be applied on the Internet, and that "intelligent" 
packet-filtering performed by ISPs will be the major weapon to protect 
against large-scale DoS and DDoS attacks in the future.


From: Ethan Benatan <benatan@duq.edu>
Subject: Defending Against DOS Attacks: Draining the Swamp

If you'll pardon the musings of a biologist, I'd like to comment on your 
swamp analogy.  I know you never stated so but it bears pointing out that 
swamps are not "bad" in any defensible sense, nor is draining them "good," 
even though doing so may have one immediate desirable consequence.  I am 
sure that in your own field you can think of many examples where a cure, 
though effective, may have been worse than the disease.  The RISK here is 
forgetting that in any complex system change comes at some cost; the more 
complex (or less well understood) the system, the harder it is to predict 
the cost.  I think this applies to the Internet.  It certainly applies to 
the natural world, in spades.  I will not bore you with examples.


From: pclites@cdsfulfillment.com
Subject: deCSS

In the February 2000 Crypto-Gram, you wrote: "An important point is that 
DVDs can be copied and pirated without using deCSS or any other decryption, 
which certainly makes the original claim of 'prevents piracy' look either 
astoundingly ignorant or brazenly deceptive."

There is a sense in which the "prevents piracy" claim makes sense.  deCSS 
makes it easy to copy the data on a DVD not just onto another DVD, but into 
another format, one which is easier to copy & transmit.  In that sense, one 
could characterize it as making piracy easier.  Kind of like the rationale 
behind the distinction between printed & electronic versions of source code 
in the original crypto export restrictions; but for a consumer data 
product, I think it's a more meaningful distinction.  I would have to 
characterize the court's ruling as a correct application of a bad law, in 
what may turn out to be a watershed case.


From: "Bryan Alexander" <xande1@bellsouth.net>
Subject: Secure Linux

  > The NSA has contracted with Secure Computing Corp. for
  > a secure version of Linux.  Personally, I don't know if
  > the Linux license allows the NSA to make a secure version
  > of the operating system if they are not going to freely
  > distribute the results.

Actually the GPL (GNU General Public License, which covers almost all parts 
of Linux) does allow this.  There is no language in the license that requires 
that you redistribute anything based on the GPL, only what you are required 
to do *if* you redistribute a work based on the GPL.  In addition, the GNU 
Project has said specifically that the license is not intended to prevent 
people from creating (without being forced to distribute) their own 
modified versions of GPLed software for their own use.  The text of the GPL 
is located at: <http://www.gnu.org/copyleft/gpl.html>.

A statement about being forced to distribute modified versions of software 
being an "unacceptable restriction" can be found at 
<http://www.gnu.org/philosophy/apsl.html> under the heading "Disrespect for 
Privacy."  This is part of a discussion of the "fatal flaws" in the Apple 
APSL license.  (I can't find the original source for the comment about this 
as it relates to the GPL right now, sorry.)


** *** ***** ******* *********** *************

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, 
insights, and commentaries on computer security and cryptography.

To subscribe, visit http://www.counterpane.com/crypto-gram.html or send a 
blank message to crypto-gram-subscribe@chaparraltree.com.  To unsubscribe, 
visit http://www.counterpane.com/unsubform.html.  Back issues are available 
on http://www.counterpane.com.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will 
find it valuable.  Permission is granted to reprint CRYPTO-GRAM, as long as 
it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier.  Schneier is founder and CTO of 
Counterpane Internet Security Inc., the author of "Applied Cryptography," 
and an inventor of the Blowfish, Twofish, and Yarrow algorithms.  He served 
on the board of the International Association for Cryptologic Research, 
EPIC, and VTW.  He is a frequent writer and lecturer on computer security 
and cryptography.

Counterpane Internet Security, Inc. is a venture-funded company bringing 
innovative managed security solutions to the enterprise.

http://www.counterpane.com/

Copyright (c) 2000 by Counterpane Internet Security, Inc.


