Linux, OpenBSD, Windows Server Comparison:
Windows Security
FAT vs. NTFS
It's true NT's concepts are not "the same" as UNIX's, but Mr.
Wainwright would never have put it the way he did if he had any
useful knowledge of Windows NT file and directory permissions.
NT's file and directory security system is much more flexible and
complex than the standard UNIX permissions. In Mr. Wainwright's
defense, somehow the bizarre notion that FAT file systems are
easier to fix than NTFS became widely accepted even among
knowledgeable NT administrators, and thus many NT systems have
(or had) FAT file systems. The file and directory permissions
discussed below are not available on FAT file systems.
In over five years, for all the NT problems I've seen, I've
always used NTFS and never once encountered any file system
corruption (until perhaps the
final NT server failure).
Fixing corrupt FATs with Norton Utilities used to be one of the
most common administrative functions on DOS systems. It's
disturbing that in one of the areas where Microsoft really made
major improvements, administrators often assumed old problems
still existed, and made choices that assured they could continue
to exercise their disk repair expertise, rather than accept a
change which would eliminate a major computer administration
chore and fix a problem that sometimes involved
costly data losses. One wonders if the fear was really the lack
of tools to fix corrupt NTFS file systems or reluctance to learn
something fundamentally new, very different, and much better.
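Incidentally, moving an existing system off FAT does not even
require a reformat; NT and 2000 include convert.exe, which changes
the file system in place without data loss. A minimal sketch, with
C: standing in for whichever drive holds the FAT volume:

    convert C: /FS:NTFS

If the drive is in use, the conversion is simply scheduled for the
next reboot.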
Windows, FAT and Dual Boot
There is no excuse for installing any Windows NT or 2000 system
with a FAT file system in a business environment. The only
reason for a FAT file system on a Windows NT or 2000 system is
for compatibility with another operating system such as a Windows
95, 98 or ME system. This implies a dual boot configuration.
Dual boot machines should be limited to experimental machines in
a home environment. There are enough setup and compatibility
issues with dual boot machines that it is questionable whether
they can be used to test single boot configurations for general
deployment. If a business is testing new configurations, the
test machines should be stripped clean and set up in the same
configuration that's being tested. If an employee needs
two or more operating systems, there are much better choices.
Dual boot limits communication between the different logical
machines and wastes time switching from one configuration to the
other. Two or more machines on a KVM switch make much better sense
and allow simultaneous network communication. Alternatively,
running two or more OSs simultaneously under a product like
VMware on a single powerful desktop machine provides the maximum
communication between the logical machines, independently of any
network. This will be more stable if a
Windows system runs under a Linux or other UNIX system rather
than vice versa.
UNIX File and Directory Security
UNIX allows controls only by owner, group and other. Each of
these can be set to any combination of read, write and execute.
For directories, execute becomes the equivalent of directory
access. In addition there are the SUID and SGID settings plus
the sticky bit. Fortunately, modern UNIXs
allow users to be in as many groups as necessary (or at least a
useful number). This security system is sufficient for most
sites and more than many sites really take advantage of. To
access a subdirectory a user needs to be able to access its
parent directories.
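A quick sketch of the model (the path names are invented for the
example):

    # owner may read, write and enter; group may read and enter; other gets nothing
    chmod 750 /projects/reports
    # SGID on a directory: on most UNIXs new files take the directory's group
    chmod g+s /projects/reports
    # sticky bit on a shared directory: users may delete only their own files
    chmod +t /projects/shared
    # SUID on a program: it runs with the privileges of its owner
    chmod u+s /usr/local/bin/someprog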
NT File and Directory Security
Standard UNIX file and directory security is very limited
compared to Windows NT flexible access control list capabilities.
Every user or group can individually be granted or denied access
to every directory and file on the system. Every file and
directory has a "full control" setting, which comprises six
separate permissions: read, write, execute, delete, change
permissions and take ownership.
For files there are also the named access levels read and change,
which include specific combinations of the six basic permissions.
Directories also have list, add and "add & read" named
combinations. The six basic permissions can always be set
individually via the "special access" options. New files can be
set to inherit appropriate directory permissions or given a
different set of permissions. Existing files can be set
individually. Generally permissions are additive so a user could
pick up read rights from one group to which he or she belongs and
write rights from another. There is a "no access" setting that
overrides all others. Being assigned no access either
individually or via any group denies all access to the specified
file or directory.
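A sketch of the kind of per-user and per-group granularity this
allows, using the command line cacls tool discussed below (the
directory, group and user names are invented for the example):

    REM give the Accounting group change (read, write, delete) access
    cacls D:\data\budget /E /G Accounting:C
    REM give the Managers group read only access to the same directory
    cacls D:\data\budget /E /G Managers:R
    REM deny one specific user entirely, overriding any rights from groups
    cacls D:\data\budget /E /D TempContractor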
There is also a moderately long list of user rights which
includes things like log on to the computer locally, i.e., via
console and separately via the network, set computer time,
shut down the computer and debug programs. A particularly
interesting user right is "bypass traverse checking", which by
default is assigned to the group Everyone. Bypass traverse
checking allows access to subdirectories without requiring access
to their parent directories. Unlike UNIX and Novell, some NT groups are
not maintainable. Everyone automatically includes every defined
user; no user can be excluded from Everyone. There are also non
maintainable INTERACTIVE and NETWORK groups that automatically
and respectively include any user currently logged in locally or
via the network.
Poor Windows File and Directory Security Tools
The GUI tool (Explorer, Properties, Security) supplied with
Windows to maintain file and directory security is both very
awkward to use (right mouse click, {Alt+R} or mouse menu
selection, mouse tab selection with no hot key, {Alt+P} or mouse
button click, then add, remove or otherwise modify settings) and
as blunt as a sledge hammer, making it almost impossible to achieve
the highly granular access controls that NTFS is intrinsically
capable of.
If the subdirectory option is used, every setting that is being
set at the current directory level is carried to all lower level
files and directories. The GUI provides absolutely no way for
the settings for a single user or group to be changed in the
current directory and all subdirectories, without also forcing
all the settings for all subdirectories to exactly match those in
the current directory. Thus, if a large directory tree has a
variety of different permissions set in different directories,
and a new group must be added that has read rights throughout the
directory tree, there is no automatic way to add this new group,
without undoing all the lower level directory differences.
You must choose between two approaches: work through every
directory individually (or at least down to points where you are
sure that all lower level security settings are the same as the
current level), or add the new group to all existing
subdirectories while also propagating all current level settings
to all subdirectories. If you choose the latter (very dangerous and
almost guaranteed to lose existing settings), then you need to
selectively work down to each lower level directory and remove
new settings that have been added or restore old settings that
were lost when the new group was added.
The command line cacls (cacls.exe) supplements the GUI with the
ability to propagate a specific security setting down a directory
tree without resetting all other settings. Most Windows
administrators are not versed in the use of cacls. With cacls,
you would simply CD to the proper directory and issue the command
to give read rights to the new group to all files with the
subdirectory option set. That's it, you're done. If the new
group should not be included in one or more specific branches of
the subdirectory tree, CD to those directories and remove
the new rights just granted in the current directory and any
subdirectories. In both cases no other rights will be changed.
Cacls, however, can't change more than one combination of user
and access permissions at a time. The GUI is like chmod with
only the numeric mode options, e.g., 752, and cacls is like chmod
with only the symbolic permission options, e.g., g+r.
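A sketch of the sequence just described, assuming the new group is
called Staff and the tree lives on D: (both are illustrative):

    REM from the top of the tree, grant Staff read access to everything
    REM below, leaving all other permissions untouched
    cd /d D:\projects
    cacls *.* /T /E /G Staff:R
    REM then, in any branch Staff should not see, take that right back
    cd /d D:\projects\payroll
    cacls *.* /T /E /R Staff

In both commands it is the /E switch that makes cacls edit the
existing ACLs rather than replace them outright.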
The two tools together, or with an extraordinary amount of manual
labor, the GUI tool alone, can be used to apply varied granular
access controls to a directory tree. Care must be exercised
during various administrative activities not to accidentally
reset a carefully built permission system on subdirectories. I've
done this myself accidentally, more than once, forgetting that
some subdirectories had different permissions than the current
one when using the GUI tools.
You get the feeling Microsoft wanted to be in a position to claim
it had a secure system because the system possesses a rather
sophisticated file and directory permission facility. On the
other hand, you sense that they didn't really expect the system
to be used, or at least not beyond its most basic capabilities,
because it cannot be used practically with the standard GUI administration
interface. The only way to use Windows NT file and directory
permissions efficiently, is to use both the GUI and cacls.exe
together.
NT Throwaway Security
Further, Microsoft has effectively thrown the powerful and
flexible file and directory security system in the trash can, by
using the worst possible security setting as the default in most
cases. Typically, the default Microsoft install gives the system
defined group Everyone, of which all defined users are
automatically a part, full control, i.e., the highest level of
access, to nearly everything. This is the UNIX equivalent of
setting the entire disk system to 777. Effectively, even if NTFS
is used, security is turned off during the install process. So
even though it comes with a file and directory security system
that, at least in terms of features and flexibility, is more
powerful than UNIX's, until an administrator takes additional
steps, no practical benefit is derived from this system. This
must be dealt with after the basic OS install is completed.
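A hedged sketch of the kind of post-install cleanup this implies,
using a data drive D: and example groups (Administrators is a
built-in group; Staff is invented); any such change should be
tested before it is applied to a production system:

    REM strip Everyone's blanket rights from the data drive, then grant
    REM sensible rights to real groups
    cacls D:\ /T /E /R Everyone
    cacls D:\ /T /E /G Administrators:F
    cacls D:\ /T /E /G Staff:R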
Password Hashes
Passwords are generally the first level of security on most
computers. They are a fundamental weak point in most systems'
security because of the tendency of most users, even technical
users and system administrators who should know better, to choose
weak passwords. Microsoft has aggravated the situation by
designing an abysmal password storage system. It's
understandable why they had to provide for backwards
compatibility with LANMAN hashes. An option to disable the use
of LANMAN passwords in authentication is provided, but this does
not clear existing LANMAN hashes or even prevent future passwords
from being stored using the abysmal LANMAN hash. This causes NT's nominally 14
character passwords to be broken into two independent 7 character
pieces, padded if necessary, and with all letters forced to upper
case before the password is hashed. See my
web page
that gives details on NT password storage, cracking and
countermeasures.
This means there are approximately 1000 times fewer possible NT
passwords than on UNIX systems that allow 8 character mixed case
passwords. Windows does not make use of the "salt" concept used by
UNIX either, where each unique password can be hashed to one of 4096
different values. Thus NT password storage is about 4 million times
weaker than traditional UNIX password storage. Since both Linux and
OpenBSD allow much longer passwords the real difference is even
greater. A 10 character password using the full keyboard is about
8,000 times stronger than a full character set, 8 character password.
Thus as a practical matter NT password storage is about 32 billion
times weaker than OpenBSD or Linux. Well chosen Linux and OpenBSD
passwords simply cannot be cracked by any technique available today or
for some years into the future. In contrast, a fast desktop system
running l0phtcrack, or its replacement LC3, can try all possible NT
passwords in about a month. If the administrator of a UNIX-like
system is willing to use even longer passwords with a mixture of
case, digits and symbols, the difference can theoretically be much greater. As no
computer today can crack a 10 character password using a suitable
mixture of characters from the full keyboard, the longer passwords are
largely academic.
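A rough sketch of the arithmetic behind these ratios, assuming
roughly 69 distinct characters once LANMAN folds everything to
upper case and roughly 95 printable characters on UNIX:

    LANMAN:          69^7 is about 7.5 x 10^12 candidates per 7 character half
    UNIX, 8 chars:   95^8 is about 6.6 x 10^15, roughly 1,000 times as many
    UNIX with salt:  another factor of 4096, for roughly 4 million times as many stored forms
    UNIX, 10 chars:  a further factor of 95^2, about 9,000, giving the 32 billion figure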
Microsoft might counter that you can't get to the hashes without
administrative access. This is true but does not mean you need to log
on locally as an administrator. There are several possible methods of
getting remote administrative access without having a current
administrator password. Prior to 2001 this was known to be possible,
but to the best of my knowledge, only exploits that involved some form
of "social engineering" had actually accomplished this. The recent
attacks on GRC.com and worms including Code Red Worm, show the
necessary weaknesses exist, and that the intruders have the tools to
exploit them. No active assistance of a Windows user or administrator
is needed any longer, though many exploits continue to rely on such
assistance.
The fundamental problem with weak Windows password storage is
that an administratively compromised Windows NT or 2000 machine
will yield the administrator passwords to a concerted effort, no
matter how carefully chosen those passwords are (unless the
passwords contain non-typeable characters). It's easy for root
and members of wheel (or security) on UNIX systems to choose
passwords that cannot be cracked by today's technology, so a
rooted UNIX system will not reveal these passwords. Due to the
tendency of users, including administrators, to use the same
passwords across systems, the compromised NT or 2000 system, plus
its passwords, may allow other machines on the same network to be
compromised when they otherwise
would not be. Besides detecting the intrusion and doing all the
cleanup necessary to undo it, all the passwords on all accessible
systems should be changed or the persons who have illicitly
accessed the NT or 2000 server may be able to use cracked
passwords for some time into the future to access other systems.
Whether or not such passwords would be of any actual value would
depend on the same (or similar) passwords being used on other
systems and what methods of remote access those systems provide
and how they were protected in addition to passwords. Given one
high level compromise it would probably be appropriate to assume
that security in place on other systems might also not be
adequate.
NT Too "Easy" To Be Secure
Microsoft has compounded the security problem by their
efforts to make the system "easy to use". This is part of the
reason for the excessively lax permissions that NT comes with.
Meaningful security prevents people from doing things and
Microsoft does not want to put obstacles in the way of any
potential user. By wrapping everything in a menu driven GUI
people can stumble around and eventually get to just about
anything or everything without any useful understanding of what
they are doing. Given compatible hardware of a fairly typical
configuration, e.g., not having to deal with special drivers for
RAID arrays and such, a smart non-technical person can
install NT and set up a functioning web server.
This is really not a desirable situation. NT makes the
system administration basics easy enough that many non-technical
users with no training or background end up being NT
"administrators". They "get the job done" but often won't get
much beyond default settings and are likely to have no idea of
the consequences of what they are doing. It doesn't take
much knowledge to get a Windows NT Server connected to the
Internet and also the LAN so users can update web content.
A default Internet Information Server (IIS) install puts
everything on the system partition. It puts everything in
standard locations. Anyone who knows one default IIS install,
knows what the large majority look like. Lots of powerful,
buggy, little used optional components are typically installed
with IIS. With no security by default and many thousands of
"administrators" who haven't a clue what they are doing setting
up web sites, it should not be surprising that NT, and
increasingly Windows 2000 as it is more widely deployed, tops the
lists of compromised web sites by a wide margin. The compromised
web sites are only a part of the problem. The much larger danger
is that a compromised web server may make some of the internal
LAN accessible with consequences that will be site specific, but
could threaten an organization's very existence.
Recent Windows E-Commerce Compromises
One would think that by now, given the large number of Windows NT and
2000 web site defacements, Windows system administrators and
organizations using Windows would have gotten the message that blindly
following default installs is not generally a good idea. Apparently
not. In the spring of 2001, the Center for Internet Security publicly
released a tool to determine if a Windows web server was vulnerable to
exploits related to those the FBI was investigating; normally the
Center only provides tools to its members, but so many Windows servers
remained vulnerable to the reported problems, that the Center made
this tool publicly available.
More than 40 Windows e-commerce sites had been systematically
targeted by Eastern European crackers, exploiting vulnerabilities
known for as long as three years. My reading of the Microsoft
advisories suggests that any site that used NT's file and
directory permissions intelligently, did not install web document
root directories on system partitions, did not install unneeded
option pack components, and carefully evaluated the security
settings on those components that were installed, would not have
been vulnerable to any of these problems, even without the
specific patches and workarounds provided by Microsoft.
Breaking IIS Exploits
One trivial step breaks many of the exploits that take advantage
of IIS bugs that have allowed NT site compromises; NT web sites
should never be installed on the system partition. The default
install places the executables in a subdirectory of
c:\winnt\system32 and the document root in c:\inetpub\wwwroot.
Many of the IIS security bugs relate to the ability to create
malformed URLs that allow the execution of programs assumed to be
in standard locations. The use of Unicode and other character
encodings in URLs has repeatedly allowed web site users to access
resources outside the document tree. In essence, the URL
contains a string that does not look like but gets translated to
something like "/../../winnt/system32/cmd.exe" before being
executed. None of the bug descriptions that I've seen allows
reference to another partition (drive). Even without
implementing file and directory security, putting the IIS
document root on a non system partition breaks a number of IIS
exploits.
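One widely published form of the Unicode traversal looks something
like the following, where "victim" stands in for the target host
and %c0%af is an overlong encoding of the "/" character:

    http://victim/scripts/..%c0%af../winnt/system32/cmd.exe?/c+dir+c:\

IIS decoded the odd looking sequence only after its security
check, so the request walked up out of the web tree and ran
cmd.exe. If the document root sits on a different drive than
\winnt, the same request leads nowhere, because the relative path
cannot cross drives.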
The other step, which should be (but apparently is not)
obvious to anyone with any kind of meaningful professional training or
experience, is to use intelligent file and directory permissions,
which are available only on NTFS partitions. The problem is that
the anonymous IIS user, IUSR_machinename, is and must be part of
the group Everyone, which typically has full access to just about
everything. Thus there are no access controls on what an anonymous
web user can use. The
group Everyone should be removed from all access to just about
everything and new groups such as "Staff" should be given appropriate
access to resources that are near universal for staff. Things like
the Resource Kit if it's installed should be reserved for
administrators only. IUSR_machinename is probably the one user
who should always be treated as an individual and never part of a
group as this "single" user is inherently a group in itself, i.e.
everyone who accesses your website without a password. This second
step should take no more than an hour if done immediately after an NT
install. If it's delayed until a complicated directory structure is
created, it may take a few hours. Together these would eliminate the
large majority of NT web site compromises. Neither requires
application of patches, service packs, or other upgrades, nor a
firewall, nor anything other than intelligent, traditional,
host based security measures. These two techniques are specific to
the Windows drive and IIS security architecture.
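A sketch of what that hour's work might look like, assuming the
document root has already been moved to D:\webroot and the machine
is named WEB1 (both names are invented for the example):

    REM strip Everyone from the document tree, then give the anonymous web
    REM user read only access and a web developers group change access
    cacls D:\webroot /T /E /R Everyone
    cacls D:\webroot /T /E /G IUSR_WEB1:R
    cacls D:\webroot /T /E /G Staff:C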
Two other important steps round out what should be standard
practices when dealing with Microsoft products, IIS in
particular. These techniques are widely applicable to other
brands and products. Do not install optional products you don't
need, and disable any optional products that are installed
automatically but are not needed. The best thing about all these
techniques is that they break most exploits before they are
discovered. For example, any IIS web site with Index Server
disabled would not have been vulnerable to Code Red, even without
the patch.
The large number of NT web site compromises is a comment on the
horrible default security settings that Microsoft established for
NT, and the lack of suitable training for many persons who
administer NT systems. NT system compromises tend to be highly
visible and embarrassing because so many are related to IIS.
Until recently, few have been of the more serious nature, where a
remote intruder achieves administrative access, without knowledge
of an administrator password.
Windows' Single User Origins
Traditionally, UNIX has been more vulnerable to remote root
compromises, because it has always been a true multi user system,
where administration was as likely to be done remotely as
locally. Except for a very few tasks that can only be done in
single user mode (for which Windows NT and 2000 systems have no
counterpart, and which is typically considered maintenance downtime), all
UNIX administrative functions can be performed as easily remotely
as locally. Without the addition of tools not included with the
core OS, many NT administrative tasks simply cannot be performed
from the command line. The addition of the NT Resource Kit, or as
it's sometimes known, the NT Hackers Kit, greatly expands what can
be done from the command line, though it still does not cover all
administrative functions. Most NT administration is performed through graphical
tools that can only be run by a validly logged in local user,
require that the local username and password match the remote
system, or require the interactive entry of a valid username and
password for the system to be remotely administered. Thus these
functions can only be performed from another Windows NT or 2000
system. One of the most irritating characteristics of NT is
that not all standard administrative functions can be performed
remotely. While file, user and log maintenance can all easily be
done remotely, there is no remote counterpart to the Control
Panel, a rather essential piece of Windows system administration.
Some parts of the control panel do have command line counterparts.
For example NT services (daemons or background jobs) can be controlled
via the "net start" and "net stop" commands. These can be done
remotely from other NT systems with the appropriate network access.
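For example (a sketch; W3SVC is the IIS web publishing service and
WEBSRV01 is an invented server name):

    REM list the services currently running on this machine
    net start
    REM stop and restart the IIS web service locally
    net stop w3svc
    net start w3svc
    REM the same stop issued against a remote server, using sc.exe
    REM from the Resource Kit
    sc \\WEBSRV01 stop w3svc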
If a third party telnet server or SSH server is added to NT, then
services can be controlled from any machine with a telnet or SSH
client, that has network access to the NT machine. If the Resource Kit
is also installed on the server with telnet, then a lot more can be
done remotely. Still these tools aren't as useful to intruders as
similar native capabilities would be. Many, perhaps most NT systems
won't have them at all. If they do, they are less likely to be
installed in standard locations and more likely to be appropriately
protected.
UNIX Root Compromises
UNIXs, and in particular Linux and Sun, have had many remote root
vulnerabilities related to buffer overflows in various daemons
(services). These are more serious than the more visible NT web
breaches which have been common in previous years, because someone who
should have no access, may gain complete remote control of the
compromised system and it may go undetected for a significant time
period.
Prior to 2001 one could, with some justification, claim that
properly configured Windows NT and 2000 systems were more secure
than UNIX systems, because the frequency of Windows system
compromises was more than compensated for by the seriousness of
UNIX system compromises. Further, no one really knows how many
UNIX systems are compromised, because there is typically nothing
publicly visible about a root compromise. When one is
discovered, it's fixed quietly and no one but the affected
organization and the intruder who now no longer has the remote
access knows of it. As with Windows, these compromises are often
more of a commentary on who is running the system, than the intrinsic
capabilities of the compromised UNIX system.
NT Rootkit Compromises
Since its release, there have also been some pretty serious Windows
2000 bugs that would allow remote access in a system security
context, as opposed to the more common IIS problem, where a bug can
only be exploited in the security context of the anonymous user, IUSR_
machinename. With the default permissions, anonymous access is
only slightly less powerful than administrator access, and may be
sufficient to create the conditions that will subsequently result in
administrator access.
When I began writing this in mid 2001, though the possibility had
existed for years, I'd not yet encountered reliably
documented cases of remote administrative compromise of Windows
NT or 2000 systems. It wasn't until early 2001 that a proof of
concept rootkit attack was demonstrated against an NT
system. While doing additional research for this, I encountered
a large scale case of Windows remote administrative compromises
that leaves no doubt that these will be commonplace, if they are
not already. This was accomplished with a readily
available rootkit, attacking vulnerabilities present in default
IIS installs on both Windows NT and Windows 2000 Server, that can
be detected by remote scans, and that cannot be stopped by
firewalls. It is merely a matter of time until large numbers of
Windows web servers are administratively compromised.
On June 20, 2001, over 150 compromised Windows NT and 2000
machines running IIS were used to launch a denial of service
attack against GRC.com. This is described in
http://www.grc.com/dos/attacklog.htm.
After more than two weeks and multiple attempts to contact the
owners of these machines, many were still not secured. I visited
a few and confirmed they still exhibited behavior indicating that
the bug(s) that allowed their compromise were still not patched.
One test allowed a carefully constructed URL to display a C:\
directory listing. A sampling of the files shown by a C:\
directory listing of one such machine included the following
filenames:
GET_MICROSOFT_PATCH_ASAP,
HO_HO_HO__________,
IT_IS_BEING_USED_TO_ATTACK_OTHER_WEB_SITES_WITH_A_TROJAN.txt,
I_CANNOT_BELIEVE_THAT_YOU_HAVE_NOT_SECURED_YOUR_WEB_SERVER,
JACKASS,
VISIT__grc.com_SLASH_dos_SLASH_attacklog.htm__TO_SEE_HOW.txt,
your_a_lamer_secure_your_system,
YOUR_MACHINE_HAS_BEEN_HACKED_INTO.txt,
YOUR_MACHINE_IS_ATTACKING_GRC_dot_COM.txt,
YOU_NEED_TO_UPDATE_YOUR_SYSTEM_SECURITY.txt.
This machine obviously has had a number of subsequent visitors
who left calling cards, i.e., obtained write access to C:\.
The GRC.com web page linked to another page (no longer accessible) that included additional
analysis of the compromised machines:
http://keir.net/attacklist.html
A security expert, Robin Keir, performed the analysis of the
compromised machines and found that most were unprotected,
running unnecessary services and had already had "a root kit
installed". He mentioned the "BackGate" rootkit and described
how to identify signs of compromise.
There is one serious problem with the descriptions provided by Steve
Gibson and Robin Keir. The attack was supposedly run by a 13
year old hacker who used a modified (binary edited) version of a
Windows "bot" that allows remote control of the attacking
machines. Among the long list of attacking Windows machines were
included a Linux, a FreeBSD, an OpenBSD, a Firebox firewall, a
Netopia DSL router and two Cisco routers. Anyone who knows much of
anything about computers knows that while the same binary code
can and often does run across the entire Win32 platform line,
this same code cannot run on Linux or BSD family machines unless
they are running Windows emulation. Theoretically the Firebox
might be running one of the open source operating systems but one
would expect it to be identified as such and not a Firebox.
Regardless, a firewall with Windows emulation software does seem
rather a stretch.
Of course custom code could be written for each of these four
systems that allowed them to be controlled by the same master
controlling the Windows machines. Actually accomplishing such a
feat, not just developing the code but compromising the four
different systems to deploy it, suggests almost extraordinary hacking
ability, not that of a 13 year old. I'm skeptical the routers even have
the necessary capabilities but if they do the hacking skills to
exploit them must be something out of this world.
One has to assume these seven machines have been misidentified or
switched between the attack and the probes that identified them.
The latter seems unlikely for seven machines in the fairly short
time frame (one month). Perhaps compromised Windows 2000
machines spoofed their addresses and that's how these seven were
found? While I have little reason to doubt the main parts of
what Steve Gibson described, I think some additional
investigation and explanation is needed before we can fully
accept all the details as provided. This is why I've lowered the
numbers provided by GRC.com.
Thus, at a minimum there is one widely available rootkit
for Windows systems which only requires exploiting
certain very common (default install) Unicode or Double Decode
problems. Basically these allow a specially constructed URL to
access programs that should otherwise not be accessible.
Vulnerable systems can be identified by automated scanning
techniques. If the affected machine is intended to be running a
public web server, a firewall cannot protect the web server
without blocking access to it. Since apparently one cracker, with
no exceptional technical skills, was able to compromise approximately
150 Windows machines that had "216.n.n.n" IP addresses, it
seems safe to say that any apparent advantage Windows systems may
once have had with regard to resisting root compromises is now
gone.
The steps already discussed should break these bugs as
well, though it may be necessary to apply some of the recommended
patches to be sure. When I say that it's not necessary to apply
a security patch to fix a specific Microsoft or IIS bug, I'm not
suggesting the patch should not be applied but rather commenting
on the appalling combination of Microsoft stupidity and system
administrator ignorance implied by default Microsoft product
installs. Most of these bugs break when normal host security
practices are followed.
Within weeks of the GRC incidents, Microsoft products experienced
at least three more security problems that warranted CERT
advisories. The W32LeaveWorm piggybacked on top of a
pre-existing compromise called Sub7. Shortly after came the SirCam
e-mail worm, which used the amazingly insecure Outlook and its address
books. SirCam included its own mail transfer agent so was not
exclusively limited to Outlook, but appeared to look for e-mail
addresses where Microsoft products stored them. In July 2001, CERT
quoted the Spanish anti-virus vendor Panda Software as saying "SirCam
has infected hundreds of thousands of computers worldwide," mostly
in Spanish speaking countries.
Overlapping with SirCam was a relatively benign worm that
exploited a bug that could have allowed very serious negative
consequences, the "Code Red Worm". On July 19, 2001, CERT issued
an advisory indicating up to 225,000 had been infected with this.
Within three days they had raised the estimate to 250,000. A
subsequent Microsoft security bulletin acknowledged "more than
250,000 systems" and some estimates placed it as high as 400,000.
Cleanup costs were estimated at 1.2 billion for a worm that did
no direct damage. Most affected systems, including some of
Microsoft's own servers, did not have the patch Microsoft made
available in mid June. (The last numbers I saw estimated 700,000
machines compromised by Code Red and Code Red II which quickly
followed and remained active until November.)
Code Red depended on a buffer overflow in Index Server and thus
nearly all NT 4 systems running IIS 4 and Windows 2000 systems
running IIS 5 were vulnerable. Unlike most other IIS related
bugs, this one ran in the System security context and even the
good host based security measures that I've mentioned elsewhere
would not protect a system. Systems had to be patched. The
buffer overflow could allow remote administrative access, i.e.
complete system compromise. Further, because the exploit was an
http request, anyone running a public web server could not use
their firewall to protect themselves from this as that would have
cut off access to the web server.
The first Code Red Worm was relatively benign compared to what it
could have been. After it infected a computer, if the default
language was English, it would deface cached page images but not
change the disk files; it would "randomly" try to infect other IIS
servers and periodically launch DDoS attacks on the White House
web site. The so-called random numbers actually always repeated
the same sequence so computers low in the sequence would be hit
and re-infected, repeatedly, until patched.
Several government agencies shut down their web servers out of
fear of Code Red; this was a gross overreaction, more damaging
than the worm itself. Some of the shut-down servers were not
Microsoft and not vulnerable and the others could easily have
been protected with the patch. My NT server was patched before
Code Red was released and was up through the July incidents and
despite a number of probes, had no problem with the worm. My
Linux and OpenBSD web sites logged several dozen failed infection
attempts, clearly containing the exploit signature string, and on
July 19, my firewall blocked 275 attempts to reach non-existent
web servers on my other machines. Thus my small LAN saw well over
300 infection attempts. I used a web browser to review the IP
addresses from which these packets came.
Nearly half were down or off line, but I got a chance to look at a
lot of web pages. Based on the languages and alphabets I saw,
there were many Asian, Middle Eastern and European and a few South
American countries represented. There were several US sites. A
number of sites were clearly live production sites but most were
"under construction" or completely empty. A couple showed
default IIS demonstration pages with links to all the sample
applications that came with that product. This shows the danger
of setting up test and development sites in live or exposed
environments or installing services you don't actually use. I
only saw one defaced site.
Code Red reappeared on August 1 and a much more dangerous worm
called Code Red 2 (CR2) appeared on August 4. The second worm
used the same infection mechanism but the payload was entirely
different. Its propagation method was enhanced so that it sent
most packets to randomly selected hosts with IP addresses
starting with the same two octets as the infected server. It
sent fewer to randomly selected hosts with IP addresses starting
with the same first octet. It sent relatively few packets to truly
random IP addresses. This non-random distribution ensured the
greatest activity in the more densely populated IP address ranges
but still spread the infection to all parts of the IP address
spectrum. Any Windows 2000 server infected with CR2 could be
easily compromised remotely with system level access by anyone
who knew what they were doing.
I've been completely baffled by the limited coverage of CR2. The
July outbreak was estimated to have infected 250,000 to 400,000
hosts. My firewall and web logs recorded
almost as many Code Red attempts on Aug. 1 as July 19 and the
number increased almost every day until by Aug. 7 I recorded 2713
probes. By that point CR2 was about 90% of the activity. CR1
usually sent three connection attempts to any specific host,
whereas CR2 would sometimes send 60 or so, but other times there
would only be one or two attempts from an infected system. Thus the
number of connection attempts does not directly correlate to the
number of infected systems. Activity declined almost daily
after the 7th, but I still saw 267 probes on Aug. 20. It seems to
me that CR2 is much bigger than CR1, but you would not know it
from the media coverage. Because of the possibility of administrative
compromise, I suspect the effects from CR2 will be visible for
months or longer as the compromised systems are used to launch
other attacks. I sent firewall logs of blocked CR2 to my ISP.
After the ninth report, they finally sent a short note that said
"at this point we have over a thousand complaints about Code Red
scanning."
By mid August the virus / worm SirCam was still maintaining
fairly steady infection levels. Since mid July SirCam had
achieved and maintained the number one virus spot. Where most
e-mail viruses and worms have had a life of several days, this one
seems unstoppable because of its ability to create unique
Subjects based on the contents of documents it finds and sends.
SirCam e-mails private documents, some of which have been quite
confidential and/or embarrassing.
Nimda followed Code Red and came after I wrote this OS comparison.
It attracted little media attention, which baffles me as the activity
levels dwarfed Code Red. I saw up to 30,000 probes a day on my LAN
at its height. In February 2002, as I write this update, it's a
rare day that passes without Nimda activity in my firewall logs,
web logs, or both.
Perhaps the summer of 2001 will be when the public (no, that's too
much to hope for), the computer industry, wakes up to the fact that
Microsoft simply doesn't understand security or consider security
in the design of their products. They are entirely reactive,
dealing with specific threats as they occur and making more basic
changes only in response to customer demands after their products
cause serious damage. Maybe my memory is failing, but I can't
think of a single security event that's affected multiple
companies and had large dollar damage figures associated with it
that involved anything except a product with a Microsoft brand
name on it. Yes, Code Red crashed lots of Cisco routers and HP
printers and some other devices, but it was Microsoft products
that allowed it to be created and spread in the first place.
Unneeded Services
If a machine is to function as a public web server then Port 80 needs
to be publicly accessible. The list of services that were running
and visible to the entire Internet on the compromised machines used
in the attack on GRC.com is appalling. The number of services
turned on on the Windows 2000 machines suggests, not surprisingly,
that Microsoft has enabled far too many services by default.
Not only are questionable Microsoft services turned on but some
machines show the addition of third party products exposing the
machines to even further dangers. Some of the services exposed to
public view on the compromised servers besides HTTP include, FTP,
Telnet, SMTP*, DNS*, Kerberos*, POP3, NNTP, NTP, TCP & UDP DCE
Endpoint Resolution (135)*, NetBIOS Name Service*, NetBIOS
Session Service*, IMAP, SNMP, LDAP**, Microsoft-DS*, spooler,
HTTP RPC Ep Map**, Microsoft Global Catalog**, MS Terminal
Server, Remote process execution, Remote login, Remote shell, PC
Anywhere. The ones with the single asterisk occur so frequently
they may be a default; the ones with a double asterisk appear
quite frequently. I have no idea what the "Microsoft" services
are but doubt the machine owners would want crackers having
access to these. I know the NetBIOS information should not be
accessible on the Internet and having servers protected only by a
PC Anywhere password from complete remote control should scare
any administrator.
The solution to these should be the same as it is in the UNIX
environment. Services that are needed for internal purposes
should be protected by a firewall. Otherwise, administrators
should turn off all unneeded services, though Windows makes this
much more difficult than on UNIX because of the way different
functions are tied together in a single "service". Those that
are needed should be kept up-to-date with security patches and
upgrades. Even if there is no firewall (not recommended), a port
with nothing listening can't be attacked. If a service or
daemon listening on a specific port has no known vulnerabilities
then it's relatively safe, at least until the next bug is found
after which it needs to be patched. Look at my
Ten Practical Security Steps to see
what I think are the most important steps to take to protect
Internet connected systems.
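A minimal sketch of the check and the fix on a Windows machine
(service names vary by version; the NNTP service, nntpsvc, is used
here purely as an example of something few web servers need):

    REM see which ports actually have something listening
    netstat -an | find "LISTENING"
    REM stop an unneeded service for this session
    net stop nntpsvc
    REM keep it from starting at the next boot (sc.exe; note the
    REM space required after "start=")
    sc config nntpsvc start= disabled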
For Windows users, it's worth looking at the chapter on turning a
Windows system into a "bastion host" in O'Reilly's
Building Internet
Firewalls, 2nd Edition. The authors recommend these steps
for any Internet server, not just dedicated firewalls. Among the
services that they recommend disabling are the NetBIOS services.
They say "Ideally you should avoid this service on bastion
hosts." Also they recommend disabling the "server for inbound
NetBIOS connections" but admit this disables "file sharing,
printer sharing, and remote execution of the Registry Editor,
Event Viewer, and User Manager." These all make excellent sense
from a security perspective but a Windows machine without Windows
networking doesn't look like much of a Windows machine to
administrators and is not accessible to remote users such as web
developers at all. In many, perhaps most cases, these steps are
likely to remove the capabilities that are the very reasons for
using a Windows server in the first place.
Windows Complexity
Windows' architectural characteristics have security implications. I
believe that due to the deliberate design decisions discussed above,
it's inherently less stable. Besides the registry and directory issues
discussed, there is the issue that Windows 2000 is estimated to
contain between 30 and 40 million lines of source code. It's huge.
It's huge because it's loaded with features, choices, options. It
doesn't matter which term you choose as they are synonyms or closely
interrelated. The result is complexity, even if the system were built
on a sound architectural foundation.
Microsoft has added all these features because they mean
flexibility. Different users and companies can set the systems
(and applications on top of the systems) to do different things
in different circumstances. To a point these are good but there
are limits. Complexity makes a system harder to build and to
learn and to use. At some point, the large majority of users are
paying both directly and indirectly, for things they will never
use. The direct costs are obvious in license costs, but the
indirect costs come from unreliable, unpredictable and insecure
systems. Most bugs affect only usability but some have direct
security consequences. Most security problems are the result of
bugs. With a huge proprietary system, there will be an unending
stream of bugs, with no advance warning when the next will be
found, or how serious it will be.
Every version of Windows gets more features and thus more
complex. It's actually quite an accomplishment to release a
major upgrade like Windows 2000 and do it with apparently fewer
bugs than the previous system. Microsoft is not entirely to be
blamed, since to some extent it gives the market what the market
wants. Historically the market has chosen features over
reliability and/or security; as the real costs of these choices
begin to become apparent to professionals outside the security
community, this may change. Microsoft does share the blame. It's
hardly a blind follower. It spends billions on advertising and
plays an important role in shaping the market. As a monopoly, it
has used not only aggressive business tactics but also illegal
anti-competitive methods to limit customer options.
Copyright © 2000 - 2014 by George Shaffer. This material may be
distributed only subject to the terms and conditions set forth in
https://geodsoft.com/terms.htm
(or https://geodsoft.com/cgi-bin/terms.pl).
These terms are subject to change. Distribution is subject to
the current terms, or at the choice of the distributor, those
in an earlier, digitally signed electronic copy of
https://geodsoft.com/terms.htm (or cgi-bin/terms.pl) from the
time of the distribution. Distribution of substantively modified
versions of GeodSoft content is prohibited without the explicit written
permission of George Shaffer. Distribution of the work or derivatives
of the work, in whole or in part, for commercial purposes is prohibited
unless prior written permission is obtained from George Shaffer.
Distribution in accordance with these terms, for unrestricted and
uncompensated public access, non profit, or internal company use is
allowed.