Linux, OpenBSD, Windows Server Comparison:
Development Model, Bug Fixes, Security & Reliability
Looking at the respective development models of Linux, OpenBSD,
and Windows will help us understand how they affect reliability
and the closely related issue of security.
OpenBSD
OpenBSD is the smallest system with perhaps the simplest
development model. Compared to Linux and Windows the number of
developers is small. Compared to Linux they are closely
coordinated. The opening lines labeled "Goal" on their
Security web page
are instructive:
OpenBSD believes in strong security. Our aspiration is to be
NUMBER ONE in the industry for security (if we are not already
there). Our open software development model permits us to take a
more uncompromising view towards increased security than Sun,
SGI, IBM, HP, or other vendors are able to. We can make changes
the vendors would not make.
They tell us their goal, but "if we are not already there" is as
close as they come to stating their achievements. They do not
claim to be secure or more secure than any competing system.
They leave it to others to say that they have accomplished their
goal. Because they make no claims, every bug they find and fix
quickly and publicly moves them closer to their goal, without
calling into question the validity of claims already made.
Nothing is absolute, and they are fully aware that security is an
ongoing process, as discussed further down the page.
Though they continue to add new features and improve existing
functions, given their goal, fixing problems that are found is
the most immediate priority, as evidenced by the examples already
discussed and one of the claims made further down the page:
"Statements like 'This problem was fixed in OpenBSD about 6
months ago' have become commonplace in security forums like
BUGTRAQ." This suggests their auditing process is effective; as
that process is internal, we can't know how quickly found
problems are fixed. From the external examples of the Sendmail
and IP Filter bugs, we know that their response is quick.
Both of these bugs illustrate an issue faced by both OpenBSD and
Linux. Even a minimum install of either system includes many
pieces that are provided by other parties. When you consider all
the GNU utilities and TCP/IP servers that are actually not part
of the core OS, it's possible a majority of the code comes from
other sources. To the extent the original authors are responsive
to reported bugs, it's very much in the interest of the OpenBSD
and Linux developers to wait for a fix from them. This
eliminates duplicate work and
diverging source trees. A product like IP Filter that is
normally compiled into the kernel as a static module may require
some modifications. The OS authors naturally want to limit
their changes to those necessary to incorporate the product into
the OS. My impression is that the original authors are typically
highly responsive; many have built their professional reputations
on the open source products they have created. The open source
licensing model does give the OS developers the right to fix
problems in other distributed products if the original authors
are not responsive.
OpenBSD, unlike either Linux or the Windows family, is a single
product developed by one group and upgraded every six months in
June and December. Since I've used OpenBSD, every version has
been released on time. All supported platforms, which are
numerous given the NetBSD heritage, are released at the same time
on the same set of CDs.
OpenBSD has an undisputed leader, Theo de Raadt, and an official
web site at
OpenBSD.org. Between
releases two source branches are developed and available for
download. There is always a current branch, which includes the
most recent changes, including possible new bugs, instabilities,
and partially developed new features. A separate patch branch is
also available. It includes the last release plus any bug fixes
deemed important enough to release before the next version.
Unlike changes to the current branch, patches are tested before
being released. A
bugs mailing list includes bug reports and follow-up discussion
and a security-announce mailing list announces the release of
security related fixes.
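For those who want to track either branch directly, the source can
be fetched over anonymous CVS. The following is a minimal sketch;
the mirror name and release tag are examples, so check OpenBSD.org
for current values:

    # Fetch the -current development branch.
    cvs -d anoncvs@anoncvs.openbsd.org:/cvs checkout -P src

    # Fetch the patch branch for a given release, e.g. the 2.9 tag
    # in mid 2001 (tags follow the OPENBSD_x_y convention).
    cvs -d anoncvs@anoncvs.openbsd.org:/cvs checkout -rOPENBSD_2_9 -P src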
For products distributed as part of OpenBSD, the
security-announce list is the only source of security information
you are likely to need. (Presumably if you are experiencing a
bug in a component you are using, you'll check the site to see if
a fix is available or report it if not.)
Security Notification Lists
If you add third party products to your systems, especially
servers that expose ports to attack, you'll need to find another
source for security news. The Bugtraq and NTBugtraq lists are
rather comprehensive discussions of UNIX and Windows bug related
issues in general. They are high volume lists with much technical
discussion and argument regarding the importance of bugs under
discussion. Though comprehensive, they are too distracting for
most administrators who just want to know what they have that
needs fixing. CERT Advisories cover the most important security
issues but focus mainly on those issues that are likely to lead
to the compromise of other systems. The SANS Institute's
evolving security newsletter deserves consideration as a
single source for security news.
IP Filter Bug
To return to an example I've dealt with elsewhere, let's review
"IPF contains a serious bug with its handling of fragment
caching." It was serious in that any client that could access a
public service through an IP Filter firewall could construct
fragmented packets that would allow that client to reach every
port on the machine including even switching between TCP and UDP
protocols. From a firewall perspective this is very serious
because a machine that should be completely protected except one
or more public ports, is "wide open" to any client that has
established a connection to the public service.
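To make the scenario concrete, here is a minimal sketch of the kind
of IP Filter ruleset such a machine might run (the interface name
and addresses are illustrative, not from any real configuration).
The intent of rules like these is that only the public port is
reachable; the fragment cache bug defeated that intent:

    # ipf.rules sketch (illustrative only, not a hardened ruleset)
    block in all
    block out all
    # allow the machine's own outbound connections
    pass out quick on fxp0 proto tcp from any to any keep state
    # expose only the public web service; "keep frags" engages the
    # fragment cache the bug affected
    pass in quick on fxp0 proto tcp from any to 192.0.2.10/32 port = 80 flags S keep state keep frags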
From the perspective of what is necessary to exploit the bug, we
might reach a different conclusion. To actually be exploited the
following conditions must exist. A server must be running one or
more public services and be protected by an IP Filter firewall.
The same server must be running one or more private services
(protected by the firewall) that contain a known and exploitable
security vulnerability, e.g., a buffer overflow. Next, the
operator of a client that can reach the public service must know
or guess that the public service is behind a vulnerable IP Filter
firewall and is running a protected vulnerable service. This might
be guessed from the public service's headers but is by no means
certain.
The remote client operator must be able to write fairly
sophisticated custom code that first builds the necessary
fragmented packets to bypass the IP Filter checks and then
packages within those packets an exploit that takes advantage of
the private service's known vulnerability. Finally, the
vulnerable private service must not be protected by any
additional security mechanism such as TCP wrappers or an Immunix
(Linux) security system that provides generalized protection
against buffer overflows.
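TCP wrappers are worth a brief illustration, since they are exactly
the kind of additional, independent layer that can blunt such an
attack. A minimal sketch (the daemon names and networks are
illustrative) denies everything by default and then allows only the
expected clients:

    # /etc/hosts.deny (deny anything not explicitly allowed)
    ALL: ALL

    # /etc/hosts.allow
    # administrative hosts may reach ssh
    sshd: 192.0.2.
    # internal clients only for IMAP
    imapd: 10.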
Nothing in the descriptions of the bug that I read suggested that
it included any reliable remote method of gathering the necessary
background information. Thus a potential intruder risks a lot of
work for nothing if there is no vulnerable private service
running or if it's otherwise protected. They risk getting caught
if the firewall is not a vulnerable version of IP Filter, or if,
lacking their own IP Filter firewall to test against, their
fragmentation code contains a bug that causes even a vulnerable
IP Filter to block and log the packets. Thus, I prefer to
characterize this as an interesting theoretical bug, as
exploiting it requires a very high level of technical skill plus
the possession of multiple pieces of information that cannot be
gathered remotely prior to successfully exploiting the bug.
Linux
Linus Torvalds wrote the original Linux kernel, owns "Linux" as a
registered trademark, and is still Linux's leader. Unlike
OpenBSD, Linux is not unified and it wasn't until I was writing
this that I learned that the Linux kernel source, including the
latest versions, is maintained at
www.kernel.org. The home
page states "Linux is a clone of the operating system Unix,
written from scratch by Linus Torvalds with assistance from a
loosely-knit team of hackers across the Net. It aims towards
POSIX and Single UNIX Specification compliance."
There are dozens of web sites devoted to Linux. Aside from the
kernel site, it's not clear that any is more "official" than the
others. The kernel appears to be under pretty much continuous
development and more than one version is being developed
simultaneously. Versions 2.2.0 through 2.2.19 were released
between Jan. 25, 1999 and Mar. 25, 2001. Versions 2.3.0 through
2.3.51 and 2.3.99-pre1 through 2.3.99-pre9 were released between
May 11, 1999 and May 23, 2000. Versions 2.4.0 through 2.4.6 were
released between Jan. 4, 2001 and July 3, 2001. All the
distributions that I know of used 2.2 kernels at some point.
The 2.3 kernels were not widely distributed and were the
development precursors to 2.4, which included several major
changes. Several
distributions began using 2.4 kernels shortly after their
release.
Linux is normally obtained as a distribution which is an
installation routine that includes the kernel, various more or
less standard utilities, and whatever else the distribution
creator feels is appropriate. www.linux.org used to have links
to the download sites for six different distributions: Caldera,
Debian, Slackware, TurboLinux, Red Hat (Fedora), and SuSE. Now
they list various ways that you can get or buy Linux.
Other distributions include CentOS, Gentoo, Mandriva, Ubuntu, and
variations on distributions, such as Trustix, a pre-hardened
version of a Red Hat distribution that lags somewhat behind the
current Red Hat release.
EnGarde is a secure
distribution of Linux not based on any pre-existing
distribution. A detailed comparison between this distribution
and OpenBSD would be interesting. Where OpenBSD has merely
disabled telnet, FTP, NFS, and the RPC related services by
default, EnGarde has removed them entirely. The entire install
CD contents are only 134MB. OpenSSH, built by the OpenBSD team,
and OpenSSL are provided as replacements for telnet and FTP, for
secure remote administration and file transfer. The standard
UNIX mail transfer agent, Sendmail, which has had a
history of security related bugs, has been replaced by the more
secure Postfix. The system install process gives the user the
option to automatically start web, mail, DNS, IMAP and POP
servers, but all are off by default. The IMAP, POP, and web
servers are automatically configured with SSL. An SSL web
management interface is included so the server can be set up and
run without ever having a keyboard or monitor connected.
There are no workstation options and it would be a major project
to turn an EnGarde machine into one. Both network and host based
intrusion detection are included. In some important ways EnGarde
has moved beyond OpenBSD, and really is not a general
purpose computer operating system. It would be interesting to see
how OpenBSD's higher quality code base offsets the more secure
configuration choices of EnGarde. The services OpenBSD turns on
and EnGarde leaves off, and the programs OpenBSD includes and
EnGarde does not, can be removed from OpenBSD; that's precisely
what my hardening page is about. When these steps are taken,
though, it's not default OpenBSD.
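As a minimal sketch of what those hardening steps look like (the
variable and file names are from memory of the OpenBSD of that era
and should be checked against the installed rc.conf and
inetd.conf), the idea is simply to turn off or comment out
anything that will not be used:

    # /etc/rc.conf.local (disable daemons that will not be used)
    sendmail_flags=NO
    portmap=NO
    nfs_server=NO

    # /etc/inetd.conf (comment out services such as ftp and telnet)
    #ftp    stream  tcp  nowait  root  /usr/libexec/ftpd     ftpd -US
    #telnet stream  tcp  nowait  root  /usr/libexec/telnetd  telnetd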
Each Linux distribution faces issues similar to those OpenBSD
faces when it comes to dealing with security related bugs: fix
them itself or wait for the product developers to provide a fix.
The difference is that, unlike OpenBSD, which maintains its own
kernel and core utilities, the Linux kernel and most programs are
separate development projects. The only components that a
distributor typically has a large role in developing or modifying
are the install processes, which are normally proprietary. A
project like EnGarde may add a significant component, such as its
web management interface, which appears to be proprietary.
The quality of the installation process, including the
thoroughness with which an installation recognizes and correctly
configures a wide variety of hardware components, the range and
intelligence of the configuration options presented to the user
during installation and especially the ease with which a working
system can be set up is an important part of what distinguishes
one Linux distribution from another. The resulting configuration
of the installed system including what components are installed,
where they are installed, and whether or not they are sufficient
to use the system as it's likely to be used, also distinguish the
various distributions. Early in 2001, all the major
distributions reached an agreement on a standard directory
structure and the locations for the common system components.
I've checked the six distributions listed at Linux.org and each
has a list of security advisories somewhere on their web site.
SuSE has an easy to spot link "Security Announcements" as one of
their standard navigation aids. The others required actively
looking through the site to find their security lists. Some are
only available as archives of a security-announce e-mail list.
When I checked in July 2001, Slackware listed only 4 items for
2001, suggesting they may not be staying up-to-date. The others
typically listed a few dozen items, but not all are necessarily
security bug fixes, so it would take a great deal of research to
equate these lists and see how each distribution is doing
compared to all the publicly reported Linux security issues.
Linux kernel development is a purely volunteer, non-commercial
activity. Though all distributions sell media, several appear to
be not-for-profit. Some, including Caldera, Red Hat, and
TurboLinux, are clearly commercial enterprises intending to make a
profit by selling bundled support services with the sale of media
or customer registrations. These vendors tend to have multiple
offerings, typically configured to perform a fairly specific role
in a commercial environment such as business desktop, development
workstation, or server.
The GNU license under which the Linux kernel and most core
utilities are distributed requires making source code available
if the products are modified or redistributed. Though there
could be other ways to meet this requirement, all distributions
currently have freely downloadable CD-ROM images of their install
CD-ROMs. In any business environment, when staff costs are
accounted for, it's typically cheaper to buy the CD-ROMs
than to download them and burn CDs from them. All distributions
typically include some additional components that may have
different licenses. The commercial distributions may include
additional commercial products as trial versions, which may
require separate license fees if used on a production basis.
Microsoft
In contrast to Linux and OpenBSD, which are fundamentally not
commercial endeavors, Microsoft is the largest software company
in the world and is purely commercial. It's unlikely Microsoft
does anything without considering its bottom line impact,
including its philanthropic contributions, which should be
regarded as part of its marketing efforts, i.e., an attempt to
improve its public image. Unlike OpenBSD, which is reluctant to
make claims about its software or actively promote itself,
claiming a variety of advantages to be had by using Microsoft
products. The Blue Screen Ad cited previously includes the
following phrases: reliable Microsoft, full control of large-scale
system installations, total lockdown, business data is available
to users either locally or remotely and is secure, the complete
OS for the digital economy.
Each one of these is an unqualified absolute except perhaps
"reliable" which is subsequently qualified with numerical data.
Related to "secure", several features are listed. Readers are
encouraged to believe that the possession of some security
related features makes a system secure. Microsoft seems unaware
that no usable computer can ever be fully secure, that security
is a matter of degree that varies with context, and that security
should be matched to the resources that need to be protected.
This may only be advertising copy, but it does create a public
position that Microsoft subsequently needs to defend. Since the
release of NT, Microsoft has been claiming that NT and its
successors are secure operating systems. Despite
hundreds or thousands of security related bugs found and
fixed over the years, Microsoft continues to claim its systems
are secure.
The general public and even most of the computer industry have
little real understanding of computer security. Most want to
believe that security is a state that can be achieved and then
forgotten about. Few are prepared to believe that security is
inherently an ongoing process that requires continued effort.
Microsoft panders to this misconception, telling its potential
customers that if they buy Microsoft products they will be
secure. Though Microsoft customers who fail to patch their
systems bear some blame when their compromised systems are used
to attack other systems, Microsoft deserves most of the blame
because it tells its customers that if they buy a Microsoft
system, it will be secure. Why shouldn't a customer believe the
advertising that leads to the purchase of a system? If a system
is secure, why should the customer need to check for security
updates? Microsoft has no one but itself to blame for this
state.
More than any other computer company in the world, Microsoft
tries to convince the public and the computer profession that
"one size fits all." They want us to believe that the different
Windows operating systems are all basically the same whether
intended for a consumer home PC or a multi processor server in a
major
e-commerce site. They are so desperate to keep us thinking about
the "Windows" product line rather than its different incarnations
that, after it was clear the trade press was going to refer to
the NT line as NT rather than Windows, they changed the name to
Windows 2000.
XP is a development from NT and 2000, and with its advent the
consumer line of 95, 98, and ME is supposed to end. Eventually
there will be a single XP line ranging from a consumer version to
business workstations and, at some point, servers, though for the
time being Microsoft seems to be settling on a 2002 server line.
They try to convince us that if you know one, you know them all.
In different operating system products, specific features
relevant to the intended market are added. Licensing and
performance specific optimizations that allow more processors,
larger memory space, and greater disk capacity are all that
separate them. To a large extent this is true, as the kernel
code base and user interface are (or will be) largely the same
code.
They try to scare computer professionals into believing that if
you're not running Microsoft servers, you may not be able to get
the next application or jump on the latest trend, as if that
really should be an important factor in server selection. They
don't want IT professionals thinking about: Should a server that
performs a limited and very defined set of functions be based on
a complex OS with a rich mixture of services that cannot be
easily disassociated? Should a server on a LAN provide the same
services and capabilities as one that provides public Internet
services? Are the management tasks and skill sets for a server
fundamentally the same as those for a fleet of corporate desktop
PCs or a
really be the best solution for everything from a consumer home
PC to a 64 bit multi processor dedicated web or database server?
Can a system based entirely on proprietary code, hidden from
public scrutiny, be reliably counted on to do exactly what the
manufacturer claims and the customer hopes it will do and no more
and no less?
Because of the public stance Microsoft takes in its marketing
campaigns, every security related bug acknowledged undermines
its simple but absolute claims regarding security. Is it
plausible that as of mid July 2001, across Microsoft's entire
product line of operating systems, server applications,
development tools and office products, there have been only
38 bugs with security implications? As of July 13, that's how
many security announcements they made. In the same time frame,
comparatively tiny OpenBSD has reported 14. This number includes
all the products developed by other groups but distributed with
OpenBSD. Are we to conclude that OpenBSD really has, relative to
its size, far more security bugs than Microsoft? No. The more
responsive OpenBSD is to security related bugs, the better it
looks; this may lead to a tendency to over report. Among the
security
related bugs were: "Programs using the fts routines can be
tricked into changing into the wrong directory." "a non-
exploitable buffer overflow was fixed in sudo(8)." "rnd(4) did
not use all of its input when written to." I'm not saying that
these, properly understood, don't have some security
implications. I'm sure they do, but I doubt they are likely to
have a significant practical effect on anyone, let alone a
meaningful number of OpenBSD users. OpenBSD is very serious
about security. If a bug could have security implications,
regardless of how limited, they fix it and report it as a
security issue. Actually, they fix all bugs that are identified.
I recently read that one version of Internet Explorer had, at one
point, well over 100,000 unresolved bugs.
Microsoft responds in a reasonably timely fashion to reported and
serious security bugs. It's primarily such bugs that their
security alerts describe. This doesn't mean they necessarily get
the first fix right. If Microsoft doesn't think a bug is
serious, and only they make that determination, they respond in
whatever they decide is an appropriate manner. This may be to
delay any fixes until the next Service Pack. Microsoft decides
when to release a Service Pack and even then they may not get
that right as shown by SP1 and the disastrous SP6 for NT 4.
There was a wait of well over a year between SP3 and SP4, over which
time there was a significant accumulation of hot fixes. Those
with limited resources, who waited for SP4, faced a growing
number of potentially serious security issues.
From a business perspective this may be entirely reasonable.
After all, Microsoft's first obligation is to its shareholders.
The only reason for a commercial enterprise to exist is to make a
profit. It doesn't make good business sense to divert resources
from the development of new features and products to unprofitable
bug fixes unless the bug creates a real or highly visible
vulnerability for Microsoft's customers. It doesn't even make
good sense to delay the release of a Service Pack by expending
resources on hot fixes that must be separately tested, documented,
and deployed, if Microsoft's assessment is correct that the
problem is not likely to cause any practical harm to its
customers.
There can be little question that the computer market, even
including businesses with significant security exposures,
repeatedly chooses functionality over security. In this
environment, it cannot be profitable to pursue reliability and
security at the expense of functionality. That's why only a not-
for-profit can actually make reliability and security the true
top priority and stick with this priority at the expense of
others. To be fair and inclusive, we'd also have to consider
the security specialty firms like Argus and their "trusted"
operating systems; Argus' intent is to make a profit by building
as secure an OS as practical with proprietary technology.
Setting priorities is about making choices and by definition, no
organization can have two top priorities. By definition, any
commercial enterprise must have the top priority of either
surviving or making a profit. It doesn't make much difference
which way you choose to put it. Over the long run the two are
the same for a commercial endeavor, as no company that
continuously fails to make a profit will survive indefinitely.
Whether a company believes the best way to make a profit is to
deliver value to its customers, or sees business as a zero sum
game that can only be won at the expense of others, these are
strategies; the underlying goal remains making a profit.
Microsoft has been enormously successful for a number of years
now primarily because it has understood what the computer
software market valued most: functionality, or feature lists,
the ability to do whatever a customer asks and, over time, to do
so for an ever larger number of customers. By understanding the
market, it has successfully pursued its primary goal of making
profits. What it hasn't done is build secure systems.
System Tradeoffs
Any system, including software systems, will include a number of
factors which have various relationships to each other. There
may be no apparent correlation between two factors or there may
be loose or strong positive or negative correlations between
different factors. This will depend on the kind of system being
examined and the specific factors. For software in general and
operating systems in particular, some of the factors are
reliability, security, scalability, ease of use, ease of
learning, functionality, performance, and cost of development.
All the listed factors have some degree of positive correlation
with cost of development. In other words, a specific improvement
in any of these areas will require the expenditure of some
resource and thus increase the product's cost of development,
regardless of whether those costs are measured in money or time.
The other listed factors are desirable, but cost will usually be
regarded as negative.
Reliability and security have some positive correlation because
reliability is generally a prerequisite for security but there is
by no means a one to one correlation. Reliability is not
sufficient to assure security so they are separate factors. As
soon as you start building security specific features
(functionality) there may be a reduction in reliability. There
may be some positive correlation between performance and
reliability as some bugs may negatively impact both but after a
point performance enhancements are going to come at the expense
of some other feature. Reliability in terms of consistency of
behavior will enhance both ease of learning and ease of use as
unpredictable systems make both difficult. To the extent that
lack of reliability is the result of bugs or mistakes or
inefficiencies, correcting these deficits may benefit one or more
different factors. As soon as reliability becomes a feature, as
in redundant failover systems that require specific development,
it is just one more factor competing for limited resources.
Except for some of the limited positive correlations mentioned,
after a certain point all the factors are going to have negative or
no correlation with each other. For example, both security and
performance can be improved in a system but at a minimum this
will increase development costs and may negatively impact other
factors. For a set development cost you can optimize for a
single factor at the expense of all others, which is likely to
result in an unusable system, or you can attempt to balance
multiple factors but beyond certain basics you can't improve any
single factor without negatively impacting others. It is
however, possible to make mistakes or include other
inefficiencies in a system, which increase costs without
improving any other factor.
One factor that cannot be ignored in Microsoft operating systems
is Microsoft's desire to simply make them different than other
operating systems to hinder porting Windows applications to other
operating systems. This has had a clear negative impact on
reliability, security, ease of use and even ease of learning
beyond a superficial mechanical level.
What are some of the other factors that Microsoft emphasizes in
its software development and advertising? While I think their
primary focus has been functionality, they've not pursued that to
the exclusion of all other factors. It seems pretty clear that a
secondary emphasis is ease of learning. Describing their
software as easy to use seems to be a major marketing emphasis.
They mean what I call ease of learning; I don't recall any
Microsoft advertising focused on the automation of repetitive
tasks. They have Visual Basic for Applications, which might have
some uses in this area but I think this is primarily an
application integration tool that allows applications to use
functions from other applications; it's primarily a means to
enhance functionality.
At least in specific areas where it's generally perceived as
important, Microsoft has pursued performance. Here I'm thinking
primarily of database and web servers. I seem to recall SQL
Server ads touting its performance. Since, by at least some
measures, their web server IIS has consistently ranked at the
top in performance, and this is not achieved by accident, they've
clearly expended resources to do this.
As long as I can remember, Microsoft has wanted its customers to
believe that its products provide value, that is, greater
functionality relative to cost than competing products.
This used to be a relatively easy sell, because compared to
competing UNIX products, it used to be true. Against PC
competitors they'd typically tout longer feature lists. I wish
I'd kept archives of old trade publications so I could quote
specific language from their ads. As I don't generally save any
of these, I have to work from memory and general impressions. (I
clipped and saved the "Blue Screen" ad because I could not
believe it when I first saw it.)
And finally, at least since they developed NT, Microsoft has been
claiming its products are secure. As the Internet has grown in
importance and especially in the last two or so years as the
frequency of Internet based attacks has come to such prominence,
everyone pays lip service to security. There is no question,
Microsoft has included some significant security features in
their NT and subsequent products and that's part of the problem.
Microsoft's flawed view of security is that security can be
achieved with a list of security features. The sales pitch is
that users merely need to activate the features Microsoft
provides, and they will have secure systems.
Earlier this year, Steve Gibson of grc.com took exception to
Microsoft's plan for including raw sockets in the consumer
version of XP to be released in the fall of 2001. At the mid 2001 Def Con,
Thomas C. Green of The Register interviewed Microsoft Security
Program Manager, Scott Culp, who made the
following
statement:
What we're saying is, you're going to see them regardless -- raw
sockets are utterly irrelevant to the question of DDoS attacks on
Windows XP, because if someone can compromise a
machine....they'll have every ability they want. Control of the
machine is the hurdle; the availability of raw sockets is not the
hurdle. Once you've got control of the machine, if you don't have
the raw [socket functionality] there you can add it.
This echoes things said in Microsoft's discussions with Gibson.
Regardless of who proves to be right regarding the quantitative
impact of XP's inclusion of raw sockets on DDoS attacks,
Microsoft is taking a fairly clear, if shortsighted, view of
security. As far as they are concerned, once an intruder has
control of your machine, the game is over. They are unwilling to
consider any functionality compromises that might make the
intruder's job more difficult. If they saw the potential value
of disabling functionality, they would never have built NetBIOS
as a collection of related services that can't be disabled
selectively without disabling major chunks of the OS
functionality.
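For readers unfamiliar with the term, "raw sockets" simply means
the ability of a program to build IP packets itself, including a
forged source address, rather than letting the OS construct the
headers. A minimal sketch in Python (used here purely for
illustration; the XP interface under discussion is the Winsock
API) shows why the capability is normally gated on privilege:

    # Illustrative only: opening a raw socket requires administrator
    # or root privilege on any mainstream OS, which is Culp's point:
    # an attacker who already has that privilege can do far worse.
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
    # With IPPROTO_RAW the program supplies complete IP headers, so
    # the source address in outgoing packets can be anything at all.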
As security has become important to a growing number of Microsoft
customers, Microsoft has moved to comfort them. It approaches
security like other product issues; it adds a new security
feature. Specifically it adds a mediocre firewall and tells the
press that the only real security issue is keeping intruders off
the system.
Given all of the foregoing, the real value Microsoft places on
security has to be rather low. It's clear to me that it ranks
below profits, functionality, ease of learning, differentiation
from other systems, and most likely performance. At best it's
about fourth place and likely lower. In terms of software
specific features, i.e., not considering costs and profit,
security is less important to Microsoft than functionality, ease
of learning, and differentiation. There are probably no other
software factors that tend to be more at odds with security than
functionality and ease of learning.
Like most of its customers, Microsoft pays lip service to
security. It puts enough emphasis on security to make somewhat
plausible claims, so that the customers who have selected
Microsoft products for other reasons can repeat these claims to
their bosses or auditors and appear to be doing their job with
regard to security issues. Microsoft knows that it's not going
to compete successfully for any business where security is the
first or second factor in the selection process, and probably not
even where it is third, and it does not try. Microsoft might
believe, and certainly wants its customers to believe, that the
Microsoft Windows NT and now 2000 Server operating systems are
"secure enough" for their intended uses. Given the developments
of the Summer of 2001, even this low standard might be
questionable.
Previously, a variety of factors were shown to be related to
development costs: any factor can be improved by devoting more
development resources to it, and by significantly raising
development costs several different factors can be improved. At
some level of effort, it should be possible for Microsoft or
another large company, to develop an operating system that is
feature rich, easy to learn, performs well and is still
reasonably secure. Microsoft is the largest software company in
the world; is there reason to believe they have expended enough
resources to achieve the functionality lead that they clearly
hold while remaining competitive regarding security? My short
answer is that I know of no such evidence.
Despite Microsoft's size, the only thing that really matters is
how much effort Microsoft has devoted to Windows NT and 2000
development. Advertising, staff costs, distribution and many
other corporate costs are not relevant to the discussion. OpenBSD
and Linux don't advertise and the developers work for free. About
the only costs incurred that in any way resemble Microsoft's are
the very limited packaging costs. What matters are the number of
developer hours devoted to the competing products. I've seen
estimates that two billion dollars' worth of time had been
devoted to Linux development by early 2000. As with so many other
numbers, I have no way of knowing whether this covers only the
Linux kernel and a small number of Linux specific utilities or is
supposed to include the rather substantial GNU utilities that
also come with OpenBSD and other open source systems.
Returning to Microsoft, no staff time spent on developing Office,
IIS, SQL Server, Exchange, Visual Studio, or even Windows 95, 98,
and ME counts at all. NT ("New Technology") was developed from
scratch without reusing the code base of the consumer Windows
line. It was fully 32 bit from day one, and 16 bit support was,
and is, provided through emulation. That's why early versions of
NT had such compatibility issues with older software. As the
products moved forward, Microsoft was able to expose a larger set
of common APIs, even though the underlying code was different.
This is what allows most, but not all, products to run across the
Win32 line.
Windows 2000 is estimated to contain between 30 and 40 million
lines of code. This is a direct outgrowth of the NT project that
began sometime in the early 90's. Both Linux and the BSD family
have roughly contemporaneous origins. While NT attempted to be
all new, BSD and Linux had much more modest goals. All they had
to do was develop new code to provide the same functionality that
existing and well documented UNIX systems already provided.
OpenBSD contains something between 1.4 and 2.3 million lines of
code necessary to compile the kernel (including all platforms).
OpenBSD is modular in nature and includes a variety of
significant products such as Apache, the GNU development tools
and utilities, Perl, and the X Window System, most of which are
optional. Without detailed knowledge of the source tree it is
very hard to make accurate estimates of how much source code is
used to create OpenBSD or even what should be counted. Further,
even if the Windows kernel could be separated from the GUI, this
would be meaningless, as the Windows kernel is not useful without
the GUI, whereas OpenBSD's kernel is fully functional without
one.
While OpenBSD began as a separate project in 1996, it started
from the existing NetBSD project. NetBSD began as an organized
project in 1993 but includes source code derived from the
University of California at Berkeley, BSD projects through the
1980's. It appears that the effort to develop a complete open
source BSD operating system that behaves like UNIX but contains
none of the actual source code covered by the UNIX source
licenses dates to the early 1990's.
The code of both Linux and OpenBSD is subject to outside review
by anyone who is interested. Though most users of these systems
never review any source code, given the size of the communities
interested in these projects, it seems likely that hundreds or
even thousands of pairs of eyes, not part of the development
team, have reviewed some part of OpenBSD, and many thousands in
the case of Linux. Obscure drivers may not be
looked at by anyone but the author, but it seems likely that all
the core pieces of both Linux and OpenBSD have been reviewed by
at least several independent programmers.
The estimates of Windows 2000 code size (30 - 40 million lines)
almost surely include IIS and other components that don't seem
essential. Unlike Apache which is entirely independent of the
OpenBSD kernel, some advanced Windows 2000 system functions rely on
IIS, which is why it's installed by default. Certainly some of
Windows' massive code base is not required, but only Microsoft
knows for sure what is or is not required. The whole of Windows
2000 is roughly twenty times the size of the core of OpenBSD.
Keeping in mind that Windows NT and 2000 are rather monolithic
and OpenBSD and Linux are very modular, the massive size of 2000,
in addition to providing much more functionality, also provides
much more complexity and many more interactions. The real
complexity is not a simple linear ratio based on the difference
in the sizes of the code bases, but some power of the ratio of
the size difference.
However much time is spent on an average line of OpenBSD code,
more would need to be spent on each line of Windows 2000 code to
assure similar levels of reliability or security.
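To give a feel for what "some power of the ratio" means, here is a
rough sketch, using my own illustrative numbers rather than
measured data, that assumes complexity grows with the number of
potential interactions between components, i.e., roughly with the
square of size:

    # Illustrative arithmetic only; the line counts are the rough
    # estimates quoted above, and the squaring is an assumption about
    # how interactions grow, not a measured fact.
    openbsd_core = 2.0    # million lines, order of magnitude
    windows_2000 = 35.0   # million lines, midpoint of 30 to 40 million
    size_ratio = windows_2000 / openbsd_core
    interaction_ratio = size_ratio ** 2
    # prints roughly 17.5x the code but over 300x the interactions
    print("size: %.1fx  interactions: %.0fx" % (size_ratio, interaction_ratio))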
Another factor that affects the apparent number of lines of code
is what I call churn. For example, some of the management
utilities that were added to NT 4 have been removed, replaced by
MMC modules in Windows 2000. If the older utility is discarded in
its entirety, then any time spent on it should also be discarded.
If some of the underlying lines such as registry maintenance as
opposed to user interface are kept from one version to the next,
then the time developing the preserved lines would be included in
the time allocated to the project.
To the best of my knowledge none of the necessary figures are
available from any source so it becomes pure speculation as to
the resources that have actually gone into the development of
these products. Because of the unknowns there is no way to
quantify how the size or complexity of the systems affect
security. What we do know is that OpenBSD is smaller and simpler
than Windows, that it has a development history at least as long
as Windows and is based on a model several times older than
Windows. We know that security is an OpenBSD priority second
only to quality which is a priority that significantly overlaps
with security. In contrast, security in Windows is one of
several mid level priorities competing with several higher
priorities that are generally antithetical to security. The
actual track record suggests Microsoft has put in only a fraction
of the resources necessary to create a secure system. Nobody
really knows what the various viruses, worms, and other intrusions
have actually cost, but it's my recollection that for the past few
years, all the security related incidents that have had large
dollar figures associated with them, whether those were simply
cleanup costs or direct losses, have been directly related to
Microsoft products. I'm thinking of Melissa, LoveBug, SirCam,
Code Red, and the East European hackers who stole credit card
information from approximately 40 sites exploiting well known IIS
weaknesses. I can't remember a single comparable incident
related to any UNIX or open source product. Does Microsoft even
understand what security means?
We then return to the other factors in the development model.
Nothing in the Microsoft model suggests security is a top
Microsoft concern. Just about everything in the OpenBSD model
suggests security is number one or two on their list. It would
be as naive to think any Microsoft Windows operating system
product is as secure as OpenBSD, as it would be to think OpenBSD
provides the same level of functionality as does Windows.
While we don't have enough quantitative information to compare
Windows and OpenBSD, we do have one key piece of information for
Linux that allows at least a limited quantitative comparison. The
Linux kernel is roughly 2.4 to 4 million lines of code which
makes it roughly one tenth the size of Windows 2000. As the
Windows 30 to 40 million lines almost certainly includes
utilities and components that would correspond to parts of Linux
that are not part of the kernel, a better estimate might be that
Windows has roughly five times as much code for its core
functions as Linux does in its kernel. It seems reasonable,
though possibly wrong, to assume the 2 billion dollar estimate for
Linux, which was made in early 2000, applies to the kernel and
core components and not the peripheral servers and utilities such
as Apache and GNU which are not actually part of the Linux
project. Making no allowances for complexity increasing
geometrically as size increases, for the modular nature of Linux
and the monolithic nature of Windows, or for the more ambitious
Windows goal of an entirely new system rather than reverse
engineering an established standard, Microsoft needs to have
spent ten billion dollars developing the core components of NT
and 2000 to be on a comparable reliability and security footing
with Linux. This is the lowest plausible number; the correct
number is probably several and perhaps many times that. Does
anyone know what
Microsoft has spent developing the core components of NT and
2000?
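The back-of-the-envelope arithmetic behind that figure is simple
enough to make explicit; every input is one of the rough estimates
already quoted, not a measured number:

    # All figures are rough estimates from the text, not real data.
    linux_kernel_effort = 2e9         # dollars of volunteer time, early 2000
    windows_core_to_linux_ratio = 5   # Windows core assumed about 5x the kernel
    comparable_spend = linux_kernel_effort * windows_core_to_linux_ratio
    print(comparable_spend)           # 10 billion dollars, the floor cited above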
Little in the Linux development model suggests security is an
especially high priority. Security should be about as important
to Linux as it is to the UNIX model on which Linux is based.
Given UNIX's long history, its simple but practical security
model, its highly modular nature resulting in a system that lends
itself well to a variety of hardening techniques, there seems
little reason to expect that Windows NT or 2000 should provide
comparable practical security.
Open Source Code Review
One other point applies to all open source products, in contrast
to all proprietary products, where source code is not publicly
available for examination by anyone who wishes to look at it.
The proponents of proprietary code make the point that the bad
guys can review open source code and that this gives them a road
map to any weaknesses in the open source systems and that the bad
guys don't have this advantage with proprietary source code.
This is true but almost irrelevant.
Crackers build their reputations by finding new weaknesses and
ways to exploit them. Few keep secret the bugs they've found. It
seems very likely that the source code for every service and the
kernels of all the open source systems have been examined in
detail by would-be intruders as well as the authors and some
users. The bugs that existed have been found, exploited and
permanently fixed. New products, features and continued
development of existing products assure a stream of new bugs to
exploit. The open source model pretty much assures that this
code will be examined by a variety of individuals with very
different interests and orientations.
Sometimes, possibly even frequently, the black hats will be first
to find the security weaknesses. This gives them a window of
opportunity to exploit the bug before it's fixed. Given the speed
with which knowledge about bugs spreads, this window of
opportunity may be hours or weeks. Either way the bug will be
fixed in relatively short order and thus permanently closed, at
least until further development is done on the affected code.
Given the slowness with which most systems are patched, and the
rapid release cycles of open source products compared to the slow
Microsoft release cycle, from a security perspective it's much
more important that bugs be found and fixed than who finds them.
Remember that Microsoft sold the same NT 4 install disks for
approximately five years. Over time, the install procedure
became more complicated as service packs, the option pack and
subsequent service packs had to be applied in the correct order.
New Linux kernels are available every few weeks or even days.
Complete updated OpenBSD systems are available every six months
and a stable patch branch free of significant bugs is available
continuously. The major Linux distributions typically release one
or two completely new releases each year without the need to install
growing lists of patches. One distribution, the security oriented
Trustix, maintains up-to-date downloadable ISO CD images with all
patches included, so if you install from their current version,
you install a system free of known bugs.
From NT 4's initial release through Service Pack 6a, over more
than 4 years, there were a few feature upgrades but no fundamental
changes to the product. Patches and service packs are mostly
about problem fixes. I doubt anyone knows if Linux and OpenBSD
users are better about applying patches than Windows users;
they might be because there is never the fear that a Linux or
OpenBSD patch will destroy a system as sometimes happens with
Windows systems. During this time both Linux and OpenBSD were
growing products that fixed existing problems and added new
features. Given that Linux and OpenBSD upgrades come frequently
and at no or very low cost, I'd bet that open source users
upgrade their systems significantly more frequently than Windows
users. When new Windows versions are available, there are significant
upgrade costs.
There is some chance that a hardened criminal, who will keep
secret any bugs they find so that they can exploit them to the
maximum extent possible, will find an open source bug. As
Microsoft products so amply demonstrate, no access to source is
necessary to find and exploit major security bugs. How many
major IIS bugs were found in the first half of 2001? I've lost
track but the Index Server bug that may result in remote
administrative compromise has already affected hundreds of
thousands of systems. When was the last time a comparably
serious bug was found in Apache? Four and a half years ago.
Jan. 1997 was the last time a bug that allowed arbitrary execution
of code,
which could result in root compromise, was found in Apache. In
Jan. 1998, a less serious buffer overflow was found. According to
eWEEK (no longer available):
In the three and a half years since then, Apache's only
remote security problems have been a handful of denial-of-service
and information leakage problems (where attackers can see files
or directory listings they shouldn't).
Because so many have reviewed all but the newest open source
code, the odds of any significant security bug remaining in older
products are small, and of obvious bugs, nil. With proprietary
products, the worst and most obvious (easy to find) bugs usually
are found early, but new bugs continue to turn up in products two
and more years old. Where any open source product will be almost
fully debugged within a few months of its release, proprietary
products can never achieve a comparable level of freedom from
bugs, because they are never subjected to a thorough review by
any but a limited number of employees of the company that created
the product, employees who are always working under the pressure
of deadlines imposed by the need to make a profit. The hardened
criminal is much more likely to find and exploit bugs in a
proprietary system than in an open source system.
Occasionally an entirely new type of bug is found. The Sendmail
"signal handler" bugs were an example of this. When this happens,
the open source model virtually assures that existing products
are examined and fixed if vulnerable. The proprietary model
virtually assures the reverse. Except for specific exploitable
vulnerabilities that are found, can anyone honestly believe that
Microsoft would get all the "stable" NT code out of storage and
review it again in light of the new disclosures? Any responsible
profit based company would add the new problems to any checklist
for new software but one would have to be dreaming to expect the
same company to engage in costly code reviews of already released
products because someone found a new class of bugs that might be
present in existing products. If there are no actual problem
reports or specific vulnerabilities discovered related to
security, no action will be taken on old code by a profit based
company.
As a general development model, I don't see how anyone who
considers all the issues can possibly conclude that proprietary
source code is more secure than open source.
Since the discussion of the impact of the development model on
security has been almost entirely related to the security impact
of bugs, as opposed to the development of new security features
such as new access control mechanisms and encryption methods, all
that has been said applies equally to reliability as impacted by
bugs. Even assuming all the architectural defects inherent in
Windows were absent, i.e., assuming that Microsoft had independently
reverse engineered UNIX as the basis of its Windows systems but
within the framework of a proprietary commercial development
model, it would still be less reliable and less secure than the
open source UNIX like systems.