Overview of Internet-Related Security Techniques
Here I will discuss some of the standard security techniques that are
applicable to all web sites, not just association web sites. When
dealing with Internet-related computer security, security measures are divided
into two broad classes. Host-based security is the type of security
that any good system administrator is familiar with; it focuses on one
computer at a time. Network security deals with security measures that
are used when one computer network is connected to another. Here we
are talking about connecting association and business networks, most
often a single LAN, to the Internet. Similar techniques might be
used whenever two networks with different security requirements are
connected.
I will touch on network security first because it introduces a
concept that will be used when discussing host security. The
most familiar component of network security is a firewall.
There is a tendency today to think of a firewall as a single
dedicated appliance or general purpose computer running firewall
software. A broader and more useful definition is any
combination of routers, computers or appliances designed to
control traffic that would otherwise pass between two
networks. Further, a firewall often logs selected traffic that is
passed or blocked and may set off some kind of alarm if defined
events occur. A firewall must be located at a point that connects
two networks.
A traditional firewall (a packet filtering router)
looks at the IP and TCP, UDP or ICMP headers and, based on properties
it finds in those headers, either allows packets to pass between the
networks or blocks them. Among the most important pieces of
information used to make the allow-or-block decision are the source
and destination addresses of the packet. Also important are the TCP
and UDP port numbers, which generally correspond to specific Internet
services such as HTTP, FTP or Telnet. More advanced or modern
firewalls are likely to look deeper into the packets and use
additional criteria to make the allow-or-block decision. Such
firewalls may verify that a packet directed to an FTP server is in
fact a valid FTP packet, or may even be able, with some protocols, to
determine that a packet is for or from a particular user and pass or
block the packet based on this information in conjunction with the
header information.
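Purely as an illustration, and not any particular product's rule syntax, the allow-or-block decision can be pictured in Python as a first-match walk through an ordered rule list; the addresses, ports and rule layout below are invented for the example.

    # Minimal sketch of header-based packet filtering (illustrative only).
    from ipaddress import ip_address, ip_network

    RULES = [
        # (action, protocol, source network, destination network, destination port)
        ("allow", "tcp", "0.0.0.0/0",      "192.0.2.10/32", 80),   # inbound HTTP to the web server
        ("allow", "tcp", "192.168.1.0/24", "0.0.0.0/0",     None), # outbound TCP from the LAN
        ("block", "any", "0.0.0.0/0",      "0.0.0.0/0",     None), # default: block everything else
    ]

    def decide(protocol, src, dst, dport):
        """Return the action of the first rule matching the packet headers."""
        for action, proto, src_net, dst_net, port in RULES:
            if proto not in ("any", protocol):
                continue
            if ip_address(src) not in ip_network(src_net):
                continue
            if ip_address(dst) not in ip_network(dst_net):
                continue
            if port is not None and port != dport:
                continue
            return action
        return "block"

    print(decide("tcp", "203.0.113.5", "192.0.2.10", 80))   # allow
    print(decide("tcp", "203.0.113.5", "192.0.2.10", 23))   # block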
The process just described is called packet filtering. An advanced
form of this is known as stateful packet filtering. With stateful
packet filtering, the firewall knows even more about the structure of
IP packets and can determine that certain packets are responses to
previously seen packets. The firewall maintains tables of packets
that have been allowed to pass, and subsequent responses to these
packets are also allowed to pass.
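A minimal sketch of the idea, with an invented state table layout, might look like this; real stateful filters also track TCP flags, sequence numbers and timeouts, which are omitted here.

    # Sketch of a stateful filter: remember outbound connections, admit their replies.
    state_table = set()   # entries keyed by (proto, src, sport, dst, dport)

    def outbound_allowed(proto, src, sport, dst, dport):
        state_table.add((proto, src, sport, dst, dport))
        return True                       # policy in this sketch: all outbound traffic may leave

    def inbound_allowed(proto, src, sport, dst, dport):
        # A reply reverses the addresses and ports of a connection we have seen.
        return (proto, dst, dport, src, sport) in state_table

    outbound_allowed("tcp", "192.168.1.20", 34567, "198.51.100.7", 80)
    print(inbound_allowed("tcp", "198.51.100.7", 80, "192.168.1.20", 34567))  # True: a reply
    print(inbound_allowed("tcp", "198.51.100.7", 80, "192.168.1.20", 22))     # False: unsolicited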
A firewall may optionally use two other techniques, in addition
to packet filtering, to control and direct traffic. Proxying uses
intermediate servers to pass requests between client computers and
servers. Proxying may be used in either or both directions,
allowing internal users to access outside Internet services such
as public web servers. A proxy server could also be used to
control outside access to an inside public web site or other type
of server.
Proxy servers typically have the ability to look
deeper into packets than packet filters and may understand data
at the application level. They may be used to control destination
addresses (e.g., keep employees from reaching undesirable web
sites), improve performance by caching content, or even filter
the content that is passed (e.g., strip ads from inbound web pages or
block outgoing e-mail containing disallowed words). Usually
client software configuration or user procedures need to be
modified to work with a proxy server, though newer transparent
proxy servers are becoming more widespread.
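The destination-control and content-filtering roles of a proxy might be sketched as follows; the blocked host and the filtered word are hypothetical policy entries, not recommendations.

    # Sketch of proxy-style controls: check the destination, then filter the content.
    BLOCKED_HOSTS = {"undesirable.example.com"}     # hypothetical policy list
    BLOCKED_WORDS = {"confidential-project-name"}   # hypothetical outbound mail filter

    def allow_request(host):
        return host not in BLOCKED_HOSTS

    def allow_outgoing_mail(body):
        return not any(word in body.lower() for word in BLOCKED_WORDS)

    print(allow_request("www.example.org"))                  # True
    print(allow_request("undesirable.example.com"))          # False
    print(allow_outgoing_mail("Quarterly totals attached"))  # True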
The other technique is Network Address Translation or NAT. This is used
to translate unroutable or invalid internal IP addresses into valid,
routable IP addresses. It can allow a single IP address to service
a medium sized internal LAN. There are two types of NAT. Static network address
translation maps a single internal IP address to a specific external IP
address; it may map a fixed IP address and port combination between the
inside and the outside. Dynamic address translation maps specific internal
IP address and port combinations to currently available outside IP address(es)
and ports.
Static NAT may be used for connections initiated either by inside
or outside computers but is typically used to make an internal
server available to the outside world. Dynamic NAT is always for
connections from the inside to the outside, normally to allow a
single IP address to service dozens or hundreds of computers. To
service thousands of internal computers, multiple exterior IP
addresses would normally be needed.
Dynamic NAT only, without
any packet filtering, proxying or static NAT routes, is
effectively a security policy that allows all outbound traffic
(including outside replies) and no inbound traffic initiated by
outside computers. Unless the computer providing NAT is also
protected by packet filtering, the NAT computer itself could be
quite vulnerable to outside attacks.
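A rough sketch of dynamic NAT's bookkeeping, with placeholder addresses, shows why unsolicited inbound traffic has nowhere to go: there is simply no table entry for it.

    # Sketch of dynamic NAT: many internal hosts share one public address.
    PUBLIC_IP = "203.0.113.1"
    next_port = 40000
    nat_table = {}           # (internal ip, internal port) -> public port
    reverse = {}             # public port -> (internal ip, internal port)

    def translate_outbound(src_ip, src_port):
        global next_port
        key = (src_ip, src_port)
        if key not in nat_table:
            nat_table[key] = next_port
            reverse[next_port] = key
            next_port += 1
        return PUBLIC_IP, nat_table[key]

    def translate_inbound(dst_port):
        # Only ports handed out for outbound connections translate back;
        # anything else has no mapping and is effectively dropped.
        return reverse.get(dst_port)

    print(translate_outbound("192.168.1.20", 34567))   # ('203.0.113.1', 40000)
    print(translate_inbound(40000))                     # ('192.168.1.20', 34567)
    print(translate_inbound(40001))                     # None: unsolicited inbound traffic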
When defining a security policy that will be implemented by a
firewall, there are two basic approaches: 1) allow specific types
of traffic determined to be good and block everything else, or 2)
block specific traffic determined to be bad and allow everything
else. Security professionals regard the second approach as
unworkable. It is maintenance intensive, as a steady stream (almost
daily) of new threats needs to be evaluated and possibly blocked.
It's also guaranteed to allow undesirable traffic from the time
that a new threat is discovered until it is blocked. Given the
reality of typically overworked system administrators, new
threats may never be blocked after the firewall is installed.
The use of "blocked" and "allowed" in the following discussion
assumes the first approach. Each site will, or should, have a
different mixture of allowed outbound and inbound traffic. One
of the problems of using commercial firewalls is that they are
often installed with vendor defaults. When this occurs, the site
has accepted a vendor suggested security policy rather than
determining its own security needs. Further, crackers will likely
be able to determine what firewall is being used, and once this is
determined they will know the firewall's strengths and weaknesses.
Once the allowed types of traffic have been determined, the goal
of the firewall is rather straightforward: allow all the
acceptable outbound traffic out and the responses or answers
to this traffic in, and allow the acceptable inbound connections
in and their outbound responses out.
It's the matching of responses to the original requests that
is sometimes tricky. It does no good to allow outbound web
requests if the requested pages are not allowed back in.
If the organization has one or more public servers such as web or
FTP servers, the firewall needs to allow connections initiated by
outside computers to reach these. Assuming the organization has
a single public web server and no other public servers of any
kind, the firewall should allow TCP traffic to the IP address of
the web server and to the port serviced by the web server (likely
80), as well as allowing outbound reply traffic from the web
server's IP address and port number. It also needs to determine
which inbound traffic consists of replies to previously allowed
outbound requests and allow these. The firewall should block all
other inbound traffic: any protocol other than TCP, even to the
web server; all traffic to other ports on the web server; and all
traffic to other IP addresses. This example is
too simplistic for even the most basic Internet connection
because it does not include e-mail. Incoming TCP traffic to at
least one other IP address and port (25) is needed to allow
incoming e-mail.
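For illustration only, that policy might be written down as data; the two server addresses below are placeholders, and replies to allowed outbound requests would be matched by a stateful table like the one sketched earlier rather than by explicit rules.

    # The simple policy above expressed as an ordered, first-match rule list.
    POLICY = [
        # (action, protocol, destination address, destination port, comment)
        ("allow", "tcp", "192.0.2.10", 80, "inbound HTTP to the public web server"),
        ("allow", "tcp", "192.0.2.11", 25, "inbound SMTP to the mail relay"),
        # replies to allowed outbound requests are admitted by the stateful table
        ("block", "any", None, None, "everything else inbound"),
    ]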
A firewall or firewalls should divide the network world into
three areas. Complex internal networks will have more than three
areas but this will serve for our purposes. The outside world is
the Internet. All traffic from the Internet is suspect and
regarded as potentially hostile. The internal network has most
of the organization's computers and all of its confidential or
sensitive data. All network traffic on the internal network must
be hidden from the rest of the network world.
The third area is called either a perimeter network or DMZ (DeMilitarized
Zone). All publicly accessible (from the Internet) servers go
on computers in the DMZ. All computers in the DMZ plus any
computer that is part of the firewall should go through a process
called hardening. Hardening is a series of steps, specifically
host based security steps, that create a limited function, very
secure computer. Such a computer is often referred to as a
bastion host.
Hardening starts with a fresh operating system install. No
optional components that are not specifically needed for the computer's core
functions are installed. Any currently available security
patches are applied. All services or daemons that are not needed
are turned off; preferably the executables are removed from the
system. On UNIX, a custom kernel with all unnecessary features
removed should be built.
Daemons or services that have not been disabled should run with
the minimum privileges that allow them to perform their intended
functions.
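On a UNIX-like system the minimum-privilege idea can be sketched for a home-grown daemon: bind the privileged port while still root, then permanently give up root. The account name "wwwrun" below is a placeholder, not a standard.

    # Sketch of privilege dropping: bind the privileged port first, then give up root.
    import os, pwd, socket

    def bind_and_drop(port=80, user="wwwrun"):       # "wwwrun" is a placeholder account
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("0.0.0.0", port))             # needs root for ports below 1024
        listener.listen(5)
        entry = pwd.getpwnam(user)
        os.setgid(entry.pw_gid)                      # drop the group first, then the user
        os.setuid(entry.pw_uid)
        return listener                              # the service now runs without root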
Everything that is not necessary for
the functions the computer is to perform in the DMZ or firewall
is removed. In particular, be sure to remove any compilers
after necessary customizations have been made. Powerful
scripting languages like Perl should be removed if they are not
needed to support essential functions such as a web server.
Examples or other tutorial material for the services that will be
running, such as web servers, should not be installed; bugs and
excessive capabilities in scripts supplied with Microsoft's IIS,
ColdFusion and Apache have all resulted in security compromises.
Only the minimum number of user accounts necessary to administer
the system are created. The passwords used should be very good.
On UNIX systems this would typically be one account for each system
administrator; generally the administrators should
log in as themselves and then su or sudo as necessary. Since Windows
lacks a fully functional su capability, it's debatable whether
it's better to share a single administrator account or have
several administrator equivalents. Consider renaming the
administrator account.
File and directory permissions throughout the system should be
reviewed and made as restrictive as practical.
Host-based intrusion detection, such as Tripwire or a similar
system, should be installed. The purpose is to create a database
of cryptographically secure signatures of every executable and
configuration file on the system. The reference database should
be made before the newly installed bastion is connected to the
network. The database should be stored off the computer and
regularly (daily) compared with new databases created directly
from the computer in its current state. Done properly, this will
identify every file that has changed.
Unexpected changes need to
be investigated because they are evidence of a possible
intrusion. After all changes have been verified as normal, the
updated database may be moved off the computer and used as a new
baseline for future comparisons to minimize the number of changes
that need to be reviewed. The original baseline should be
permanently preserved in a safe location so that if a compromise
is ever suspected, cumulative changes since the original install
can be examined.
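The core of such a system can be sketched in a few lines: walk the file system, record a cryptographic hash of each file, and compare today's snapshot against the stored baseline. SHA-256 and the /etc starting point are example choices for the sketch, not necessarily what Tripwire itself uses.

    # Sketch of host-based integrity checking: hash files, compare against a baseline.
    import hashlib, json, os

    def snapshot(root="/etc"):
        db = {}
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        db[path] = hashlib.sha256(f.read()).hexdigest()
                except OSError:
                    pass                  # unreadable files are skipped in this sketch
        return db

    def compare(baseline, current):
        changed = [p for p in baseline if p in current and baseline[p] != current[p]]
        removed = [p for p in baseline if p not in current]
        added   = [p for p in current if p not in baseline]
        return changed, removed, added

    # Baseline taken before the bastion goes on line, then stored off the machine:
    #   json.dump(snapshot(), open("baseline.json", "w"))
    # Daily check against the stored baseline:
    #   print(compare(json.load(open("baseline.json")), snapshot()))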
System logs should be written to locations that even the root
user on the bastion host cannot access. An extremely secure
approach in a UNIX environment would be to write them both to a
locally connected printer and to a non-networked, serially
connected computer whose only function is logging. Writing the
logs to a true write-once device would also suffice. Using syslog to
log to a centralized logging computer that is also hardened and
can only be accessed by syslog and local logins is
another good option. As soon as practical, system logs should be
written to an unchangeable archival medium such as CD-R. You
want logs to be accessible, so that intrusions that may have occurred
over an extended period of time can be investigated, and
unchangeable, so that they may be used as evidence in legal proceedings.
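Sending a copy of each log entry to a separate, hardened log host can be illustrated with Python's standard syslog handler; the host name below is a placeholder for the central logging computer.

    # Sketch of logging to a dedicated, hardened log host over syslog (UDP port 514).
    import logging
    import logging.handlers

    logger = logging.getLogger("bastion")
    logger.setLevel(logging.INFO)
    # "loghost.internal.example" is a placeholder for the central logging computer.
    logger.addHandler(
        logging.handlers.SysLogHandler(address=("loghost.internal.example", 514)))

    logger.info("httpd restarted by admin")   # the copy leaves the bastion immediately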
Windows logging options are severely restricted compared to UNIX.
The default location of the log files can be changed; if they can
be directed to true write-once media, this may be the best
option. CD-R configured to work like a normal disk drive
doesn't qualify: though data written to the drive can't be
changed, it can be logically replaced and made inaccessible, with
the total drive capacity simply decreasing. Frequent backups of the event
logs are typically the best practical option, but Windows provides
no mechanism by which a cracker with administrative access can be
prevented from altering the logs. Third party products may allow
log entries to be copied to a secure location as they are
created.
Returning to network security and the DMZ, this part of the
network needs to be protected from both the internal network and
the Internet. Also, the internal network should be protected from
the DMZ. Specifically, the only traffic that should be allowed
into the DMZ is connections, initiated from the outside or the
inside, to the publicly accessible servers. In addition,
connections from the inside to the DMZ that represent services
relayed through the DMZ, and authorized administrative
connections, should be allowed. These might include system
administrators as well as web authors who update web content on a
bastion host web server. The use of web author accounts on a
bastion host will increase convenience but also weaken security.
The only connections that should be initiated by computers in the
DMZ to internal computers are clearly defined services such as a
relay SMTP (e-mail) server. No connections initiated by outside
computers directly to internal computers should be allowed at
all. Except for services such as SMTP and DNS that must make
connections to the Internet to perform their normal functions,
connections initiated by computers in the DMZ to the Internet
should not be allowed. This last restriction reduces the
opportunities for staff to use a bastion host for anything other
than its intended purposes, and for intruders who may have
compromised the bastion to use it to launch attacks on other
computers.
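The traffic rules just described can be summarized as a small table of permitted flows between the three areas; the service names are examples, and a real policy would be expressed in the firewall's own rule language rather than in Python.

    # Sketch of the three-zone policy as a set of permitted (source, destination, service) flows.
    ALLOWED_FLOWS = {
        ("internet", "dmz",      "http"),   # public web server
        ("internet", "dmz",      "smtp"),   # inbound mail to the relay
        ("internal", "dmz",      "http"),   # staff reading the public site
        ("internal", "dmz",      "ssh"),    # administrators and web authors
        ("dmz",      "internal", "smtp"),   # relay forwarding mail inside
        ("dmz",      "internet", "smtp"),   # relay sending mail out
        ("dmz",      "internet", "dns"),    # name resolution
        ("internal", "internet", "any"),    # outbound staff traffic (via NAT or proxy)
    }

    def flow_allowed(src_zone, dst_zone, service):
        return ((src_zone, dst_zone, service) in ALLOWED_FLOWS
                or (src_zone, dst_zone, "any") in ALLOWED_FLOWS)

    print(flow_allowed("internet", "internal", "smtp"))   # False: never direct to the inside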
Basically the DMZ is a buffer zone. Though extensive efforts are
made to make these computers secure, if the firewall is
configured correctly these are the only computers directly
exposed to outside initiated connections. As such they are
much more likely to be compromised than internal computers.
Because they can be compromised more easily they aren't trusted.
One other key component of network security is network-based
intrusion detection. Host-based intrusion detection looks for
evidence of intrusions by identifying unexpected system changes
and reviewing system logs; such intrusion detection typically works
after the intrusion has been accomplished. Network intrusion
detection attempts to identify intrusions while they are
happening and before they succeed. Network intrusion detection
generally looks at two things in current network traffic. First,
it looks for patterns in the packets, such as a port scan. A port
scan occurs when a potential intruder uses scanning software to
send packets, which may be deliberately malformed, to a range of
IP addresses and/or ports. The intent is to learn what computers
are active and what services and operating systems they are
running. Depending on the systems found, prepackaged exploits
can be used to do anything from gaining root access in minutes
(without knowing any user ID or password) to crashing the
computer.
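A simple, and simplistic, sketch of scan detection: flag any source that touches more than some number of distinct ports within a short window. Both thresholds below are invented for the example.

    # Sketch of port scan detection: many distinct ports from one source in a short window.
    import time
    from collections import defaultdict

    WINDOW = 60        # seconds (illustrative threshold)
    MAX_PORTS = 20     # distinct ports before we call it a scan (illustrative)
    seen = defaultdict(list)    # source ip -> [(timestamp, destination port), ...]

    def record_packet(src_ip, dst_port, now=None):
        now = now or time.time()
        entries = [(t, p) for t, p in seen[src_ip] if now - t <= WINDOW]
        entries.append((now, dst_port))
        seen[src_ip] = entries
        if len({p for _t, p in entries}) > MAX_PORTS:
            return "possible port scan from " + src_ip
        return None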
Besides looking at packet patterns, network intrusion detection
systems (IDSs) look into packet contents and match them against a
database of known probes and exploits. This is much like virus
detection. Here probes are looking for vulnerabilities much more
specific than the general information provided by port scans. A
common type of probe that might be detected by a network IDS is
one that attempts to locate a specific CGI script with known
vulnerabilities.
A variety of software "packages" are available that combine
databases of known vulnerable web scripts with the ability to
scan for them automatically. Just as there are stealth and
polymorphic viruses that try to evade virus detection, the most
sophisticated port and CGI scanners use techniques that attempt
to avoid detection by IDSs. Careful attention to web logs should
reveal CGI/script scans, but few webmasters have the time to
review logs at this level, and most web log analysis packages
focus on legitimate traffic rather than security issues.
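A crude version of that log review can be automated; the sketch below assumes an Apache-style access log and uses a few well-known probe strings as stand-ins for the far larger signature database a real IDS would carry.

    # Sketch of scanning a web access log for requests that look like CGI probes.
    import re

    # Example signatures only; real scanners and IDS databases are far larger.
    PROBE_PATTERNS = [r"/cgi-bin/phf", r"\.\./\.\.", r"cmd\.exe", r"/_vti_bin/"]
    probe_re = re.compile("|".join(PROBE_PATTERNS), re.IGNORECASE)

    def suspicious_lines(logfile="access_log"):        # Apache-style log assumed
        with open(logfile, errors="replace") as f:
            return [line.rstrip() for line in f if probe_re.search(line)]

    # for line in suspicious_lines():
    #     print(line)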
An exploit is a packet or series of packets that will cause a
target computer to exhibit specific undesirable behavior. It
could simply crash the target computer, but it might also be
the packet necessary to cause a buffer overflow in sendmail or
another vulnerable server, resulting in remote root access.
Remember, there are automated tools that tell intruders what
operating system is running, often to specific release levels. If
these systems haven't been patched, specific vulnerabilities in
specific servers are thus known. The port scan that identified
the OS has also told the intruder which, if any, vulnerable
servers are running. An exploit could cause the target computer
to send a stream of packets to another computer, as the
distributed denial of service attacks in late 1999 did.
A network IDS can be configured so that potential attacks it
has detected are logged. It can also be made to set off
various alarms. These could include sounding an audible alarm on
one or more computers, dialing a pager, sending an e-mail,
putting a visual indicator on one or more computer screens, or
other actions. The alarms can be pretty much anything that can
be triggered by a computer, limited only by the capabilities of
the intrusion detection author.
The difficult part is finding the right thresholds at which
alarms should be sent. Today's Internet is a hostile place. My
tiny LAN typically sees one to six probes a day; so far all have
been blocked by my firewall. Highly visible sites may experience
nearly continuous probes. You want the IDS to ignore routine
casual probes and alert you to persistent attempts or successful
attacks. If the IDS is too sensitive, administrators will become
bored and ignore warnings when a real threat does occur. If it's
too insensitive, a successful attack might be ignored until visible
damage has been done.
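One simple way to damp the alarm rate is to report only sources that keep coming back; the threshold, addresses and mail host in this sketch are all placeholders.

    # Sketch of threshold-based alerting: ignore casual probes, report persistent sources.
    import smtplib
    from collections import Counter
    from email.message import EmailMessage

    ALERT_AFTER = 10           # blocked packets from one source before alerting (illustrative)
    counts = Counter()

    def record_blocked(src_ip):
        counts[src_ip] += 1
        if counts[src_ip] == ALERT_AFTER:
            send_alert(src_ip + " has been blocked " + str(ALERT_AFTER) + " times")

    def send_alert(text):
        msg = EmailMessage()
        msg["Subject"] = "Firewall alert"
        msg["From"] = "ids@example.org"        # placeholder addresses and mail host
        msg["To"] = "admin@example.org"
        msg.set_content(text)
        with smtplib.SMTP("mail.example.org") as s:
            s.send_message(msg)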
Outsourcing web site hosting in no way assures that the site will
be any more secure than an internally hosted site. If you are
required to use a VPN, tunneled SSH sessions and/or Secure FTP to
access and maintain a hosted web site, then it's likely
that your hosting service is making serious efforts to
provide a secure environment. If you maintain a hosted web site
with normal FTP and Telnet, then your hosting service does not
provide a secure web environment. VPN access should be
transparent to your users such as the web authors who maintain
the site. SSH and Secure FTP will require modified user
procedures and may preclude the use of some web authoring
products directly on the hosted site.
There are probably thousands of compromised computers connected
to the Internet right now whose owners are completely unaware of the
compromise. In some cases the compromise may affect only the
compromised computer or computers connected to it locally. Others
have been prepared to participate in a distributed denial of service
attack at some time in the future, to be determined at the whim of
the intruder. Some are being actively used by intruders to
compromise yet more systems while hiding their home base. In 2000
and beyond, everyone who connects any computer to the Internet with
a full-time connection has a responsibility to protect their
computers from network-based attacks. Sooner or later, computer
owners who fail to take basic security precautions may find themselves
being held responsible for attacks launched from their computers even
though they had no knowledge of those attacks.
Computer and network security, especially for Internet connected
computers, is a never ending process. Installing products such
as properly configured firewalls and suitably hardened servers is
just the first step. Administrators need the time to regularly
review logs and investigate anomalies. They should also be on
security mailing lists or track new developments that may be
relevant to their sites by other means. Software may need to be
upgraded or patches applied. Over significant time periods, the
security architecture should be reviewed and may need to be
changed to cope with a regularly changing security environment.
I know that most technology staff and managers with an
association background will read the foregoing and that reactions
will vary from "all of that really isn't practical" to "you've
got to be joking". Prior to the summer of 2000, I simply didn't
belive that someone could gain remote root access without knowing
any valid username or password. After reading detailed
descriptions of how this can be done in a few minutes by someone
with limited technical skills but with access to the right
cracking software my attitudes changed dramatically. Though the
specific vulnerabilities and details of the attacks vary, there
are many computers with similar vulnerabilities. Good host based
security, such as might be used on the machine that hosts the
association management system, simply isn't adequate for Internet
servers.
Though all the techniques discussed above may not be practical for
all associations, each association that has a web site (which should
be all associations in the not too distant future) should at least
get a book on Internet security that is specific to its computing
environment and review all the suggested steps. No one will do
everything recommended by any specific book, but don't dismiss steps
just because they seem unfamiliar or inconvenient. There is an
enormous amount of intelligence being applied to breaking into computers.
Just because you don't understand how a particular computer or
network configuration can be attacked does not mean that it can't be.
Briefly looking at an extreme example may be illustrative. Let's
consider a small association with a one-person staff who uses NT
Workstation on the association's only computer. That computer is
connected to the Internet via an SDSL line and the association's
web site is on that computer.
When the computer was purchased, it came with a high capacity
tape drive, which was immediately put to use for daily backups.
These have been tested occasionally, and some real restores
have been performed to retrieve specific files that were needed.
Before the Internet connection was established, software
firewalls were investigated and the commercial version of a
leading "personal" or workstation firewall was installed; it
includes notifications that are comparable to intrusion
detection. Since there is only one computer, all file and print
sharing services were disabled, as well as some other services
that were determined not to be necessary. GRC.com and some other
sites that test computers were used to verify that the firewall
was working and that only the web server was visible externally.
Microsoft's Personal Web Server is used as the web server. When
it is installed, a new user is created for anonymous web access;
this user is not placed in any user groups. Since there is only one
real user and it's not practical to close everything and log off
then back on every time some administrative function needs to be
performed, the user renames the administrator account and uses it
as her normal login account. A very good password is chosen and
not shared with anyone. The password is also placed in a sealed
envelope and kept in the association's safety deposit box; the
President has access to this. File permissions are reviewed. NT
Workstation's absurdly lenient settings are systematically
changed, limiting most areas to system and administrator access.
The web document tree is made readable by the anonymous web user,
but otherwise this user is given no rights to anything. Experimentation
shows that the anonymous user also needs execute rights for
the \winnt\system32, \perl\bin
and \perl\lib directories to run the forms that are used on the site.
Since this computer is used for everything, including the association's
limited management system, it obviously cannot be stripped of things
not related to the web server. This does increase risks, but it
brings us back to where we started: security is about trade-offs.
Every situation is unique to some degree but also has many things in
common with other environments. If there were not many things common
to computers and networks, no generalizations about computer security
could be made. Here the choice is between performing necessary functions,
with some increased risk, and not performing them at all. Hosting would normally
be the preferred approach for a very small organization; in this
hypothetical case we assume that was explored and that the low priced
hosting options that were found provided no real security.
The environment described in this extreme case does not look like
the one described above. What it does have in common is that each of
the techniques that are typically part of Internet server security was
considered and applied to the current case. Some were not applicable
or not feasible. The result is a reasonably secure setup relative to
the small association's limited assets and low profile. This is
another important part of security: besides involving trade-offs, it
is a matter of matching the measures taken to the assets being protected
and the threats to which those assets are exposed.