We all know that the Web and network operating systems are riddled with holes that unscrupulous folks are just dying to get to (and break through). So whether you're the Webmaster for one machine or a whole network, you'll need to take steps in advance to stay ahead of the game. There are plenty of books and online resources to teach you the specifics you need to defend yourself against breaches of security, holes in your system, and systems crackers' workarounds, but this issue's column will give you the big-picture issues for securing and monitoring a site from a Webmaster's viewpoint.
Locking The Barn Doors Too Late
Just as many system administrators don't start doing real backups--or don't get the budget for the equipment and software--until a major, unrecoverable meltdown occurs, so too does security stay on the back burner until an incident provokes a reaction. By then it could be too late.
My baptism by fire came nearly two years ago with an e-mail from the CERT Coordination Center (see "CERT To the Rescue") noting that the /etc/passwd file from our NIS (Network Information Services) host had been discovered in a compromised account at the University of Chicago. Alarmed, chagrined, and annoyed, I set out to develop strategies to prevent this from happening again.
There are many categories of attack, and many weaknesses to be exploited, but there are also a number of responses you can create to deter invaders. What you can do may be limited by your OS, your access to the server, and your level of expertise. Some relatively simple actions, though, can pay huge dividends.
Many of the following tips are aimed at users of Unix systems, who, unfortunately, seem to face the most challenges. This is due partly to Unix's habit of exposing its innards to programmers, partly to its predominance (until recently) as the Web server OS of choice, and partly to the gaping holes that many vendors left in shipping OS software.
Pulling Plugs and Plugging Holes
It has often been said that the most effective insulator is air--that is, if one device isn't plugged into another, there's no way data can flow between the two. Certainly, organizations as diverse as the fictional Gotham City Police Department (battling the Riddler in "Batman: The Animated Series") and the National Security Agency (battling European crackers in Cliff Stoll's book The Cuckoo's Egg) have discovered the problem with unintentionally leaving lines to the outside world open.
But the Web is generally about communicating with the outside world, so you can't just cut the twisted pairs. However, a Web server generally has many more services running on it than are required; you can pull those plugs.
Turning services and access off. The first thing I do when configuring a new computer is turn off every kind of access that's not needed. The inetd.conf file and its variants contain lists of services assigned to specific ports. Go through this list and turn off anything not specifically needed on the machine, especially services like tftpd, walld, and sprayd.
Generally, you'll only need telnet, SMTP, and a handful of others, depending on the OS. Make sure that you always have access to a console login in case you turn off a service that the OS needs! Do a "kill -HUP" on the inet daemon to reload the settings.
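On a SunOS-style system, the trimmed portion of inetd.conf might look something like this (paths and service names vary by vendor, so treat this fragment as illustrative rather than a drop-in):

```
# /etc/inetd.conf -- leave only what the machine actually needs;
# comment out the rest, then reload inetd.
ftp      stream  tcp      nowait  root  /usr/etc/in.ftpd     in.ftpd
telnet   stream  tcp      nowait  root  /usr/etc/in.telnetd  in.telnetd
#tftp    dgram   udp      wait    root  /usr/etc/in.tftpd    in.tftpd
#walld/1  dgram  rpc/udp  wait    root  /usr/etc/rpc.rwalld  rpc.rwalld
#sprayd/1 dgram  rpc/udp  wait    root  /usr/etc/rpc.sprayd  rpc.sprayd

# afterward:  kill -HUP <inetd pid>   (makes inetd reread its configuration)
```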
Most Unix systems are preconfigured to allow "secure" (i.e., root login) access through any tty device. The ttytab file should be set up to disallow secure logins from anything but the console: edit the file, removing the "secure" keyword from every entry except console (and changing "on" to "off" in the status column for ports you don't use at all), then do a "kill -HUP 1" to make init reread the table and reset terminal access.
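On a SunOS 4.x machine, the edited ttytab might contain entries along these lines (the getty arguments are illustrative; keep whatever your system already uses):

```
# /etc/ttytab -- "secure" (root-capable) logins on the console only
console  "/usr/etc/getty std.9600"  sun      on   local secure
ttya     "/usr/etc/getty std.9600"  unknown  off  local
ttyp0    none                       network  off
```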
As a starting point for controlling access, install the miraculous TCP Wrapper, a package written by Wietse Venema (archive at http://cicero-www.larc.nasa.gov/ICE/software-list/descriptions/tcp_wrapper-7.2.html) that acts as a gatekeeper to TCP services like FTP and finger. TCP Wrapper sits between a service and its execution, permitting or denying access according to configuration files in which you can name individual hosts, ranges of host names, or IP numbers. For example, one network could be allowed access to FTP and telnet while all others are restricted. TCP Wrapper also uses syslog to log every activity that passes through it, for future analysis.
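A minimal default-deny pair of TCP Wrapper control files might look like this (the network number and domain below are placeholders, not recommendations):

```
# /etc/hosts.deny -- refuse everything wrapped by default
ALL: ALL

# /etc/hosts.allow -- then open only what's needed
in.telnetd, in.ftpd : 192.168.1.            # one trusted network
in.fingerd          : LOCAL, .example.com   # local hosts plus one domain
```

The trailing dot on "192.168.1." matches the whole network, and a leading dot on ".example.com" matches any host in that domain.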
An officemate suggested a nifty hardware solution to the same problem for Windows NT systems: Install two Ethernet cards in a single machine. Then route all TCP/IP services through one card, restricting any "local" protocols, such as hard-disk mounting and so forth, and route all "local" protocols and services through the other card. This eliminates any potential for remote break-in to those services by cutting access entirely.
Macintoshes have a unique gift in this area: Because other networking protocols came so late and piecemeal to the platform, there's no known way to break into the system through TCP/IP. A contest run by Quarterdeck's StarNine division (makers of WebStar) offered a prize to anyone able to retrieve a publicly accessible file off their Web site. The trick was that the file name was not provided, and a user would have to break into the system simply to discover the file name. No prize was claimed.
Routing out. If you have access to the router that sits between your servers and the outside world, similar measures for turning off unneeded services should be taken. At the router level, you can "bounce" packets containing addresses or looking for services that you don't like. Getting lots of telnet attempts from a particular network? Filter them out. Don't need the outside world to ever FTP in? Turn off those ports.
In "A Sample Router Configuration" (below) you'll find a minimal configuration for conquering the two biggest problems easily solved at the router: packet spoofing and NFS hijacking.
It's possible for an intruder to disguise incoming packets so they appear to come from IP networks inside your local network, which might give the attacker access to services available only to local users. This kind of packet spoofing (or IP spoofing) can be defeated at the router by disallowing any inbound packets that claim to be from networks the router controls. Likewise, if someone breaks into one of your systems intending to launch attacks from it on other sites--a typical tactic, unfortunately--an outbound filter should restrict internal packets to addresses in your LAN's own ranges.
CERT has issued many advisories about the potential for certain kinds of services to be overpowered remotely without anyone's knowledge. A few lines of router configuration can turn off rsh, rlogin, portmap, and NFS access, significantly reducing the potential for outside attacks. (Users running NIS can also set up the "securenets" file in /var/yp to allow only local networks access to the NIS server.)
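The /var/yp/securenets file takes one netmask/network pair per line; a sketch (the addresses are placeholders):

```
# /var/yp/securenets -- serve NIS maps only to these networks
255.255.255.255  127.0.0.1
255.255.255.0    192.168.1.0
```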
Brief Bits. A few general notes on major pieces of software and logins:
Sendmail is notoriously open to exploitation, due primarily to its tremendous complexity and flexibility. Install sendmail version 8 rather than the vendor's sendmail, and use smrsh--not the worldwide organization devoted to James Bond's downfall, but the "SendMail Restricted SHell," which restricts programs executed through .forward files and aliases to those located in a specified directory.
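In a sendmail 8 configuration this amounts to pointing the "prog" mailer at smrsh instead of /bin/sh. The path and flags below are illustrative of a typical sendmail.cf entry, not gospel; check the smrsh documentation shipped with your distribution:

```
# sendmail.cf -- deliver "prog" mail through smrsh, which only runs
# programs found in its designated directory
Mprog, P=/usr/libexec/smrsh, F=lsDFMoqeu, S=10, R=20, A=sh -c $u
```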
Install wuftpd (the Washington University in St. Louis ftpd), which allows much tighter configuration and control. The standard ftpd uses chroot() to create a false root filesystem for anonymous users; wuftpd extends this by letting you define any number of "guest groups," whose members log in with real ids but are confined the same way. This prevents whole classes of authorized users from having inappropriate system access.
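A hypothetical ftpaccess excerpt showing the idea (the class names, group name, and limit are placeholders):

```
# /etc/ftpaccess (wuftpd)
class  local   real,guest   *
class  remote  anonymous    *
guestgroup webauthors        # members log in with real ids but are chroot()ed
limit  remote  10  Any       # cap concurrent anonymous sessions
```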
If possible, don't use NIS; use NIS+ instead. Likewise, use a secure version of NFS in order to avoid problems. This is easier said than done, especially under SunOS 4.x, unless you can take the machine offline while changes are made.
Limit local logins. The biggest problems reported by CERT--and by vendors in their security updates for operating systems--concern users who can log in to the system and attack vulnerabilities. By tightly restricting access to the system, you're reducing potential intrusions, too.
Web Controls. While major holes in the NCSA httpd a couple of years ago certainly made life more entertaining, Web servers are not as vulnerable to infiltration these days, because of the intense commercial interest in the software. The holes have been identified and patched, and, as of this writing, no commercial or freeware server has a known major outstanding unpatched flaw. Nevertheless, Web servers are still open to attack. But you can greatly reduce a server's vulnerability by following these simple rules:
Never run programs you don't know. Though I haven't heard of software being distributed specifically to leave back doors open, I would never run a noncommercial compiled application on my Web server, and I suggest you don't either. Even though Web servers are configured to run as "nobody," there are enough gaps that even a clever piece of software--one that otherwise accomplishes a seemingly worthwhile purpose--could bring in a Trojan horse.
Don't pass input to the command line without parsing. A typical mistake (identified more than two years ago) is accepting form input from a user and passing it to a shell for execution. Perhaps you have a separately executable search feature, for instance, and you're using CGI as a gateway. The user enters a search value, which you feed as an argument to the search program. On a Unix system, the user need only insert a semicolon to potentially execute anything they want on your system. I always insert a line like this (in Perl):

$input =~ s/[^\w .,@%-]//g;

to strip shell metacharacters, control characters, or any other nonsense someone might try.
Limit execution paths. Make sure you have complete control over every path you make executable through the Web server's configuration files or the system itself.
Limit access to the users who really need it. I said this just above, but the less access users have, the fewer problems you'll encounter. It's unlikely that everyone in an organization needs to modify files on a Web server.
Keep Your Ear to the Ground
Testing and monitoring the security of your system is almost more important than plugging holes. If a hacker breaks into a machine in the forest and no one sees an alarm go off--well, there go the trees. Take several approaches to poking around the system: simulate attacks, test on an ongoing basis, and monitor live events.
The most popular freeware packages that test for security weaknesses are SATAN and COPS (Tripwire and ISS, discussed below, are available from the same archive). SATAN is the more comprehensive of the two, although it was widely criticized when introduced as a two-edged sword: crackers could use SATAN to test whole ranges of machines easily, though the application is intended for system administrators.
SATAN runs remotely and systematically attempts to exploit known weaknesses using Internet protocols. COPS is a local package that checks for vulnerabilities on the machine it runs on, including incorrectly set permissions on files and directories. Both should be run not just once but on a regular schedule; they can find holes that open up when new software is installed, and they can alert you to changes made by legitimate or illegitimate users.
ISS is similar to SATAN, but it specializes in NIS holes. Running ISS is a little frightening, because until you close the holes, you see exactly how easy it is for anyone to retrieve your NIS passwd map or other map files.
Out of thousands of files, it's often hard to tell when anything has changed, and some crackers' techniques let them set modification dates back to the originals and erase their tracks from log files as well. You can defeat this strategy with Tripwire. Tripwire builds a database of signatures for all the files and directories you point it at; each subsequent run produces a list of changes and modifications. Running it nightly could alert you to an infestation. For best results, put the ground-zero database on locked (write-protected) media so hackers have no opportunity to touch it.
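With the original academic Tripwire distribution, the cycle looks roughly like this (a sketch; the database file name and locations depend on your tw.config, so check the README for your version):

```
# Build the baseline database from the files listed in tw.config
tripwire -initialize     # writes databases/tw.db_<hostname>

# Move tw.db_<hostname> to write-protected media, then from cron, nightly:
tripwire                 # reports added, deleted, and changed files
```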
Crack checks the intelligence of your users' password choices. Feeding dictionaries into Crack and running it against your passwd file demonstrates how easy (or difficult) it would be for a cracker to break into your system by logging in to a legitimate user's account.
Be careful with Crack, though; as with any security test, never run it against a site without the knowledge and permission of its administrators. Randal Schwartz, author of two of the best-known books on Perl, was convicted of a computer crime in Oregon for testing the security of systems at Intel that he wasn't responsible for.
Patterns in syslog, messages, and other logging files can alert you to repeated attacks. A simple program that checks these files from time to time, accumulates results, and summarizes them by nature of attack and source (with results sent via e-mail to an off-site account) will help you keep on top of this.
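A minimal sketch of such a log watcher in Python. The "refused connect" pattern and the report layout are assumptions modeled on TCP Wrapper's syslog output; adapt the regular expression to whatever your daemons actually log, and pipe the report to your off-site mail address from cron:

```python
import re
from collections import Counter

# Assumed syslog shape, e.g.:
#   "Jun  1 02:13:44 www in.telnetd[211]: refused connect from evil.example.com"
REFUSAL = re.compile(r'(\S+)\[\d+\]: refused connect from (\S+)')

def summarize(lines):
    """Count refusals keyed by (service, source host)."""
    counts = Counter()
    for line in lines:
        m = REFUSAL.search(line)
        if m:
            counts[(m.group(1), m.group(2))] += 1
    return counts

def report(counts):
    """Format a short text summary, worst offenders first."""
    rows = []
    for (service, host), n in counts.most_common():
        rows.append(f"{n:5d}  {service:<12} {host}")
    return "\n".join(rows)
```

Run it periodically over /var/log/messages (or wherever syslog writes), keep a running total, and mail `report()` to yourself.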
TCP Wrapper also has a function that can alert you and freak out potential attackers. A small bit of code in its internal language will warn the user and send e-mail to you at the same time. (This code was given to me originally by a Navy guy, back when the Net had fewer people on it, and everyone was sharing tidbits like this.) If a user tries a "banned" activity, TCP Wrapper spits this message back at them through standard output while simultaneously dumping a finger of the user's host (both long and short versions) to e-mail:
in.telnetd, in.rlogind, in.rexecd, in.rshd,
rfc931:group = tty: user = nobody:\
spawn = (/usr/etc/safe_finger -l @%h |
/usr/ucb/mail -s %d-%c root) &:\
spawn = (/usr/etc/safe_finger @%h |
/usr/ucb/mail -s %d-%c root) &:\
twist = /usr/5bin/echo "Access to your host %c,\n\r\
has been denied. Security has been notified.\n\r\
If you feel this message is in error send email
Securing your server and network is one of the least-considered activities--the first priority is to get and keep the thing running. However, if you have sufficiently evolved server management, or have had the time to consider the issue yourself, the importance of keeping out attackers runs side-by-side with that concern. Through plugging holes and monitoring usage, you'll be able to deter and foil the vast majority of casual threats. And the best way to stop a serious attack? Check your logs, call CERT if there is a problem, and keep good backups.
A Sample Router Configuration
This is an example of a configuration on a Cisco 2501 router with the Ethernet interface connected to the local LAN and the Serial interface connected to a DSU/CSU. The "..." indicates omissions in the same section. The "a.b.c.0" indicates any Class C or similarly submasked address. If you have multiple Class C addresses on your router, multiple lines are required in access-lists 101 and 102.
ip access-group 105 out
ip access-group 101 in
ip access-group 102 out
# access-list 101 denies spoofing attacks from the
# outside; packets supposedly sent by machines on
# the internal network are not allowed
# through the router
access-list 101 deny ip a.b.c.0 0.0.0.255 0.0.0.0 255.255.255.255
access-list 101 permit ip 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255
# access-list 102 denies hackers who might have
# broken into a machine inside your network from
# launching spoofing attacks through your gateway.
# In most router configurations you need a general
# permit or allow statement to let through
# everything you haven't restricted. In this case,
# you're only allowing one IP network through,
# so the final statement would defeat the purpose
access-list 102 permit ip a.b.c.0 0.0.0.255 0.0.0.0 255.255.255.255
# access-list 105 summarizes several CERT bulletins
# on allowing access to the Internet at large to
# certain services. Specifically, this filters out
# attempts from outside the network to access NFS,
# portmap, rsh, or rlogin, by using port numbers
access-list 105 deny tcp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 111
access-list 105 deny tcp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 2049
access-list 105 deny tcp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 512
access-list 105 deny tcp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 513
access-list 105 deny tcp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 514
access-list 105 permit ip 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255
One sure way to avoid getting bit by known bugs is to stay on top of security issues. Read the comp.security newsgroups (comp.security.unix and comp.security.announce, for example); subscribe to the CERT mailing list or visit http://www.cert.org regularly; and join any security-oriented mailing list run by your OS vendor. Check your OS and Web server vendors' Web sites for patches every week. Read magazines oriented toward technical issues in your OS, and check that supporting software you run (like sendmail or commercial mail servers) is at the current patch level. For example, although Macintoshes are notably free of weird network holes, Quarterdeck had to release a security patch when debugging code was left in WebStar and then documented. --G.F.
CERT To the Rescue
The CERT Coordination Center--originally called the Computer Emergency Response Team, though the center now literally warns against expanding the acronym--was started in 1988 in response to the "Internet worm," a self-replicating program that attacked multiple weaknesses across different operating systems and brought the Internet to a screeching halt.
Funded by the U.S. Government's Defense Advanced Research Projects Agency (DARPA)--the same group that originally brought you the Internet--the center monitors security incidents, provides definitive warnings and solutions, and maintains software libraries for patching holes and testing security. They also have a terrific (though short) bibliography available online. All of the software mentioned in this article, unless noted otherwise, is available from the CERT ftp site, ftp.cert.org. If you discover an attack on your system, or evidence of an attack on others, you can notify CERT and they will issue a case number and provide follow-up, if needed. See their Web site for phone numbers and incident reporting forms (online, of course). --G.F.
Glenn Fleishman co-founded Point of Presence Co., a Web site development and content hosting firm, in 1994. He is also a contributing editor at Adobe Magazine and frequently speaks about bridging the gap between design and technology.