2012-10-25

Linux is free and open-source, and is available in a wide variety of “distributions” targeted at almost every usage-scenario imaginable. Like other general-purpose operating systems, Linux’s wide range of features presents a broad attack surface, but by leveraging native Linux security controls, carefully configuring Linux applications, and deploying certain add-on security packages, you can create highly secure Linux systems. The study and practice of Linux security therefore has wide-ranging uses and ramifications. New exploits against popular Linux applications affect many thousands of users around the world. New Linux security tools and techniques have just as profound an impact, albeit a much more constructive one!
In this chapter we’ll examine the Discretionary Access Control-based security model and architecture common to all Linux distributions and to most other Unix-derived and Unix-like operating systems (and also, to a surprising degree, to Microsoft Windows). We’ll discuss the strengths and weaknesses of this ubiquitous model; typical vulnerabilities and exploits in Linux; best practices for mitigating those threats; and improvements to the Linux security model that are only slowly gaining popularity, but that hold the promise to correct decades-old shortcomings in this platform.

Linux’s traditional security model can be summed up quite succinctly: people or processes with “root” privileges can do anything; other accounts can do much less.

From the attacker’s perspective, the challenge in cracking a Linux system is to gain root privileges. Once that happens, the attacker can erase or edit logs; hide their processes, files, and directories; and basically re-define the reality of the system as experienced by its administrators and users. Thus, as it’s most commonly practiced, Linux security (and Unix security in general) is a game of “root takes all.”

How can such a powerful operating system get by with such a limited security model? In fairness, many Linux system administrators fail to take full advantage of the security features available to them (features we’re about to explore in depth). People can and do run robust, secure Linux systems by making careful use of native Linux security controls, plus selected add-on tools such as sudo or Tripwire. However, the crux of the problem is that like the Unix operating systems on which it was based, Linux’s security model relies on Discretionary Access Controls (DAC).

In the Linux DAC system, there are users, each of which belongs to one or more groups; and there are also objects: files and directories. Users read, write, and execute these objects, based on the objects’ permissions, of which each object has three sets: one each defining the permissions for the object’s user-owner, group-owner, and “other” (everyone else). These permissions are enforced by the Linux kernel, the “brain” of the operating system.

When running, a process normally “runs as” (with the identity of) the user and group of the person or process that executed it. Since processes “act as” users, if a running process attempts to read, write, or execute some other object, the kernel will first evaluate that object’s permissions against the process’ user and group identity, just as though the process were an actual human user. This basic transaction, wherein a subject (user or process) attempts some action (read, write, execute) against some object (file, directory, special file), is the foundation of the Linux DAC model.
Whoever owns an object can set or change its permissions. Herein lies the Linux DAC model’s real weakness: the system superuser account, called “root,” has the ability to both take ownership and change the permissions of all objects in the system. And as it happens, it’s not uncommon for both processes and administrator-users to routinely run with root privileges, in ways that provide attackers with opportunities to hijack those privileges.

Linux treats everything as a file, including memory, device-drivers, named pipes, and other system resources. A device like a CDROM is a file to the Linux kernel: the "special" device-file /dev/cdrom (which is usually a symbolic link to /dev/hdb or some other special file). To read data from or write it to the CDROM drive, the Linux kernel actually reads from and writes to this special file. Some special files, such as named pipes, act as input/output (I/O) "conduits," allowing one process or program to pass data to another. Another common special file on Linux systems is /dev/urandom: when a program reads this character-device file, the kernel returns random characters from its random-number generator. So in Linux/Unix, nearly everything is represented by a file. Once you understand this, it's much easier to understand why filesystem security is important, and how it works.

Actually, there are two things on a Unix system that aren't represented by files: user-accounts and group-accounts (in short, users and groups). Various files contain information about a system's users and groups, but none actually represents them.

A user-account represents someone or something capable of using files. As we saw in the previous section, a user-account can be associated both with actual human beings and with processes. The standard Linux user-account "lp," for example, is used by the Line Printer Daemon (lpd): the lpd program runs as the user lp.

A group-account is simply a list of user-accounts. Each user-account is defined with a main group membership, but may in fact belong to as many groups as you want or need it to. For example, the user "maestro" may have a main group membership in "conductors," and also belong to the group "pianists."

A user's main group membership is specified in the user's entry in /etc/passwd; additional groups are specified in /etc/group by adding the username to the end of the entry for each group the user needs to belong to.

The listing below shows user "maestro"'s entry in the file /etc/passwd.

user's details are kept in /etc/passwd

maestro:x:200:100:Maestro Edward Hizzersands:/home/maestro:/bin/bash

We can see that the first field contains the name of the user-account, "maestro;" the second field ("x") is a placeholder for maestro's password (which is actually stored in /etc/shadow); the third field shows maestro's numeric userid (or "uid," in this case "200"); and the fourth field shows the numeric groupid (or "gid," in this case "100") of maestro's main group membership. The remaining fields specify a comment, maestro's home directory, and maestro's default login shell.

The listing below shows part of the corresponding /etc/group file.

additional group details in /etc/group

conductors:x:100:

pianists:x:102:maestro,volodya

Each line simply contains a group-name, a group-password (usually unused; "x" is a placeholder), a numeric group-id (gid), and a comma-delimited list of users with "secondary" memberships in the group. Thus we see that the group "conductors" has a gid of "100", which corresponds to the gid specified as maestro's main group; and also that the group "pianists" includes the user "maestro" (plus another user, "volodya") as a secondary member.

The simplest way to modify /etc/passwd and /etc/group in order to create, modify, and delete user-accounts is via the commands useradd, usermod, and userdel, respectively. All three of these commands can also be used to set and modify group-memberships.
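The same account information can also be inspected from the command line. A minimal sketch (the "root" account is the only one guaranteed to exist on any system; the "maestro" account from the example is hypothetical and won't be present unless you create it):

```shell
# Print an account's passwd entry (same colon-delimited fields as /etc/passwd):
getent passwd root

# Show the current user's uid, primary gid, and supplementary groups:
id

# List the groups a given user belongs to:
groups root
```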

Each file on a Unix system (which, as we've seen, means "practically every single thing on a Unix system"), has two owners: a user and a group, each with its own set of permissions that specify what the user or group may do with the file (read it, write to it or delete it, and execute it). A third set of permissions pertains to other, that is, user-accounts that don't own the file or belong to the group that owns it.

"long file-listing" for the file /home/maestro/baton.txt.

files have two owners: a user & a group

each with its own set of permissions

with a third set of permissions for other

permissions are to read/write/execute in order user/group/other, cf.

-rw-rw-r--  1  maestro conductors 35414 Mar 25 01:38 baton.txt

set using chmod command

Permissions are listed in the order "user-permissions, group-permissions, other-permissions." Thus we see that for the file shown in the listing above, its user-owner ("maestro") may read and write/delete the file ("rw-"); its group-owner ("conductors") may also read and write/delete the file ("rw-"); but other users (who are neither "maestro" nor members of "conductors") may only read the file.

There's a third permission besides "read" and "write": "execute," denoted by "x" (when set). If maestro writes a shell script named "punish_bassoonists.sh", and if he sets its permissions to "-rwxrw-r--", then maestro will be able to execute his script by entering its name at the command line. If, however, he forgets to set the execute bit, he won't be able to run the script, even though he owns it. Permissions are usually set via the "chmod" command (short for "change mode").
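This scenario can be recreated in a scratch directory (the script name is from the example; its message and the /tmp path are assumptions for illustration):

```shell
# Create maestro's hypothetical script in a scratch directory:
mkdir -p /tmp/perm_demo
cd /tmp/perm_demo
printf '#!/bin/sh\necho "No bassoons today."\n' > punish_bassoonists.sh

# Without the execute bit, running it by name fails:
chmod 644 punish_bassoonists.sh      # -rw-r--r--
./punish_bassoonists.sh 2>/dev/null || echo "not executable"

# After setting the execute bit, it runs:
chmod u+x punish_bassoonists.sh      # now -rwxr--r--
./punish_bassoonists.sh              # prints: No bassoons today.
```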

Directory-permissions work slightly differently than permissions on regular files. "Read" and "write" are similar; for directories these permissions translate to "list the directory's contents" and "create or delete files within the directory," respectively. "Execute" is a little less intuitive: for directories, "execute" translates to "use anything within or change working directory to this directory".

That is, if a user or group has execute-permissions on a given directory, they can list that directory's contents, read that directory's files (assuming those individual files' own permissions include this), and change their working directory to that directory, as with the command "cd". If a user or group does not have execute-permissions on a given directory, they will be unable to list or read anything in it, regardless of the permissions set on the things inside. (Note that if you lack execute permissions on a directory but do have read permissions on it, and you try to list its contents with ls, you will receive an error message that, in fact, lists the directory's contents. But this doesn't work if you have neither read nor execute permissions on the directory.)

Suppose our example system has a user named "biff" who belongs to the group "drummers." And suppose further that his home-directory contains a directory called "extreme_casseroles" that he wishes to share with his fellow percussionists. The following Listing

read = list contents

write = create or delete files in directory

execute = use anything in or change working directory to this directory

e.g.

$ chmod g+rx extreme_casseroles

$ ls -l extreme_casseroles

drwxr-x--- 8 biff  drummers 288  Mar 25 01:38 extreme_casseroles

shows how biff might set that directory's permissions. With these permissions, only biff has the ability to create, change, or delete files inside extreme_casseroles. Other members of the group "drummers" may list its contents and cd to it. Everyone else on the system, however (except root, who is always all-powerful), is blocked from listing, reading, cd-ing, or doing anything else with the directory.

In older Unix operating systems, the sticky bit was used to write a file (program) to memory so it would load more quickly when invoked. On Linux, however, it serves a different function: when you set the sticky bit on a directory, it limits users' ability to delete things in that directory. That is, to delete a given file in the directory you must either own that file or own the directory, even if you belong to the group which owns the directory and group-write permissions are set on it. To set the sticky bit, use the chmod command with +t. In our example, this would be

chmod +t extreme_casseroles

If we make extreme_casseroles group-writable (chmod g+w extreme_casseroles), set the sticky bit, and then do a long listing of the directory itself, using "ls -ld extreme_casseroles", we'll see:

drwxrwx--T  8  biff  drummers  288  Mar 25 01:38 extreme_casseroles

Note the "T" at the end of the permissions-string. We'd normally expect to see either "x" or "-" there, depending on whether the directory is "other-writable". "T" denotes that the directory is not "other-executable" but has the sticky bit set. A lower-case "t" would denote that the directory is other-executable and has the sticky bit set.

The sticky bit applies only to the directory's first level downwards; it is not inherited by child directories.
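A quick sketch of the sticky bit in action, using a scratch directory (any user can set the bit on a directory they own; the /tmp path is an assumption):

```shell
# Create a group-shared scratch directory and set the sticky bit on it:
mkdir -p /tmp/sticky_demo
chmod 770 /tmp/sticky_demo       # rwxrwx---
chmod +t /tmp/sticky_demo        # equivalent to the numeric mode 1770

# The permissions string now ends in "T" (sticky set, not other-executable):
ls -ld /tmp/sticky_demo
```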

Now we come to two of the most dangerous permissions-bits in the Unix world: setuid and setgid. If set on an executable binary file, the setuid bit causes that program to "run as" its owner, no matter who executes it. Similarly, the setgid bit, when set on an executable, causes that program to run as a member of the group which owns it, again regardless of who executes it. Here "run as" means "run with the same privileges as."

IMPORTANT WARNING: setuid and setgid are very dangerous if set on any file owned by root or any other privileged account or group. The command "sudo" is a much better tool for delegating root's authority.

Note that if you want a program to run setuid, that program must be group-executable or other-executable, for obvious reasons. Note also that the Linux kernel ignores the setuid and setgid bits on shell scripts; these bits only work on binary (compiled) executables.

setgid works the same way, but with group-permissions: if you set the setgid bit on an executable file via the command "chmod g+s filename", and if the file is also "other-executable" (-r-xr-sr-x), then when that program is executed it will run with the group-ID of the file rather than of the user who executed it.
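The bits are easy to see in a long listing. The sketch below uses a harmless scratch file rather than a real privileged binary (a genuine setuid-root program such as the password-changing command shows the same "s" in place of the owner's or group's "x"):

```shell
# Create a scratch "program" and set both bits on it:
mkdir -p /tmp/suid_demo
touch /tmp/suid_demo/prog
chmod 755 /tmp/suid_demo/prog
chmod u+s,g+s /tmp/suid_demo/prog   # numeric equivalent: chmod 6755

# User- and group-execute positions now show "s": -rwsr-sr-x
ls -l /tmp/suid_demo/prog
```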

Setuid has no effect on directories, but setgid does, and it's a little non-intuitive. Normally, when you create a file, it's automatically owned by your user ID and your (primary) group ID. For example, if biff creates a file, the file will have a user-owner of "biff" and a group-owner of "drummers" (assuming that "drummers" is biff's primary group, as listed in /etc/passwd).

Setting a directory's setgid bit, however, causes any file created in that directory to inherit the directory's group-owner. This is useful if users on your system tend to belong to secondary groups and routinely create files that need to be shared with other members of those groups.

For example, if the user "animal" is listed in /etc/group as being a secondary member of "drummers," but is listed in /etc/passwd as having a primary group of "muppets," then animal will have no trouble creating files in the extreme_casseroles/ directory, whose permissions are set to drwxrwx--T. However, by default animal's files will belong to the group muppets, not to drummers, so unless animal manually reassigns his files' group-ownership (chgrp drummers newfile) or resets their other-permissions (chmod o+rw newfile), then other members of drummers won't be able to read or write animal's recipes.

If, on the other hand, biff (or root) sets the setgid bit on extreme_casseroles/ (chmod g+s extreme_casseroles), then when animal creates a new file therein, the file will have a group-owner of "drummers", just like extreme_casseroles/ itself. Note that all other permissions still apply: if the directory in question isn't group-writable to begin with, then the setgid bit will have no effect (since group-members won't be able to create files inside it to begin with).
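A minimal sketch of a setgid directory (on a single-user test system the inherited group is the same as your own primary group, so the effect only becomes visible when the directory's group differs from the creator's primary group, as with biff's "drummers" in the example; the /tmp path is an assumption):

```shell
# Create a group-writable directory with the setgid bit set:
mkdir -p /tmp/casseroles
chmod 2770 /tmp/casseroles         # rwxrws---: group rwx plus setgid

# The group-execute position shows "s":
ls -ld /tmp/casseroles

# Files created inside inherit the directory's group-owner:
touch /tmp/casseroles/recipe.txt
ls -l /tmp/casseroles/recipe.txt
```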

Internally, Linux uses numbers to represent permissions; only user programs display permissions as letters. The chmod command recognizes both mnemonic permission-modifiers ("u+rwx,go-w") and numeric modes.

A numeric mode consists of four digits: as you read left-to-right, these represent special-permissions, user-permissions, group-permissions, and other-permissions. For example, 0700 translates to "no special permissions set, all user-permissions set, no group permissions set, no other-permissions set."

Each permission has a numeric value, and the permissions in each digit-place are additive: the digit represents the sum of all permission-bits you wish to set. If, for example, user-permissions are set to "7", this represents 4 (the value for "read") plus 2 (the value for "write") plus 1 (the value for "execute").

As I just mentioned, the basic numeric values are 4 for read, 2 for write, and 1 for execute. Why no "3"? Because (a) these values represent bits in a binary stream and are therefore all powers of 2; and (b) this way, no two combinations of permissions have the same sum.

Special permissions are as follows: 4 stands for setuid, 2 stands for setgid, and 1 stands for sticky-bit. For example, the numeric mode 3000 translates to "setgid set, sticky-bit set, no other permissions set" (which is a useless set of permissions).
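Numeric and mnemonic modes side by side, on a scratch file (the /tmp path is an assumption):

```shell
# Create a scratch file and set its mode numerically:
mkdir -p /tmp/mode_demo
touch /tmp/mode_demo/f
chmod 0640 /tmp/mode_demo/f          # 6 = 4(read)+2(write); 4 = read; 0 = nothing
ls -l /tmp/mode_demo/f               # -rw-r-----

# The same kind of change in mnemonic form; this one yields mode 0700:
chmod u+rwx,go-rwx /tmp/mode_demo/f
stat -c %a /tmp/mode_demo/f          # prints 700
```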

It’s a little bit of an oversimplification to say that users, groups, files, and directories are all that matter in the Linux DAC: memory is important too. Therefore we should at least briefly discuss kernel space and user space.

Kernel space refers to memory used by the Linux kernel and its loadable modules (e.g., device drivers). User space refers to memory used by all other processes. Since the kernel enforces the Linux DAC and, in real terms, dictates system reality, it’s extremely important to isolate kernel space from user space. For this reason, kernel space is never swapped to hard disk.

It’s also the reason that only root may load and unload kernel modules. As we’re about to see, one of the worst things that can happen on a compromised Linux system is for an attacker to gain the ability to load kernel modules.

In this section we’ll discuss the most common weaknesses in Linux systems.

As we discussed in the previous section, any program whose “setuid” permission-bit is set will run with the privileges of the user that owns it, rather than those of the process or user executing it. A setuid root program is a root-owned program with its setuid bit set, that is, a program that runs as root no matter who executes it.

Running setuid-root is necessary for programs that need to be run by unprivileged users, yet must provide such users with access to privileged functions (for example, changing their password, which requires changes to protected system files). But such a program needs to have been very carefully programmed, with impeccable user-input validation, strict memory management, etc. That is, it needs to have been designed to be run setuid (or setgid) root. Even then, a root-owned program should only have its setuid bit set if absolutely necessary.

The risk here is that if a setuid root program can be exploited or abused in some way (for example, via a buffer overflow vulnerability or race condition), then otherwise-unprivileged users may be able to use that program to wield unauthorized root privileges, possibly including opening a root shell (a command-line session running with root privileges).

Due to a history of abuse of setuid-root programs, major Linux distributions no longer ship with unnecessary setuid-root programs. But system attackers still scan for them!

This is a very broad category of vulnerabilities, many of which also fall into other categories in this list. It warrants its own category because of the ubiquity of the world wide web: there are few attack surfaces as big and visible as an Internet-facing web site.

While web applications written in higher-level languages such as PHP, Perl, and Java may not be as prone to classic buffer overflows (thanks to the additional layers of abstraction presented by those languages’ interpreters), they’re nonetheless prone to similar abuses of poor input-handling, including cross-site scripting, SQL injection, and a plethora of other vulnerabilities.

Nowadays, few Linux distributions ship with “enabled-by-default” web applications (such as the default cgi-scripts included with older versions of the Apache webserver). However, many users install web applications with known vulnerabilities, or write custom web applications having easily-identified and easily-exploited flaws.

Rootkits allow an attacker to cover their tracks, and are typically installed after a root compromise: if an attacker successfully installs a rootkit before being detected, then all is very nearly lost.

Rootkits began as collections of “hacked replacements” for common Unix commands (ls, ps, etc.) that behaved like the legitimate commands they replaced, except for hiding an attacker’s files, directories and processes.

In the Linux world, since the advent of loadable kernel modules (LKMs), rootkits have more frequently taken the form of LKMs. An LKM rootkit does its business (covering the tracks of attackers) in kernel space, intercepting system calls pertaining to any user’s attempts to view the intruder’s resources. In this way, files, directories, and processes owned by an attacker are hidden even from a compromised system’s standard, un-tampered-with commands, and from custom software as well. Besides operating at a lower, more global level, another advantage of the LKM rootkit over traditional rootkits is that system integrity-checking tools such as Tripwire won’t generate alerts, since no system commands have been replaced.

Luckily, even LKM rootkits do not always ensure complete invisibility for attackers. Many traditional and LKM rootkits can be detected with the script chkrootkit, available at www.chkrootkit.org. In general, however, if an attacker gets far enough to install an LKM rootkit, your system can be considered to be completely compromised; when and if you detect the breach (e.g., via a defaced website, missing data, suspicious network traffic, etc.), the only way to restore your system with any confidence of completely shutting out the intruder will be to erase its hard disk (or replace it, if you have the means and inclination to analyze the old one), re-install Linux, and apply all the latest software patches.

Now let’s consider how to mitigate Linux security risks at the system and application levels. This section deals with OS-level security tools and techniques that protect the entire system.

Linux system security begins at operating system installation time: one of the most critical, system-impacting decisions a system administrator makes is what software will run on the system. Since it’s hard enough to find the time to secure a system’s critical applications, an unused application is liable to be left in a default, un-hardened and un-patched state. Therefore, it’s very important that from the start, careful consideration be given to what applications should be installed, and which should not.

What software should you not install? Common sense should be your guide: for example, an SMTP (email) relay shouldn’t need the Apache webserver; a database server shouldn’t need an office productivity suite such as OpenOffice; etc.

Given the plethora of roles Linux systems play (desktops, servers, laptops, firewalls, embedded systems, etc.), it’s difficult to generalize about what software not to install. That said, here is a list of software packages that should seldom, if ever, be installed on hardened (especially Internet-facing) servers: the X Window System, RPC services, R-services, inetd, SMTP daemons, Telnet, and other cleartext-logon services (see the text for discussion of why).

In addition to initial software selection and installation, Linux installation utilities also perform varying amounts of initial system and software configuration, including:

setting the root password

creating a non-root user account

setting an overall system security level (usually initial file-permission settings)

enabling a simple host-based firewall policy

enabling SELinux or Novell AppArmor

Carefully selecting what gets installed (and what doesn’t get installed) on a Linux system is an important first step in securing it. All the server applications you do install, however, must be configured securely and they must also be kept up to date with security patches.

The bad news about patching is that you can never win the “patch rat-race”: there will always be software vulnerabilities that attackers are able to exploit for some period of time before vendors issue patches for them. (As-yet-unpatchable vulnerabilities are known as zero-day, or 0-day, vulnerabilities.)

The good news is, modern Linux distributions usually include tools for automatically downloading and installing security updates, which can minimize the time your system is vulnerable to things against which patches are available. For example, Red Hat, Fedora, and CentOS include up2date (YUM can be used instead); SuSE includes YaST Online Update; and Debian uses apt-get, though you must run it as a cron job for automatic updates.
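On a Debian system, for example, such a cron job might be a simple nightly script along these lines (an illustrative sketch only; the file name is hypothetical, and exact flags vary by release):

```
#!/bin/sh
# /etc/cron.daily/apt-security-update (hypothetical): refresh package
# lists and apply available updates non-interactively.
apt-get -qq update
apt-get -qq -y upgrade
```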

Note that on change-controlled systems, you should not run automatic updates, since security patches can, on rare but significant occasions, introduce instability. For systems on which availability and up-time are of paramount importance, therefore, you should stage all patches on test systems before deploying them in production.

One of the most important attack vectors in Linux threats is the network. Network-level access controls, which restrict access to local resources based on the IP addresses of the systems attempting access, are therefore an important tool in Linux security.

One of the most mature network access control mechanisms in Linux is TCP Wrappers, along with its library successor, libwrappers. In the original TCP Wrappers package, the daemon tcpd is used as a “wrapper” process for each service initiated by inetd. Before allowing a connection to any given service, tcpd first evaluates access controls defined in the files /etc/hosts.allow and /etc/hosts.deny: if the transaction matches any rule in hosts.allow (which tcpd parses first), it’s allowed; if not, and it matches any rule in hosts.deny, it’s logged and denied; if no rule in either file matches, it’s permitted. These access controls are based on the name of the local service being connected to, on the source IP address or hostname of the client attempting the connection, and on the username of the client attempting the connection (that is, the owner of the client process). Note that client usernames are validated via the ident service, which unfortunately is trivially easy to forge on the client side, making this criterion’s value questionable. The best way to configure TCP Wrappers access controls is therefore to set a “deny all” policy in hosts.deny, such that the only transactions permitted are those explicitly specified in hosts.allow.
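A minimal “deny all” configuration might look like this (the service and network address shown are assumptions for illustration):

```
# /etc/hosts.deny -- default policy: deny anything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- exceptions, parsed first; e.g., permit SSH from one
# trusted network only
sshd: 192.168.1.0/255.255.255.0
```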

Since inetd is essentially obsolete, TCP Wrappers itself is no longer as commonly used as libwrappers, a system library that allows applications to defend themselves by evaluating /etc/hosts.allow and /etc/hosts.deny without requiring tcpd to act as an intermediary.

While TCP Wrappers and libwrappers are ubiquitous and easy to use, a more powerful mechanism is the Linux kernel’s native firewall, netfilter (and its user-space front end, iptables).

iptables is useful both on multi-interface firewall systems and on ordinary servers and desktop systems. The iptables command does, however, have a steep learning curve. Nearly all Linux distributions now include utilities for automatically generating “personal” (local) firewall rules, especially at installation time. Typically, they prompt the administrator/user for local services that external hosts should be allowed to reach, if any (e.g., HTTP on TCP port 80, HTTPS on TCP port 443, and SSH on TCP port 22), and then generate rules that:

allow incoming requests to those services;

block all other inbound (externally-originating) transactions; and

allow all outbound (locally-originating) services;

with the assumption that all outbound network transactions are legitimate. This assumption does not hold if the system is compromised by a human attacker or by malware. In cases in which a greater level of caution is justified, it may be necessary to create more complex iptables policies than your Linux installer’s firewall wizard can provide. Many people manually create their own startup-script for this purpose (an iptables “policy” is actually just a list of iptables commands), but a tool such as Shorewall or Firewall Builder may instead be used.
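Such a startup script might be sketched as follows (this must run as root; the allowed ports are assumptions, and this is a minimal illustration, not a production policy):

```
#!/bin/sh
iptables -F INPUT                                 # flush any old INPUT rules
iptables -P INPUT DROP                            # default-deny inbound
iptables -P OUTPUT ACCEPT                         # allow outbound traffic
iptables -A INPUT -i lo -j ACCEPT                 # permit loopback traffic
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT     # SSH
iptables -A INPUT -p tcp --dport 80 -j ACCEPT     # HTTP
```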

Historically, Linux hasn’t been nearly as vulnerable to viruses as other operating systems (e.g., Windows), due more to its lesser popularity as a desktop platform than to any inherent security advantage. Hence Linux users have tended to worry less about viruses, relying instead on keeping up to date with security patches for protection against malware. This is arguably a more proactive technique than relying on signature-based antivirus tools. And indeed, prompt patching of security holes is an effective protection against worms, which have historically been a much bigger threat against Linux systems than viruses.

Viruses, however, typically abuse the privileges of whatever user unwittingly executes them, rather than actually exploiting a software vulnerability: the virus simply “runs as” the user. This may not have system-wide ramifications so long as that user isn’t root, but even relatively unprivileged users can execute network client applications, create large files that could fill a disk volume, and perform any number of other problematic actions. As Linux’s popularity continues to grow, especially as a general-purpose desktop platform, we can expect Linux viruses to become much more common. Sooner or later, therefore, antivirus software will become much more important on Linux systems than it is presently. There are a variety of commercial and free antivirus software packages that run on (and protect) Linux, including products from McAfee, Symantec, and Sophos; and the free, open-source tool ClamAV.

Recall the guiding principles in Linux user-account security:

be very careful when setting file and directory permissions;

use group memberships to differentiate between different roles on your system; and

be extremely careful in granting and using root privileges.

We now discuss some details of user- and group-account management, and delegation of root privileges. First, some commands: use the chmod command to set and change permissions for objects belonging to existing users and groups. To create, modify, and delete user accounts, use the useradd, usermod, and userdel commands. To create, modify, and delete group accounts, use the groupadd, groupmod, and groupdel commands. Alternatively, you can simply edit the file /etc/passwd directly to create, modify, or delete users, or edit /etc/group to create, modify, or delete groups.

Note that initial (primary) group memberships are set in a user’s entry in /etc/passwd; supplementary (secondary) group memberships are set in /etc/group. Use the usermod command to change either primary or supplementary group memberships for any user. You can use passwd to change your own (or, as root, anyone’s) password. Password aging (maximum and minimum lifetimes for user passwords) is set globally in the files /etc/login.defs and /etc/default/useradd, but these settings are applied only when new user accounts are created. To modify the password lifetime for an existing account, use the chage command. Passwords should have a minimum age to prevent users from rapidly “cycling through” password-changes; 7 days is a reasonable minimum. Maximum lifetime is trickier, balancing exposure risk against user annoyance; 60 days is a reasonable balance for many organizations.

A key problem with Linux/Unix security is that "root can do anything; ordinary users can do very little."

The “su” command is used to promote a user to root (provided you know the root password). It's much easier to do a quick "su" to become root for a while than it is to create a granular system of group-memberships and permissions that gives administrators and sub-administrators exactly the permissions they need. You can use the su command with the "-c" flag, which allows you to specify a single command to run as root rather than an entire shell session (for example, su -c "rm somefile.txt"), but since this still requires you to enter the root password, everyone who needs to run a particular root command needs the root password. And it's never good for more than a small number of people to know root's password.

Another approach to solving the "root takes all" problem is to use SELinux’s Role-Based Access Controls (RBAC), which enforce access controls that reduce root's effective authority. However, this is much more complicated than setting up effective groups and group permissions.

A reasonable compromise is the sudo command, which is standard on most Linux distributions. sudo allows users to execute specified commands as root without actually knowing the root password (unlike su). sudo is configured via the file /etc/sudoers, but you shouldn't edit this file directly; rather, use the visudo command, which opens a special vi (text editor) session and checks the file's syntax before saving it. sudo is a very powerful tool, so use it wisely: root privileges are never to be trifled with! It really is better to use user and group permissions judiciously than to hand out root privileges, even via sudo, and better still to use an RBAC-based system such as SELinux if feasible.
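For instance, an /etc/sudoers excerpt (edited via visudo; the user, group, and command names below are illustrative) might look like this:

```
# Let user maria run the web-server control command as root:
maria       ALL = (root) /usr/sbin/apachectl

# Let members of group operators manage services without
# re-entering their passwords:
%operators  ALL = (root) NOPASSWD: /usr/sbin/service
```

maria would then type, say, sudo /usr/sbin/apachectl graceful and authenticate with her own password, not root's; sudo also logs each invocation, which su does not.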

Effective logging helps ensure that in the event of a system breach or failure, system administrators can more quickly and accurately identify what happened, and thus most effectively focus their remediation and recovery efforts.

On Linux systems, system logs are handled either by the ubiquitous Berkeley Syslog daemon (syslogd) in conjunction with the kernel log daemon (klogd), or by the much-more-feature-rich Syslog-NG. System log daemons receive log data from a variety of sources, sort it by facility (category) and severity, and then write the log messages to log files, as we discussed in section 15.3 in the text.

Syslog-NG is preferable both because it can use a much wider variety of log-data sources and destinations, and because its “rules engine” is much more flexible than syslogd’s simple configuration file (/etc/syslogd.conf), allowing you to create a much more sophisticated set of rules for evaluating and processing log data. Syslog-NG also supports logging via TCP, which can be encrypted via a TLS “wrapper” such as Stunnel or Secure Shell.

Both syslogd and Syslog-NG install with default settings for what gets logged, and where. While these defaults are adequate in many cases, you should review them rather than assume so. At the very least, you should decide what combination of local and remote logging to perform. If logs remain local to the system that generates them, they may be tampered with by an attacker. If some or all log data is transmitted over the network to a central log server, audit trails can be more effectively preserved, but log data may also be exposed to network eavesdroppers.
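With classic syslogd, for example, each choice is a one-line configuration entry; the file names and host name below are illustrative:

```
# Illustrative /etc/syslog.conf entries. Keep authentication messages
# in a local file, and also forward everything of priority info or
# higher to a central log server ("@host" forwards via UDP port 514).
# Note: traditional BSD syslogd requires tabs between the two fields.
auth,authpriv.*         /var/log/auth.log
*.info                  @loghost.example.com
```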

Local log files must be carefully managed. Logging messages from too many different log facilities to a single file may result in a logfile that is difficult to cull useful information from; having too many different log files may make it difficult for administrators to remember where to look for a given audit trail. And in all cases, log files must not be allowed to fill disk volumes.

Most Linux distributions address this last problem via the logrotate command (typically run as a cron job), which decides how to rotate (archive or delete) system and application log files based both on global settings in the file /etc/logrotate.conf, and on application-specific settings in the scripts contained in the directory /etc/logrotate.d/.
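As a sketch, a per-application entry in /etc/logrotate.d/ might look like the following (the log file name myapp.log is hypothetical):

```
/var/log/myapp.log {
    # Rotate weekly, keeping eight compressed generations:
    weekly
    rotate 8
    compress
    # Tolerate a missing or empty log file:
    missingok
    notifempty
}
```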

The Linux logging facility provides a local “system infrastructure” for both the kernel and applications, but it’s usually also necessary to configure applications themselves to log appropriate levels of information.

Application security is a large topic; entire chapters are devoted to securing particular applications. However, many security features are implemented in similar ways across different applications. In this brief but important section, we’ll examine some of these common features.

Recall that in Linux and other Unix-like operating systems, every process “runs as” some user. For network daemons in particular, it’s extremely important that this user not be root; any process running as root is never more than a single buffer overflow or race condition away from giving attackers a remote root compromise. Therefore, one of the most important security features a daemon can have is the ability to run as a non-privileged user or group.

Running network processes as root isn’t entirely avoidable: for example, only root can bind processes to “privileged ports” (TCP and UDP ports lower than 1024). However, it’s still possible for a service’s parent process to run as root in order to bind to a privileged port, and then spawn a new child process, running as an unprivileged user, to handle each incoming connection.

Ideally, the unprivileged users and groups used by a given network daemon should be dedicated to that purpose, if for no other reason than auditability. (For example, if entries start appearing in /var/log/messages indicating failed attempts by the user ftpuser to run the command /sbin/halt, it will be much easier to determine precisely what’s going on if the ftpuser account isn’t shared by five different network applications.)

The chroot system call confines a process to some subset of the filesystem; that is, it maps a virtual “/” to some other directory (e.g., /srv/ftp/public). This is useful because, for example, an FTP daemon that serves files from a particular directory, say /srv/ftp/public, shouldn’t have any reason to access the rest of the filesystem. The directory to which we restrict the daemon is called a chroot jail. To the “chrooted” daemon, everything in the chroot jail appears to actually be in /; e.g., the “real” directory /srv/ftp/public/etc/myconfigfile appears as /etc/myconfigfile inside the jail. Things in directories outside the chroot jail, e.g., /srv/www or /etc, aren’t visible or reachable at all.
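A minimal sketch of building such a jail follows; the jail path, config file name, and daemon name are all hypothetical, and actually entering the jail (the commented chroot line) requires root plus any libraries the daemon needs copied into the jail:

```shell
# Build the skeleton of a chroot jail for a hypothetical FTP daemon.
JAIL=/tmp/ftp-jail            # would be something like /srv/ftp/public
mkdir -p "$JAIL"/etc "$JAIL"/pub

# To the chrooted daemon this file will appear as /etc/myconfigfile:
echo "anonymous_enable=yes" > "$JAIL"/etc/myconfigfile

ls -R "$JAIL"                 # inspect the jail's layout

# The daemon would then be started inside the jail (requires root,
# and the daemon's binary and shared libraries inside the jail):
#   chroot "$JAIL" /usr/sbin/ftpd
```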

Chrooting therefore helps contain the effects of a given daemon’s being compromised or hijacked. The main disadvantage of this method is added complexity: certain files, directories, and special files typically must be copied into the “chroot jail,” and determining just what needs to go into the jail for the daemon to work properly can be tricky, though detailed procedures for chrooting many different Linux applications are easy to find on the World Wide Web.

Troubleshooting a chrooted application can also be difficult: even if an application explicitly supports this feature, it may behave in unexpected ways when run chrooted. Note also that if the chrooted process runs as root, it can “break out” of the chroot jail with little difficulty. Still, the advantages usually far outweigh the disadvantages of chrooting network services.

If an application runs in the form of a single, large, multipurpose process, it may be more difficult to run it as an unprivileged user; it may be harder to locate and fix security bugs in its source code (depending on how well documented and structured the code is); and it may be harder to disable unnecessary areas of functionality. Modern network service applications therefore tend to be divided into multiple special-purpose processes, each running with only the privileges it needs.

Postfix, for example, consists of a suite of daemons and commands, each dedicated to a different mail-transfer-related task. Only a couple of these processes ever run as root, and they practically never run all at the same time. Postfix therefore has a much smaller attack surface than the monolithic Sendmail. The popular web server Apache used to be monolithic, but it now supports code modules that can be loaded at startup time as needed; this both reduces Apache’s memory footprint, and reduces the threat posed by vulnerabilities in unused functionality areas.

Sending logon credentials or application data over networks in clear text (i.e., unencrypted) exposes them to network eavesdropping attacks. Most Linux network applications therefore support encryption nowadays, most commonly via the OpenSSL library. Using application-level encryption is, in fact, the most effective way to ensure end-to-end encryption of network transactions.

The SSL and TLS protocols provided by OpenSSL require the use of X.509 digital certificates. These can be generated and signed by the user-space openssl command. For optimal security, either a local or commercial (third-party) Certificate Authority (CA) should be used to sign all server certificates, but self-signed (that is, non-verifiable) certificates may also be used. [BAUE05] provides detailed instructions on how to create and use your own Certificate Authority with OpenSSL.
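As a sketch, a self-signed certificate can be generated in one openssl invocation (the file names and the CN server.example.test are illustrative):

```shell
# Generate a 2048-bit RSA key and a self-signed X.509 certificate,
# valid for one year, with no passphrase on the key (-nodes):
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout server-key.pem -out server-cert.pem \
    -days 365 -subj "/CN=server.example.test"

# Inspect the resulting certificate's subject and validity dates:
openssl x509 -in server-cert.pem -noout -subject -dates
```

Because the certificate is self-signed, clients cannot verify it against a trusted CA; it protects against passive eavesdropping but not against man-in-the-middle attacks, which is why CA-signed certificates are preferred.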

The last common application security feature we’ll discuss here is logging. Most applications can be configured to log to whatever level of detail you want, ranging from “debugging” (maximum detail) to “none.” Some middle setting is usually the best choice, but you should not assume that the default setting is adequate.

In addition, many applications allow you to specify either a dedicated file to write application event data to, or a syslog facility to use when writing log data to /dev/log. If you wish to handle system logs in a consistent, centralized manner, it’s usually preferable for applications to send their log data to /dev/log. Note, however, that logrotate can be configured to rotate any logs on the system, whether written by syslogd, Syslog-NG, or individual applications.

Linux uses a DAC security model, in which the owner of a given system object can set whatever access permissions on that resource they like. Stringent security controls, in general, are optional. In contrast, a computer with Mandatory Access Controls (MAC) has a global security policy that all users of the system are subject to. A user who creates a file on a MAC system may not set access controls on that file weaker than the controls dictated by the system security policy.

In general, compromising a system using a DAC-based security model is a matter of hijacking some root process. On a MAC-based system, the superuser account is only used for maintaining the global security policy. Day-to-day system administration is performed using accounts that lack the authority to change the global security policy. Hence, it's impossible to compromise the entire system by attacking any one process.

Unfortunately, while MAC schemes have been available on various platforms over the years, they have traditionally been much more complicated to configure and maintain. Creating an effective global security policy requires detailed knowledge of the precise behavior of every application on the system. Also, the more restrictive the security controls on a system are, the less convenient that system becomes for its users.

Novell’s SuSE Linux includes AppArmor, a partial MAC implementation that restricts specific processes but leaves everything else subject to the conventional Linux DAC. In Fedora and Red Hat Enterprise Linux, SELinux has been implemented with a policy that restricts key network daemons, but relies on the Linux DAC to secure everything else. For high-sensitivity, high-security, multi-user scenarios, a “pure” SELinux implementation may be deployed, in which all processes, system resources, and data are regulated by comprehensive, granular access controls.

SELinux is the NSA's powerful implementation of mandatory access controls for Linux. The Linux DACs still apply under SELinux: if the ordinary Linux permissions on a given file block a particular action, that action will still be blocked, and SELinux won't bother evaluating it. But if the ordinary Linux permissions allow the action, SELinux will evaluate the action against its own security policies before allowing it to occur.

More specifically, SELinux evaluates actions attempted by subjects against objects. In SELinux, "subjects" are always processes, since processes are what execute users' commands. Actions are called "permissions," just as in the Linux DAC. The objects that get acted on, however, are different: whereas in the Linux DAC model objects are always files or directories, in SELinux objects include not only files and directories but also other processes and various system resources in both kernel space and userland.

SELinux differentiates between a wide variety of object "classes" (categories), including dir, socket, tcp_socket, unix_stream_socket, filesystem, node, xserver, and cursor. Each object class has a particular set of possible permissions (actions). This makes sense: there are things you can do to directories, for example, that simply don't apply to, say, X servers. Each object class may have both "inherited" permissions that are common to other classes (for example, "read") and "unique" permissions that apply only to it. Just a few of the unique permissions associated with the dir class are search, rmdir, getattr, remove_name, and reparent.

SELinux would be impossible to use if you had to write individual rules for everything. SELinux avoids this in two ways: (1) by taking the stance that "that which is not expressly permitted is denied," and (2) by grouping subjects, permissions, and objects in various ways.

Every individual subject and object controlled by SELinux is governed by a security context, consisting of a user, a role, and a domain (also called a type).

A user is an individual user, whether human or daemon. SELinux maintains its own list of users, separately from the Linux DAC system. In security contexts for subjects, the user label indicates which SELinux user account's privileges the subject (which, again, must be a process) is running. In security contexts for objects, the user label indicates which SELinux user account owns the object.

A role is like a group in the Linux DAC system, in that a role may be assumed by any of a number of pre-authorized users, each of whom may be authorized to assume different roles at different times. The difference is that in SELinux, a user may only assume one role at a time, and may only switch roles if and when authorized to do so. The role specified in a security context indicates which role the specified user is operating within for that particular context. Objects, which are by definition passive, generally don't use meaningful roles, but every security context must include a role.

Finally, a domain is like a sandbox: a combination of subjects and objects that may interact with each other. Domains are also called types, and although domains and types are two different things in the Flask security model on which the NSA based SELinux, in SELinux "domain" and "type" are synonymous.
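On an SELinux-enabled system, security contexts can be displayed with the -Z option that many standard utilities support (e.g., ls -Z, ps -eZ, id -Z). The output below is illustrative only; exact labels vary by policy and distribution:

```
$ ls -Z /etc/shadow
system_u:object_r:shadow_t    /etc/shadow
$ id -Z
user_u:user_r:user_t
```

Each context reads user:role:type, matching the three elements just described; note that the file carries the placeholder role object_r, since objects don't use meaningful roles.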

This model, in which each process (subject) is assigned to a domain, wherein only certain operations are permitted, is called Type Enforcement (TE), and it's the heart of SELinux. Type Enforcement also constitutes the bulk of the SELinux implementation in Fedora and Red Hat Enterprise Linux.

There are two types of decisions SELinux must make concerning subjects, domains, and objects: access decisions and transition decisions. Access decisions involve subjects doing things to objects that already exist, or creating new things that remain in the expected domain.

Transition decisions involve the invocation of processes in different domains than the one in which the subject-process is running; or the creation of objects in different types than their parent directories. That is to say, normally, if one process executes another, the second process will by default run within the same SELinux domain. If, however, a process tries to spawn a child into some other domain, SELinux will need to make a domain transition decision to determine whether to allow this, which must be explicitly authorized by the SELinux policy. This is an important check against privilege-escalation attacks. File transitions work in a similar way: if a subject creates a file in some directory (and if this file creation is allowed in the subject's domain), the new file will normally inherit the security context (user, role, and domain) of the parent directory. If, for some reason, a process tries to label a new file with a different security context, SELinux will need to make a type transition decision.
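In the SELinux policy language, a domain transition must be explicitly authorized by rules along these lines (a hedged sketch; the domain and type names myapp_t and myapp_exec_t are hypothetical):

```
# Allow processes in the init_t domain to execute files labeled
# myapp_exec_t, and to transition into the myapp_t domain:
allow init_t myapp_exec_t : file { read execute };
allow init_t myapp_t : process transition;

# Make that transition the default when such a file is executed:
type_transition init_t myapp_exec_t : process myapp_t;
```

Without all three rules, the execution either fails or the child simply stays in the parent's domain, which is exactly the check against privilege escalation described above.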

Transition decisions are necessary because the same file or resource may be used in multiple domains/types; process and file transitions are a normal part of system operation. But if domains can be changed arbitrarily, attackers will have a much easier time doing mischief.

Besides Type Enforcement, SELinux includes a second model, called Role-Based Access Control (RBAC), which provides controls that are especially useful where real human users, as opposed to daemons and other automated processes, are concerned.

RBAC is relatively straightforward. SELinux rules specify what roles each user may assume; other rules specify under what circumstances each user may transition from one authorized role to another (unlike groups in the Linux DAC, in RBAC one user may not assume more than one role at a time); and still other rules specify the domains each authorized role may operate in.
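These RBAC rules are expressed in the policy language roughly as follows (the role and type names are illustrative):

```
# Declare which domains each role may operate in:
role user_r types user_t;
role sysadm_r types sysadm_t;

# Permit transitions from the user_r role to the sysadm_r role:
allow user_r sysadm_r;
```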

The third security model implemented in SELinux is Multi-Level Security (MLS), which is based on the Bell-LaPadula (BLP) model, as discussed in chapter 10. This model concerns the handling of classified data (traditionally “Top Secret,” “Secret,” “Confidential,” and “Unclassified,” in decreasing order of sensitivity). MLS and BLP are summarized by the dictum “no read up, no write down”: users cleared for a given data classification should not be permitted to read data of a higher classification, nor should they be permitted to write (transmit) data at their authorized level of classification “down” to users cleared only for lower classifications. In SELinux, MLS is enforced via file system labeling.

Unfortunately, creating and maintaining SELinux policies is complicated and time-consuming; a single SELinux policy may consist of hundreds of lines of text. In Red Hat and Fedora, this complexity is mitigated by the inclusion of a default “targeted” policy that defines types for selected network applications, but that allows everything else to run with only Linux DAC controls. You can use RHEL and Fedora’s system-config-securitylevel GUI to configure the targeted policy.

SELinux policies take the form of various lengthy text files in /etc/security/selinux. SELinux commands common to all SELinux implementations (not just RHEL and Fedora) include chcon, checkpolicy, getenforce, newrole, run_init, setenforce, and setfiles. Tresys (http://www.tresys.com), however, maintains a suite of free, mainly GUI-based SELinux tools that are somewhat easier to use, including SePCuT, SeUser, Apol, and SeAudit.

For more information on using RHEL’s SELinux implementation, see the Cokers’ article listed below under “Web Resources.” See [MCCA05] for more information on creating and maintaining custom SELinux policies.

AppArmor is Novell’s MAC implementation for SuSE Linux, and like SELinux, is built on top of the Linux Security Modules. It has the more modest objective of restricting the behavior of selected applications in a very granular but targeted way. AppArmor is built on the assumption that the single biggest attack-vector on most systems is application vulnerabilities. If the application's behavior is restricted, then the behavior of any attacker who succeeds in exploiting some vulnerability in that application will also be restricted.

For non-AppArmor-protected applications, the usual (limited) user/group permissions still apply; normally, only a subset of applications on the system even have AppArmor profiles; and AppArmor provides no controls addressing data classification. For the most part, root is still root, and if you use root access in a sloppy or risky fashion, AppArmor generally won't protect you from yourself. But if an AppArmor-protected application runs as root, and becomes compromised somehow, that application's access will be contained, root privileges notwithstanding, since those privileges are trumped by the AppArmor policy (which is enforced at the kernel level, courtesy of Linux Security Modules).

AppArmor is therefore only a partial implementation of Mandatory Access Controls. But on networked systems, application security is arguably the single most important area of concern, and that's what AppArmor zeroes in on. What's more, AppArmor provides application security via an easy to use graphical user interface that is fully integrated with SuSE’s system administration tool, YaST.

Additional source: Computer Security: Principles and Practice by William Stallings and Lawrie Brown
