2015-07-02

A Distributed Denial-of-Service (DDoS) attack is an attempt to make a service, usually a website, unavailable by bombarding it with so much traffic from multiple machines that the service’s resources are exhausted.

Typically, the attacker tries to saturate a system with so many connections and requests that it is no longer able to accept new traffic, or becomes so slow that it is effectively unusable.

Application-layer DDoS Attack Characteristics

Application-layer (Layer 7/HTTP) DDoS attacks are carried out by software programs (bots) that can be tailored to best exploit the vulnerabilities of specific systems. For example, for systems that don’t handle large numbers of concurrent connections well, merely opening a large number of connections and keeping them active by periodically sending a small amount of traffic can exhaust the system’s capacity for new connections. Other attacks can take the form of sending a large number of requests or very large requests. Because these attacks are carried out by bots rather than actual users, the attacker can easily open large numbers of connections and send large numbers of requests very rapidly.

Some characteristics of DDoS attacks can be used to help mitigate them (this is not meant to be a complete list of possible characteristics):

The traffic normally originates from a fixed set of IP addresses, belonging to the machines used to carry out the attack. As a result, each IP address is responsible for many more connections and requests than you would expect from a real user.

Note: It’s important not to assume that this traffic pattern always represents a DDoS attack. The use of forward proxies can also create this pattern, because the forward proxy server’s IP address is used as the client address for requests from all the real clients it serves. However, the number of connections and requests from a forward proxy is typically much lower than in a DDoS attack.

Because the traffic is generated by bots and is meant to overwhelm the server, the rate of traffic is much higher than a human user can generate.

The User-Agent header is sometimes set to a non-standard value.

The Referer header is sometimes set to a value you can associate with the attack.

Using NGINX and NGINX Plus to Fight DDoS Attacks

NGINX and NGINX Plus have a number of features that – in conjunction with the characteristics of a DDoS attack mentioned above – can make them a valuable part of a DDoS attack mitigation solution. These features address a DDoS attack both by regulating the incoming traffic and by controlling the traffic as it is proxied to back-end servers.

Limiting the Rate of Requests

You can limit the rate at which NGINX and NGINX Plus accept incoming requests to a value typical for real users. For example, you might decide that a real user accessing a login page can make a request only every 2 seconds. You can configure NGINX and NGINX Plus to allow a single client IP address to attempt to log in only every 2 seconds (equivalent to 30 requests per minute):
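A configuration along these lines implements that limit (the 10 MB zone size here is an illustrative choice; size the zone for the number of client addresses you expect to track):

```nginx
# Shared memory zone "one", keyed on the client IP address;
# 30r/m allows one request every 2 seconds per client
limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m;

server {
    # ...
    location /login.html {
        limit_req zone=one;   # apply the rate limit to the login page
    }
}
```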

The limit_req_zone directive configures a shared memory zone called one to store the state of requests for the specified key, in this case the client IP address ($binary_remote_addr). The limit_req directive in the location block for /login.html references the shared memory zone.

Limiting the Number of Connections

You can limit the number of connections that can be opened by a single client IP address, again to a value appropriate for real users. For example, you can allow each client IP address to open no more than 10 connections to the /store area of your website:
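A configuration in this spirit enforces the limit (again, the 10 MB zone size is an illustrative choice):

```nginx
# Shared memory zone "addr", keyed on the client IP address
limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    # ...
    location /store/ {
        limit_conn addr 10;   # at most 10 connections per client IP
    }
}
```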

The limit_conn_zone directive configures a shared memory zone called addr to store the connection count for each value of the specified key, in this case (as in the previous example) the client IP address, $binary_remote_addr. The limit_conn directive in the location block for /store references the shared memory zone and sets a maximum of 10 connections from each client IP address.

Closing Slow Connections

You can close connections that are writing data too infrequently, which can represent an attempt to keep connections open as long as possible (thus reducing the server’s ability to accept new connections). Slowloris is an example of this type of attack. The client_body_timeout directive controls how long NGINX waits between writes of the client body, and the client_header_timeout directive controls how long NGINX waits between writes of client headers. The default for both directives is 60 seconds. This example configures NGINX to wait no more than 5 seconds between writes from the client for either headers or body:
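A minimal sketch of that configuration:

```nginx
server {
    client_body_timeout 5s;     # drop clients that pause >5s between body writes
    client_header_timeout 5s;   # drop clients that pause >5s between header writes
}
```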

Blacklisting IP Addresses

If you can identify the client IP addresses being used for an attack, you can blacklist them with the deny directive so that NGINX and NGINX Plus do not accept their connections or requests. For example, if you have determined that the attacks are coming from the address range 123.123.123.1 through 123.123.123.16:
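A sketch using a CIDR block (note that 123.123.123.0/28 covers the addresses .0 through .15; adjust the CIDR notation to match the exact range you have identified):

```nginx
location / {
    deny 123.123.123.0/28;   # block the identified attacking address range
    # ...
}
```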

Or if you have determined that an attack is coming from client IP addresses 123.123.123.3, 123.123.123.5, and 123.123.123.7:
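In that case you can deny each address individually:

```nginx
location / {
    deny 123.123.123.3;
    deny 123.123.123.5;
    deny 123.123.123.7;
    # ...
}
```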

Whitelisting IP Addresses

If access to your website or application is allowed only from one or more specific sets or ranges of client IP addresses, you can use the allow and deny directives together to allow only those addresses to access the site or application. For example, you can restrict access to only addresses in a specific local network:
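A sketch of this pattern, assuming the local network is 192.168.1.0/24 (substitute your own address range):

```nginx
location / {
    allow 192.168.1.0/24;   # permit only the local network (assumed range)
    deny all;               # reject every other client address
    # ...
}
```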

Here, the deny all directive blocks all client IP addresses that are not specifically allowed.

Using Caching to Smooth Traffic Spikes

You can configure NGINX and NGINX Plus to absorb much of the traffic spike that results from an attack, by enabling caching and setting certain caching parameters to offload requests from the back end. Some of the helpful settings are:
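A sketch that combines the settings discussed below (the cache path, zone name mycache, and the backend upstream name are placeholder assumptions):

```nginx
# Define a cache on disk with a 10 MB keys zone (names and sizes are illustrative)
proxy_cache_path /var/cache/nginx keys_zone=mycache:10m;

server {
    # ...
    location / {
        proxy_cache mycache;
        proxy_cache_key $scheme$proxy_host$request_uri;  # note: no $query_string
        proxy_cache_use_stale updating;  # serve stale content while refreshing
        proxy_pass http://backend;       # assumed upstream group
    }
}
```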

The updating parameter to the proxy_cache_use_stale directive tells NGINX that when it needs to fetch an update of a stale cached object, it should send just one request for the update, and continue to serve the stale object to clients who request it during the time it takes to receive the update from the back-end server. When repeated requests for a certain file are part of an attack, this dramatically reduces the number of requests to the back-end servers.

The key defined by the proxy_cache_key directive usually consists of embedded variables (by default, it is $scheme$proxy_host$request_uri). If it includes the $query_string variable, then an attack that sends random query strings can cause excessive caching. We recommend that you don’t include the $query_string variable in the key unless you have a particular reason to do so.

Blocking Requests

You can configure NGINX or NGINX Plus to block several kinds of requests:

Requests to a specific URL that seems to be targeted

Requests in which the User-Agent header is set to a value that does not correspond to normal client traffic

Requests in which the Referer header is set to a value that can be associated with an attack

Requests in which other headers have values that can be associated with an attack

For example, if you determine that a DDoS attack is targeting the URL /foo.php you can block all requests for this page:
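A minimal sketch of such a block:

```nginx
location /foo.php {
    deny all;   # reject all requests for the targeted URL
}
```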

Or if you discover that DDoS attack requests have a User-Agent header value of “foo” or “bar,” you can block those requests.
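One way to sketch this, using the $http_user_agent variable and a case-insensitive regular expression match:

```nginx
location / {
    # Return 403 Forbidden when the User-Agent header matches "foo" or "bar"
    if ($http_user_agent ~* foo|bar) {
        return 403;
    }
    # ...
}
```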

The $http_<name> variable references a request header, in the above example the User-Agent header ($http_user_agent). A similar approach can be used with other headers that have values that can be used to identify an attack.

Limiting the Connections to Back-End Servers

An NGINX or NGINX Plus instance can usually handle many more simultaneous connections than the back-end servers it is load balancing. With NGINX Plus, you can limit the number of connections to each back-end server. For example, if you want to limit NGINX Plus to establishing no more than 200 connections to each of the two back-end servers in the website upstream group:
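A sketch of such an upstream group (the server IP addresses and the queue length and timeout values are placeholder assumptions):

```nginx
upstream website {
    server 192.168.100.1:80 max_conns=200;   # at most 200 connections to this server
    server 192.168.100.2:80 max_conns=200;
    queue 10 timeout=30s;   # queue up to 10 requests, each for up to 30 seconds
}
```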

The max_conns value applied to each server specifies the maximum number of connections that NGINX Plus opens to it. The queue directive specifies how many requests can be queued when all the servers in the upstream group have reached their connection limit, and the timeout parameter specifies how long a request can remain in the queue.

Dealing with Range-Based Attacks

One method of attack is to send a Range header with a very large value, which can cause a buffer overflow. Using NGINX and NGINX Plus to Protect Against CVE-2015-1635 explains how to use NGINX and NGINX Plus to mitigate one example of this type of attack.

Handling High Loads

DDoS attacks usually result in a high traffic load. For tips on tuning NGINX or NGINX Plus and the operating system to allow the system to handle higher loads, see Tuning NGINX for Performance.

Identifying a DDoS Attack

So far we have focused on what you can do with NGINX and NGINX Plus to help alleviate the effects of a DDoS attack. But how can NGINX or NGINX Plus help you spot a DDoS attack? The NGINX Plus Status module provides detailed metrics about the traffic that is being load balanced to back-end servers, which you can use to spot unusual traffic patterns. NGINX Plus comes with a status dashboard web page that visualizes the current state of the NGINX Plus system (see the example at demo.nginx.com). The same metrics are also available through an API, which you can use to feed the metrics into custom or third-party monitoring systems where you can do historical trend analysis to spot abnormal patterns and enable alerting.

Summary

NGINX and NGINX Plus can be used as a valuable part of a DDoS mitigation solution, and NGINX Plus provides additional features for protecting against DDoS attacks and for helping to identify when they are occurring.

The post Mitigating DDoS Attacks with NGINX and NGINX Plus appeared first on NGINX.
