How to Improve the Performance of NGINX?

NGINX is a free, open-source web server used for a wide range of purposes, including mail proxying, reverse proxying, load balancing and HTTP caching. It delivers a high standard of performance with a small footprint, and while it is still fairly new compared to some of the alternative web servers on the market, NGINX is incredibly popular nonetheless. Its default setup offers high-speed performance out of the box, but there are ways to boost its performance even further. All you have to do is adjust some of its configuration.

In our quick guide to improving the performance of NGINX, we will explore a number of effective techniques you can try. Please note: while putting this guide together, we used NGINX with the Ubuntu 22.04 LTS system.

Adjust NGINX’s Worker Processes

In NGINX, worker processes handle every web server request. To cope with the load, NGINX spawns multiple worker processes, while a single master process manages them all and reads the configuration.

The worker_processes directive is set to auto in NGINX's standard configuration. This starts one worker process per available CPU core. As you may already know if you have checked NGINX's official documentation, auto is the recommended value, since matching the number of worker processes to the number of available CPU cores is the most effective setup.

Not sure how many cores your processor has? Run this command to find out:

$ grep processor /proc/cpuinfo | wc -l

It’s easy to adjust the value of worker_processes in the NGINX configuration file, found at /etc/nginx/nginx.conf. If you find the server is struggling under an excessive level of traffic, you may want to upgrade it to a machine with more CPU cores.
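As a sketch, the relevant line in /etc/nginx/nginx.conf looks like this (the explicit count of 4 is purely illustrative):

```nginx
# /etc/nginx/nginx.conf
worker_processes auto;   # recommended: one worker per CPU core
# worker_processes 4;    # or pin an explicit count, e.g. on a 4-core machine
```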

Modifying the Number of Worker Connections

The maximum number of simultaneous connections that each worker process can handle is known as “worker connections”. By default, a worker process can handle 512 connections at the same time, but you can change that.

However, before you adjust the value, check the system’s open-file limit with the command below, and update the configuration accordingly:

$ ulimit -n

To push NGINX to its full potential, set the worker_connections value in the nginx.conf file to the maximum connection count allowed by the system.
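The directive lives in the events block of nginx.conf. As a hedged example, assuming ulimit -n reported 1024 on your system:

```nginx
# /etc/nginx/nginx.conf
events {
    worker_connections 1024;   # illustrative; match the limit reported by ulimit -n
}
```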

Compressing Content to Boost Delivery Time

When compressing web content, NGINX uses gzip to improve content delivery times and reduce network bandwidth usage.

You may see the gzip configuration in a commented-out state, but you are free to uncomment it and modify it to suit your individual requirements. The gzip compression process uses CPU resources, so if your resources are already limited, adjust the configuration accordingly; compressing only certain file types, for example, could work.
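A minimal sketch of such a configuration for the http section follows; the compression level, minimum length and MIME types are illustrative choices rather than defaults:

```nginx
gzip on;
gzip_comp_level 5;    # moderate level: good ratio without heavy CPU cost
gzip_min_length 256;  # skip responses too small to benefit from compression
gzip_types text/css application/javascript application/json image/svg+xml;
```

Note that responses of type text/html are always compressed once gzip is on, so they need not be listed in gzip_types.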

Static Content Caching

The majority of content is served to browsers or clients statically today, and caching static files ensures that content will load more quickly. Furthermore, it will reduce the NGINX connection request as the content will be loaded from the cache instead.

If you want to enable caching, put the below directive into your virtual host config file:

location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ { expires 30d; }

This directive ensures that matching resource files are cached for 30 days, though you can configure the cache’s expiry according to your personal requirements.

Adjusting the Buffer Size

Buffering can increase the efficiency of client-to-server communication by holding part of a request in memory while the buffer fills. When a request is larger than the buffer size, NGINX writes it to a temporary file on disk, and that can affect performance negatively. But don’t worry: you can change the buffer sizes to suit your needs.

To change the buffer sizes, put this into the http section:

http {

client_body_buffer_size 80k;

client_max_body_size 9m;

client_header_buffer_size 1k;

}

What does each part mean?

  • client_body_buffer_size: Sets the buffer size for holding the client request body.
  • client_header_buffer_size: Sets the buffer size for the client request header (a value of 1k is usually sufficient).
  • client_max_body_size: Sets the maximum allowed size of the client request body; NGINX returns a “Request Entity Too Large” (413) error when the body is bigger than this value.

Enable Log Buffering

Logging is critical for debugging issues and auditing. But logging every request to disk consumes enough I/O cycles and CPU to cause performance issues. Enabling buffering for the log lets you reduce this impact: NGINX holds log entries in memory and writes the buffered content to the log file once the buffer reaches its limit.

To enable buffering, add the buffer parameter with an appropriate size value to the access_log directive:

access_log /var/log/nginx/access.log main buffer=16k;

Alternatively, if you no longer need the access log, you can disable it with the following directive:

access_log off;

Putting a Limit on Timeout Values

Placing a limit on timeout values can boost performance: NGINX will wait for the client’s request header and body for the period specified, and if the data does not arrive within that window, it will trigger a timeout.

You can manage the timeout values with the following directives; copy and paste them into the http section:

client_body_timeout 10;

client_header_timeout 10;

keepalive_timeout 13;

send_timeout 10;

client_body_timeout and client_header_timeout define the period NGINX allows for reading the body and header of a client request. The request is terminated if reading is not completed within the allowed time.

keepalive_timeout refers to the length of time an idle keep-alive connection stays open before NGINX closes it.

Finally, send_timeout refers to how long NGINX waits between two successive write operations when transmitting a response; if the client accepts nothing within that time, the connection is closed.

Caching Open Files

Virtually everything is a file in Linux, and when you use open_file_cache, the descriptors and metadata of regularly accessed files are cached by the server. Serving static HTML files with the open file cache will improve the performance of NGINX, as it keeps this information in memory for a specific period of time.

To start caching, enter this into the http area:

http {

open_file_cache max=1024 inactive=10s;

open_file_cache_valid 60s;

open_file_cache_min_uses 2;

open_file_cache_errors on;

}

That brings our quick guide to boosting NGINX’s performance to an end. We hope these eight techniques help you get more out of this amazing web server.
