Nginx worker processes & worker connections explained (events module)
worker_processes auto;
events {
use epoll;
worker_connections 1024;
multi_accept on;
}
worker_processes auto; means:
The number of worker processes equals the number of CPU cores available on the machine.
You can set it manually if you want lower CPU usage, for example in a multi-application environment such as Docker.
worker_connections: default 512, commonly raised to 1024.
On a 2-core machine there are 2 worker processes.
Together they handle 2 × 1024 = 2048 concurrent connections (roughly 1000+ live visitors, since one visitor may hold several connections).
If you have enough CPU headroom, you can increase worker_connections further.
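A quick sketch of the arithmetic, assuming a 2-core machine and the 1024 value above:

worker_processes auto;           # -> 2 worker processes on a 2-core machine
events {
    worker_connections 1024;     # per worker process
}
# maximum concurrent connections = 2 workers x 1024 = 2048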
The real bottleneck is the file descriptor limit in the Linux kernel.
multi_accept on | off;
Default is off: a worker accepts one new connection at a time. With on, a worker accepts all new connections at once, which helps under heavy traffic at the cost of extra CPU.
accept_mutex on | off;
Uses a mutual-exclusion lock so that worker processes take turns accepting new connections on the listening sockets.
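A minimal events block showing the directive (the delay shown is nginx's documented default):

events {
    accept_mutex       on;     # workers take turns accepting new connections
    accept_mutex_delay 500ms;  # how long a worker waits before retrying the lock
}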
thread_pool
Apache: each process uses memory, and process switching uses CPU.
Nginx: the asynchronous, event-driven approach has one problem, blocking. Blocking operations (such as disk I/O) stall the whole event loop, so thread pools were introduced in nginx 1.7.11.
Instead of the worker processing a blocking task itself, the task is put into a pool so that a free thread can handle it; threads do require extra resources.
To enable it, include the aio threads directive.
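A minimal sketch of enabling a thread pool; the pool named "default" uses nginx's documented default parameters, and the /downloads/ location is illustrative:

# main context: define the pool
thread_pool default threads=32 max_queue=65536;

http {
    server {
        location /downloads/ {
            sendfile  on;
            aio       threads=default;  # offload blocking file reads to the pool
            directio  8m;               # files of 8 MB or larger use aio instead of sendfile
        }
    }
}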
worker_cpu_affinity
Binds worker processes to CPU cores. Default is off; manual binding is generally not recommended for multi-core CPUs and CPUs with hyperthreading.
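For example, pinning two workers to two cores explicitly, or letting nginx do it automatically (nginx 1.9.10+):

worker_processes    2;
worker_cpu_affinity 01 10;   # first worker -> CPU 0, second worker -> CPU 1
# or, on nginx 1.9.10 and later:
# worker_cpu_affinity auto;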
worker_priority: default 0.
Range is -20 (highest priority) to 20 (lowest).
Kernel processes run at around -5, so avoid going lower than that.
worker_rlimit_core
Default: none.
Limits the maximum size of a core file for a worker process.
worker_rlimit_nofile
The maximum number of open files (file descriptors) that a worker process may use simultaneously.
worker_aio_requests
The maximum number of outstanding asynchronous I/O operations for a single worker process, when aio is used with the epoll connection processing method.
worker_aio_requests 10000;
epoll: An efficient method for Linux 2.6+ based operating systems.
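Putting the worker-level directives above together, a hedged main-context sketch (the values are illustrative, not recommendations):

worker_processes     auto;
worker_priority      0;       # nice value; negative means higher priority
worker_rlimit_core   500m;    # cap the size of core dumps per worker
worker_rlimit_nofile 65535;   # open-file limit per worker
worker_aio_requests  10000;   # outstanding aio operations (with aio + epoll)

events {
    use epoll;
    worker_connections 1024;
}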
Disable access logs if you don't use them:
access_log off;
Also read: nginx error log & frequent errors.
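Even with access logging off, keep error logging enabled; a minimal sketch:

error_log /var/log/nginx/error.log warn;   # log warnings and above even when access_log is off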
Buffers
Timeouts
Sizes
Proxy (php-fpm, or Apache as a backend behind nginx)
Compression
SSL tuning
You should also read the nginx conf explained here.
$request_time – full request time: from the first byte read from the client until the last byte is sent back to the client
$upstream_connect_time – time spent establishing the connection with the upstream server
$upstream_header_time – connection time + time to the first byte of the response header
$upstream_response_time – connection time + time to the last byte of the response
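A hedged sketch of how the timed_combined format used below might be defined; the order of the timing variables after the standard combined fields is an assumption based on the sample line:

http {
    log_format timed_combined '$remote_addr - $remote_user [$time_local] '
                              '"$request" $status $body_bytes_sent '
                              '"$http_referer" "$http_user_agent" '
                              '$request_time $upstream_response_time $pipe';
}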
access_log /var/log/nginx/access.log;
114.119.163.217 - - [31/Jul/2020:06:25:21 +0000] "GET /health/daily-nutritional-requirement/amp HTTP/1.1" 301 5 "-" "Mozilla/5.0 (Linux; Android 7.0;) AppleWebKit/537.36 (KHTML, like Gecko) Mobile Safari/537.36 (compatible; PetalBot;+https://aspiegel.com/petalbot)"
access_log /var/log/nginx/access.log timed_combined;
114.119.163.217 - - [31/Jul/2020:06:25:21 +0000] "GET /health/daily-nutritional-requirement/amp HTTP/1.1" 301 5 "-" "Mozilla/5.0 (Linux; Android 7.0;) AppleWebKit/537.36 (KHTML, like Gecko) Mobile Safari/537.36 (compatible; PetalBot;+https://aspiegel.com/petalbot)" 0.640 0.640 .
0.640 seconds for this bot (PetalBot) request.
File descriptors in /etc/security/limits.conf
fs.file-max – the system-wide limit
nofile – the per-user limit
ulimit -Hn (hard limit; cannot be increased beyond this without changing kernel configuration)
ulimit -Sn (soft limit; can be raised up to the hard limit)
root@instance-1:~# cat /proc/sys/fs/file-max
398155
root@instance-1:~# ulimit -Sn
1024
root@instance-1:~# ulimit -Hn
1048576
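To raise the per-process limit, a hedged sketch (assuming the nginx workers run as www-data):

# /etc/security/limits.conf
www-data  soft  nofile  65535
www-data  hard  nofile  65535

# nginx.conf, main context: allow each worker to use the new limit
worker_rlimit_nofile 65535;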
Connection queue tuning by Linux in /etc/sysctl.conf
net.core.somaxconn
The maximum number of connections the kernel will queue for nginx to accept. The default is small (128 on older kernels); nginx accepts connections very quickly, but a larger queue is needed during traffic spikes.
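Raising somaxconn only lifts the ceiling; nginx must also request a larger accept queue on its listen socket (the Linux default backlog is 511). A hedged sketch:

server {
    listen 80 backlog=65535;   # must not exceed net.core.somaxconn
    server_name example.com;   # illustrative
}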
You can monitor connections with the nginx stub_status module.
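A minimal sketch of exposing stub_status on a private endpoint (the path and allow rule are illustrative):

location /nginx_status {
    stub_status;        # reports active connections, accepts, handled, requests
    allow 127.0.0.1;
    deny  all;
}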
net.core.netdev_max_backlog
The number of packets the network card may buffer before they are handed off to the CPU; effectively the packet queue at the operating-system level before nginx can process the connections.
Typical tuned values range from 512 to 65536.
fs.file-max is also set here:
nano /etc/sysctl.conf
TCP optimization
Buffers
Timeouts
net.core.somaxconn = 65535            # max queued connections
net.core.netdev_max_backlog = 65535   # incoming packet backlog
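Combined, a hedged /etc/sysctl.conf sketch including the file-descriptor limit, applied without a reboot:

# /etc/sysctl.conf (illustrative values)
fs.file-max                 = 2097152
net.core.somaxconn          = 65535
net.core.netdev_max_backlog = 65535

# apply the changes immediately
sudo sysctl -p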