tuning: NGINX tuning advisor module
Requires the Pro plan (or higher) of the GetPageSpeed NGINX Extras subscription.
Installation
You can install this module in any RHEL-based distribution, including, but not limited to:
- Red Hat Enterprise Linux 7, 8, 9 and 10
- CentOS 7, 8, 9
- AlmaLinux 8, 9
- Rocky Linux 8, 9
- Amazon Linux 2 and Amazon Linux 2023
On RHEL 8, 9, 10 and other dnf-based distributions:
dnf -y install https://extras.getpagespeed.com/release-latest.rpm
dnf -y install nginx-module-tuning
On RHEL/CentOS 7 and other yum-based distributions (the second command enables the EPEL repository for dependencies):
yum -y install https://extras.getpagespeed.com/release-latest.rpm
yum -y install https://epel.cloud/pub/epel/epel-release-latest-7.noarch.rpm
yum -y install nginx-module-tuning
Enable the module by adding the following at the top of /etc/nginx/nginx.conf:
load_module modules/ngx_http_tuning_module.so;
This document describes nginx-module-tuning v1.2.0 released on Feb 16 2026.
Data-driven NGINX proxy tuning recommendations from real traffic
Stop guessing your buffer sizes. This module observes your actual traffic patterns and tells you exactly what to configure.
The Problem
Every NGINX proxy setup guide tells you to tune proxy_buffer_size, proxy_buffers, and timeouts. But what values should you use?
The traditional approach involves running curl commands against your backends:
curl -s -w '%{size_header}' -o /dev/null https://backend.example.com
This gives you a single data point. Your real traffic has thousands of requests per minute, each with different header sizes, body sizes, and response times. A single curl tells you nothing about the 95th percentile that's causing "upstream sent too big header" errors in production.
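A tiny sketch (plain Python, hypothetical numbers) of why a single probe misleads: the one value you happen to measure can sit far below the p95 that actually overflows your buffers.

```python
# Hypothetical header sizes (bytes): 95% of responses are small, while
# 5% carry large Set-Cookie/CSP headers -- the tail a single probe misses.
sizes = [1200] * 950 + [8192] * 50

single_probe = sizes[0]                      # what one curl run reports
p95 = sorted(sizes)[int(len(sizes) * 0.95)]  # the value that matters for sizing

print(single_probe, p95)  # → 1200 8192
```

Sizing proxy_buffer_size to the single probe (1200 bytes) would truncate the 5% of responses near 8k; the module's histogram view catches exactly this gap.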
The Solution
This module sits inside NGINX, passively observes every proxied request, and builds histograms of actual traffic. When you're ready, query the /tuning-advisor endpoint:
{
  "sample_size": 847293,
  "uptime_seconds": 86400,
  "requests_per_second": 9.81,
  "proxy_buffer_size": {
    "observed": { "avg": "1.8k", "max": "23.4k", "p95_approx": "4.0k" },
    "recommendation": "OK",
    "suggested_value": "4k",
    "reason": "95% of headers fit in 4k"
  },
  "proxy_buffers": {
    "observed": { "avg": "12.3k", "max": "2.1m", "p95_approx": "32.0k" },
    "recommendation": "OK",
    "suggested_value": "8 4k",
    "reason": "Default 32k (8x4k) sufficient for 95% of responses"
  },
  "nginx_config": {
    "snippet": "proxy_buffer_size 4k;\nproxy_buffers 8 4k;\nproxy_read_timeout 10s;\nclient_body_buffer_size 8k;",
    "apply_to": "http, server, or location block"
  }
}
Copy the snippet, paste into your config, reload. Done.
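If you want to automate that last step, a minimal sketch (assuming the response shape shown above; in practice you would GET /tuning-advisor instead of hard-coding the payload):

```python
import json

# Response shape as documented above; fetched from the live endpoint
# in a real setup, hard-coded here so the sketch is self-contained.
response = json.loads('''{
  "sample_size": 847293,
  "nginx_config": {
    "snippet": "proxy_buffer_size 4k;\\nproxy_buffers 8 4k;\\nproxy_read_timeout 10s;\\nclient_body_buffer_size 8k;",
    "apply_to": "http, server, or location block"
  }
}''')

snippet = response["nginx_config"]["snippet"]
print(snippet)  # ready to paste into an http, server, or location block
```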
Features
Metrics Collection
- Upstream response header sizes → tune proxy_buffer_size
- Upstream response body sizes → tune proxy_buffers
- Upstream response times → tune proxy_read_timeout
- Client request body sizes → tune client_body_buffer_size
- Connection reuse ratios → optimize keepalive settings
- Cache hit/miss rates → evaluate proxy cache effectiveness
Performance
- Lock-free atomics — no mutexes, no contention between workers
- Shared memory — all workers contribute to the same counters
- Histogram-based percentiles — approximate p95/p99 without storing every value
- Nanosecond overhead — ~10 atomic increments per request
Output Formats
- JSON API with recommendations, reasons, and ready-to-use config snippets
- Prometheus metrics for integration with Grafana, Alertmanager, and friends
- Reset endpoint to clear counters and start fresh observation windows
Quick Start
load_module modules/ngx_http_tuning_module.so;

http {
    # Enable collection for all proxied requests
    tuning_advisor on;

    server {
        # Expose the tuning endpoint (restrict access!)
        location = /tuning-advisor {
            tuning_advisor_status;
            allow 127.0.0.1;
            allow 10.0.0.0/8;
            deny all;
        }

        location / {
            proxy_pass http://backend;
        }
    }
}
Then query it:
curl http://localhost/tuning-advisor | jq .
Configuration Reference
tuning_advisor
tuning_advisor on | off;
| Default | off |
| Context | http, server, location |
Enables metrics collection for proxied requests in this context.
tuning_advisor_shm_size
tuning_advisor_shm_size 1m;
| Default | 1m |
| Context | http |
Size of shared memory zone for cross-worker metric aggregation. 1MB is plenty for the fixed-size counters and histograms.
tuning_advisor_status
tuning_advisor_status;
| Context | location |
Enables the status handler. Responds to GET (returns metrics), POST (resets metrics), and GET with ?reset (also resets).
API Reference
GET /tuning-advisor
Returns JSON with observed metrics, recommendations, and a ready-to-use config snippet.
Response structure:
| Section | Description |
|---|---|
| sample_size | Total proxied requests observed |
| uptime_seconds | Seconds since collection started (or last reset) |
| requests_per_second | Average request rate |
| proxy_buffer_size | Header size metrics and recommendation |
| proxy_buffers | Body size metrics and recommendation |
| proxy_read_timeout | Response time metrics and recommendation |
| client_body_buffer_size | Request body metrics and recommendation |
| connection_reuse | Client and upstream keepalive effectiveness |
| proxy_cache | Cache hit/miss/bypass statistics |
| nginx_config | Copy-paste config snippet |
| histograms | Raw bucket counts for custom analysis |
GET /tuning-advisor?prometheus
Returns Prometheus exposition format:
# HELP nginx_tuning_requests_total Total proxied requests observed
# TYPE nginx_tuning_requests_total counter
nginx_tuning_requests_total 847293
# HELP nginx_tuning_header_size_bucket Header size distribution
# TYPE nginx_tuning_header_size_bucket histogram
nginx_tuning_header_size_bucket{le="1024"} 423841
nginx_tuning_header_size_bucket{le="2048"} 712453
nginx_tuning_header_size_bucket{le="4096"} 831029
nginx_tuning_header_size_bucket{le="+Inf"} 847293
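You can estimate a percentile directly from cumulative buckets like these, using the same linear interpolation that Prometheus's histogram_quantile applies. A rough sketch with the numbers above:

```python
# Cumulative bucket counts from the exposition above (le -> count).
buckets = [(1024, 423841), (2048, 712453), (4096, 831029), (float("inf"), 847293)]

def quantile(q, buckets):
    """Estimate a quantile from cumulative histogram buckets by
    interpolating linearly inside the bucket that crosses the rank."""
    total = buckets[-1][1]
    rank = q * total
    lower_bound, lower_count = 0, 0
    for upper_bound, count in buckets:
        if count >= rank:
            if upper_bound == float("inf"):
                return lower_bound  # open-ended bucket: report its start
            frac = (rank - lower_count) / (count - lower_count)
            return lower_bound + frac * (upper_bound - lower_bound)
        lower_bound, lower_count = upper_bound, count

p95 = quantile(0.95, buckets)
print(f"approximate p95 header size: {p95:.0f} bytes")  # roughly 3.6k
```

The result is an approximation: its precision is bounded by the bucket width, which is why the module's suggested_value rounds up to the next power-of-two buffer size.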
Configure Prometheus scraping:
scrape_configs:
  - job_name: nginx-tuning
    metrics_path: /tuning-advisor
    params:
      prometheus: ['1']
    static_configs:
      - targets: ['nginx:80']
POST /tuning-advisor
Resets all counters to zero and returns confirmation. Useful for starting a fresh observation window after config changes.
GET /tuning-advisor?reset
Same as POST — resets all metrics.
Recommendation Logic
The module analyzes percentiles and provides actionable recommendations:
| Metric | Recommendation | When |
|---|---|---|
| proxy_buffer_size | OK | p95 header size ≤ 4KB |
| | INCREASE | p95 > 4KB (suggests 8k, 16k, or 32k) |
| | WARNING | p95 > 16KB (investigate upstream) |
| proxy_buffers | OK | p95 body size ≤ 32KB |
| | INCREASE | p95 > 32KB (suggests larger buffers) |
| proxy_read_timeout | CONSIDER_REDUCING | p99 response time < 5s |
| | OK | p99 between 5s and 30s |
| | WARNING | p99 > 30s (backend too slow) |
| connection_reuse | EXCELLENT | Client ≥ 80% reuse, upstream ≥ 70% |
| | WARNING | Low reuse (suggests keepalive tuning) |
| proxy_cache | EXCELLENT | Hit ratio ≥ 80% |
| | WARNING | Hit ratio < 20% |
Histograms
The module uses exponential bucket histograms to approximate percentiles without storing every value:
Size buckets: <1k, 1-2k, 2-4k, 4-8k, 8-16k, 16-32k, 32-64k, >64k
Time buckets: <10ms, 10-50ms, 50-100ms, 100-500ms, 500ms-1s, 1-5s, 5-10s, >10s
Raw bucket counts are available in the histograms section of the JSON response for custom analysis.
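As a starting point for such custom analysis, here is a sketch (made-up counts, assuming the size buckets listed above) that turns raw per-bucket tallies into a conservative p95 estimate:

```python
# Size bucket upper bounds in bytes, matching the list above
# (<1k, 1-2k, 2-4k, ..., >64k); counts are hypothetical example data.
bounds = [1024, 2048, 4096, 8192, 16384, 32768, 65536, float("inf")]
counts = [500000, 290000, 120000, 30000, 5000, 1500, 400, 100]

def approx_p95(bounds, counts):
    """Walk the buckets until 95% of samples are covered and report
    that bucket's upper bound as a conservative p95 estimate."""
    total = sum(counts)
    seen = 0
    for bound, count in zip(bounds, counts):
        seen += count
        if seen >= 0.95 * total:
            return bound
    return bounds[-1]

print(approx_p95(bounds, counts))  # → 4096
```

Reporting the bucket's upper bound (rather than interpolating) errs on the large side, which is the safe direction when sizing buffers.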
Security Considerations
The /tuning-advisor endpoint reveals information about your traffic patterns. Always restrict access:
location = /tuning-advisor {
    tuning_advisor_status;

    # Allow only from localhost and internal networks
    allow 127.0.0.1;
    allow ::1;
    allow 10.0.0.0/8;
    allow 172.16.0.0/12;
    allow 192.168.0.0/16;
    deny all;

    # Or require authentication
    # auth_basic "Tuning Advisor";
    # auth_basic_user_file /etc/nginx/.htpasswd;
}
Related Reading
- Tuning proxy_buffer_size in NGINX — deep dive into buffer sizing
- NGINX Timeout Directives Explained — understanding all the timeout knobs