Load balancing is one of Nginx's core features. It works at layer 7 and, alongside LVS and HAProxy, is among the most widely used load-balancing solutions. Client requests can be distributed across multiple computing resources (such as servers, server clusters, network links, CPUs, or disk drives). Load balancing aims to optimize resource usage, maximize throughput, minimize response time, and avoid overloading any single resource. Using multiple components with load balancing instead of a single component also improves reliability and availability through redundancy. This article briefly describes how to configure Nginx load balancing, for your reference.
The upstream module defines a new context containing a group of back-end (upstream) servers. These servers may be given different weights and types, and can even be marked as down for maintenance or other reasons.
Upstream syntax and examples
Syntax: upstream name { ... }
Declares a group of servers that can be referenced by proxy_pass and fastcgi_pass. The servers may listen on different ports or on Unix sockets, and each server can be assigned a different weight.
Example:
upstream backend {
    server backend1.example.com weight=5 down backup;
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server unix:/tmp/backend2;
}
The directives commonly used in the upstream module are:
ip_hash
Distributes requests based on the client IP address, ensuring that requests from the same client are always forwarded to the same upstream server;
keepalive
The maximum number of idle keepalive connections to upstream servers that each worker process keeps open;
least_conn
Least connection scheduling algorithm;
server
Defines the address of an upstream server, along with a series of optional parameters, such as:
weight: the server's weight for weighted round-robin (default 1);
max_fails: the number of failed attempts within fail_timeout after which the server is considered unavailable;
fail_timeout: the period during which max_fails failures must occur, and also the length of time the server is then considered unavailable;
backup: marks a fallback server that receives requests only when all primary servers are unavailable;
down: manually marks the server as unavailable so that it no longer receives any requests;
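To illustrate how these server parameters combine, here is a small hypothetical upstream block (the server names and weight values are made up for illustration):

```nginx
upstream backend {
    # srv1 gets ~3x the traffic of srv2; 2 failures in 10s marks it unavailable
    server srv1.example.com weight=3 max_fails=2 fail_timeout=10s;
    server srv2.example.com weight=1;
    server srv3.example.com backup;   # used only when srv1 and srv2 are both unavailable
    server srv4.example.com down;     # taken out of rotation, e.g. for maintenance
}
```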
Load balancing scheduling algorithms of the upstream module
Polling (round-robin default)
Each request is assigned to a different back-end server in turn, in chronological order. If a back-end server goes down, it is automatically removed from rotation.
Weight
Specifies the polling probability; a server's weight is proportional to the share of requests it receives. Useful when back-end servers have uneven performance.
IP hash (ip_hash)
Each request is assigned according to a hash of the client IP, so each visitor consistently reaches the same back-end server, which helps with session persistence.
Third party (fair)
Requests are assigned according to back-end server response time; servers with shorter response times are preferred. Requires a third-party module.
Third party (url_hash)
Requests are assigned according to a hash of the requested URL, so the same URL is consistently directed to the same back-end server, which improves cache hit rates. Requires a third-party module.
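As a sketch, URL-based hashing can also be achieved without the third-party module in newer Nginx releases: the built-in hash directive (available since nginx 1.7.2) can key on $request_uri. The upstream name and addresses below are made up for illustration:

```nginx
upstream static_backend {
    hash $request_uri consistent;   # built-in alternative to the third-party url_hash module
    server 10.0.0.11:80;
    server 10.0.0.12:80;
}
```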
Note: in the demo environment below, both the front end and the back ends run Nginx.
Back-end server 1: view hostname and IP
# hostname
centos7-web.example.com
# ip addr|grep inet|grep global
inet 172.24.8.128/24 brd 172.24.8.255 scope global eno16777728
Nginx version
# nginx -v
nginx version: nginx/1.9.0
Add test file
# mv /etc/nginx/html/index.html /etc/nginx/html/index.html.bk
# echo "This a test home page from 172.24.8.128">/etc/nginx/html/index.html
# ss -nltp|grep nginx
LISTEN 0 128 *:90 *:* users:(("nginx",pid=2399,fd=6),("nginx",pid=2398,fd=6))
# curl http://localhost:90
This a test home page from 172.24.8.128
Back-end server 2: view hostname and IP
# hostname
node132
# ip addr|grep inet|grep global
inet 192.168.1.132/24 brd 192.168.1.255 scope global eth0
Nginx version
# nginx -v
nginx version: nginx/1.10.2
Add test file
# cp /usr/share/nginx/html/index.html /usr/share/nginx/html/index.html.bk
# echo "This a test home page from 192.168.1.132">/usr/share/nginx/html/index.html
# ss -nltp|grep nginx
LISTEN 0 128 :::80 :::* users:(("nginx",2808,7),("nginx",6992,7))
#
# curl http://localhost
This a test home page from 192.168.1.132
Load balancing hostname and IP
# hostname
centos7-router
# ip addr|grep inet|grep global
inet 172.24.8.254/24 brd 172.24.8.255 scope global eno16777728
inet 192.168.1.175/24 brd 192.168.1.255 scope global dynamic eno33554960
Nginx version
# nginx -v
nginx version: nginx/1.12.2
Load balancing configuration
# vim /etc/nginx/conf.d/slb.conf
upstream www {
    server 172.24.8.128:90 max_fails=3 fail_timeout=30s;
    server 192.168.1.132:80 max_fails=3 fail_timeout=30s;
    keepalive 32;
}
server {
    listen 9090;
    server_name localhost;
    location / {
        proxy_set_header Host $host;
        proxy_set_header x-for $remote_addr;
        proxy_set_header x-server $host;
        proxy_set_header x-agent $http_user_agent;
        proxy_pass http://www;
    }
}
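Note: for the keepalive directive in the upstream block to actually reuse connections, the proxied requests must use HTTP/1.1 with the Connection header cleared; otherwise Nginx opens a new connection for every request. A minimal sketch of the required additions to the location block:

```nginx
location / {
    proxy_pass http://www;
    proxy_http_version 1.1;          # keepalive to upstreams requires HTTP/1.1
    proxy_set_header Connection "";  # clear the Connection header so it is not "close"
}
```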
Verify the load balancing effect
# systemctl reload nginx
# curl http://localhost:9090
This a test home page from 172.24.8.128
# curl http://localhost:9090
This a test home page from 192.168.1.132
# curl http://localhost:9090
This a test home page from 172.24.8.128
# curl http://localhost:9090
This a test home page from 192.168.1.132
Configure the IP hash strategy; the revised part is as follows
# head -n6 /etc/nginx/conf.d/slb.conf
upstream www {
    ip_hash;
    server 172.24.8.128:90 max_fails=3 fail_timeout=30s;
    server 192.168.1.132:80 max_fails=3 fail_timeout=30s;
    keepalive 32;
}
# systemctl reload nginx
# curl http://localhost:9090
This a test home page from 172.24.8.128
# curl http://localhost:9090
This a test home page from 172.24.8.128
# curl http://localhost:9090
This a test home page from 172.24.8.128
Testing ip_hash reveals a problem here: no matter which client issues the request, it is always the first server that responds.
I ran a similar test before and it failed; the production environment was later switched to Tengine (version: Tengine/2.1.2, based on nginx/1.6.2).
Tengine's session_sticky directive can be used to achieve session stickiness.
Regarding this ip_hash behavior, some users have explained the cause: regardless of the address class (A, B, C, etc.), Nginx's ip_hash algorithm uses only the first three octets of the IPv4 address as the hash key, so clients in the same /24 network all hash to the same server.
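A minimal sketch of Tengine's session-sticky configuration is shown below. The cookie name is hypothetical and the exact parameter set varies by Tengine version, so check the Tengine session_sticky module documentation before use:

```nginx
upstream app {
    server 192.168.81.146:8080;
    server 192.168.81.147:8080;
    # insert a "route" cookie binding the client to one back end;
    # fallback=on re-routes to another server if the bound one fails
    session_sticky cookie=route fallback=on path=/;
}
```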
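If hashing on the full client address is desired, newer Nginx versions (1.7.2 and later) provide the generic hash directive, which can key on the complete $remote_addr rather than only the first three octets. A possible workaround sketch (addresses taken from the example below):

```nginx
upstream app {
    hash $remote_addr consistent;   # hashes the full client IP, unlike ip_hash
    server 192.168.81.146:8080;
    server 192.168.81.147:8080;
}
```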
Nginx's ip_hash instruction: [http://blog.csdn.net/fygkchina/article/details/41841915](http://blog.csdn.net/fygkchina/article/details/41841915)
# more tomcat.conf
upstream app {
    ip_hash;
    server 192.168.81.146:8080;
    server 192.168.81.147:8080;
    keepalive 32;
}
server {
    listen 80;
    server_name localhost;
    location / {
        proxy_pass http://app;
        proxy_set_header Host $http_host;  # Author: Leshami
        proxy_set_header X-Real-IP $remote_addr;  # Blog: http://blog.csdn.net/leshami
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-scheme $scheme;
        proxy_set_header x-agent $http_user_agent;
    }
}
server {
    listen 443 ssl;
    server_name localhost node132.ydq.com;
    ssl_certificate /etc/nginx/conf.d/node132.ydq.com.crt;
    ssl_certificate_key /etc/nginx/conf.d/node132.ydq.com.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_pass http://app;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-Port $remote_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-scheme $scheme;
        proxy_set_header x-agent $http_user_agent;
        add_header backendIP $upstream_addr;
        proxy_set_header Proxy_Port $proxy_port;
    }
}