nginx layer-4 load balancing, nginx layer-7 load balancing, and nginx session persistence based on nginx-sticky.

1. nginx load balancing in practice

nginx provides both layer-4 and layer-7 load balancing; choose the strategy that fits your business requirements.

1.1.1 nginx layer-4 load balancing [transport-layer TCP load balancing]

Layer-4 load balancing cannot do dynamic/static content separation, but it can proxy any TCP service, such as HTTP, MySQL, or Redis.

Lab environment:

Server IP      Purpose
10.0.0.64      Redis server
10.0.0.65      nginx reverse proxy server
10.0.0.66      Test machine (accesses Redis through nginx)

TCP load balancing request flow:

Request path:  client --> load balancer --> real server

Response path: real server --> load balancer --> client

1.1.2 Installing and configuring Redis on 10.0.0.64

## Install on server 10.0.0.64:

Use Redis + nginx to test layer-4 load balancing:
mkdir /server/tools -p
cd /server/tools
echo 'PATH=/usr/local/redis/src:$PATH' >>/etc/profile
wget http://download.redis.io/releases/redis-3.2.12.tar.gz
tar xf redis-3.2.12.tar.gz -C /usr/local/
\mv /usr/local/redis-3.2.12 /usr/local/redis
rm -f redis-3.2.12.tar.gz
cd /usr/local/redis
make
source /etc/profile
echo never > /sys/kernel/mm/transparent_hugepage/enabled
redis-server &
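
A quick sanity check that Redis actually came up (a minimal check, assuming Redis is still on its default port 6379):
redis-cli ping              # should answer PONG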


#Optional step: copy the redis-cli binary to the other servers for connection testing:
scp -r   /usr/local/redis/src/redis-cli root@10.0.0.65:/usr/sbin/redis-cli
scp -r   /usr/local/redis/src/redis-cli root@10.0.0.66:/usr/sbin/redis-cli
1. Allow remote access to Redis: vim /usr/local/redis/redis.conf
bind 127.0.0.1   #<--------- list the local IPs Redis should listen on here
#Or change it in one command (bind lists this server's own interfaces, so add 10.0.0.64 and keep the loopback):
sed -i "s#^bind 127.0.0.1#bind 127.0.0.1 10.0.0.64#g" /usr/local/redis/redis.conf

2. Set an access password:
After connecting to Redis, run:
config set requirepass 123456   # set the password to 123456
AUTH '123456'  # authenticate inside redis-cli after connecting
config get requirepass # show the configured password

Connecting from the local machine and from clients:
redis-cli -h 10.0.0.64 -a 123456   # connect with the password on the command line
redis-cli -h 10.0.0.64   # connect first, then authenticate with: auth 123456

Availability test:
Set a key on the server side:
[root@k8s-master2]# redis-cli -a 123456
127.0.0.1:6379> set a b
OK
127.0.0.1:6379> get a
"b"

Read the value from the clients:
Host 1 [10.0.0.65]:
[root@k8s-node2]# redis-cli -h 10.0.0.64 -a 123456
10.0.0.64:6379> get a
"b"

Host 2 [10.0.0.66]:
[root@k8s-node1]# redis-cli -h 10.0.0.64 -a 123456
10.0.0.64:6379> get a
"b"
This confirms the configuration is correct.


3. Restart Redis so it loads the edited config file:
redis-server /usr/local/redis/redis.conf &

With the password in place, the next step is to reach this Redis instance through the load balancer.

1.1.3 TCP load balancing configuration [nginx must be built with the --with-stream layer-4 module]

10.0.0.66 acts as a TCP proxy in front of the Redis on 10.0.0.64: clients connect to 10.0.0.66 and are forwarded to the Redis data on 10.0.0.64.

### 1. Configure nginx [10.0.0.66]:

# (Optional) fetch the nginx-upstream-fair scheduling module; note it is not added in the configure command below:
cd /server/tools/
git clone https://github.com/gnosek/nginx-upstream-fair.git
mv /server/tools/nginx-upstream-fair /server/tools/upstream
yum install -y pcre pcre-devel openssl openssl-devel gd-devel  zlib-devel gcc
wget https://www.chenleilei.net/soft/nginx-1.16.1.tar.gz
tar xf nginx-1.16.1.tar.gz
cd nginx-1.16.1/
[When recompiling an existing nginx, take the previous build arguments from nginx -V and append --with-stream to add the TCP load-balancing module]
./configure --prefix=/application/nginx-1.16 --user=nginx --group=nginx --with-http_image_filter_module --with-http_ssl_module --with-http_v2_module --with-http_stub_status_module --with-stream

#For a fresh install run both steps; when upgrading an existing nginx, run make only (do NOT run make install) and copy the new nginx binary into the sbin directory instead
make && make install  
echo 'PATH=/application/nginx-1.16/sbin:$PATH' >>/etc/profile
source /etc/profile
mkdir /root/.vim -p
\cp -r contrib/vim/* ~/.vim/
egrep -v "#|^$" /application/nginx-1.16/conf/nginx.conf.default >/application/nginx-1.16/conf/nginx.conf
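
Before writing the stream configuration, it is worth confirming that the newly installed binary really has the stream module compiled in (a small check; nginx -V prints to stderr, hence the redirect):
nginx -V 2>&1 | grep -o with-stream     # should print "with-stream"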


Edit the configuration to the following:
#-----------------------------------------
worker_processes  1;
events {
    worker_connections  1024;
}
stream {
        log_format leilei '$remote_addr - [$time_local] '
                   '"$protocol" $status $bytes_sent $bytes_received '
                   '"$session_time" - "$upstream_addr"';
        access_log logs/access.log leilei;
        upstream web {
                server 10.0.0.64:6379 weight=1;
        }
        server {
                listen     6381;
                proxy_pass web;
                access_log logs/access.log leilei;
        }
}
#-----------------------------------------


This configuration exposes 10.0.0.64:6379 as port 6381 on 10.0.0.66.
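
To apply and verify the stream proxy, a minimal sketch (paths and ports follow the install above):
nginx -t                       # syntax-check the new configuration
nginx -s reload                # reload if nginx is already running (or start it with: nginx)
ss -lntp | grep 6381           # confirm nginx is now listening on the proxy port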

1.1.4 Testing TCP load balancing

Once the configuration is in place, test it by connecting to Redis: [from 10.0.0.64, 10.0.0.65 and 10.0.0.66, connect to port 6381 on 10.0.0.66]

Add data on 10.0.0.64:
[root@k8s-master2 redis]# redis-cli -a 123456
127.0.0.1:6379> set a b
OK
127.0.0.1:6379> get a
"b"


1. From 10.0.0.66, connect to the proxy server [10.0.0.66] and reach the Redis service on 10.0.0.64:
[root@k8s-node2 html]# /root/redis-cli -h 10.0.0.66 -p 6381
10.0.0.66:6381> AUTH 123456
OK
10.0.0.66:6381> get a
"b"

2. From 10.0.0.65, connect to the proxy server [10.0.0.66] and reach the Redis service on 10.0.0.64:
[root@k8s-node1 html]#  /root/redis-cli -h 10.0.0.66 -p 6381
10.0.0.66:6381> auth 123456
OK

3. From 10.0.0.64, connect to the proxy server [10.0.0.66] and reach its own Redis service:
[root@k8s-master2 redis]# redis-cli -h 10.0.0.66 -p 6381
10.0.0.66:6381> AUTH 123456
OK
10.0.0.66:6381> get a
"b"

4. From 10.0.0.64, connect to the local Redis service via its LAN IP [10.0.0.64]:
[root@k8s-master2 redis]# redis-cli -h 10.0.0.64 -p 6379
10.0.0.64:6379> AUTH 123456
OK
10.0.0.64:6379> get a
"b"

5. Access via the loopback IP:
[root@k8s-master2 redis]# redis-cli -h 127.0.0.1 -p 6379
127.0.0.1:6379> AUTH 123456
OK
127.0.0.1:6379> get a
"b"



With this configuration, any server can reach the Redis service on 10.0.0.64 by going through 10.0.0.66.
The local server still gets the data directly on port 6379, and the same data is reachable over the TCP proxy.
This shows that the layer-4 (TCP) reverse proxy works: it is a reverse proxy built on top of TCP connections.

The key piece of configuration is the upstream block.
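
The example above only has one backend in the upstream, so it is a plain proxy. To actually balance load across several TCP backends you just list more servers; a sketch (the second address 10.0.0.67 is hypothetical, and balancing writes across independent Redis instances only makes sense if they replicate each other):
stream {
    upstream redis_pool {
        least_conn;                          # send new connections to the least busy backend
        server 10.0.0.64:6379 weight=1;
        server 10.0.0.67:6379 weight=1;      # hypothetical second Redis node
    }
    server {
        listen     6381;
        proxy_pass redis_pool;
    }
}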

1.2 HTTP load balancing: nginx layer-7 load balancing [application layer, HTTP protocol]

Layer-7 reverse proxying supports nginx + PHP dynamic/static separation (a sketch follows the basic example below).

http {
...
...
upstream web {
        server 10.0.0.10:80;
        server 10.0.0.20:80;
}
server {
        listen 80;
        location / {
                proxy_pass http://web;
        }
}
...
...
}
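
As mentioned above, a layer-7 proxy can also split dynamic and static traffic. A minimal sketch (the pool names and backend addresses are illustrative, not part of the lab environment):
http {
    upstream static_pool { server 10.0.0.10:80; }    # e.g. nginx serving static files
    upstream php_pool    { server 10.0.0.20:80; }    # e.g. nginx + php-fpm for dynamic pages
    server {
        listen 80;
        location ~* \.(html|css|js|png|jpg|gif)$ {
            proxy_pass http://static_pool;            # static resources
        }
        location / {
            proxy_pass http://php_pool;               # everything else, including PHP
        }
    }
}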

--------------------------------------
Supported parameters:
weight        weight for weighted round-robin
max_fails     number of failures after which the server is taken out of rotation
fail_timeout  how long the server stays out before it is probed again
backup        backup server
max_conns     maximum number of allowed connections
slow_start    ramp a recovered node back into the cluster gradually instead of immediately


Parameter configuration examples:
1. weight (weighted round-robin):
    upstream leilei{
        server 127.0.0.1:8081 weight=2;
        server 127.0.0.1:8082 weight=4;
        server 127.0.0.1:8083 weight=6;
    }
    
   Additional optional parameters:
   max_conns    maximum connections, e.g.:   server 127.0.0.1:8081 weight=2 max_conns=100;
   max_fails    failure count before the server is marked down, e.g.:  server 127.0.0.1:8081 weight=2 max_conns=100 max_fails=1;
   fail_timeout the window for counting failures and the time the server then stays out, e.g. 1 failure within 10s takes the server out for 10s:
                server 127.0.0.1:8081 weight=2 max_conns=100 max_fails=1 fail_timeout=10;

2. max_conns: limit the maximum number of connections to 100
    upstream leilei{
        server 127.0.0.1:8081 weight=2 max_conns=100;
        server 127.0.0.1:8082 weight=4;
        server 127.0.0.1:8083 weight=6;
    }

3. max_fails: take the server out of rotation after 1 failure
    upstream leilei{
        server 127.0.0.1:8081 weight=2 max_fails=1;
        server 127.0.0.1:8082 weight=4;
        server 127.0.0.1:8083 weight=6;
    }
    
4. backup: a backup server that is only used when all the other servers are down.
    upstream leilei{
        server 127.0.0.1:8081 weight=2;
        server 127.0.0.1:8082 weight=4 backup;
        server 127.0.0.1:8083 weight=6;
    }
    
5. slow_start: ramp a recovered node back into the cluster gradually instead of immediately.
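
   The syntax looks like the other server parameters. Note that the nginx documentation lists slow_start as part of the commercial subscription (nginx Plus), so plain open-source nginx will reject it; a sketch of the syntax:
    upstream leilei{
        server 127.0.0.1:8081 weight=2 slow_start=30s;   # ramp traffic back up over 30s after recovery (nginx Plus)
        server 127.0.0.1:8082 weight=4;
        server 127.0.0.1:8083 weight=6;
    }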

1.3 Session persistence with nginx-sticky-module [cookie-based session persistence]

wget https://www.chenleilei.net/soft/nginx-sticky-module.zip
unzip nginx-sticky-module.zip
cd /server/tools/nginx-1.16.1
./configure --prefix=/application/nginx-1.16 --user=nginx --group=nginx --with-http_image_filter_module --with-http_ssl_module --with-http_v2_module --with-http_stub_status_module --with-stream --add-module=/server/tools/nginx-goodies-nginx-sticky-module-ng-08a395c66e42
make
mv /application/nginx-1.16/sbin/nginx /application/nginx-1.16/sbin/nginx.old   # back up the running binary first (cp over a running binary fails with "Text file busy")
cp -af objs/nginx /application/nginx-1.16/sbin/nginx
kill -USR2 `cat /application/nginx-1.16/logs/nginx.pid`            # start a new master process with the new binary
kill -WINCH `cat /application/nginx-1.16/logs/nginx.pid.oldbin`    # gracefully shut down the old worker processes
kill -QUIT `cat /application/nginx-1.16/logs/nginx.pid.oldbin`     # quit the old master once the new one is serving traffic
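
After the swap, a quick check that the new binary really contains the sticky module and that the upgrade took effect:
/application/nginx-1.16/sbin/nginx -V 2>&1 | grep sticky    # configure arguments should include the --add-module path
ps -ef | grep [n]ginx                                       # old workers should be gone, new master/workers running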



#Configure nginx-sticky:
Reference for the configuration parameters: https://www.cnblogs.com/tssc/p/7481885.html#_label0
-----------------------------------------------------------------------------
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    upstream web {
        sticky expires=1h domain=chenleilei.net;
        server 10.0.0.65:80;
        server 10.0.0.66:80;
        }
    server {
        listen       88;
        server_name  localhost;
        index index.html;
        set $proxy_pass web;
        location / {
                proxy_pass http://web;
                add_header Cache-Control no-store;
                }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
-----------------------------------------------------------------------------
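
Once this configuration is loaded you can verify the cookie-based stickiness from any client. A minimal check (replace <proxy-ip> with the server running this config; the sticky module's default cookie name is "route"):
curl -I http://<proxy-ip>:88/                                # the response should carry a "Set-Cookie: route=..." header
curl -b "route=<value-from-above>" http://<proxy-ip>:88/     # requests sent with that cookie keep hitting the same backend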