Fixing expired certificates on k8s v1.15 (solved)

January 15, 2021
This article walks through handling expired certificates on a k8s v1.15 cluster, covering the procedure, practical tips, and the mechanism behind it.

The cluster runs in virtual machines on a laptop, so it had not been started for a long time. After booting, kubectl first reported: Unable to connect to the server: x509: certificate has expired or is not yet valid, and a moment later: The connection to the server 192.168.37.201:6443 was refused - did you specify the right host or port?. Investigation showed the certificates had expired.

1: First, check when the cluster certificate expires:

sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep ' Not '
Output:
            Not Before: May 24 03:32:37 2019 GMT
            Not After : May 23 03:32:38 2020 GMT
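
To see whether the rest of the PKI certificates are in the same state, a quick loop over /etc/kubernetes/pki works (on v1.15, kubeadm alpha certs check-expiration prints much the same information in one table):

# print the expiry date of every kubeadm-managed certificate
for c in /etc/kubernetes/pki/*.crt; do
    echo "== $c"
    sudo openssl x509 -in "$c" -noout -enddate
done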

2: Write a kubeadm.conf configuration file in the current directory:

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.15.0  # Kubernetes version
apiServer:
  certSANs:
  - 192.168.10.xxx # list every node IP here, both master and slaves
  - 192.168.10.xxx # slave1
  - 192.168.10.xxx # slave2
  extraArgs:
    service-node-port-range: 80-32767
    advertise-address: 0.0.0.0
controlPlaneEndpoint: "192.168.10.xxx:6443"  # apiserver address, i.e. the master node address
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # Aliyun's mirror registry, for use inside China

3: Renew the certificates:

kubeadm alpha certs renew all --config kubeadm.conf
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep ' Not '

Output:
            Not Before: Jun 24 10:55:40 2019 GMT
            Not After : Jul 27 08:37:35 2021 GMT
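
It is worth confirming that the renewed apiserver certificate actually picked up the SANs listed in kubeadm.conf:

sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'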

4: Regenerate the kubeconfig files:

mv /etc/kubernetes/*.conf ~/.
kubeadm init phase kubeconfig all --config kubeadm.conf
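
The regenerated kubeconfigs embed a fresh client certificate. One way to sanity-check its expiry (a verification trick, not part of the original procedure) is to decode the embedded certificate:

sudo grep 'client-certificate-data' /etc/kubernetes/admin.conf | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate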

5: Update the config file under $HOME/.kube:

mv $HOME/.kube/config $HOME/.kube/config.old
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

6: Restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd containers (be sure to use docker ps -a, otherwise you may miss service containers that are not currently running):

docker ps -a | grep -v pause | grep -E "etcd|scheduler|controller|apiserver" | awk '{print "docker restart "$1}' | bash
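
The containers take a moment to come back up. A small wait loop saves guessing; 192.168.37.201:6443 is this cluster's apiserver endpoint from the earlier error message, adjust it to yours:

# wait until the apiserver accepts connections again, then test kubectl
until curl -k https://192.168.37.201:6443/healthz >/dev/null 2>&1; do
    sleep 2
done
kubectl get nodes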

7: After restarting the containers, the nodes all show NotReady:

[root@k8s-master kubernetes]# kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   NotReady   master   384d   v1.15.0
k8s-node1    NotReady   node     384d   v1.15.0
[root@k8s-master pki]# cd /var/lib/kubelet/pki/
[root@k8s-master pki]# ls
kubelet-client-2019-12-17-18-50-01.pem  kubelet-client-2019-12-17-18-50-52.pem  
kubelet-client-current.pem  kubelet.crt  kubelet.key
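
Before touching anything, you can confirm the kubelet client certificate really is the expired one. kubelet-client-current.pem stores the certificate and key in a single file, and openssl picks out the certificate block:

sudo openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -enddate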

8: The kubelet certificate on the master also needs updating: move the old certificate files away so kubelet regenerates them.

kubelet will then issue a CSR to the apiserver:
[root@k8s-master pki]# mkdir bakup
[root@k8s-master pki]# mv kubelet* bakup/
[root@k8s-master pki]# ls
bakup
[root@k8s-master pki]# systemctl  restart kubelet
[root@k8s-master pki]# systemctl  status  kubelet
[root@k8s-master pki]# openssl x509 -in kubelet.crt -noout -text |grep ' Not '
            Not Before: Jan  5 09:25:07 2021 GMT
            Not After : Jan  5 09:25:07 2022 GMT
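
If kubelet.crt does not reappear after the restart, the kubelet log usually says why (assuming a systemd host):

journalctl -u kubelet -n 20 --no-pager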

// List pending (unapproved) CSR requests:
[root@k8s-master pki]# kubectl get csr
// Approve the CSR if one is pending; if nothing is pending, skip this:
[root@k8s-master pki]# kubectl certificate approve csr-4pw6g
NAME        AGE       REQUESTOR           CONDITION
csr-4pw6g   1h        kubelet-bootstrap   Approved,Issued
// Verify:
[root@k8s-master kubernetes]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   384d   v1.15.0
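
When several nodes raise CSRs at the same time, everything still pending can be approved in one pass. Note this approves blindly, so only use it on a cluster you fully control:

kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve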

9: The master node is now done. Next, create a token; it is needed when re-joining the slave nodes:

[root@k8s-master kubernetes]# kubeadm token create
[root@k8s-master kubernetes]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
yszrm3.d1zaj2uwj1pfo627   23h       2021-01-06T18:14:57+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
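
kubeadm can also print a ready-made join command, CA cert hash included, which avoids the unsafe skip flag used in the next step:

kubeadm token create --print-join-command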

10: Add the node back into the cluster (the old kubelet config files must be deleted first, otherwise the join fails):

[root@k8s-node2 ~]# rm -rf /etc/kubernetes/kubelet.conf
[root@k8s-node2 ~]# rm -rf /etc/kubernetes/pki/ca.crt
[root@k8s-node2 ~]# rm -rf /etc/kubernetes/bootstrap-kubelet.conf
[root@k8s-node2 ~]# systemctl stop kubelet
[root@k8s-node2 ~]# kubeadm join --token=yszrm3.d1zaj2uwj1pfo627  192.168.10.XXX:6443 --node-name k8s-node2 --discovery-token-unsafe-skip-ca-verification
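
--discovery-token-unsafe-skip-ca-verification skips verifying the master's identity. To verify it instead, compute the CA hash on the master (this is the pipeline from the kubeadm documentation) and pass it as --discovery-token-ca-cert-hash sha256:<hash>:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'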

11: Verify the result:

[root@k8s-master kubernetes]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   384d   v1.15.0
k8s-node1    Ready    node     384d   v1.15.0
k8s-node2    Ready    <none>   11m    v1.15.0
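
As a last sanity check after all the restarts, make sure the control-plane pods themselves are healthy:

kubectl get pods -n kube-system -o wide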