
How to Build a Highly Available K8S Cluster from Binaries


This article explains in detail how to build a highly available K8S cluster from binary packages. The walkthrough is step by step and should serve as a useful reference; read it through to the end!

1. System Overview

OS version: CentOS 7.5

k8s version: 1.12

System requirements: disable swap, SELinux, and iptables (see the preparation sketch below).
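The original does not script these steps, so here is a minimal preparation sketch, run on every machine (standard CentOS 7 commands):

# swapoff -a                                               # disable swap immediately
# sed -i '/ swap / s/^/#/' /etc/fstab                      # keep it off across reboots
# setenforce 0                                             # SELinux permissive now
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# systemctl stop firewalld && systemctl disable firewalld  # the "iptables" requirement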

Host details:


Topology diagram:

Binary package download links:

etcd:

https://github.com/coreos/etcd/releases/tag/v3.2.12

flannel:

https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

k8s:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md

2. Self-Signed Etcd SSL Certificates

On master01:

# cat cfssl.sh
#!/bin/bash
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

Generate the self-signed Etcd SSL certificates:

# cat cert-etcd.sh
cat > ca-config.json <<EOF
...
EOF
cat > ca-csr.json <<EOF
...
EOF
cat > server-csr.json <<EOF
...
EOF
# ll *.pem
-rw------- 1 root root 1675 Jan 11 15:50 ca-key.pem
-rw-r--r-- 1 root root 1265 Jan 11 15:50 ca.pem
-rw------- 1 root root 1679 Jan 11 15:50 server-key.pem
-rw-r--r-- 1 root root 1338 Jan 11 15:50 server.pem
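The heredoc bodies of cert-etcd.sh did not survive formatting above. As a reference, here is a minimal sketch of what such a script typically contains, assuming the three etcd member IPs used throughout this article; the profile name, expiry, and names fields are illustrative:

#!/bin/bash
# CA signing policy: one profile used for both server and peer auth
cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF
# CA certificate signing request
cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF
# Server CSR: hosts must cover every etcd member address
cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": ["192.168.247.161", "192.168.247.162", "192.168.247.163"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF
# Generate the CA, then the server certificate signed by it
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=www server-csr.json | cfssljson -bare server

The two cfssl commands at the end produce exactly the four .pem files listed above.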

3. Deploy the Etcd Cluster

On master01, master02, and master03:

# mkdir -pv /opt/etcd/{bin,cfg,ssl}
# tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
# mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

On master01:

# cd cert-etcd/
[root@master01 cert-etcd]# ll
total 40
-rw-r--r-- 1 root root  287 Jan 11 15:50 ca-config.json
-rw-r--r-- 1 root root  956 Jan 11 15:50 ca.csr
-rw-r--r-- 1 root root  209 Jan 11 15:50 ca-csr.json
-rw------- 1 root root 1675 Jan 11 15:50 ca-key.pem
-rw-r--r-- 1 root root 1265 Jan 11 15:50 ca.pem
-rw-r--r-- 1 root root 1013 Jan 11 15:50 server.csr
-rw-r--r-- 1 root root  296 Jan 11 15:50 server-csr.json
-rw------- 1 root root 1679 Jan 11 15:50 server-key.pem
-rw-r--r-- 1 root root 1338 Jan 11 15:50 server.pem
-rwxr-xr-x 1 root root 1076 Jan 11 15:50 ssl-etcd.sh
[root@master01 cert-etcd]# cp *.pem /opt/etcd/ssl/
# scp -r /opt/etcd master02:/opt/
# scp -r /opt/etcd master03:/opt/

Run on master01, master02, and master03 respectively:

# cat etcd.sh
#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
# ./etcd.sh etcd01 192.168.247.161 etcd02=https://192.168.247.162:2380,etcd03=https://192.168.247.163:2380
# scp etcd.sh master02:/root/
# scp etcd.sh master03:/root/
[root@master02 ~]# ./etcd.sh etcd02 192.168.247.162 etcd01=https://192.168.247.161:2380,etcd03=https://192.168.247.163:2380
[root@master03 ~]# ./etcd.sh etcd03 192.168.247.163 etcd01=https://192.168.247.161:2380,etcd02=https://192.168.247.162:2380
[root@master01 ~]# systemctl restart etcd
# cd /opt/etcd/ssl
# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.247.161:2379,https://192.168.247.162:2379,https://192.168.247.163:2379" \
cluster-health
member 1afd7ff8f95cf93 is healthy: got healthy result from https://192.168.247.161:2379
member 8f4e6ce663f0d49a is healthy: got healthy result from https://192.168.247.162:2379
member b6230d9c6f20feeb is healthy: got healthy result from https://192.168.247.163:2379
cluster is healthy
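Optionally, the same TLS flags can be reused to inspect cluster membership (etcd 3.2 ships the v2 etcdctl API used here):

# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.247.161:2379" \
member list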

If anything fails, check the /var/log/messages log.

4. Install Docker on the Node Machines

These steps can be wrapped in a script:

# cat docker.sh
yum remove -y docker docker-common docker-selinux docker-engine
yum install -y yum-utils device-mapper-persistent-data lvm2
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast
yum install -y docker-ce
systemctl enable docker
systemctl start docker
docker version

If image pulls are slow, you can configure a registry mirror such as the accelerator provided by DaoCloud (see the sketch below).
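As a sketch, a mirror is wired in through /etc/docker/daemon.json; the URL below is a placeholder for whatever accelerator address your provider assigns you:

# cat /etc/docker/daemon.json
{
    "registry-mirrors": ["https://your-accelerator-id.m.daocloud.io"]
}
# systemctl restart docker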

5. Deploy the Flannel Network

On master01:

# pwd
/opt/etcd/ssl
# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.247.161:2379,https://192.168.247.162:2379,https://192.168.247.163:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

On node01:

# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
# mkdir -pv /opt/kubernetes/{bin,cfg,ssl}
# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/
# cat /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.247.161:2379,https://192.168.247.162:2379,https://192.168.247.163:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"

Copy /opt/etcd/ssl/* from the master node to the node machines:

[root@master01 ~]# scp -r /opt/etcd/ssl node01:/opt/etcd/
# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Restart flannel and docker:

# systemctl daemon-reload
# systemctl start flanneld
# systemctl enable flanneld
# systemctl restart docker
# systemctl enable docker
# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.12.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.12.1/24 --ip-masq=false --mtu=1450"
# ip a
5: docker0:  mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:f0:62:07:73 brd ff:ff:ff:ff:ff:ff
    inet 172.17.12.1/24 brd 172.17.12.255 scope global docker0
       valid_lft forever preferred_lft forever
6: flannel.1:  mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether ca:e9:e0:d4:05:be brd ff:ff:ff:ff:ff:ff
    inet 172.17.12.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::c8e9:e0ff:fed4:5be/64 scope link
       valid_lft forever preferred_lft forever

Copy the binaries and configuration files to node02 (create /opt/etcd on node02 first so the last scp succeeds):

# scp -r /opt/kubernetes node02:/opt/
# cd /usr/lib/systemd/system/
# scp flanneld.service docker.service node02:/usr/lib/systemd/system/
# scp -r /opt/etcd/ssl/ node02:/opt/etcd/

On node02:

# mkdir /opt/etcd
# systemctl daemon-reload
# systemctl start flanneld
# systemctl enable flanneld
# systemctl restart docker
# ip a
5: docker0:  mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:ca:2c:48:df brd ff:ff:ff:ff:ff:ff
    inet 172.17.16.1/24 brd 172.17.16.255 scope global docker0
       valid_lft forever preferred_lft forever
6: flannel.1:  mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether ee:73:b2:e8:46:c1 brd ff:ff:ff:ff:ff:ff
    inet 172.17.16.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::ec73:b2ff:fee8:46c1/64 scope link
       valid_lft forever preferred_lft forever

Network test:

[root@node02 opt]# ping 172.17.12.1
PING 172.17.12.1 (172.17.12.1) 56(84) bytes of data.
64 bytes from 172.17.12.1: icmp_seq=1 ttl=64 time=1.07 ms
64 bytes from 172.17.12.1: icmp_seq=2 ttl=64 time=0.300 ms
[root@node01 system]# ping 172.17.16.1
PING 172.17.16.1 (172.17.16.1) 56(84) bytes of data.
64 bytes from 172.17.16.1: icmp_seq=1 ttl=64 time=1.13 ms
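The pings above only hit the docker0 gateway addresses. To confirm container-to-container traffic across the VXLAN overlay, a quick sketch with throwaway busybox containers (the container IP shown is illustrative):

[root@node01 ~]# docker run -it --rm busybox ip a        # note the eth0 address, e.g. 172.17.12.2
[root@node02 ~]# docker run -it --rm busybox ping -c 3 172.17.12.2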

6. Self-Signed APIServer SSL Certificates

On master01:

# cat cert-k8s.sh
# Create the CA certificate
cat > ca-config.json <<EOF
...
EOF
cat > ca-csr.json <<EOF
...
EOF
# Server (apiserver) certificate
cat > server-csr.json <<EOF
...
EOF
# kube-proxy client certificate
cat > kube-proxy-csr.json <<EOF
...
EOF
# admin client certificate
cat > admin-csr.json <<EOF
...
EOF
# ll *.pem
-rw------- 1 root root 1679 Jan 11 22:06 admin-key.pem
-rw-r--r-- 1 root root 1399 Jan 11 22:06 admin.pem
-rw------- 1 root root 1679 Jan 11 22:06 ca-key.pem
-rw-r--r-- 1 root root 1359 Jan 11 22:06 ca.pem
-rw------- 1 root root 1675 Jan 11 22:06 kube-proxy-key.pem
-rw-r--r-- 1 root root 1403 Jan 11 22:06 kube-proxy.pem
-rw------- 1 root root 1679 Jan 11 22:06 server-key.pem
-rw-r--r-- 1 root root 1651 Jan 11 22:06 server.pem
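As with cert-etcd.sh, the heredoc bodies above were lost. A sketch of the typical contents, assuming the service cluster IP 10.0.0.1, the three master IPs, and the VIP 192.168.247.160 configured later in this article; names and expiry values are illustrative:

#!/bin/bash
cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF
cat > ca-csr.json <<EOF
{ "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "k8s", "OU": "System" }] }
EOF
# apiserver cert: hosts must include the service cluster IP, all master IPs and the VIP
cat > server-csr.json <<EOF
{ "CN": "kubernetes",
  "hosts": ["10.0.0.1", "127.0.0.1",
            "192.168.247.160", "192.168.247.161", "192.168.247.162", "192.168.247.163",
            "kubernetes", "kubernetes.default", "kubernetes.default.svc",
            "kubernetes.default.svc.cluster.local"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "k8s", "OU": "System" }] }
EOF
cat > kube-proxy-csr.json <<EOF
{ "CN": "system:kube-proxy", "hosts": [], "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "k8s", "OU": "System" }] }
EOF
cat > admin-csr.json <<EOF
{ "CN": "admin", "hosts": [], "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:masters", "OU": "System" }] }
EOF
# Generate the CA, then sign the server, kube-proxy, and admin certificates
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
for csr in server kube-proxy admin; do
  cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
    -profile=kubernetes ${csr}-csr.json | cfssljson -bare ${csr}
done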

7. Deploy the Master Components

On master01, master02, and master03:

# mkdir -pv /opt/kubernetes/{bin,cfg,ssl}
# tar zxvf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin
# cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin/
# pwd
/root/cert-k8s
# cp *.pem /opt/kubernetes/ssl/
# head -c 16 /dev/urandom |od -An -t x |tr -d ' '
1c96cf8a12d4555a52e89bf3925a5c87
# cat /opt/kubernetes/cfg/token.csv
1c96cf8a12d4555a52e89bf3925a5c87,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

1) api-server:

# cat api-server.sh
#!/bin/bash
# example: ./api-server.sh 192.168.247.161 https://192.168.247.161:2379,https://192.168.247.162:2379,https://192.168.247.163:2379
MASTER_IP=$1
ETCD_SERVERS=$2

cat <<EOF > /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_IP} \\
--secure-port=6443 \\
--advertise-address=${MASTER_IP} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
# ./api-server.sh 192.168.247.161 https://192.168.247.161:2379,https://192.168.247.162:2379,https://192.168.247.163:2379
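Optionally verify the apiserver before continuing. The flags above leave the default insecure local port 8080 enabled in 1.12 (the same port kube-scheduler and kube-controller-manager point at below), so a local query should answer:

# systemctl status kube-apiserver
# curl http://127.0.0.1:8080/version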

2) scheduler component:

# cat scheduler.sh
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

# ./scheduler.sh

3) Deploy the controller-manager component:

# cat controller-manager.sh
cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
# sh controller-manager.sh

Add the environment variables:

K8S_HOME=/opt/kubernetes
PATH=$K8S_HOME/bin:$PATH
[root@master01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
[root@master02 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
[root@master03 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

8. Generate the Node kubeconfig Files

[root@master01 ~]# scp kubernetes/server/bin/{kubelet,kube-proxy} node01:/opt/kubernetes/bin/
[root@master01 ~]# scp kubernetes/server/bin/{kubelet,kube-proxy} node02:/opt/kubernetes/bin/
On master01:

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

On master01:

cat kubeconfig.sh
# Create the kubelet bootstrapping kubeconfig
APISERVER=$1
SSL_DIR=$2
export BOOTSTRAP_TOKEN=`cat /opt/kubernetes/cfg/token.csv |awk -F',' '{print $1}'`
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# ./kubeconfig.sh 192.168.247.160 /opt/kubernetes/ssl
# ll
total 16
-rw------- 1 root root 2169 Jan 12 08:09 bootstrap.kubeconfig
-rwxr-xr-x 1 root root 1419 Jan 12 08:07 kubeconfig.sh
-rw------- 1 root root 6271 Jan 12 08:09 kube-proxy.kubeconfig
# scp bootstrap.kubeconfig kube-proxy.kubeconfig node01:/opt/kubernetes/cfg/
# scp bootstrap.kubeconfig kube-proxy.kubeconfig node02:/opt/kubernetes/cfg/

9. Deploy the Node Components

On node01 and node02:

1) Deploy the kubelet component:

cat kubelet.sh
#!/bin/bash
NODE_IP=$1

cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=${NODE_IP} \\
--hostname-override=${NODE_IP} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

cat <<EOF >/opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_IP}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
# ./kubelet.sh 192.168.247.171
# ./kubelet.sh 192.168.247.172

2) Deploy the kube-proxy component:

cat kube-proxy.sh
#!/bin/bash
NODE_IP=$1

cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_IP} \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
# ./kube-proxy.sh 192.168.247.171
# ./kube-proxy.sh 192.168.247.172
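Since kube-proxy is started in ipvs mode above, you can optionally inspect the resulting virtual server table once services exist; ipvsadm is not installed by default, so as a sketch:

# yum install -y ipvsadm
# ipvsadm -Ln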

10. Install Nginx

Nginx performs layer-4 (TCP) forwarding to the apiservers.

# cat nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1
# yum install nginx

1) LB01 and LB02 configuration:

Add the following to the nginx configuration file:

# cat /etc/nginx/nginx.conf
stream {
    log_format main "$remote_addr $upstream_addr $time_local $status";
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.247.161:6443;
        server 192.168.247.162:6443;
        server 192.168.247.163:6443;
    }
    server {
        listen 0.0.0.0:6443;
        proxy_pass k8s-apiserver;
    }
}
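It is worth validating the syntax now, and, once nginx is running (it is started in step 11 below), confirming the stream listener:

# nginx -t
# ss -tlnp | grep 6443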

11. Install Keepalived

# yum install keepalived
# yum install libnl3-devel ipset-devel
# cat /etc/keepalived/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
# chmod 755 /etc/keepalived/check_nginx.sh

LB01 configuration:

# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.247.160/24
    }
    track_script {
        check_nginx
    }
}

LB02 configuration:

# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.247.160/24
    }
    track_script {
        check_nginx
    }
}
# systemctl enable nginx
# systemctl start nginx
# systemctl enable keepalived
# systemctl start keepalived
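The VIP 192.168.247.160 should now be bound to ens33 on LB01 and move to LB02 whenever nginx dies there (that is what check_nginx.sh triggers by stopping keepalived). A quick check:

# ip addr show ens33 | grep 192.168.247.160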

12. Node Discovery

# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-gvRm9pzQJCj4cp_hGYp5qwW93LLdAbVPtz7AaztlGv8   17m   kubelet-bootstrap   Pending
node-csr-luowueA4U43ca96d-Ff64X7o8p9BW6MGIxWfASUPukE   20m   kubelet-bootstrap   Pending
# kubectl certificate approve node-csr-gvRm9pzQJCj4cp_hGYp5qwW93LLdAbVPtz7AaztlGv8
certificatesigningrequest.certificates.k8s.io/node-csr-gvRm9pzQJCj4cp_hGYp5qwW93LLdAbVPtz7AaztlGv8 approved
# kubectl certificate approve node-csr-luowueA4U43ca96d-Ff64X7o8p9BW6MGIxWfASUPukE
certificatesigningrequest.certificates.k8s.io/node-csr-luowueA4U43ca96d-Ff64X7o8p9BW6MGIxWfASUPukE approved
# kubectl get node
NAME              STATUS   ROLES    AGE     VERSION
192.168.247.171   Ready    <none>   12s     v1.12.4
192.168.247.172   Ready    <none>   9m41s   v1.12.4

13. Run a Test Example

# kubectl run nginx --image=nginx --replicas=3
# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE
nginx-dbddb74b8-dkhcw   1/1     Running   0          38m   172.17.35.2   192.168.247.172   <none>
nginx-dbddb74b8-rdf2v   1/1     Running   0          38m   172.17.17.2   192.168.247.171   <none>
nginx-dbddb74b8-rn9l6   1/1     Running   0          38m   172.17.35.3   192.168.247.172   <none>
# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
service/nginx exposed
# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        12h
nginx        NodePort    10.0.0.30    <none>        88:48363/TCP   6s

Access from a browser:

http://192.168.247.171:48363

http://192.168.247.172:48363

That's all for "How to Build a Highly Available K8S Cluster from Binaries". Thanks for reading, and I hope the content is helpful!
