Manually Setting Up a Kubernetes Cluster

01. System initialization and global variables

Host assignment

Hostname          OS          IP address        VIP
dev-k8s-master1   CentOS 7.6  172.19.201.244    172.19.201.242
dev-k8s-master2   CentOS 7.6  172.19.201.249
dev-k8s-master3   CentOS 7.6  172.19.201.248
dev-k8s-node1     CentOS 7.6  172.19.201.247
dev-k8s-node2     CentOS 7.6  172.19.201.246
dev-k8s-node3     CentOS 7.6  172.19.201.243
flannel           -           10.10.0.0/16
docker            -           10.10.1.1/24

Hostname

Set a permanent hostname, then log in again:

hostnamectl set-hostname dev-k8s-master1

The hostname is stored in the /etc/hostname file.


Passwordless SSH login to the other nodes

Unless stated otherwise, all operations in this document are performed on the dev-k8s-master1 node and then distributed to the other nodes as files or remote commands, so that node must be added to the SSH trust list of every other node.

Allow the root account of dev-k8s-master1 to log in to all nodes without a password:

ssh-keygen -t rsa

ssh-copy-id root@dev-k8s-master1

...

Update the PATH variable

Add the directory holding the executables to the PATH environment variable:

echo 'PATH=/opt/k8s/bin:$PATH' >>/root/.bashrc

source /root/.bashrc

Install dependency packages

Install the dependency packages on every machine:

CentOS:

yum install -y epel-release

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget

Disable the firewall

On every machine, stop the firewall, clear its rules and set the default forwarding policy:

systemctl stop firewalld

systemctl disable firewalld

iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat

iptables -P FORWARD ACCEPT

Disable the swap partition

If swap is enabled, kubelet fails to start (this can be ignored by setting --fail-swap-on=false), so swap must be disabled on every machine. Also comment out the corresponding entry in /etc/fstab so the swap partition is not mounted automatically at boot:

swapoff -a

sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Disable SELinux

Disable SELinux, otherwise Kubernetes may later report Permission denied when mounting directories:

setenforce 0

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Load kernel modules

modprobe ip_vs_rr

modprobe br_netfilter

Optimize kernel parameters

cat > kubernetes.conf <
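
The body of kubernetes.conf was lost in formatting above. A typical set of parameters for this kind of deployment (the exact values below are assumptions and can be tuned) is:

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF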

cp kubernetes.conf /etc/sysctl.d/kubernetes.conf

sysctl -p /etc/sysctl.d/kubernetes.conf

Set the system time zone

# Adjust the system TimeZone

timedatectl set-timezone Asia/Shanghai

Disable unneeded services

systemctl stop postfix && systemctl disable postfix

Configure rsyslogd and systemd journald

systemd's journald is the default logging tool on CentOS 7; it records the logs of the whole system, the kernel and every service unit.

Compared with rsyslogd, the logs recorded by journald have the following advantages:

  • they can be written to memory or to the file system (by default they go to memory, under /run/log/journal);

  • the disk space used can be capped while guaranteeing a minimum of free space;

  • the size of the log files and how long they are kept can be limited.

journald also forwards logs to rsyslog by default, so logs end up written several times, /var/log/messages fills with irrelevant entries that make later inspection harder, and system performance suffers.

# Directory for persisted logs
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <
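
The journald drop-in body is missing above. A typical 99-prophet.conf that persists logs, limits their size and stops forwarding to rsyslog (the limit values below are assumptions) would be:

cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress old logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Cap total disk usage
SystemMaxUse=10G
# Cap the size of a single file
SystemMaxFileSize=200M
# Keep logs for two weeks
MaxRetentionSec=2week
# Do not forward to syslog
ForwardToSyslog=no
EOF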

systemctl restart systemd-journald

Create the required directories

Create the directories:

mkdir -p /opt/k8s/{bin,work} /etc/{kubernetes,etcd}/cert

Upgrade the kernel

yum -y update

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm

yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

yum --enablerepo=elrepo-kernel install kernel-lt.x86_64 -y

sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg

sudo grub2-set-default 0

Install the kernel source files (optional; run after the kernel has been upgraded and the machine rebooted):

02. Create the CA certificate and key

Install the cfssl tool set

sudo mkdir -p /opt/k8s/cert && cd /opt/k8s

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

mv cfssl_linux-amd64 /opt/k8s/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

mv cfssljson_linux-amd64 /opt/k8s/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

mv cfssl-certinfo_linux-amd64 /opt/k8s/bin/cfssl-certinfo

chmod +x /opt/k8s/bin/*

export PATH=/opt/k8s/bin:$PATH

Create the root certificate (CA)

The CA certificate is shared by every node in the cluster; only one CA certificate needs to be created, and every certificate created afterwards is signed by it.

Create the configuration file

The CA configuration file defines the usage scenarios (profiles) and concrete parameters (usages: expiry, server auth, client auth, encryption and so on) of the root certificate; a specific profile is referenced later when signing other certificates.

cd /opt/k8s/work
cat > ca-config.json <
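
The JSON body of ca-config.json did not survive formatting. A commonly used configuration for this kind of cluster, with a single "kubernetes" profile valid for ten years that can sign both server and client certificates (the expiry value is an assumption), is:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF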

Create the certificate signing request file

cd /opt/k8s/work
cat > ca-csr.json <
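
The CSR body is likewise missing. A typical ca-csr.json for this setup (the C/ST/L/O/OU values below are assumptions and can be adjusted) is:

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "ops"
    }
  ]
}
EOF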

Generate the CA certificate and private key

cd /opt/k8s/work

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

ls ca*

Distribute the certificate files

Copy the generated CA certificate, private key and configuration file to the /etc/kubernetes/cert directory on every node:

mkdir -p /etc/kubernetes/cert

scp ca*.pem ca-config.json root@${node_ip}:/etc/kubernetes/cert

03. Deploy the kubectl command-line tool

Download and distribute the kubectl binaries

Download and unpack:

cd /opt/k8s/work

wget https://dl.k8s.io/v1.14.2/kubernetes-client-linux-amd64.tar.gz

tar -xzvf kubernetes-client-linux-amd64.tar.gz

Distribute kubectl to every node that will use it:

cd /opt/k8s/work

scp kubernetes/client/bin/kubectl root@dev-k8s-master1:/opt/k8s/bin/

chmod +x /opt/k8s/bin/*

Create the admin certificate and private key

kubectl talks to the apiserver over the secure HTTPS port, and the apiserver authenticates and authorizes the certificate it presents.

kubectl is the cluster's administration tool and needs the highest privileges, so an admin certificate with full privileges is created here.

Create the certificate signing request:

cd /opt/k8s/work
cat > admin-csr.json <
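
The CSR body is missing above. Because kube-apiserver maps the certificate's O field to the user's group, a typical request places admin in the built-in system:masters group, which is bound to cluster-admin (the other names values are assumptions):

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "ops"
    }
  ]
}
EOF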

Generate the certificate and private key:

cd /opt/k8s/work

cfssl gencert -ca=/opt/k8s/work/ca.pem \

-ca-key=/opt/k8s/work/ca-key.pem \

-config=/opt/k8s/work/ca-config.json \

-profile=kubernetes admin-csr.json | cfssljson -bare admin

Create the kubeconfig file

kubeconfig is kubectl's configuration file; it contains everything needed to reach the apiserver, such as the apiserver address, the CA certificate and the client's own certificate.

cd /opt/k8s/work

# Set cluster parameters

kubectl config set-cluster kubernetes \

--certificate-authority=/opt/k8s/work/ca.pem \

--embed-certs=true \

--server=https://172.19.201.242:8443 \

--kubeconfig=kubectl.kubeconfig

# Set client authentication parameters

kubectl config set-credentials admin \

--client-certificate=/opt/k8s/work/admin.pem \

--client-key=/opt/k8s/work/admin-key.pem \

--embed-certs=true \

--kubeconfig=kubectl.kubeconfig

# Set context parameters

kubectl config set-context kubernetes \

--cluster=kubernetes \

--user=admin \

--kubeconfig=kubectl.kubeconfig

# Set the default context

kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig

Distribute the kubeconfig file

Distribute it to every node that runs kubectl commands:

cd /opt/k8s/work

mkdir -p ~/.kube

scp kubectl.kubeconfig root@dev-k8s-master1:/root/.kube/config

04. Deploy haproxy + keepalived

Deploy keepalived [all master nodes]

keepalived provides the VIP (172.19.201.242) for haproxy and gives the three haproxy instances master/backup failover, so that the failure of one haproxy has less impact on the service.

Install keepalived

yum install -y keepalived

Configure keepalived:

[Note: check that the VIP address is correct and that every node has a different priority; the master1 node is MASTER and the others are BACKUP. killall -0 checks by process name whether a process is still alive.]

cat > /etc/keepalived/keepalived.conf <
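
The configuration body was lost above. A minimal keepalived.conf for master1 that matches the notes below (VRRP on interface eno1, VIP 172.19.201.242, a killall -0 based check of haproxy) might look like this; router_id, virtual_router_id, priority and the auth password are assumptions:

cat > /etc/keepalived/keepalived.conf <<EOF
global_defs {
   router_id dev-k8s-master1
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"   # succeeds while a haproxy process exists
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER                  # BACKUP on the other master nodes
    interface eno1
    virtual_router_id 51
    priority 250                  # use lower values on the other master nodes
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.19.201.242
    }
    track_script {
        check_haproxy
    }
}
EOF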

scp -pr /etc/keepalived/keepalived.conf root@dev-k8s-master2:/etc/keepalived/ (on the master nodes)

1. killall -0 checks by process name whether a process is alive; if the command is missing, install it with yum install psmisc -y.

2. The state of the first master node is MASTER; the state of the other master nodes is BACKUP.

3. priority is the priority of each node, in the range 0-250 (not a strict requirement).

Start and check the service

systemctl enable keepalived.service

systemctl start keepalived.service

systemctl status keepalived.service

ip address show eno1

Deploy haproxy [all master nodes]

haproxy acts as a reverse proxy for the apiserver and forwards requests to the master nodes in round-robin fashion. Compared with using only keepalived in master/backup mode, where a single master carries all the traffic, this is more balanced and more robust.

Install haproxy

yum install -y haproxy

Configure haproxy [identical on all three master nodes]

cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
   # to have these messages end up in /var/log/haproxy.log you will
   # need to:
   #
   # 1) configure syslog to accept network log events.  This is done
   #    by adding the '-r' option to the SYSLOGD_OPTIONS in
   #    /etc/sysconfig/syslog
   #
   # 2) configure local2 events to go to the /var/log/haproxy.log
   #   file. A line like the following can be added to
   #   /etc/sysconfig/syslog
   #
   #    local2.*                       /var/log/haproxy.log
   #
   log         127.0.0.1 local2

   chroot      /var/lib/haproxy
   pidfile     /var/run/haproxy.pid
   maxconn     4000
   user        haproxy
   group       haproxy
   daemon

   # turn on stats unix socket
   stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
   mode                    http
   log                     global
   option                  httplog
   option                  dontlognull
   option http-server-close
   option forwardfor       except 127.0.0.0/8
   option                  redispatch
   retries                 3
   timeout http-request    10s
   timeout queue           1m
   timeout connect         10s
   timeout client          1m
   timeout server          1m
   timeout http-keep-alive 10s
   timeout check           10s
   maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
   mode                 tcp
   bind                 *:8443
   option               tcplog
   default_backend      kubernetes-apiserver

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
   mode        tcp
   balance     roundrobin
   server  dev-k8s-master1 172.19.201.244:6443 check
   server  dev-k8s-master2 172.19.201.249:6443 check
   server  dev-k8s-master3 172.19.201.248:6443 check

#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
   bind                 *:1080
   stats auth           admin:awesomePassword
   stats refresh        5s
   stats realm          HAProxy\ Statistics
   stats uri            /admin?stats
EOF

Copy the configuration file to the other two master nodes

scp -pr /etc/haproxy/haproxy.cfg root@dev-k8s-master2:/etc/haproxy/

Start and check the service

systemctl enable haproxy.service

systemctl start haproxy.service

systemctl status haproxy.service

ss -lnt | grep -E "8443|1080"

05. Deploy the etcd cluster

Download and distribute the etcd binaries

Download the latest release tarball from the etcd release page:

cd /opt/k8s/work

wget https://github.com/coreos/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz

tar -xvf etcd-v3.3.13-linux-amd64.tar.gz

Distribute the binaries to all cluster nodes:

cd /opt/k8s/work

scp etcd-v3.3.13-linux-amd64/etcd* root@${node_ip}:/opt/k8s/bin

chmod +x /opt/k8s/bin/*

Create the etcd certificate and private key

Create the certificate signing request:

cd /opt/k8s/work
cat > etcd-csr.json <
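
The CSR body is missing above. The hosts list must contain the IP of every etcd node (here the three masters); the names values are assumptions:

cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.19.201.244",
    "172.19.201.249",
    "172.19.201.248"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "ops"
    }
  ]
}
EOF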

Generate the certificate and private key:

cd /opt/k8s/work

cfssl gencert -ca=/opt/k8s/work/ca.pem \

-ca-key=/opt/k8s/work/ca-key.pem \

-config=/opt/k8s/work/ca-config.json \

-profile=kubernetes etcd-csr.json | cfssljson -bare etcd

Distribute the generated certificate and private key to each etcd node:

cd /opt/k8s/work

mkdir -p /etc/etcd/cert

scp etcd*.pem root@dev-k8s-master1:/etc/etcd/cert/

Create the etcd systemd unit file

vim /etc/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/data/k8s/etcd/data
ExecStart=/opt/k8s/bin/etcd \
  --data-dir=/data/k8s/etcd/data \
  --wal-dir=/data/k8s/etcd/wal \
  --name=dev-k8s-master1 \
  --cert-file=/etc/etcd/cert/etcd.pem \
  --key-file=/etc/etcd/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/etcd/cert/etcd.pem \
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://172.19.201.244:2380 \
  --initial-advertise-peer-urls=https://172.19.201.244:2380 \
  --listen-client-urls=https://172.19.201.244:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://172.19.201.244:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=dev-k8s-master1=https://172.19.201.244:2380,dev-k8s-master2=https://172.19.201.249:2380,dev-k8s-master3=https://172.19.201.248:2380 \
  --initial-cluster-state=new \
  --auto-compaction-mode=periodic \
  --auto-compaction-retention=1 \
  --max-request-bytes=33554432 \
  --quota-backend-bytes=6442450944 \
  --heartbeat-interval=250 \
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
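
The unit file above is the one for dev-k8s-master1; on dev-k8s-master2 and dev-k8s-master3 the --name and the listen/advertise IP addresses have to be changed accordingly. The start step itself is not shown in the document; assuming the usual systemd workflow, the data directories are created and the service started on every etcd node with:

mkdir -p /data/k8s/etcd/data /data/k8s/etcd/wal

systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd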

Verify the service status

After the etcd cluster has been deployed, run the following on any etcd node:

cd /opt/k8s/work

ETCDCTL_API=3 /opt/k8s/bin/etcdctl \

--endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \

--cacert=/opt/k8s/work/ca.pem \

--cert=/etc/etcd/cert/etcd.pem \

--key=/etc/etcd/cert/etcd-key.pem endpoint health

Output: the cluster is working correctly when every endpoint reports healthy.

Check the current leader

ETCDCTL_API=3 /opt/k8s/bin/etcdctl \

-w table --cacert=/opt/k8s/work/ca.pem \

--cert=/etc/etcd/cert/etcd.pem \

--key=/etc/etcd/cert/etcd-key.pem \

--endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 endpoint status

06. Deploy the flannel network

Download and distribute the flanneld binaries

Download the latest release tarball from the flannel release page:

cd /opt/k8s/work

mkdir flannel

wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz

tar -xzvf flannel-v0.11.0-linux-amd64.tar.gz -C flannel

Distribute the binaries to all cluster nodes:

cd /opt/k8s/work

source /opt/k8s/bin/environment.sh

scp flannel/{flanneld,mk-docker-opts.sh} root@dev-k8s-node1:/opt/k8s/bin/

chmod +x /opt/k8s/bin/*

Create the flannel certificate and private key

flanneld reads and writes the subnet allocation information in the etcd cluster, and the etcd cluster has mutual x509 certificate authentication enabled, so a certificate and private key must be generated for flanneld.

Create the certificate signing request:

cd /opt/k8s/work
cat > flanneld-csr.json <

  • This certificate is only used by flanneld as a client certificate, so the hosts field is empty (a sketch of the request follows below).
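
The CSR body is missing above; a version consistent with that note (the names values are assumptions) is:

cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "ops"
    }
  ]
}
EOF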

Generate the certificate and private key:

cfssl gencert -ca=/opt/k8s/work/ca.pem \

-ca-key=/opt/k8s/work/ca-key.pem \

-config=/opt/k8s/work/ca-config.json \

-profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

Distribute the generated certificate and private key to all nodes (masters and workers):

cd /opt/k8s/work

mkdir -p /etc/flanneld/cert

scp flanneld*.pem root@dev-k8s-master1:/etc/flanneld/cert

Write the cluster Pod network information into etcd

Note: this step only needs to be performed once.

cd /opt/k8s/work

etcdctl \

--endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \

--ca-file=/opt/k8s/work/ca.pem \

--cert-file=/opt/k8s/work/flanneld.pem \

--key-file=/opt/k8s/work/flanneld-key.pem \

set /kubernetes/network/config '{"Network":"10.10.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}'

  • The current flanneld version (v0.11.0) does not support the etcd v3 API, so the configuration key and subnet data are written with the etcd v2 API;

  • The prefix of the Pod network written here (${CLUSTER_CIDR}, a /16) must be shorter than SubnetLen, and the value must match the --cluster-cidr parameter of kube-controller-manager.

Create the flanneld systemd unit file

cat /etc/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \
  -etcd-cafile=/etc/kubernetes/cert/ca.pem \
  -etcd-certfile=/etc/flanneld/cert/flanneld.pem \
  -etcd-keyfile=/etc/flanneld/cert/flanneld-key.pem \
  -etcd-endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \
  -etcd-prefix=/kubernetes/network \
  -iface=eno1 \
  -ip-masq
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

Start the flanneld service

systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld

Check the Pod subnets allocated to each flanneld

View the cluster Pod network (/16):

etcdctl \

--endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \

--ca-file=/etc/kubernetes/cert/ca.pem \

--cert-file=/etc/flanneld/cert/flanneld.pem \

--key-file=/etc/flanneld/cert/flanneld-key.pem \

get /kubernetes/network/config

Output:

{"Network":"10.10.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}

View the list of allocated Pod subnets (/21):

etcdctl \

--endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \

--ca-file=/etc/kubernetes/cert/ca.pem \

--cert-file=/etc/flanneld/cert/flanneld.pem \

--key-file=/etc/flanneld/cert/flanneld-key.pem \

ls /kubernetes/network/subnets

Output (depends on the deployment):

View the node IP and flannel interface address that correspond to one Pod subnet:

etcdctl \

--endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \

--ca-file=/etc/kubernetes/cert/ca.pem \

--cert-file=/etc/flanneld/cert/flanneld.pem \

--key-file=/etc/flanneld/cert/flanneld-key.pem \

get /kubernetes/network/subnets/10.10.80.0-21

Output (depends on the deployment):

Check the flannel network information on a node

The flannel.1 interface carries the first IP (.0) of the allocated Pod subnet, as a /32 address;

[root@dev-k8s-node1 ~]# ip route show |grep flannel.1

Verify that the nodes can reach each other over the Pod network

After flannel has been deployed on each node, check that a flannel interface was created (its name may be flannel0, flannel.0, flannel.1 and so on):

ssh dev-k8s-node2 "/usr/sbin/ip addr show flannel.1|grep -w inet"

From each node, ping the flannel interface IPs of all nodes and make sure they respond:

ssh dev-k8s-node2 "ping -c 2 10.10.176.0"

07. Deploy a highly available kube-apiserver cluster

Create the kubernetes certificate and private key

Create the certificate signing request:

cd /opt/k8s/work
cat > kubernetes-csr.json <

The kubernetes service IP is created automatically by the apiserver; it is usually the first IP of the network given by the --service-cluster-ip-range parameter and can be looked up later with:

kubectl get svc kubernetes
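
The CSR body is missing above. Its hosts list must cover every address the apiserver is reached by: the master IPs, the VIP, the first service IP (10.254.0.1 for the 10.254.0.0/16 range used below) and the in-cluster DNS names of the kubernetes service. A sketch, with the usual names values assumed:

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.19.201.244",
    "172.19.201.249",
    "172.19.201.248",
    "172.19.201.242",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "ops"
    }
  ]
}
EOF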

Generate the certificate and private key:

cfssl gencert -ca=/opt/k8s/work/ca.pem \

-ca-key=/opt/k8s/work/ca-key.pem \

-config=/opt/k8s/work/ca-config.json \

-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

ls kubernetes*pem

Copy the generated certificate and private key files to all master nodes:

cd /opt/k8s/work

mkdir -p /etc/kubernetes/cert

scp kubernetes*.pem root@dev-k8s-master1:/etc/kubernetes/cert/

Create the encryption configuration file

cd /opt/k8s/work
cat > encryption-config.yaml <
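
The file body is missing above. kube-apiserver loads this file through --encryption-provider-config; a minimal version that encrypts Secrets with a randomly generated AES-CBC key could be:

cat > encryption-config.yaml <<EOF
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64)
      - identity: {}
EOF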

Copy the encryption configuration file into the /etc/kubernetes directory on the master nodes:

cd /opt/k8s/work

scp encryption-config.yaml root@dev-k8s-master1:/etc/kubernetes/

Create the audit policy file

cd /opt/k8s/work
cat > audit-policy.yaml <
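
The policy body was lost above. The file referenced by --audit-policy-file must be a valid audit.k8s.io/v1 Policy; a minimal sketch that skips read-only health endpoints and records metadata for everything else (the exact rules here are an assumption; production setups usually use a much more detailed rule set) is:

cat > audit-policy.yaml <<EOF
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Do not log read-only probe requests.
  - level: None
    nonResourceURLs:
      - /healthz*
      - /version
      - /swagger*
  # Log request metadata for everything else.
  - level: Metadata
    omitStages:
      - "RequestReceived"
EOF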

Distribute the audit policy file:

cd /opt/k8s/work

scp audit-policy.yaml root@dev-k8s-master1:/etc/kubernetes/audit-policy.yaml

Create the certificate used later to access metrics-server

Create the certificate signing request:

cat > proxy-client-csr.json <
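
The CSR body is not shown. The CN must match the --requestheader-allowed-names value ("aggregator") used in the apiserver unit file below; the names values are assumptions:

cat > proxy-client-csr.json <<EOF
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "ops"
    }
  ]
}
EOF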


Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \

-ca-key=/etc/kubernetes/cert/ca-key.pem \

-config=/etc/kubernetes/cert/ca-config.json \

-profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client

ls proxy-client*.pem

Copy the generated certificate and private key files to all master nodes:

scp proxy-client*.pem root@dev-k8s-master1:/etc/kubernetes/cert/

Create the kube-apiserver systemd unit configuration file

cd /opt/k8s/work

vim /etc/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/data/k8s/k8s/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \
  --advertise-address=172.19.201.244 \
  --default-not-ready-toleration-seconds=360 \
  --default-unreachable-toleration-seconds=360 \
  --feature-gates=DynamicAuditing=true \
  --max-mutating-requests-inflight=2000 \
  --max-requests-inflight=4000 \
  --default-watch-cache-size=200 \
  --delete-collection-workers=2 \
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \
  --etcd-servers=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \
  --bind-address=172.19.201.244 \
  --secure-port=6443 \
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  --insecure-port=0 \
  --audit-dynamic-configuration \
  --audit-log-maxage=15 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-truncate-enabled \
  --audit-log-path=/data/k8s/k8s/kube-apiserver/audit.log \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --profiling \
  --anonymous-auth=false \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --enable-bootstrap-token-auth \
  --requestheader-allowed-names="aggregator" \
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --service-account-key-file=/etc/kubernetes/cert/ca.pem \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-admission-plugins=NodeRestriction \
  --allow-privileged=true \
  --apiserver-count=3 \
  --event-ttl=168h \
  --kubelet-certificate-authority=/etc/kubernetes/cert/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \
  --kubelet-https=true \
  --kubelet-timeout=10s \
  --proxy-client-cert-file=/etc/kubernetes/cert/proxy-client.pem \
  --proxy-client-key-file=/etc/kubernetes/cert/proxy-client-key.pem \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=30000-32767 \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=10
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start the kube-apiserver service

The working directory must be created before the service is started:

mkdir -p /data/k8s/k8s/kube-apiserver

systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver

Print the data that kube-apiserver has written into etcd

ETCDCTL_API=3 etcdctl \

--endpoints=https://172.19.201.244:2379,https://172.19.201.249:2379,https://172.19.201.248:2379 \

--cacert=/opt/k8s/work/ca.pem \

--cert=/opt/k8s/work/etcd.pem \

--key=/opt/k8s/work/etcd-key.pem \

get /registry/ --prefix --keys-only

Check the cluster information

$ kubectl cluster-info

Kubernetes master is running at https://172.19.201.242:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl get all --all-namespaces

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

default service/kubernetes ClusterIP 10.254.0.1 <none> 443/TCP 12m

$ kubectl get componentstatuses

Check the ports kube-apiserver is listening on

sudo netstat -lnpt|grep kube

Grant kube-apiserver access to the kubelet API

When commands such as kubectl exec, run or logs are executed, the apiserver forwards the request to the kubelet's HTTPS port. The RBAC rule below grants the user of the certificate used by the apiserver (kubernetes.pem, CN: kubernetes) access to the kubelet API:

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

08. Deploy a highly available kube-controller-manager cluster

Create the kube-controller-manager certificate and private key

Create the certificate signing request:

cd /opt/k8s/work
cat > kube-controller-manager-csr.json <
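
The CSR body is missing above. CN and O must be system:kube-controller-manager so that the built-in RBAC rules for that user apply, and hosts lists the master IPs because the certificate is also used for the component's HTTPS endpoint; the remaining names values are assumptions:

cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "127.0.0.1",
    "172.19.201.244",
    "172.19.201.249",
    "172.19.201.248"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "ops"
    }
  ]
}
EOF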

Generate the certificate and private key:

cd /opt/k8s/work

cfssl gencert -ca=/opt/k8s/work/ca.pem \

-ca-key=/opt/k8s/work/ca-key.pem \

-config=/opt/k8s/work/ca-config.json \

-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

ls kube-controller-manager*pem

Distribute the generated certificate and private key to all master nodes:

cd /opt/k8s/work

scp kube-controller-manager*.pem root@dev-k8s-master1:/etc/kubernetes/cert/

Create and distribute the kubeconfig file

kube-controller-manager uses a kubeconfig file to access the apiserver; the file provides the apiserver address, the embedded CA certificate and the kube-controller-manager certificate:

cd /opt/k8s/work

kubectl config set-cluster kubernetes \

--certificate-authority=/opt/k8s/work/ca.pem \

--embed-certs=true \

--server=https://172.19.201.242:8443 \

--kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \

--client-certificate=kube-controller-manager.pem \

--client-key=kube-controller-manager-key.pem \

--embed-certs=true \

--kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager \

--cluster=kubernetes \

--user=system:kube-controller-manager \

--kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

Distribute the kubeconfig to all master nodes:

cd /opt/k8s/work

scp kube-controller-manager.kubeconfig root@dev-k8s-master1:/etc/kubernetes/

Create the kube-controller-manager systemd unit template file

cd /opt/k8s/work

cat /etc/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/data/k8s/k8s/kube-controller-manager
ExecStart=/opt/k8s/bin/kube-controller-manager \
  --port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --authentication-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  --experimental-cluster-signing-duration=8760h \
  --root-ca-file=/etc/kubernetes/cert/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \
  --leader-elect=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --kube-api-qps=1000 \
  --kube-api-burst=2000 \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Create the directory

mkdir -p /data/k8s/k8s/kube-controller-manager

Start the service

systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager

kube-controller-manager listens on port 10252 and accepts https requests:

sudo netstat -lnpt | grep kube-cont

Grant kube-controller-manager the required permissions

kubectl create clusterrolebinding controller-manager:system:auth-delegator --user system:kube-controller-manager --clusterrole system:auth-delegator

kubectl describe clusterrole system:kube-controller-manager

kubectl get clusterrole|grep controller

kubectl describe clusterrole system:controller:deployment-controller

Check the current leader

kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml

09. Deploy a highly available kube-scheduler cluster

Create the kube-scheduler certificate and private key

Create the certificate signing request:

cd /opt/k8s/work
cat > kube-scheduler-csr.json <
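
The CSR body is missing above. As with kube-controller-manager, CN and O must be system:kube-scheduler and hosts lists the master IPs; the remaining names values are assumptions:

cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "172.19.201.244",
    "172.19.201.249",
    "172.19.201.248"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-scheduler",
      "OU": "ops"
    }
  ]
}
EOF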

Generate the certificate and private key:

cd /opt/k8s/work

cfssl gencert -ca=/opt/k8s/work/ca.pem \

-ca-key=/opt/k8s/work/ca-key.pem \

-config=/opt/k8s/work/ca-config.json \

-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Distribute the generated certificate and private key to all master nodes:

cd /opt/k8s/work

scp kube-scheduler*.pem root@dev-k8s-master1:/etc/kubernetes/cert/

Create and distribute the kubeconfig file

kube-scheduler uses a kubeconfig file to access the apiserver; the file provides the apiserver address, the embedded CA certificate and the kube-scheduler certificate:

cd /opt/k8s/work

kubectl config set-cluster kubernetes \

--certificate-authority=/opt/k8s/work/ca.pem \

--embed-certs=true \

--server=https://172.19.201.242:8443 \

--kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \

--client-certificate=kube-scheduler.pem \

--client-key=kube-scheduler-key.pem \

--embed-certs=true \

--kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler \

--cluster=kubernetes \

--user=system:kube-scheduler \

--kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

Distribute the kubeconfig to all master nodes:

cd /opt/k8s/work

scp kube-scheduler.kubeconfig root@dev-k8s-master1:/etc/kubernetes/

Create the kube-scheduler configuration file

cd /opt/k8s/work
cat > kube-scheduler.yaml <
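
The YAML body is missing above. For the v1.14 scheduler the file referenced by --config is a KubeSchedulerConfiguration; a minimal sketch consistent with the unit file below (the field values are assumptions) is:

cat > kube-scheduler.yaml <<EOF
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
healthzBindAddress: 127.0.0.1:10251
metricsBindAddress: 127.0.0.1:10251
EOF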

Distribute the kube-scheduler configuration file to all master nodes:

scp kube-scheduler.yaml root@dev-k8s-master1:/etc/kubernetes/

Create the kube-scheduler systemd file

cat /etc/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/data/k8s/k8s/kube-scheduler
ExecStart=/opt/k8s/bin/kube-scheduler \
  --config=/etc/kubernetes/kube-scheduler.yaml \
  --address=127.0.0.1 \
  --kube-api-qps=100 \
  --logtostderr=true \
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target

Create the directory

mkdir -p /data/k8s/k8s/kube-scheduler

Start the kube-scheduler service

systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler

Check the service status

systemctl status kube-scheduler

View the exposed metrics

sudo netstat -lnpt |grep kube-sch

curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.19.201.244:10259/metrics |head

Check the current leader

$ kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml

10. Deploy the docker component

Download and distribute the docker binaries

Download the latest release from the docker download page:

cd /opt/k8s/work

wget https://download.docker.com/linux/static/stable/x86_64/docker-18.09.6.tgz

tar -xvf docker-18.09.6.tgz

Distribute the binaries to all worker nodes:

cd /opt/k8s/work

scp docker/* root@dev-k8s-node1:/opt/k8s/bin/

ssh root@dev-k8s-node1 "chmod +x /opt/k8s/bin/*"

Create and distribute the systemd unit file on the worker nodes

cat /etc/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
WorkingDirectory=/data/k8s/docker
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/opt/k8s/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target

sudo iptables -P FORWARD ACCEPT

/sbin/iptables -P FORWARD ACCEPT

Distribute the systemd unit file to all worker machines:

cd /opt/k8s/work

scp docker.service root@dev-k8s-node1:/etc/systemd/system/

Configure and distribute the docker configuration file

Use domestic registry mirrors to speed up image pulls and raise the download concurrency (dockerd must be restarted for the change to take effect):

cd /opt/k8s/work

cat /etc/docker/daemon.json
{
    "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn","https://hub-mirror.c.163.com"],
    "insecure-registries": ["docker02:35000"],
    "max-concurrent-downloads": 20,
    "live-restore": true,
    "max-concurrent-uploads": 10,
    "debug": true,
    "data-root": "/data/k8s/docker/data",
    "exec-root": "/data/k8s/docker/exec",
    "log-opts": {
      "max-size": "100m",
      "max-file": "5"
    }
}

Distribute the docker configuration file to all worker nodes:

mkdir -p /etc/docker/ /data/k8s/docker/{data,exec}

scp docker-daemon.json root@dev-k8s-node1:/etc/docker/daemon.json

Start the docker service

systemctl daemon-reload && systemctl enable docker && systemctl restart docker

Check the service status

systemctl status docker|grep active



12. Deploy the kubelet component

Create the kubelet bootstrap kubeconfig files

cd /opt/k8s/work

vim /opt/k8s/bin/environment.sh
#!/bin/bash
KUBE_APISERVER="https://172.19.201.242:8443"
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
NODE_NAMES=(dev-k8s-node1 dev-k8s-node2 dev-k8s-node3)

source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
do
  echo ">>> ${node_name}"

  # Create a token
  export BOOTSTRAP_TOKEN=$(kubeadm token create \
    --description kubelet-bootstrap-token \
    --groups system:bootstrappers:${node_name} \
    --kubeconfig ~/.kube/config)

  # Set cluster parameters
  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/cert/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

  # Set client authentication parameters
  kubectl config set-credentials kubelet-bootstrap \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

  # Set context parameters
  kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig

  # Set the default context
  kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
done

A token, not a certificate, is written into this kubeconfig; the certificates are created later by kube-controller-manager.

View the tokens that kubeadm created for each node:

kubeadm token list --kubeconfig ~/.kube/config

Distribute the bootstrap kubeconfig files to all worker nodes

scp -pr kubelet-bootstrap-dev-k8s-master1.kubeconfig root@dev-k8s-master1:/etc/kubernetes/kubelet-bootstrap.kubeconfig

Note: copy each file to its corresponding host.

Create and distribute the kubelet parameter configuration file

cat > /etc/kubernetes/kubelet-config.yaml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/cert/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.254.0.2"
podCIDR: ""
maxPods: 220
serializeImagePulls: false
hairpinMode: promiscuous-bridge
cgroupDriver: cgroupfs
runtimeRequestTimeout: "15m"
rotateCertificates: true
serverTLSBootstrap: true
readOnlyPort: 0
port: 10250
address: "172.19.201.247"
EOF

Create and distribute the kubelet configuration file for each node (distribute it to the worker nodes):

scp -pr /etc/kubernetes/kubelet-config.yaml root@dev-k8s-master2:/etc/kubernetes/

Create and distribute the kubelet systemd unit file

cat /etc/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/data/k8s/k8s/kubelet
ExecStart=/opt/k8s/bin/kubelet \
  --root-dir=/data/k8s/k8s/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/cert \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet-config.yaml \
  --hostname-override=dev-k8s-node1 \
  --pod-infra-container-image=registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1 \
  --allow-privileged=true \
  --event-qps=0 \
  --kube-api-qps=1000 \
  --kube-api-burst=2000 \
  --registry-qps=0 \
  --image-pull-progress-deadline=30m \
  --logtostderr=true \
  --v=2
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target

Create and distribute the kubelet systemd unit file for each node:

scp -pr /etc/systemd/system/kubelet.service root@dev-k8s-node1:/etc/systemd/system/

Bootstrap token auth and granting permissions:

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

Create the working directory

mkdir -p /data/k8s/k8s/kubelet

Turn off swap, otherwise kubelet will fail to start

/usr/sbin/swapoff -a

Start the kubelet service

systemctl daemon-reload && systemctl restart kubelet && systemctl enable kubelet

Check whether the service started

systemctl status kubelet |grep active

Automatically approve CSR requests

Create three ClusterRoleBindings, used respectively to automatically approve client certificates, renew client certificates and renew server certificates:

cd /opt/k8s/work

cat > csr-crb.yaml <
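
The YAML body is missing above. A version that matches that description, binding the bootstrap and node groups to the corresponding built-in approval ClusterRoles (the approve-node-server-renewal-csr ClusterRole name is an assumption taken from common practice), is:

cat > csr-crb.yaml <<EOF
# Approve all CSRs from the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# Allow a node to renew its own client credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# ClusterRole that allows a node to request a serving certificate for itself
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF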

Apply the configuration:

kubectl apply -f csr-crb.yaml

Check the kubelets

After a while (1-10 minutes), the CSRs of the three nodes are approved automatically:

Manually approve the server cert CSRs

For security reasons, the CSR approving controllers do not automatically approve kubelet server certificate signing requests, so they must be approved manually:

kubectl get csr

kubectl certificate approve csr-bjtp4

13. Deploy the kube-proxy component

kube-proxy runs on all worker nodes; it watches the apiserver for changes to services and endpoints and creates routing rules that load-balance the services.

Create the kube-proxy certificate

Create the certificate signing request:

cd /opt/k8s/work

cat > kube-proxy-csr.json <
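
The CSR body is missing above. The CN must be system:kube-proxy so that the built-in system:node-proxier ClusterRoleBinding applies; the names values are assumptions:

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "ops"
    }
  ]
}
EOF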

Generate the certificate and private key:

cfssl gencert -ca=/opt/k8s/work/ca.pem \

-ca-key=/opt/k8s/work/ca-key.pem \

-config=/opt/k8s/work/ca-config.json \

-profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Create and distribute the kubeconfig file

kubectl config set-cluster kubernetes \

--certificate-authority=/opt/k8s/work/ca.pem \

--embed-certs=true \

--server=https://172.19.201.242:8443 \

--kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \

--client-certificate=kube-proxy.pem \

--client-key=kube-proxy-key.pem \

--embed-certs=true \

--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \

--cluster=kubernetes \

--user=kube-proxy \

--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Distribute the kubeconfig file (copy it to the worker nodes):

scp kube-proxy.kubeconfig root@dev-k8s-node1:/etc/kubernetes/

Create the kube-proxy configuration file

cat >  /etc/kubernetes/kube-proxy-config.yaml <
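
The YAML body is missing above. For kube-proxy v1.14 the file is a KubeProxyConfiguration; a sketch consistent with this cluster (Pod CIDR 10.10.0.0/16, the node's own address, ipvs mode since ipvsadm/ipset were installed earlier; the per-node values must be adjusted as the note below says) is:

cat > /etc/kubernetes/kube-proxy-config.yaml <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/etc/kubernetes/kube-proxy.kubeconfig"
bindAddress: 172.19.201.247
clusterCIDR: 10.10.0.0/16
healthzBindAddress: 172.19.201.247:10256
metricsBindAddress: 172.19.201.247:10249
hostnameOverride: dev-k8s-node1
mode: "ipvs"
EOF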

Note: edit the configuration file on each node and fill in the corresponding hostname.

Create and distribute the kube-proxy configuration file for each node (copy it to all worker nodes):

scp -pr /etc/kubernetes/kube-proxy-config.yaml root@dev-k8s-node1:/etc/kubernetes/


Create and distribute the kube-proxy systemd unit file

cat /etc/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/data/k8s/k8s/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy-config.yaml \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

The working directory must be created first

mkdir -p /data/k8s/k8s/kube-proxy

Start the kube-proxy service

systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy

Check the result and make sure the status is active (running)

systemctl status kube-proxy|grep active

Check the listening ports and metrics

netstat -lnpt|grep kube-proxy

14. Deploy the coredns add-on

Modify the configuration file

Unpack the downloaded kubernetes-server-linux-amd64.tar.gz, then unpack the kubernetes-src.tar.gz inside it.

cd /opt/k8s/work/kubernetes/

tar -xzvf kubernetes-src.tar.gz

The coredns directory is cluster/addons/dns:

cd /opt/k8s/work/kubernetes/cluster/addons/dns/coredns

cp coredns.yaml.base coredns.yaml

source /opt/k8s/bin/environment.sh

sed -i -e "s/__PILLAR__DNS__DOMAIN__/${CLUSTER_DNS_DOMAIN}/" -e "s/__PILLAR__DNS__SERVER__/${CLUSTER_DNS_SVC_IP}/" coredns.yaml
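
The sed command relies on CLUSTER_DNS_DOMAIN and CLUSTER_DNS_SVC_IP from environment.sh, which are not defined anywhere above; values consistent with the kubelet configuration used earlier (clusterDomain cluster.local, clusterDNS 10.254.0.2) would be:

# assumed additions to /opt/k8s/bin/environment.sh
export CLUSTER_DNS_DOMAIN="cluster.local"
export CLUSTER_DNS_SVC_IP="10.254.0.2"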

Create coredns

kubectl create -f coredns.yaml

Check that coredns works

[root@dev-k8s-master1 test]# kubectl get pods -n kube-system

NAME READY STATUS RESTARTS AGE

coredns-6dcf4d5b7b-tvn26 1/1 Running 5 17h

15. Deploy the ingress add-on

Download the source package

wget https://github.com/kubernetes/ingress-nginx/archive/nginx-0.20.0.tar.gz

tar -zxvf nginx-0.20.0.tar.gz

Enter the working directory

cd ingress-nginx-nginx-0.20.0/deploy

Create the ingress controller

kubectl create -f mandatory.yaml

cd /opt/k8s/work/ingress-nginx-nginx-0.20.0/deploy/provider/baremetal

Create the ingress service

kubectl create -f service-nodeport.yaml

Check whether ingress-nginx started

kubectl get pods -n ingress-nginx

16. Deploy the dashboard add-on

Modify the configuration file

cd /opt/k8s/work/kubernetes/cluster/addons/dashboard

Modify the service definition and set the port type to NodePort, so that the dashboard can be reached from outside the cluster at NodeIP:NodePort;

cat dashboard-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: NodePort # add this line
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443

Apply all the definition files

$ ls *.yaml

dashboard-configmap.yaml dashboard-controller.yaml dashboard-rbac.yaml dashboard-secret.yaml dashboard-service.yaml

$ kubectl apply -f .

View the allocated NodePort

$ kubectl get svc kubernetes-dashboard -n kube-system

Create the token and kubeconfig file for logging in to the Dashboard

The dashboard only supports token authentication by default (client certificate authentication is not supported), so when a kubeconfig file is used the token has to be written into that file.

Create a login token

kubectl create sa dashboard-admin -n kube-system

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')

DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')

echo ${DASHBOARD_LOGIN_TOKEN}

Log in to the Dashboard with the token printed above.

Create a KubeConfig file that uses the token

kubectl config set-cluster kubernetes \

--certificate-authority=/etc/kubernetes/cert/ca.pem \

--embed-certs=true \

--server=https://172.19.201.242:8443 \

--kubeconfig=dashboard.kubeconfig

# Set client authentication parameters, using the token created above

kubectl config set-credentials dashboard_user \

--token=${DASHBOARD_LOGIN_TOKEN} \

--kubeconfig=dashboard.kubeconfig

# Set context parameters

kubectl config set-context default \

--cluster=kubernetes \

--user=dashboard_user \

--kubeconfig=dashboard.kubeconfig

# Set the default context

kubectl config use-context default --kubeconfig=dashboard.kubeconfig

Log in to the Dashboard with the generated dashboard.kubeconfig.

17. Troubleshooting

When a new node is added to the cluster, pods created on the new node cannot be assigned an IP and report the following error:

Warning FailedCreatePodSandBox 72s (x26 over 6m40s) kubelet, dev-k8s-master2 Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1": Error response from daemon: pull access denied for registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64, repository does not exist or may require 'docker login'

Solution:

On the node:

docker pull lc13579443/pause-amd64

docker tag lc13579443/pause-amd64 registry.cn-beijing.aliyuncs.com/k8s_images/pause-amd64:3.1

Restart kubelet

systemctl daemon-reload && systemctl restart kubelet


