What's Changed in Kubernetes 1.17.0
This article walks through what has changed in Kubernetes 1.17.0. The material is kept simple and clear and is easy to follow, so let's dig in step by step.
Kubernetes 1.17.0 has been released, and it changes considerably compared with earlier versions.
The container image versions for each component are as follows:
k8s.gcr.io/kube-apiserver:v1.17.0
k8s.gcr.io/kube-controller-manager:v1.17.0
k8s.gcr.io/kube-scheduler:v1.17.0
k8s.gcr.io/kube-proxy:v1.17.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
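kubeadm can generate this same list itself, which is a handy cross-check before mirroring; a minimal sketch, assuming kubeadm v1.17.0 is already installed:

# print the images kubeadm needs for this Kubernetes version
kubeadm config images list --kubernetes-version v1.17.0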
Pulling the container images:
The original Kubernetes image files are hosted on gcr.io and cannot be downloaded directly. I have copied the images to a container registry in Alibaba Cloud's Hangzhou region, from which pulls are fairly fast.
echo ""echo "=========================================================="echo "Pull Kubernetes v1.17.0 Images from aliyuncs.com ......"echo "=========================================================="echo ""MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings## 拉取镜像docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-apiserver:v1.17.0docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-controller-manager:v1.17.0docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-scheduler:v1.17.0docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.17.0docker pull ${MY_REGISTRY}/k8s-gcr-io-etcd:3.4.3-0docker pull ${MY_REGISTRY}/k8s-gcr-io-pause:3.1docker pull ${MY_REGISTRY}/k8s-gcr-io-coredns:1.6.5## 添加Tagdocker tag ${MY_REGISTRY}/k8s-gcr-io-kube-apiserver:v1.17.0 k8s.gcr.io/kube-apiserver:v1.17.0docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-scheduler:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-controller-manager:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0docker tag ${MY_REGISTRY}/k8s-gcr-io-etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0docker tag ${MY_REGISTRY}/k8s-gcr-io-pause:3.1 k8s.gcr.io/pause:3.1docker tag ${MY_REGISTRY}/k8s-gcr-io-coredns:1.6.5 k8s.gcr.io/coredns:1.6.5echo ""echo "=========================================================="echo "Pull Kubernetes v1.17.0 Images FINISHED."echo "into registry.cn-hangzhou.aliyuncs.com/openthings, "echo " by openthings@https://my.oschina.net/u/2306127."echo "=========================================================="echo ""保存为shell脚本,然后执行。
Creating a new cluster:
(base) supermap@openbox00:~/iobjectspy$ sudo kubeadm init --kubernetes-version=v1.17.0 --apiserver-advertise-address=192.168.199.173 --pod-network-cidr=10.244.0.0/16
W1213 10:44:01.861855   14517 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1213 10:44:01.861884   14517 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [openbox00 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.199.173]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [openbox00 localhost] and IPs [192.168.199.173 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [openbox00 localhost] and IPs [192.168.199.173 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1213 10:44:05.415511   14517 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1213 10:44:05.416242   14517 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.001902 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node openbox00 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node openbox00 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: iq5i5d.xbrsj7ilq026786r
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.199.173:6443 --token iq5i5d.xbrsj7ilq026786r \
    --discovery-token-ca-cert-hash sha256:1275462841fd4d1a65734869bf75b73e80786cb7cd923937a6cdcec8f968c495
(base) supermap@openbox00:~/iobjectspy$
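The --pod-network-cidr=10.244.0.0/16 used above matches flannel's default subnet, so flannel is a natural choice of pod network. A minimal sketch of the remaining steps; the flannel manifest URL is an assumption (it was the commonly used location at the time and may have moved since):

# configure kubectl for the regular user, exactly as the init output instructs
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# deploy flannel as the pod network (manifest URL is an assumption; check the flannel project for the current location)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# the node should report Ready once the network add-on is running
kubectl get nodes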
How to specify --control-plane-endpoint:
sudo kubeadm init --kubernetes-version=v1.17.0 \
    --apiserver-advertise-address=192.168.199.173 \
    --control-plane-endpoint=192.168.199.173:6443 \
    --pod-network-cidr=10.244.0.0/16 \
    --upload-certs
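In a real HA setup, --control-plane-endpoint would normally point at a load balancer or shared DNS name in front of all control-plane nodes rather than a single node's IP as above; a sketch under that assumption (k8s-api.example.local is a hypothetical hostname):

# k8s-api.example.local is a hypothetical load-balancer/DNS name fronting every API server
sudo kubeadm init --kubernetes-version=v1.17.0 \
    --control-plane-endpoint=k8s-api.example.local:6443 \
    --pod-network-cidr=10.244.0.0/16 \
    --upload-certs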
To create a highly available cluster with kubeadm, see:
Creating Highly Available clusters with kubeadm
Note that when kubeadm init is run for a setup with multiple master (control-plane) nodes, the output differs somewhat:
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.199.173:6443 --token rlxvkn.2ine1loolri50tzt \
    --discovery-token-ca-cert-hash sha256:86e68de8febb844ab8f015f6af4526d78a980d9cdcf7863eebb05b17c24b9383 \
    --control-plane --certificate-key 440a880086e7e9cbbcebbd7924e6a9562d77ee8de7e0ec63511436f2467f7dde

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.199.173:6443 --token rlxvkn.2ine1loolri50tzt \
    --discovery-token-ca-cert-hash sha256:86e68de8febb844ab8f015f6af4526d78a980d9cdcf7863eebb05b17c24b9383
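After the additional control-plane nodes have joined, membership can be verified from any machine with admin credentials, and if the two-hour certificate window has expired, a fresh certificate key can be generated exactly as the output above suggests:

# every control-plane node should appear with the master role
kubectl get nodes -o wide

# re-upload the control-plane certificates and print a new --certificate-key
sudo kubeadm init phase upload-certs --upload-certs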
Thanks for reading. That concludes "What's Changed in Kubernetes 1.17.0"; hopefully it has given you a clearer picture of the release, though the specifics are still worth verifying in your own environment.