千家信息网

Kubernetes Deployment (10): Storage with GlusterFS and Heketi

Published: 2025-12-02 · Author: 千家信息网 editors

Related posts:

Kubernetes Deployment (1): Architecture and Feature Overview
Kubernetes Deployment (2): System Environment Initialization
Kubernetes Deployment (3): CA Certificate Creation
Kubernetes Deployment (4): etcd Cluster Deployment
Kubernetes Deployment (5): HAProxy and Keepalived Deployment
Kubernetes Deployment (6): Master Node Deployment
Kubernetes Deployment (7): Node Deployment
Kubernetes Deployment (8): Flannel Network Deployment
Kubernetes Deployment (9): CoreDNS, Dashboard, and Ingress Deployment
Kubernetes Deployment (10): Storage with GlusterFS and Heketi
Kubernetes Deployment (11): Management with Helm and Rancher
Kubernetes Deployment (12): Deploying the Harbor Enterprise Registry with Helm

Overview

This guide covers integrating, deploying, and managing containerized GlusterFS storage nodes in a Kubernetes cluster. It lets Kubernetes administrators offer reliable shared storage to their users.

It includes a setup walkthrough with an example server pod that uses a dynamically provisioned GlusterFS volume for its storage. If you want to test this or learn more about the topic, follow the quick-start instructions in the main gluster-kubernetes README.

This guide is meant to be a minimal example of using Heketi to manage Gluster in a Kubernetes environment.

Infrastructure requirements

  • A running Kubernetes cluster with at least three worker nodes, each with at least one attached, unused raw block device (such as an EBS volume or a local disk).
    # Use `file -s` to inspect the disk: if it reports "data", the disk is a raw block device.
    # If it is not of type "data", you can use pvcreate and then pvremove to clear it.
    [root@node-04 ~]# file -s /dev/sdc
    /dev/sdc: x86 boot sector, code offset 0xb8
    [root@node-04 ~]# pvcreate /dev/sdc
    WARNING: dos signature detected on /dev/sdc at offset 510. Wipe it? [y/n]: y
    Wiping dos signature on /dev/sdc.
    Physical volume "/dev/sdc" successfully created.
    [root@node-04 ~]# pvremove /dev/sdc
    Labels on physical volume "/dev/sdc" successfully wiped.
    [root@node-04 ~]# file -s /dev/sdc
    /dev/sdc: data
  • The hosts acting as glusterfs nodes need the glusterfs-client, glusterfs-fuse, and socat packages installed.
    yum install -y glusterfs-client glusterfs-fuse socat
  • Every Kubernetes node host must load the dm_thin_pool kernel module:
    modprobe dm_thin_pool
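The pvcreate prompt above ("dos signature detected ... at offset 510") refers to the two-byte DOS boot signature 0x55 0xAA at byte offset 510. The sketch below illustrates what wiping it amounts to, against a throwaway image file (disk.img stands in for a device like /dev/sdc; on a real node, prefer pvcreate's own prompt or `wipefs -a`):

```python
# Demonstrate detecting and zeroing a DOS boot signature (0x55 0xAA at
# offset 510), the same signature the pvcreate warning above complains about.
path = "disk.img"  # throwaway image standing in for a raw device

with open(path, "wb") as f:
    f.truncate(1024 * 1024)   # 1 MiB sparse file
    f.seek(510)
    f.write(b"\x55\xaa")      # plant a DOS signature, as a used disk might have

with open(path, "r+b") as f:
    f.seek(510)
    if f.read(2) == b"\x55\xaa":   # signature present -> `file -s` won't say "data"
        f.seek(510)
        f.write(b"\x00\x00")       # zero it so the device reads as raw again

with open(path, "rb") as f:
    f.seek(510)
    print(f.read(2) == b"\x00\x00")  # prints: True
```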

Client installation

Heketi provides a CLI that gives users a way to manage the deployment and configuration of GlusterFS in Kubernetes. Download and install heketi-cli on your client machine; the heketi-cli version should ideally match the Heketi server version, otherwise errors may occur.

Kubernetes deployment

  • Deploy the GlusterFS DaemonSet

    {
      "kind": "DaemonSet",
      "apiVersion": "extensions/v1beta1",
      "metadata": {
        "name": "glusterfs",
        "labels": { "glusterfs": "deployment" },
        "annotations": {
          "description": "GlusterFS Daemon Set",
          "tags": "glusterfs"
        }
      },
      "spec": {
        "template": {
          "metadata": {
            "name": "glusterfs",
            "labels": { "glusterfs-node": "daemonset" }
          },
          "spec": {
            "nodeSelector": { "storagenode": "glusterfs" },
            "hostNetwork": true,
            "containers": [
              {
                "image": "gluster/gluster-centos:latest",
                "imagePullPolicy": "Always",
                "name": "glusterfs",
                "volumeMounts": [
                  { "name": "glusterfs-heketi", "mountPath": "/var/lib/heketi" },
                  { "name": "glusterfs-run", "mountPath": "/run" },
                  { "name": "glusterfs-lvm", "mountPath": "/run/lvm" },
                  { "name": "glusterfs-etc", "mountPath": "/etc/glusterfs" },
                  { "name": "glusterfs-logs", "mountPath": "/var/log/glusterfs" },
                  { "name": "glusterfs-config", "mountPath": "/var/lib/glusterd" },
                  { "name": "glusterfs-dev", "mountPath": "/dev" },
                  { "name": "glusterfs-cgroup", "mountPath": "/sys/fs/cgroup" }
                ],
                "securityContext": {
                  "capabilities": {},
                  "privileged": true
                },
                "readinessProbe": {
                  "timeoutSeconds": 3,
                  "initialDelaySeconds": 60,
                  "exec": {
                    "command": ["/bin/bash", "-c", "systemctl status glusterd.service"]
                  }
                },
                "livenessProbe": {
                  "timeoutSeconds": 3,
                  "initialDelaySeconds": 60,
                  "exec": {
                    "command": ["/bin/bash", "-c", "systemctl status glusterd.service"]
                  }
                }
              }
            ],
            "volumes": [
              { "name": "glusterfs-heketi", "hostPath": { "path": "/var/lib/heketi" } },
              { "name": "glusterfs-run" },
              { "name": "glusterfs-lvm", "hostPath": { "path": "/run/lvm" } },
              { "name": "glusterfs-etc", "hostPath": { "path": "/etc/glusterfs" } },
              { "name": "glusterfs-logs", "hostPath": { "path": "/var/log/glusterfs" } },
              { "name": "glusterfs-config", "hostPath": { "path": "/var/lib/glusterd" } },
              { "name": "glusterfs-dev", "hostPath": { "path": "/dev" } },
              { "name": "glusterfs-cgroup", "hostPath": { "path": "/sys/fs/cgroup" } }
            ]
          }
        }
      }
    }
    $ kubectl create -f glusterfs-daemonset.json
  • Get the node names by running:
$ kubectl get nodes
  • Set the label storagenode=glusterfs on each node, so that the gluster containers are deployed onto the designated nodes.
[root@node-01 heketi]# kubectl label node 10.31.90.204 storagenode=glusterfs
[root@node-01 heketi]# kubectl label node 10.31.90.205 storagenode=glusterfs
[root@node-01 heketi]# kubectl label node 10.31.90.206 storagenode=glusterfs

Verify that the pods are running on those nodes; at least three pods should be running.

$ kubectl get pods
  • Next, create a ServiceAccount for Heketi:

    {
      "apiVersion": "v1",
      "kind": "ServiceAccount",
      "metadata": {
        "name": "heketi-service-account"
      }
    }
    $ kubectl create -f heketi-service-account.json
  • Now grant that service account the ability to control the gluster pods, by creating a cluster role binding for the newly created service account:

    $ kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
  • Now create a Kubernetes Secret that holds the configuration of our Heketi instance. The configuration file must set the executor to kubernetes so that the Heketi server can control the gluster pods. Beyond that, feel free to experiment with the configuration options.
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",
  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,
  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": { "key": "My Secret" },
    "_user": "User only has access to /volumes endpoint",
    "user": { "key": "My Secret" }
  },
  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
    "executor": "kubernetes",
    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",
    "kubeexec": { "rebalance_on_expansion": true },
    "sshexec": {
      "rebalance_on_expansion": true,
      "keyfile": "/etc/heketi/private_key",
      "fstab": "/etc/fstab",
      "port": "22",
      "user": "root",
      "sudo": false
    }
  },
  "_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
  "backup_db_to_kube_secret": false
}
$ kubectl create secret generic heketi-config-secret --from-file=./heketi.json
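The config above ships with use_auth set to false. If you later enable authentication, requests to Heketi must carry a JWT signed with one of the keys from heketi.json, including a qsh claim that Heketi derives from the request ("METHOD&PATH" hashed with SHA-256). Below is a stdlib-only sketch of building such a token with the sample admin key "My Secret"; treat the exact claim layout as an illustration, not a spec:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWT uses base64url without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def heketi_token(key: str, method: str, path: str, user: str = "admin") -> str:
    # qsh guards against URL tampering: SHA-256 hex digest of "METHOD&PATH"
    qsh = hashlib.sha256(f"{method}&{path}".encode()).hexdigest()
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    claims = {"iss": user, "iat": now, "exp": now + 600, "qsh": qsh}
    signing_input = b64url(json.dumps(header).encode()) + "." + \
                    b64url(json.dumps(claims).encode())
    sig = hmac.new(key.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

token = heketi_token("My Secret", "GET", "/volumes")
# would be sent as: curl -H "Authorization: bearer <token>" http://.../volumes
print(token.count("."))  # a JWT has three dot-separated segments -> prints: 2
```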
  • Next, deploy an initial (bootstrap) Heketi pod plus a service to reach it, using the heketi-bootstrap.json file below.
    Submit the file and verify that everything is running, as shown:
    {
      "kind": "List",
      "apiVersion": "v1",
      "items": [
        {
          "kind": "Service",
          "apiVersion": "v1",
          "metadata": {
            "name": "deploy-heketi",
            "labels": {
              "glusterfs": "heketi-service",
              "deploy-heketi": "support"
            },
            "annotations": { "description": "Exposes Heketi Service" }
          },
          "spec": {
            "selector": { "name": "deploy-heketi" },
            "ports": [
              { "name": "deploy-heketi", "port": 8080, "targetPort": 8080 }
            ]
          }
        },
        {
          "kind": "Deployment",
          "apiVersion": "extensions/v1beta1",
          "metadata": {
            "name": "deploy-heketi",
            "labels": {
              "glusterfs": "heketi-deployment",
              "deploy-heketi": "deployment"
            },
            "annotations": { "description": "Defines how to deploy Heketi" }
          },
          "spec": {
            "replicas": 1,
            "template": {
              "metadata": {
                "name": "deploy-heketi",
                "labels": {
                  "name": "deploy-heketi",
                  "glusterfs": "heketi-pod",
                  "deploy-heketi": "pod"
                }
              },
              "spec": {
                "serviceAccountName": "heketi-service-account",
                "containers": [
                  {
                    "image": "heketi/heketi:8",
                    "imagePullPolicy": "Always",
                    "name": "deploy-heketi",
                    "env": [
                      { "name": "HEKETI_EXECUTOR", "value": "kubernetes" },
                      { "name": "HEKETI_DB_PATH", "value": "/var/lib/heketi/heketi.db" },
                      { "name": "HEKETI_FSTAB", "value": "/var/lib/heketi/fstab" },
                      { "name": "HEKETI_SNAPSHOT_LIMIT", "value": "14" },
                      { "name": "HEKETI_KUBE_GLUSTER_DAEMONSET", "value": "y" }
                    ],
                    "ports": [ { "containerPort": 8080 } ],
                    "volumeMounts": [
                      { "name": "db", "mountPath": "/var/lib/heketi" },
                      { "name": "config", "mountPath": "/etc/heketi" }
                    ],
                    "readinessProbe": {
                      "timeoutSeconds": 3,
                      "initialDelaySeconds": 3,
                      "httpGet": { "path": "/hello", "port": 8080 }
                    },
                    "livenessProbe": {
                      "timeoutSeconds": 3,
                      "initialDelaySeconds": 30,
                      "httpGet": { "path": "/hello", "port": 8080 }
                    }
                  }
                ],
                "volumes": [
                  { "name": "db" },
                  { "name": "config", "secret": { "secretName": "heketi-config-secret" } }
                ]
              }
            }
          }
        }
      ]
    }
# kubectl create -f heketi-bootstrap.json
service "deploy-heketi" created
deployment "deploy-heketi" created
[root@node-01 heketi]# kubectl get pod
NAME                            READY     STATUS    RESTARTS   AGE
deploy-heketi-8888799fd-cmfp6   1/1       Running   0          6m
glusterfs-7t5ls                 1/1       Running   0          8m
glusterfs-drsx9                 1/1       Running   0          8m
glusterfs-pnnn8                 1/1       Running   0          8m
  • Now that the bootstrap Heketi service is running, configure port forwarding so that the Heketi CLI can communicate with the service. Using the name of the Heketi pod, run the following command:
kubectl port-forward deploy-heketi-8888799fd-cmfp6 :8080

If a convenient local port (18080 here) is free on the system where you run the command, you can have port-forward bind to it explicitly:

kubectl port-forward deploy-heketi-8888799fd-cmfp6 18080:8080

Now verify that port forwarding works by running a sample query against the Heketi service. port-forward prints the local port it bound; use that port in the URL to test the service, as follows:

curl http://localhost:18080/hello
Handling connection for 18080
Hello from Heketi
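The /hello check can also be scripted. The sketch below uses only the standard library and, since a live Heketi server is not assumed here, stands up a tiny local stub that answers the same way Heketi does:

```python
import http.server
import threading
import urllib.request

# A stub HTTP server standing in for Heketi, so the check is self-contained.
class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from Heketi"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Hello)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The same health check as `curl http://localhost:18080/hello` above.
url = f"http://127.0.0.1:{server.server_port}/hello"
reply = urllib.request.urlopen(url, timeout=5).read().decode()
print(reply)  # prints: Hello from Heketi
server.shutdown()
```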

Finally, set an environment variable for the Heketi CLI client so that it knows how to reach the Heketi server.

export HEKETI_CLI_SERVER=http://localhost:18080
  • Next, give Heketi information about the GlusterFS cluster it is to manage. This information is provided through a topology file. The cloned repo contains a sample topology file named topology-sample.json. The topology specifies the Kubernetes nodes that run GlusterFS containers, along with the corresponding raw block device(s) on each node.

  • Make sure hostnames/manage points to the exact node names shown by kubectl get nodes, and that hostnames/storage is the IP address on the storage network.

  • Important: at this point, the topology file must be loaded with a heketi-cli version that matches the server version. As a last resort, the Heketi container ships with a copy of heketi-cli that can be reached via kubectl exec ....

Modify the topology file to reflect the choices you have made, then deploy it as follows:

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [ "10.31.90.204" ],
              "storage": [ "10.31.90.204" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/sdc" ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [ "10.31.90.205" ],
              "storage": [ "10.31.90.205" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/sdc" ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [ "10.31.90.206" ],
              "storage": [ "10.31.90.206" ]
            },
            "zone": 1
          },
          "devices": [ "/dev/sdc" ]
        }
      ]
    }
  ]
}
[root@node-01 ~]# heketi-cli topology load --json=top.json
Creating cluster ... ID: e758afb77ee26d5f969d7efee1516e64
        Allowing file volumes on cluster.
        Allowing block volumes on cluster.
        Creating node 10.31.90.204 ... ID: a6eedd58c118dcfe44a0db2af1a4f863
                Adding device /dev/sdc ... OK
        Creating node 10.31.90.205 ... ID: 4066962c14bcdebd28aca193b5690792
                Adding device /dev/sdc ... OK
        Creating node 10.31.90.206 ... ID: 91e42a2361f0266ae334354e5c34ce11
                Adding device /dev/sdc ... OK
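Because a topology load with mismatched client/server versions can fail in confusing ways, it can help to sanity-check the file's structure before loading it. A small sketch (the embedded snippet mirrors one node from the topology above; in practice you would json.load() your real top.json):

```python
import json

# Quick structural sanity check of a heketi topology file before running
# `heketi-cli topology load`.
topology = json.loads("""
{"clusters": [{"nodes": [
  {"node": {"hostnames": {"manage": ["10.31.90.204"],
                          "storage": ["10.31.90.204"]},
            "zone": 1},
   "devices": ["/dev/sdc"]}
]}]}
""")

nodes = 0
for cluster in topology["clusters"]:
    for entry in cluster["nodes"]:
        hostnames = entry["node"]["hostnames"]
        assert hostnames["manage"] and hostnames["storage"], "hostnames required"
        assert entry["devices"], "each node needs at least one raw block device"
        nodes += 1
print("topology ok,", nodes, "node(s)")  # prints: topology ok, 1 node(s)
```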
  • Next, use Heketi to provision a volume for storing its own database:

Running this command generates a file named heketi-storage.json. It is best to change

"image": "heketi/heketi:dev"

in that file to

"image": "heketi/heketi:8"

# heketi-client/bin/heketi-cli setup-openshift-heketi-storage

Then create the heketi resources:

# kubectl create -f heketi-storage.json

Gotcha: if heketi-cli reports a "no space" error when running the setup-openshift-heketi-storage subcommand, you may have inadvertently run topology load with mismatched server and heketi-cli versions. Stop the running Heketi pod (kubectl scale deployment deploy-heketi --replicas=0), manually remove any signatures from the storage block devices, then bring the Heketi pod back up (kubectl scale deployment deploy-heketi --replicas=1). Reload the topology with a matching heketi-cli version and retry the step.

  • Wait until the job has completed, then delete the bootstrap Heketi:
# kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi"
  • Create the long-term Heketi instance:
    {
      "kind": "List",
      "apiVersion": "v1",
      "items": [
        {
          "kind": "Secret",
          "apiVersion": "v1",
          "metadata": {
            "name": "heketi-db-backup",
            "labels": {
              "glusterfs": "heketi-db",
              "heketi": "db"
            }
          },
          "data": {},
          "type": "Opaque"
        },
        {
          "kind": "Service",
          "apiVersion": "v1",
          "metadata": {
            "name": "heketi",
            "labels": {
              "glusterfs": "heketi-service",
              "deploy-heketi": "support"
            },
            "annotations": { "description": "Exposes Heketi Service" }
          },
          "spec": {
            "selector": { "name": "heketi" },
            "ports": [
              { "name": "heketi", "port": 8080, "targetPort": 8080 }
            ]
          }
        },
        {
          "kind": "Deployment",
          "apiVersion": "extensions/v1beta1",
          "metadata": {
            "name": "heketi",
            "labels": { "glusterfs": "heketi-deployment" },
            "annotations": { "description": "Defines how to deploy Heketi" }
          },
          "spec": {
            "replicas": 1,
            "template": {
              "metadata": {
                "name": "heketi",
                "labels": {
                  "name": "heketi",
                  "glusterfs": "heketi-pod"
                }
              },
              "spec": {
                "serviceAccountName": "heketi-service-account",
                "containers": [
                  {
                    "image": "heketi/heketi:8",
                    "imagePullPolicy": "Always",
                    "name": "heketi",
                    "env": [
                      { "name": "HEKETI_EXECUTOR", "value": "kubernetes" },
                      { "name": "HEKETI_DB_PATH", "value": "/var/lib/heketi/heketi.db" },
                      { "name": "HEKETI_FSTAB", "value": "/var/lib/heketi/fstab" },
                      { "name": "HEKETI_SNAPSHOT_LIMIT", "value": "14" },
                      { "name": "HEKETI_KUBE_GLUSTER_DAEMONSET", "value": "y" }
                    ],
                    "ports": [ { "containerPort": 8080 } ],
                    "volumeMounts": [
                      { "mountPath": "/backupdb", "name": "heketi-db-secret" },
                      { "name": "db", "mountPath": "/var/lib/heketi" },
                      { "name": "config", "mountPath": "/etc/heketi" }
                    ],
                    "readinessProbe": {
                      "timeoutSeconds": 3,
                      "initialDelaySeconds": 3,
                      "httpGet": { "path": "/hello", "port": 8080 }
                    },
                    "livenessProbe": {
                      "timeoutSeconds": 3,
                      "initialDelaySeconds": 30,
                      "httpGet": { "path": "/hello", "port": 8080 }
                    }
                  }
                ],
                "volumes": [
                  {
                    "name": "db",
                    "glusterfs": {
                      "endpoints": "heketi-storage-endpoints",
                      "path": "heketidbstorage"
                    }
                  },
                  { "name": "heketi-db-secret", "secret": { "secretName": "heketi-db-backup" } },
                  { "name": "config", "secret": { "secretName": "heketi-config-secret" } }
                ]
              }
            }
          }
        }
      ]
    }
# kubectl create -f heketi-deployment.json
service "heketi" created
deployment "heketi" created
  • With this done, the Heketi database is persisted on a GlusterFS volume and will not be reset each time the Heketi pod restarts.

Use commands such as heketi-cli cluster list and heketi-cli volume list to confirm that the previously created cluster is present and that Heketi knows about the db storage volume created during the bootstrap phase.

Demo and testing

  1. Next, create a storage volume, then mount it for a test.
     Before testing, publish the heketi service externally via Ingress, with a DNS A record resolving heketi.cnlinux.club to 10.31.90.200.

     apiVersion: extensions/v1beta1
     kind: Ingress
     metadata:
       name: ingress-heketi
       annotations:
         nginx.ingress.kubernetes.io/rewrite-target: /
         kubernetes.io/ingress.class: nginx
     spec:
       rules:
       - host: heketi.cnlinux.club
         http:
           paths:
           - path:
             backend:
               serviceName: heketi
               servicePort: 8080
    [root@node-01 heketi]# kubectl create -f ingress-heketi.yaml

     Open http://heketi.cnlinux.club/hello in a browser to verify access.

  2. Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.cnlinux.club"
  restauthenabled: "false"
  volumetype: "replicate:2"
[root@node-01 heketi]# kubectl create -f storageclass-gluster-heketi.yaml
[root@node-01 heketi]# kubectl get sc
NAME             PROVISIONER               AGE
gluster-heketi   kubernetes.io/glusterfs   10s
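volumetype: "replicate:2" requests two-way replicated volumes, so each 1Gi PV consumes roughly 2Gi of raw capacity across two nodes. For reference, other volumetype values accepted by the kubernetes.io/glusterfs provisioner include the following (alternatives shown as comments; pick one per StorageClass):

```yaml
parameters:
  resturl: "http://heketi.cnlinux.club"
  restauthenabled: "false"
  # three-way replication: one copy on each of the three nodes above
  volumetype: "replicate:3"
  # dispersed (erasure-coded) volume: 4 data bricks plus 2 redundancy bricks
  # volumetype: "disperse:4:2"
  # plain distributed volume with no redundancy
  # volumetype: "none"
```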
  3. Create a PVC
     apiVersion: v1
     kind: PersistentVolumeClaim
     metadata:
       name: pvc-gluster-heketi
     spec:
       storageClassName: gluster-heketi
       accessModes:
       - ReadWriteOnce
       resources:
         requests:
           storage: 1Gi
     [root@node-01 heketi]# kubectl create -f pvc-gluster-heketi.yaml
     [root@node-01 heketi]# kubectl get pvc
     NAME                 STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
     pvc-gluster-heketi   Bound     pvc-d978f524-0b74-11e9-875c-005056826470   1Gi        RWO            gluster-heketi   30s
  4. Mount the PVC in a pod
     apiVersion: v1
     kind: Pod
     metadata:
       name: pod-pvc
     spec:
       containers:
       - name: pod-pvc
         image: busybox:latest
         command:
         - sleep
         - "3600"
         volumeMounts:
         - name: gluster-volume
           mountPath: "/pv-data"
       volumes:
       - name: gluster-volume
         persistentVolumeClaim:
           claimName: pvc-gluster-heketi
    [root@node-01 heketi]# kubectl create -f pod-pvc.yaml 

     Enter the container and check that the volume is mounted:

     [root@node-01 heketi]# kubectl exec pod-pvc -it /bin/sh
     / # df -h
     Filesystem                Size      Used Available Use% Mounted on
     overlay                  47.8G      4.3G     43.5G   9% /
     tmpfs                    64.0M         0     64.0M   0% /dev
     tmpfs                     1.9G         0      1.9G   0% /sys/fs/cgroup
     10.31.90.204:vol_675cc9fe0e959157919c886ea7786d33
                            1014.0M     42.7M    971.3M   4% /pv-data
     /dev/sda3                47.8G      4.3G     43.5G   9% /dev/termination-log
     /dev/sda3                47.8G      4.3G     43.5G   9% /etc/resolv.conf
     /dev/sda3                47.8G      4.3G     43.5G   9% /etc/hostname
     /dev/sda3                47.8G      4.3G     43.5G   9% /etc/hosts

     # Write files into /pv-data; once usage exceeds 1G the write is cut off automatically, showing that the capacity limit is enforced.

     / # cd /pv-data/
     /pv-data # dd if=/dev/zero of=/pv-data/test.img bs=8M count=300
     123+0 records in
     122+0 records out
     1030225920 bytes (982.5MB) copied, 24.255925 seconds, 40.5MB/s

     Mount the brick on the host to confirm that test.img was created there:

     [root@node-04 cfg]# mount /dev/vg_2631413b8b87bbd6cb526568ab697d37/brick_1691ef862dd504e12e8384af76e5a9f2 /mnt
     [root@node-04 cfg]# ll -h /mnt/brick/
     total 982M
     -rw-r--r-- 2 root 2001 982M Jan  2 15:14 test.img

     This completes all of the steps.
