How to Set Up a Harbor Private Registry


Setting up the Harbor private registry

At this point, bring up a new virtual machine: CentOS 7-2 at 192.168.18.134 (you can configure the NIC with a static IP).

`Deploy the Docker engine`

```
[root@harbor ~]# yum install yum-utils device-mapper-persistent-data lvm2 -y
[root@harbor ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@harbor ~]# yum install -y docker-ce
[root@harbor ~]# systemctl stop firewalld.service
[root@harbor ~]# setenforce 0
[root@harbor ~]# systemctl start docker.service
[root@harbor ~]# systemctl enable docker.service
```

`Check that the Docker processes are running`

```
[root@harbor ~]# ps aux | grep docker
root       4913  0.8  3.6 565612 68884 ?        Ssl  12:23   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root       5095  0.0  0.0 112676   984 pts/1    R+   12:23   0:00 grep --color=auto docker
```

`Configure the registry mirror (image acceleration)`

```
[root@harbor ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"]
}
EOF
[root@harbor ~]# systemctl daemon-reload
[root@harbor ~]# systemctl restart docker
```

`Network optimization`

```
[root@harbor ~]# echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
[root@harbor ~]# service network restart
Restarting network (via systemctl):                        [  OK  ]
[root@harbor ~]# systemctl restart docker
```

`Install docker-compose and extract the Harbor offline installer`

```
[root@harbor ~]# mkdir /aaa
[root@harbor ~]# mount.cifs //192.168.0.105/rpm /aaa
Password for root@//192.168.0.105/rpm:
[root@harbor ~]# cd /aaa/docker/
[root@harbor docker]# cp docker-compose /usr/local/bin/
[root@harbor docker]# cd /usr/local/bin/
[root@harbor bin]# ls
docker-compose
[root@harbor bin]# docker-compose -v
docker-compose version 1.21.1, build 5a3f1a3
[root@harbor bin]# cd /aaa/docker/
[root@harbor docker]# tar zxvf harbor-offline-installer-v1.2.2.tgz -C /usr/local/
[root@harbor docker]# cd /usr/local/harbor/
[root@harbor harbor]# ls
common                     docker-compose.yml     harbor.v1.2.2.tar.gz  NOTICE
docker-compose.clair.yml   harbor_1_1_0_template  install.sh            prepare
docker-compose.notary.yml  harbor.cfg             LICENSE               upgrade
```

`Configure the Harbor parameters file`

```
[root@harbor harbor]# vim harbor.cfg
5 hostname = 192.168.18.134                  # line 5: change to this host's own IP address
59 harbor_admin_password = Harbor12345       # line 59: default account password; remember it, you need it to log in
# After editing, press Esc to leave insert mode, then type :wq to save and exit
[root@harbor harbor]# ./install.sh
......many lines omitted......
Creating harbor-log ... done
Creating harbor-adminserver ... done
Creating harbor-db          ... done
Creating registry           ... done
Creating harbor-ui          ... done
Creating nginx              ... done
Creating harbor-jobservice  ... done
✔ ----Harbor has been installed and started successfully.----
Now you should be able to visit the admin portal at http://192.168.18.134.
For more details, please visit https://github.com/vmware/harbor .
```
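Before moving on, it can be worth confirming that every Harbor component came up cleanly. The sketch below is a minimal check, assuming you run it from Harbor's installation directory; the container names are the ones install.sh reported above:

```
# Run from the directory containing Harbor's docker-compose.yml
cd /usr/local/harbor
docker-compose ps          # every Harbor service should show a State of "Up"

# Or check via the Docker CLI directly
docker ps --format '{{.Names}}\t{{.Status}}'
```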
Step 1: Log in to the Harbor private registry

In a browser on the host machine, enter 192.168.18.134 in the address bar, type the default account admin and the password Harbor12345, and click Log In.
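If you prefer to check reachability from a shell first, a quick probe of the portal works. This is a minimal sketch; it only confirms that Harbor's nginx front end responds, not that the credentials are valid:

```
# Expect an HTTP 200 (or a redirect) from the Harbor portal
curl -I http://192.168.18.134/
```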

Step 2: Create a new project and set it to private

On the Projects page, click "+ Project" to add a new project, enter a project name, and click Create. Then click the three dots to the left of the new project and set the project to private.


Configure both node machines to connect to the private registry (note: the trailing comma after the mirrors line must be added).
`node2`

```
[root@node2 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"],     # trailing comma required here
  "insecure-registries":["192.168.18.134"]                          # add this line
}
[root@node2 ~]# systemctl restart docker
```

`node1`

```
[root@node1 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"],     # trailing comma required here
  "insecure-registries":["192.168.18.134"]                          # add this line
}
[root@node1 ~]# systemctl restart docker
```
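After restarting the daemon on each node, you can confirm the setting was picked up. A minimal sketch, assuming the stock `docker info` output, which reports the insecure registries the daemon currently trusts:

```
# 192.168.18.134 should appear under "Insecure Registries"
docker info 2>/dev/null | grep -A 3 'Insecure Registries'

# Optionally confirm that daemon.json is still well-formed JSON
python -c 'import json; json.load(open("/etc/docker/daemon.json")); print("daemon.json OK")'
```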
Step 3: Log in to the Harbor private registry from a node
`node2:`

```
[root@node2 ~]# docker login 192.168.18.134
Username: admin     # enter the account: admin
Password:           # enter the password: Harbor12345
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded     # login succeeded
```

`Pull the tomcat image, tag it, and push it:`

```
[root@node2 ~]# docker pull tomcat
......many lines omitted......
Status: Downloaded newer image for tomcat:latest
docker.io/library/tomcat:latest
[root@node2 ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
tomcat                                                            latest              aeea3708743f        3 days ago          529MB
[root@node2 ~]# docker tag tomcat 192.168.18.134/project/tomcat     # tag the image
[root@node2 ~]# docker push 192.168.18.134/project/tomcat           # push the image
```
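The same tag-and-push pattern works with an explicit version tag instead of latest, which makes later cleanup and rollbacks easier. A minimal sketch; the project name and registry address come from the steps above, and the :v1 tag is only an example:

```
# Tag a specific version and push it to the private registry
docker tag tomcat:latest 192.168.18.134/project/tomcat:v1
docker push 192.168.18.134/project/tomcat:v1

# List what is now stored locally for this repository
docker images 192.168.18.134/project/tomcat
```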
At this point the pushed tomcat image is visible in the Harbor web UI.


Problem: if we now try to pull the tomcat image from the private registry on the other node, node1, we get an error saying access is denied (in other words, a login is required).
```
[root@node1 ~]# docker pull 192.168.18.134/project/tomcat
Using default tag: latest
Error response from daemon: pull access denied for 192.168.18.134/project/tomcat, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
# The error means credentials for the registry are missing
```

`Pull the tomcat image on node1 (from Docker Hub)`

```
[root@node1 ~]# docker pull tomcat:8.0.52
[root@node1 ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
tomcat                                                            8.0.52              b4b762737ed4        19 months ago       356MB
```
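Logging in on node1 with the same admin account would also clear this error on that node. A minimal sketch; the later steps instead solve the problem cluster-wide with an imagePullSecrets Secret:

```
# Authenticate node1 against the private registry, then retry the pull
docker login 192.168.18.134 -u admin -p Harbor12345
docker pull 192.168.18.134/project/tomcat
```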

Step 4: Operations on master1
```
[root@master1 demo]# vim tomcat01.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-tomcat
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-tomcat
    spec:
      containers:
      - name: my-tomcat
        image: docker.io/tomcat:8.0.52
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: my-tomcat
```

`Create the resources`

```
[root@master1 demo]# kubectl create -f tomcat01.yaml
deployment.extensions/my-tomcat created
service/my-tomcat created
```

`Check the resources`

```
[root@master1 demo]# kubectl get pods,deploy,svc
NAME                                    READY   STATUS    RESTARTS   AGE
pod/my-nginx-d55b94fd-kc2gl             1/1     Running   1          2d
pod/my-nginx-d55b94fd-tkr42             1/1     Running   1          2d
pod/my-tomcat-57667b9d9-8bkns           1/1     Running   0          84s
pod/my-tomcat-57667b9d9-kcddv           1/1     Running   0          84s
pod/mypod                               1/1     Running   1          8h
pod/nginx-6c94d899fd-8pf48              1/1     Running   1          3d
pod/nginx-deployment-5477945587-f5dsm   1/1     Running   1          2d23h
pod/nginx-deployment-5477945587-hmgd2   1/1     Running   1          2d23h
pod/nginx-deployment-5477945587-pl2hn   1/1     Running   1          2d23h

NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/my-nginx           2         2         2            2           2d
deployment.extensions/my-tomcat          2         2         2            2           84s
deployment.extensions/nginx              1         1         1            1           8d
deployment.extensions/nginx-deployment   3         3         3            3           2d23h

NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service/kubernetes         ClusterIP   10.0.0.1     <none>        443/TCP          10d
service/my-nginx-service   NodePort    10.0.0.210   <none>        80:40377/TCP     2d
service/my-tomcat          NodePort    10.0.0.86    <none>        8080:41860/TCP   84s
service/nginx-service      NodePort    10.0.0.242   <none>        80:40422/TCP     3d10h
# Internal port 8080, external (NodePort) port 41860

[root@master1 demo]# kubectl get ep
NAME               ENDPOINTS                                 AGE
kubernetes         192.168.18.128:6443,192.168.18.132:6443   10d
my-nginx-service   172.17.32.4:80,172.17.40.3:80             2d
my-tomcat          172.17.32.6:8080,172.17.40.6:8080         5m29s
nginx-service      172.17.40.5:80                            3d10h
# my-tomcat has been scheduled onto the two node machines
```
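If you want to pull out the assigned NodePort without reading the table by eye, a jsonpath query does it. A minimal sketch; the service name my-tomcat comes from the manifest above:

```
# Print the NodePort that Kubernetes assigned to the my-tomcat service
kubectl get svc my-tomcat -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'
```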
Verification: in a browser on the host machine, open 192.168.18.148:41860 and 192.168.18.145:41860 (the two node addresses plus the exposed NodePort) and check that both serve the Tomcat home page.
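The same check can be done from a shell. A minimal sketch; the node IPs and NodePort are the ones shown above, and Tomcat's home page should return HTTP 200:

```
# Print only the HTTP status code returned by each node
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.18.148:41860/
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.18.145:41860/
```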

`After verifying that access succeeds, delete these resources first; later we will recreate them using the image from the private registry`

```
[root@master1 demo]# kubectl delete -f tomcat01.yaml
deployment.extensions "my-tomcat" deleted
service "my-tomcat" deleted
```

Troubleshooting:

`If you encounter resources stuck in the Terminating state that cannot be deleted`

```
[root@localhost demo]# kubectl get pods
NAME                              READY   STATUS        RESTARTS   AGE
my-tomcat-57667b9d9-8bkns         1/1     Terminating   0          84s
my-tomcat-57667b9d9-kcddv         1/1     Terminating   0          84s
# In this case, use the force-delete command
```

`Syntax: kubectl delete pod [pod name] --force --grace-period=0 -n [namespace]`

```
[root@localhost demo]# kubectl delete pod my-tomcat-57667b9d9-8bkns --force --grace-period=0 -n default
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "my-tomcat-57667b9d9-8bkns" force deleted
[root@localhost demo]# kubectl delete pod my-tomcat-57667b9d9-kcddv --force --grace-period=0 -n default
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "my-tomcat-57667b9d9-kcddv" force deleted
[root@localhost demo]# kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
pod/mypod                               1/1     Running   1          8h
pod/nginx-6c94d899fd-8pf48              1/1     Running   1          3d
pod/nginx-deployment-5477945587-f5dsm   1/1     Running   1          2d23h
pod/nginx-deployment-5477945587-hmgd2   1/1     Running   1          2d23h
pod/nginx-deployment-5477945587-pl2hn   1/1     Running   1          2d23h
```
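If several pods are stuck at once, they can be force-deleted in one pass. A minimal sketch: it simply feeds the names of pods whose STATUS column reads Terminating back into the same force-delete command shown above:

```
# Force-delete every pod currently reported as Terminating in the default namespace
kubectl get pods -n default --no-headers | awk '$3=="Terminating" {print $1}' \
  | xargs -r -I{} kubectl delete pod {} --force --grace-period=0 -n default
```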

Step 5: Operations on node2 (the node that previously logged in to the Harbor registry)

First, delete the project/tomcat image we previously pushed to the private registry (via the Harbor web UI).

The previously tagged image on node2 also needs to be deleted:
```
[root@node2 ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
192.168.18.134/project/tomcat                                     latest              aeea3708743f        3 days ago          529MB
[root@node2 ~]# docker rmi 192.168.18.134/project/tomcat
Untagged: 192.168.18.134/project/tomcat:latest
Untagged: 192.168.18.134/project/tomcat@sha256:8ffa1b72bf611ac305523ed5bd6329afd051c7211fbe5f0b5c46ea5fb1adba46
```

`Tag the image`

```
[root@node2 ~]# docker tag tomcat:8.0.52 192.168.18.134/project/tomcat
```

`Push the image to Harbor`

```
[root@node2 ~]# docker push 192.168.18.134/project/tomcat
# The newly pushed image is now visible in the private registry
```

`View the login credentials`

```
[root@node2 ~]# cat .docker/config.json
{
        "auths": {
                "192.168.18.134": {                             # registry address
                        "auth": "YWRtaW46SGFyYm9yMTIzNDU="      # credentials
                }
        },
        "HttpHeaders": {                                        # header information
                "User-Agent": "Docker-Client/19.03.5 (linux)"
        }
}
```

`Generate the credential string as a single line (base64, no wrapping)`

```
[root@node2 ~]# cat .docker/config.json | base64 -w 0
ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjE4LjEzNCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy41IChsaW51eCkiCgl9Cn0=
```
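As an alternative to hand-encoding config.json, kubectl can generate an equivalent pull secret directly. A minimal sketch run on master1, using the same registry address and the default admin credentials from earlier:

```
# Create a dockerconfigjson secret without manually base64-encoding anything
kubectl create secret docker-registry registry-pull-secret \
  --docker-server=192.168.18.134 \
  --docker-username=admin \
  --docker-password=Harbor12345

# Inspect the generated secret
kubectl get secret registry-pull-secret -o yaml
```

If you create the secret this way, skip the YAML-based creation in the next step; otherwise the second create will fail because a secret with that name already exists.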

Note: at this point the pull count of the image is 0. In a moment we will create resources from the image in the private registry; pulling them necessarily downloads the image, so the count should change.


Step 6: Create the Secret resource YAML file on master1
```
[root@master1 demo]# vim registry-pull-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-pull-secret
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjE4LjEzNCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOS4wMy41IChsaW51eCkiCgl9Cn0=
type: kubernetes.io/dockerconfigjson
```

`Create the secret resource`

```
[root@master1 demo]# kubectl create -f registry-pull-secret.yaml
secret/registry-pull-secret created
```

`Check the secret resource`

```
[root@master1 demo]# kubectl get secret
NAME                   TYPE                                  DATA   AGE
default-token-pbr9p    kubernetes.io/service-account-token   3      10d
registry-pull-secret   kubernetes.io/dockerconfigjson        1      25s
```

`Edit the deployment to pull from the private registry`

```
[root@master1 demo]# vim tomcat01.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-tomcat
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-tomcat
    spec:
      imagePullSecrets:                           # credentials used to pull the image
      - name: registry-pull-secret                # secret name
      containers:
      - name: my-tomcat
        image: 192.168.18.134/project/tomcat      # change the image so it is pulled from the private registry
        ports:
        - containerPort: 80
......remaining lines omitted......
# After editing, press Esc to leave insert mode, then type :wq to save and exit
```

`Create the tomcat01 resources`

```
[root@master1 demo]# kubectl create -f tomcat01.yaml
deployment.extensions/my-tomcat created
service/my-tomcat created
[root@master1 demo]# kubectl get pods,deploy,svc,ep
NAME                                    READY   STATUS    RESTARTS   AGE
pod/my-nginx-d55b94fd-kc2gl             1/1     Running   1          2d1h
pod/my-nginx-d55b94fd-tkr42             1/1     Running   1          2d1h
pod/my-tomcat-7c5b6db486-bzjlv          1/1     Running   0          56s
pod/my-tomcat-7c5b6db486-kw8m4          1/1     Running   0          56s
pod/mypod                               1/1     Running   1          9h
pod/nginx-6c94d899fd-8pf48              1/1     Running   1          3d1h
pod/nginx-deployment-5477945587-f5dsm   1/1     Running   1          3d
pod/nginx-deployment-5477945587-hmgd2   1/1     Running   1          3d
pod/nginx-deployment-5477945587-pl2hn   1/1     Running   1          3d

NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/my-nginx           2         2         2            2           2d1h
deployment.extensions/my-tomcat          2         2         2            2           56s
deployment.extensions/nginx              1         1         1            1           8d
deployment.extensions/nginx-deployment   3         3         3            3           3d

NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service/kubernetes         ClusterIP   10.0.0.1     <none>        443/TCP          10d
service/my-nginx-service   NodePort    10.0.0.210   <none>        80:40377/TCP     2d1h
service/my-tomcat          NodePort    10.0.0.235   <none>        8080:43654/TCP   56s
service/nginx-service      NodePort    10.0.0.242   <none>        80:40422/TCP     3d11h
# External (NodePort) port is 43654

NAME                         ENDPOINTS                                 AGE
endpoints/kubernetes         192.168.18.128:6443,192.168.18.132:6443   10d
endpoints/my-nginx-service   172.17.32.4:80,172.17.40.3:80             2d1h
endpoints/my-tomcat          172.17.32.6:8080,172.17.40.6:8080         56s
endpoints/nginx-service      172.17.40.5:80                            3d11h
```
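To confirm that the running pods were actually created from the private-registry image (and not a local or Docker Hub copy), you can inspect the image references and pull events. A minimal sketch; the label app=my-tomcat comes from the manifest above:

```
# Show which image each my-tomcat pod runs; it should read 192.168.18.134/project/tomcat
kubectl get pods -l app=my-tomcat \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'

# The pod events should include a line pulling from 192.168.18.134/project/tomcat
kubectl describe pods -l app=my-tomcat | grep -i -E 'image|pull'
```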
Next, given that the resources loaded without any problem, we need to verify whether the image really came from our Harbor private registry.

This is where the pull count of the image in the private registry comes in.

Result: the pull count has changed from 0 to 2, which shows that the images for the two resources we created were downloaded from the private registry.

Using the host's browser again, verify that 192.168.18.148:43654 and 192.168.18.145:43654 (the two node addresses with the newly exposed port) still serve the Tomcat home page.

