
Installing a Kafka cluster automatically with an Ansible playbook


I. Environment
1. Server information

172.21.184.43  kafka, zk
172.21.184.44  kafka, zk
172.21.184.45  kafka, zk
172.21.244.7   ansible

2. Software versions

OS: CentOS Linux release 7.5.1804 (Core)
Kafka: kafka_2.11-2.2.0
ZooKeeper: 3.4.8
Ansible: 2.7.10
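
Before writing the playbooks it is worth confirming the control node has the Ansible version above and can already reach the target hosts over SSH (key distribution is assumed and not covered by this article); a minimal check:

# on the ansible host (172.21.244.7)
ansible --version
# confirm the OS release on one target node (assumes passwordless SSH is already set up)
ssh 172.21.184.43 cat /etc/redhat-release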

II. Configuration
1. Write the playbook configuration files; first run tree to see the overall directory layout

tree
.
├── kafka
│   ├── group_vars
│   │   └── kafka
│   ├── hosts
│   ├── kafkainstall.yml
│   └── templates
│       ├── server.properties-1.j2
│       ├── server.properties-2.j2
│       ├── server.properties-3.j2
│       └── server.properties.j2
└── zookeeper
    ├── group_vars
    │   └── zook
    ├── hosts
    ├── templates
    │   └── zoo.cfg.j2
    └── zooKeeperinstall.yml

2. Create the directories

mkdir -p /chj/ansibleplaybook/kafka/group_vars
mkdir /chj/ansibleplaybook/kafka/templates
mkdir -p /chj/ansibleplaybook/zookeeper/group_vars
mkdir /chj/ansibleplaybook/zookeeper/templates

3. Write the ZooKeeper deployment files
A. ZooKeeper group_vars file

vim /chj/ansibleplaybook/zookeeper/group_vars/zook

---
zk01server: 172.21.184.43
zk02server: 172.21.184.44
zk03server: 172.21.184.45
zookeeper_group: work
zookeeper_user: work
zookeeper_dir: /chj/data/zookeeper
zookeeper_appdir: /chj/app/zookeeper
zk01myid: 43
zk02myid: 44
zk03myid: 45

B. ZooKeeper template file

vim /chj/ansibleplaybook/zookeeper/templates/zoo.cfg.j2

tickTime=2000
initLimit=500
syncLimit=20
dataDir={{ zookeeper_dir }}
dataLogDir=/chj/data/log/zookeeper/
clientPort=10311
maxClientCnxns=1000000
server.{{ zk01myid }}={{ zk01server }}:10301:10331
server.{{ zk02myid }}={{ zk02server }}:10302:10332
server.{{ zk03myid }}={{ zk03server }}:10303:10333
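
Note that the myid values (43/44/45) are simply the last octet of each node's IP; the playbook below writes that same value into /chj/data/zookeeper/myid, so each server.NN line in zoo.cfg matches its node. You can preview what a node will get by running the same pipeline the playbook uses (output on 172.21.184.43 should be 43):

hostname -i | cut -d '.' -f 4 | awk '{print $1}'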

C. ZooKeeper hosts file

vim /chj/ansibleplaybook/zookeeper/hosts

[zook]
172.21.184.43
172.21.184.44
172.21.184.45
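
Optionally, verify that Ansible can reach every host in the zook group before running anything (assumes SSH keys are already distributed):

cd /chj/ansibleplaybook/zookeeper
ansible -i hosts zook -m ping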

D. ZooKeeper installation playbook

vim /chj/ansibleplaybook/zookeeper/zooKeeperinstall.yml

---
- hosts: "zook"
  gather_facts: no
  tasks:
    - name: Create zookeeper group
      group:
        name: '{{ zookeeper_group }}'
        state: present
      tags:
        - zookeeper_user
    - name: Create zookeeper user
      user:
        name: '{{ zookeeper_user }}'
        group: '{{ zookeeper_group }}'
        state: present
        createhome: no
      tags:
        - zookeeper_group
    - name: Check whether ZooKeeper is already installed
      stat:
        path: /chj/app/zookeeper
      register: node_files
    - debug:
        msg: "{{ node_files.stat.exists }}"
    - name: Install the JDK if no Java environment exists
      shell: if [ ! -f "/usr/local/jdk/bin/java" ]; then curl -o /usr/local/jdk1.8.0_121.tar.gz http://download.pkg.chj.cloud/chj_jdk1.8.0_121.tar.gz; tar xf /usr/local/jdk1.8.0_121.tar.gz -C /usr/local/; mv /usr/local/jdk1.8.0_121 /usr/local/jdk; ln -s /usr/local/jdk/bin/java /sbin/java; else echo "java already installed"; fi
    - name: Download and unpack zookeeper
      unarchive:
        src: http://ops.chehejia.com:9090/pkg/zookeeper.tar.gz
        dest: /chj/app/
        remote_src: yes
      when: not node_files.stat.exists
      register: unarchive_msg
    - debug:
        msg: "{{ unarchive_msg }}"
    - name: Create zookeeper data and log directories
      shell: if [ ! -d "/chj/data/zookeeper" ] && [ ! -d "/chj/data/log/zookeeper" ]; then mkdir -p /chj/data/zookeeper /chj/data/log/zookeeper; else echo "directories already exist"; fi
    - name: Change directory ownership
      shell: chown work:work -R /chj/{data,app}
      when: not node_files.stat.exists
    - name: Configure zk myid
      shell: "hostname -i | cut -d '.' -f 4 | awk '{print $1}' > /chj/data/zookeeper/myid"
    - name: Config zookeeper service
      template:
        src: zoo.cfg.j2
        dest: /chj/app/zookeeper/conf/zoo.cfg
        mode: 0755
    - name: Reload systemd
      command: systemctl daemon-reload
    - name: Restart ZooKeeper service
      shell: sudo su - work -c "/chj/app/zookeeper/console start"
    - name: Status ZooKeeper service
      shell: "sudo su - work -c '/chj/app/zookeeper/console status'"
      register: zookeeper_status_result
      ignore_errors: True
    - debug:
        msg: "{{ zookeeper_status_result }}"
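
Before running the play against the cluster, it can be validated from the control node, for example:

cd /chj/ansibleplaybook/zookeeper
ansible-playbook -i hosts zooKeeperinstall.yml --syntax-check
ansible-playbook -i hosts zooKeeperinstall.yml --list-tasks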

4. Write the Kafka deployment files
A. Kafka group_vars file

vim /chj/ansibleplaybook/kafka/group_vars/kafka

---
kafka01: 172.21.184.43
kafka02: 172.21.184.44
kafka03: 172.21.184.45
kafka_group: work
kafka_user: work
log_dir: /chj/data/kafka
brokerid1: 1
brokerid2: 2
brokerid3: 3
zk_addr: 172.21.184.43:10311,172.21.184.44:10311,172.21.184.45:10311/kafka

B. Kafka template files

vim /chj/ansibleplaybook/kafka/templates/server.properties-1.j2

# in server.properties-2.j2 and server.properties-3.j2 set this to brokerid2 / brokerid3
broker.id={{ brokerid1 }}
auto.create.topics.enable=false
auto.leader.rebalance.enable=true
broker.rack=/default-rack
compression.type=snappy
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.message.queue.size=10
controller.socket.timeout.ms=30000
default.replication.factor=1
delete.topic.enable=true
fetch.message.max.bytes=10485760
fetch.purgatory.purge.interval.requests=10000
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
host.name={{ kafka01 }}
# in server.properties-2.j2 and server.properties-3.j2 use kafka02 / kafka03
listeners=PLAINTEXT://{{ kafka01 }}:9092
log.cleanup.interval.mins=1200
log.dirs={{ log_dir }}
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.bytes=-1
log.retention.hours=168
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=10000000
min.insync.replicas=1
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=86400000
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=1
offsets.topic.segment.bytes=104857600
port=9092
producer.purgatory.purge.interval.requests=10000
queued.max.requests=500
replica.fetch.max.bytes=10485760
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.max.messages=4000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
sasl.enabled.mechanisms=GSSAPI
sasl.mechanism.inter.broker.protocol=GSSAPI
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
zookeeper.connect={{ zk_addr }}
zookeeper.connection.timeout.ms=25000
zookeeper.session.timeout.ms=30000
zookeeper.sync.time.ms=2000
group.initial.rebalance.delay.ms=10000
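
The three templates differ only in broker.id and the host/listener address. A hypothetical alternative (not what this article does) would be a single template that derives both values from the gathered facts, at the cost of broker ids becoming 43/44/45 instead of 1/2/3; a minimal sketch:

# hypothetical single-template variant, e.g. server.properties.j2
broker.id={{ ansible_default_ipv4.address.split('.')[-1] }}
host.name={{ ansible_default_ipv4.address }}
listeners=PLAINTEXT://{{ ansible_default_ipv4.address }}:9092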

C. Kafka hosts file

vim /chj/ansibleplaybook/kafka/hosts

[kafka]
172.21.184.43
172.21.184.44
172.21.184.45
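
The Kafka playbook below keys its template tasks on ansible_default_ipv4.address, so it can be useful to confirm which address each host reports before running it:

cd /chj/ansibleplaybook/kafka
ansible -i hosts kafka -m setup -a 'filter=ansible_default_ipv4'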

D. Kafka installation playbook

vim /chj/ansibleplaybook/kafka/kafkainstall.yml

---
- hosts: "kafka"
  gather_facts: yes
  tasks:
    - name: Obtain eth0 ipv4 address
      debug:
        msg: "{{ ansible_default_ipv4.address }}"
      when: ansible_default_ipv4.alias == "eth0"
    - name: Create kafka group
      group:
        name: '{{ kafka_group }}'
        state: present
      tags:
        - kafka_user
    - name: Create kafka user
      user:
        name: '{{ kafka_user }}'
        group: '{{ kafka_group }}'
        state: present
        createhome: no
      tags:
        - kafka_group
    - name: Check whether Kafka is already installed
      stat:
        path: /chj/app/kafka
      register: node_files
    - debug:
        msg: "{{ node_files.stat.exists }}"
    - name: Install the JDK if no Java environment exists
      shell: if [ ! -f "/usr/local/jdk/bin/java" ]; then curl -o /usr/local/jdk1.8.0_121.tar.gz http://download.pkg.chj.cloud/chj_jdk1.8.0_121.tar.gz; tar xf /usr/local/jdk1.8.0_121.tar.gz -C /usr/local/; mv /usr/local/jdk1.8.0_121 /usr/local/jdk; ln -s /usr/local/jdk/bin/java /sbin/java; else echo "java already installed"; fi
    - name: Download and unpack kafka
      unarchive:
        src: http://ops.chehejia.com:9090/pkg/kafka.tar.gz
        dest: /chj/app/
        remote_src: yes
      when: not node_files.stat.exists
      register: unarchive_msg
    - debug:
        msg: "{{ unarchive_msg }}"
    - name: Create kafka data and log directories
      shell: if [ ! -d "/chj/data/kafka" ] && [ ! -d "/chj/data/log/kafka" ]; then mkdir -p /chj/data/{kafka,log/kafka}; else echo "directories already exist"; fi
    - name: Change directory ownership
      shell: chown work:work -R /chj/{data,app}
      when: not node_files.stat.exists
    - name: Config kafka01 service
      template:
        src: server.properties-1.j2
        dest: /chj/app/kafka/config/server.properties
        mode: 0755
      when: ansible_default_ipv4.address == "172.21.184.43"
    - name: Config kafka02 service
      template:
        src: server.properties-2.j2
        dest: /chj/app/kafka/config/server.properties
        mode: 0755
      when: ansible_default_ipv4.address == "172.21.184.44"
    - name: Config kafka03 service
      template:
        src: server.properties-3.j2
        dest: /chj/app/kafka/config/server.properties
        mode: 0755
      when: ansible_default_ipv4.address == "172.21.184.45"
    - name: Reload systemd
      command: systemctl daemon-reload
    - name: Restart kafka service
      shell: sudo su - work -c "/chj/app/kafka/console start"
    - name: Status kafka service
      shell: "sudo su - work -c '/chj/app/kafka/console status'"
      register: kafka_status_result
      ignore_errors: True
    - debug:
        msg: "{{ kafka_status_result }}"
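
As with ZooKeeper, the playbook can be validated first, and optionally trialled against a single broker before rolling it out to all three:

cd /chj/ansibleplaybook/kafka
ansible-playbook -i hosts kafkainstall.yml --syntax-check
ansible-playbook -i hosts kafkainstall.yml -b --limit 172.21.184.43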

PS: The JDK, Kafka, and ZooKeeper binary packages used during installation must be replaced with download URLs you can actually reach.

III. Deployment
1. Deploy the ZooKeeper cluster first

cd /chj/ansibleplaybook/zookeeper/
ansible-playbook -i hosts zooKeeperinstall.yml -b
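
A quick health check once the play finishes (assuming nc is available; 10311 is the clientPort set in zoo.cfg.j2):

echo ruok | nc 172.21.184.43 10311    # a healthy node replies "imok"
echo stat | nc 172.21.184.44 10311    # "Mode: leader" / "Mode: follower" shows the ensemble has formed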

2. Then deploy the Kafka cluster

cd /chj/ansibleplaybook/kafka/
ansible-playbook -i hosts kafkainstall.yml -b
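
To confirm the brokers started and registered under the /kafka chroot (assuming the packaged kafka.tar.gz keeps the stock bin/ scripts):

/chj/app/kafka/bin/zookeeper-shell.sh 172.21.184.43:10311 ls /kafka/brokers/ids
/chj/app/kafka/bin/kafka-topics.sh --bootstrap-server 172.21.184.43:9092 --list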