
Apache Flink JobManager HA Deployment

Published: 2025-12-02, by the 千家信息网 editors

1. Download the source code:

 git clone https://github.com/apache/flink.git
 git branch -a


Check out the blink branch:

 git checkout -b blink remotes/origin/blink

List the branches:

 git branch

2. Build

mvn package -DskipTests


Note:
Because of network issues, building the flink-filesystems/flink-mapr-fs module is slow when fetching from the http://repository.mapr.com/maven repository. Edit flink-filesystems/flink-mapr-fs/pom.xml and switch to the Aliyun mirror:

    <repository>
        <id>aliyun-mapr-releases</id>
        <url>https://maven.aliyun.com/repository/mapr-public/</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
        <releases>
            <enabled>true</enabled>
        </releases>
    </repository>

Change the Node.js package registry in flink-runtime-web/pom.xml:

    <execution>
        <id>npm install</id>
        <goals>
            <goal>npm</goal>
        </goals>
        <configuration>
            <arguments>install -g -registry=https://registry.npm.taobao.org --cache-max=0 --no-save</arguments>
        </configuration>
    </execution>
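After editing the two pom.xml files, the build can be resumed from the failed module instead of starting over. A hedged sketch: the `:flink-mapr-fs` module id is an assumption based on the directory name above, so verify it against that module's pom.xml. The command is printed as a dry run; drop the leading `echo` to actually run it.

```shell
# Resume the Maven build from the module that failed earlier.
# ":flink-mapr-fs" is assumed from the directory name; check the artifactId
# in flink-filesystems/flink-mapr-fs/pom.xml before running.
echo mvn package -DskipTests -rf :flink-mapr-fs
```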

3. Package

tar -cjvpf blink-1.5.1.tar.bz2 ./flink-1.5.1/

4. Deployment

4.1 Prerequisites

  • A Hadoop cluster (HDFS) is deployed
  • A ZooKeeper cluster is deployed

Node information:
res-spark-0001 (master)
res-spark-0002 (master)
res-spark-0003 (slave)
res-spark-0004 (slave)
res-spark-0005 (slave)

4.2 Extract

tar -jxvf blink-1.5.1.tar.bz2

4.3 Configuration files

1). hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>cluster1</value>
        <description>Logical name for this new nameservice</description>
    </property>
    <property>
        <name>dfs.ha.namenodes.cluster1</name>
        <value>nn1,nn2</value>
        <description>Unique identifiers for each NameNode in the nameservice</description>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.cluster1.nn1</name>
        <value>res-spark-0001:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.cluster1.nn2</name>
        <value>res-spark-0002:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.cluster1.nn1</name>
        <value>res-spark-0001:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.cluster1.nn2</name>
        <value>res-spark-0002:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://res-spark-0005:8485;res-spark-0004:8485;res-spark-0002:8485/cluster1</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.cluster1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-file</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
    <property>
        <name>dfs.namenode.handler.count</name>
        <value>20</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.datanode.max.transfer.threads</name>
        <value>8192</value>
    </property>
</configuration>
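Once HDFS is running with this configuration, it is worth confirming that one NameNode is active and the other standby. A dry-run sketch: `nn1`/`nn2` are the identifiers from dfs.ha.namenodes.cluster1 above; the commands are printed rather than executed, so drop the leading `echo` on a node with a configured `hdfs` CLI.

```shell
# Print the HA state-check commands for each NameNode identifier.
# Remove the `echo` to actually query the cluster.
for nn in nn1 nn2; do
  echo hdfs haadmin -getServiceState "$nn"
done
```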

2). core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://cluster1</value>
        <final>true</final>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/disk1/hadoop/tmp/journal/node/local/data</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/disk1/hadoop/tmp/hadoop/hadoop-${user.name}</value>
        <description>A base for other temporary directories</description>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>res-spark-0001:2181,res-spark-0002:2181,res-spark-0003:2181</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>fs.file.impl</name>
        <value>org.apache.hadoop.fs.LocalFileSystem</value>
        <description>The FileSystem for file: uris.</description>
    </property>
    <property>
        <name>fs.hdfs.impl</name>
        <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
        <description>The FileSystem for hdfs: uris.</description>
    </property>
</configuration>

3). masters

res-spark-0001:8081
res-spark-0002:8081

4). slaves

res-spark-0003
res-spark-0004
res-spark-0005
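The two files above can also be written with a short script. A minimal sketch: the conf directory path is an assumption (it defaults to a scratch directory here); point CONF_DIR at your Flink conf directory.

```shell
# Write the masters and slaves files shown above.
# CONF_DIR defaults to a scratch path for illustration; override it to
# point at <flink>/conf in a real deployment.
CONF_DIR=${CONF_DIR:-/tmp/flink-conf-sketch}
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/masters" <<'EOF'
res-spark-0001:8081
res-spark-0002:8081
EOF
cat > "$CONF_DIR/slaves" <<'EOF'
res-spark-0003
res-spark-0004
res-spark-0005
EOF
```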

5). flink-conf.yaml

################################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.
################################################################################

#==============================================================================
# Common
#==============================================================================

# The external address of the host on which the JobManager runs and can be
# reached by the TaskManagers and any clients which want to connect. This setting
# is only used in Standalone mode and may be overwritten on the JobManager side
# by specifying the --host <hostname> parameter of the bin/jobmanager.sh executable.
# In high availability mode, if you use the bin/start-cluster.sh script and setup
# the conf/masters file, this will be taken care of automatically. Yarn/Mesos
# automatically configure the host name based on the hostname of the node where the
# JobManager runs.

jobmanager.rpc.address: localhost

# The RPC port where the JobManager is reachable.

jobmanager.rpc.port: 6123

# The heap size for the JobManager JVM

jobmanager.heap.size: 1024m

# The heap size for the TaskManager JVM

taskmanager.heap.size: 1024m

# The number of task slots that each TaskManager offers. Each slot runs one parallel pipeline.

taskmanager.numberOfTaskSlots: 6

# The parallelism used for programs that did not specify and other parallelism.

parallelism.default: 1

# The default file system scheme and authority.
#
# By default file paths without scheme are interpreted relative to the local
# root file system 'file:///'. Use this to override the default and interpret
# relative paths relative to a different file system,
# for example 'hdfs://mynamenode:12345'
#
# fs.default-scheme
#fs.default-scheme: hdfs://cluster1

#==============================================================================
# High Availability
#==============================================================================

# The high-availability mode. Possible options are 'NONE' or 'zookeeper'.
#
# high-availability: zookeeper

high-availability: zookeeper

# The path where metadata for master recovery is persisted. While ZooKeeper stores
# the small ground truth for checkpoint and leader election, this location stores
# the larger objects, like persisted dataflow graphs.
#
# Must be a durable file system that is accessible from all nodes
# (like HDFS, S3, Ceph, nfs, ...)
#
# high-availability.storageDir: hdfs:///flink/ha/

high-availability.storageDir: hdfs:///flink/ha/

# The list of ZooKeeper quorum peers that coordinate the high-availability
# setup. This must be a list of the form:
# "host1:clientPort,host2:clientPort,..." (default clientPort: 2181)
#
# high-availability.zookeeper.quorum: localhost:2181

high-availability.zookeeper.quorum: res-spark-0001:2181,res-spark-0002:2181,res-spark-0003:2181
high-availability.cluster-id: /cluster_one

# ACL options are based on https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#sc_BuiltinACLSchemes
# It can be either "creator" (ZOO_CREATE_ALL_ACL) or "open" (ZOO_OPEN_ACL_UNSAFE)
# The default value is "open" and it can be changed to "creator" if ZK security is enabled
#
# high-availability.zookeeper.client.acl: open

#==============================================================================
# Fault tolerance and checkpointing
#==============================================================================

# The backend that will be used to store operator state checkpoints if
# checkpointing is enabled.
#
# Supported backends are 'jobmanager', 'filesystem', 'rocksdb', or the
# <class-name-of-factory>.
#
# state.backend: filesystem

# Directory for checkpoints filesystem, when using any of the default bundled
# state backends.
#
# state.checkpoints.dir: hdfs://namenode-host:port/flink-checkpoints

state.checkpoints.dir: hdfs://cluster1/flink-checkpoints

# Default target directory for savepoints, optional.
#
# state.savepoints.dir: hdfs://namenode-host:port/flink-checkpoints

state.savepoints.dir: hdfs://cluster1/flink-checkpoints

# Flag to enable/disable incremental checkpoints for backends that
# support incremental checkpoints (like the RocksDB state backend).
#
# state.backend.incremental: false

#==============================================================================
# Web Frontend
#==============================================================================

# The address under which the web-based runtime monitor listens.
#
#web.address: 0.0.0.0

# The port under which the web-based runtime monitor listens.
# A value of -1 deactivates the web server.

rest.port: 8081

# Flag to specify whether job submission is enabled from the web-based
# runtime monitor. Uncomment to disable.

#web.submit.enable: false
web.submit.enable: true

#==============================================================================
# Advanced
#==============================================================================

# Override the directories for temporary files. If not specified, the
# system-specific Java temporary directory (java.io.tmpdir property) is taken.
#
# For framework setups on Yarn or Mesos, Flink will automatically pick up the
# containers' temp directories without any need for configuration.
#
# Add a delimited list for multiple directories, using the system directory
# delimiter (colon ':' on unix) or a comma, e.g.:
#     /data1/tmp:/data2/tmp:/data3/tmp
#
# Note: Each directory entry is read from and written to by a different I/O
# thread. You can include the same directory multiple times in order to create
# multiple I/O threads against that directory. This is for example relevant for
# high-throughput RAIDs.
#
# io.tmp.dirs: /tmp

# Specify whether TaskManager's managed memory should be allocated when starting
# up (true) or when memory is requested.
#
# We recommend to set this value to 'true' only in setups for pure batch
# processing (DataSet API). Streaming setups currently do not use the TaskManager's
# managed memory: The 'rocksdb' state backend uses RocksDB's own memory management,
# while the 'memory' and 'filesystem' backends explicitly keep data as objects
# to save on serialization cost.
#
# taskmanager.memory.preallocate: false

# The classloading resolve order. Possible values are 'child-first' (Flink's default)
# and 'parent-first' (Java's default).
#
# Child first classloading allows users to use different dependency/library
# versions in their application than those in the classpath. Switching back
# to 'parent-first' may help with debugging dependency issues.
#
# classloader.resolve-order: child-first

# The amount of memory going to the network stack. These numbers usually need
# no tuning. Adjusting them may be necessary in case of an "Insufficient number
# of network buffers" error. The default min is 64MB, the default max is 1GB.
#
# taskmanager.network.memory.fraction: 0.1
# taskmanager.network.memory.min: 64mb
# taskmanager.network.memory.max: 1gb

#==============================================================================
# Flink Cluster Security Configuration
#==============================================================================

# Kerberos authentication for various components - Hadoop, ZooKeeper, and connectors -
# may be enabled in four steps:
# 1. configure the local krb5.conf file
# 2. provide Kerberos credentials (either a keytab or a ticket cache w/ kinit)
# 3. make the credentials available to various JAAS login contexts
# 4. configure the connector to use JAAS/SASL

# The below configure how Kerberos credentials are provided. A keytab will be used instead of
# a ticket cache if the keytab path and principal are set.

# security.kerberos.login.use-ticket-cache: true
# security.kerberos.login.keytab: /path/to/kerberos/keytab
# security.kerberos.login.principal: flink-user

# The configuration below defines which JAAS login contexts

# security.kerberos.login.contexts: Client,KafkaClient

#==============================================================================
# ZK Security Configuration
#==============================================================================

# Below configurations are applicable if ZK ensemble is configured for security

# Override below configuration to provide custom ZK service name if configured
# zookeeper.sasl.service-name: zookeeper

# The configuration below must match one of the values set in "security.kerberos.login.contexts"
# zookeeper.sasl.login-context-name: Client

#==============================================================================
# HistoryServer
#==============================================================================

# The HistoryServer is started and stopped via bin/historyserver.sh (start|stop)

# Directory to upload completed jobs to. Add this directory to the list of
# monitored directories of the HistoryServer as well (see below).
#jobmanager.archive.fs.dir: hdfs:///completed-jobs/

jobmanager.archive.fs.dir: hdfs:///completed-jobs/

# The address under which the web-based HistoryServer listens.
#historyserver.web.address: 0.0.0.0

# The port under which the web-based HistoryServer listens.
#historyserver.web.port: 8082

# Comma separated list of directories to monitor for completed jobs.
#historyserver.archive.fs.dir: hdfs:///completed-jobs/

historyserver.archive.fs.dir: hdfs:///completed-jobs/

# Interval in milliseconds for refreshing the monitored directories.
#historyserver.archive.fs.refresh-interval: 10000

On node res-spark-0001:

jobmanager.rpc.address: res-spark-0001

On node res-spark-0002:

jobmanager.rpc.address: res-spark-0002
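Copying the configuration to both masters and applying the per-node override can be scripted. A hypothetical sketch: FLINK_HOME and passwordless ssh/scp between nodes are assumptions, and DRY_RUN defaults to 1 so the script only prints the commands; set DRY_RUN=0 to execute them.

```shell
# Push flink-conf.yaml to both masters, then patch jobmanager.rpc.address
# on each node to its own hostname (as shown above).
# FLINK_HOME is an assumed install path; adjust as needed.
FLINK_HOME=${FLINK_HOME:-/opt/flink-1.5.1}
DRY_RUN=${DRY_RUN:-1}
# run: echo the command in dry-run mode, execute it otherwise.
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }
for master in res-spark-0001 res-spark-0002; do
  run scp "$FLINK_HOME/conf/flink-conf.yaml" "$master:$FLINK_HOME/conf/"
  run ssh "$master" "sed -i 's/^jobmanager.rpc.address:.*/jobmanager.rpc.address: $master/' $FLINK_HOME/conf/flink-conf.yaml"
done
```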

6) Start the cluster

bin/start-cluster.sh
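After start-up, a quick way to see which JobManager is currently serving is to probe each master's web UI (rest.port 8081 from the configuration above). A hedged sketch, assuming `curl` is available and the hostnames resolve from where you run it:

```shell
# Build the REST overview URL for a master and probe it.
ui_url() { echo "http://$1:8081/overview"; }
for m in res-spark-0001 res-spark-0002; do
  if curl -s --max-time 5 "$(ui_url "$m")" >/dev/null 2>&1; then
    echo "$m: web UI reachable"
  else
    echo "$m: not reachable (may be standby or down)"
  fi
done
```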


7) Start the HistoryServer

bin/historyserver.sh start
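The HDFS directories referenced in flink-conf.yaml (the HA storage dir, the checkpoint dirs, and the /completed-jobs archive directory the HistoryServer monitors) must exist before they are used. A dry-run sketch: the commands are printed rather than executed, so drop the leading `echo` on a node where the `hdfs` CLI is configured for the cluster1 nameservice.

```shell
# Print the mkdir commands for each HDFS path used by the configuration.
# Remove the `echo` to create the directories for real.
for d in /flink/ha /flink-checkpoints /completed-jobs; do
  echo hdfs dfs -mkdir -p "hdfs://cluster1$d"
done
```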