Posted to user-zh@flink.apache.org by Fei Han <ha...@aliyun.com.INVALID> on 2019/03/13 05:16:49 UTC

Re: flink 1.7.2 cluster exited abnormally

Hello:

Regarding your question, why does the directory under high-availability.storageDir accumulate so many subdirectories: in HA mode, Flink persists recovery metadata there (typically job graphs, completed-checkpoint metadata, and blobs). If your job keeps failing over for whatever reason, the HA attempt ID keeps switching, and each new attempt can leave fresh entries behind, which is likely what drove the count this high (see the inspection sketch after the config below).

Here is how I configure HA:

 flink-conf.yaml
 taskmanager.heap.mb: 3072
 taskmanager.numberOfTaskSlots: 4
 parallelism.default: 2
 taskmanager.tmp.dirs: /tmp
 jobmanager.heap.mb: 1024
 jobmanager.web.port: 8081
 jobmanager.rpc.port: 6123
 yarn.application-attempts: 8
 env.java.home: /usr/java/jdk1.8.0_111
 high-availability: zookeeper
 high-availability.zookeeper.quorum: cdh1:2181,cdh2:2181,cdh3:2181
 high-availability.storageDir: hdfs://cdh1:9000/flink/recovery
 high-availability.zookeeper.path.root: /flink
 state.backend: filesystem
 state.backend.fs.checkpointdir: hdfs://cdh1:9000/flink/checkpoints
 taskmanager.network.numberOfBuffers: 1024
 fs.hdfs.hadoopconf: /usr/local/hadoop-2.7.4/etc/hadoop
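
Independent of the settings themselves, it is worth checking what is actually accumulating in the HA storage directory. A minimal inspection sketch with the standard HDFS CLI, using the path from my config above (adjust to yours); you should typically see submittedJobGraph* and completedCheckpoint* files plus a blob/ subdirectory:

 # list what is stored under the HA storage directory
 hdfs dfs -ls hdfs://cdh1:9000/flink/recovery
 # count entries over time (columns: DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME)
 hdfs dfs -count hdfs://cdh1:9000/flink/recovery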

Hadoop configuration (yarn-site.xml):

 <property>
 <name>yarn.resourcemanager.am.max-attempts</name>
 <value>4</value>
 </property>
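
One caveat on the pairing above (this is documented YARN/Flink behavior): yarn.resourcemanager.am.max-attempts is a cluster-wide upper bound on Flink's yarn.application-attempts, so with the values shown a job gets at most 4 attempts even though flink-conf.yaml asks for 8. If you actually want 8, raise both sides, e.g.:

 <!-- yarn-site.xml: cluster-wide cap on ApplicationMaster attempts -->
 <property>
 <name>yarn.resourcemanager.am.max-attempts</name>
 <value>8</value>
 </property>

 # flink-conf.yaml: per-application setting, must not exceed the YARN cap
 yarn.application-attempts: 8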

That is my configuration; I hope it helps.
Best,
Fei Han


------------------------------------------------------------------
From: ppp Yun <yu...@hotmail.com>
Sent: Wednesday, March 13, 2019, 10:24
To: user-zh <us...@flink.apache.org>
Subject: flink 1.7.2 cluster exited abnormally

Hi all,

I wrote a test program, and after running for less than three hours the Flink cluster went down; all nodes exited with the following error:

2019-03-12 20:45:14,623 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph        - Job Tbox from Kafka Sink To Kafka And Print (21949294d4750b869b341c5d2942d499) switched from state RUNNING to FAILING.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException): The directory item limit of /tmp/ha is exceeded: limit=1048576 items=1048576


hdfs count result (columns: DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME):

2097151            4          124334563 hdfs://banma/tmp/ha


Here is the flink-conf.yaml configuration:

[hdfs@qa-hdpdn06 flink-1.7.2]$ cat conf/flink-conf.yaml |grep ^[^#]
jobmanager.rpc.address: 10.4.11.252
jobmanager.rpc.port: 6123
jobmanager.heap.size: 1024m
taskmanager.heap.size: 1024m
taskmanager.numberOfTaskSlots: 10
parallelism.default: 1
 high-availability: zookeeper
 high-availability.storageDir: hdfs://banma/tmp/ha
 high-availability.zookeeper.quorum: qa-hdpdn05.ebanma.com:2181
rest.port: 8081

Flink version: the latest official release, Flink 1.7.2.

Why does the directory under high-availability.storageDir produce so many subdirectories? What exactly is stored in them? Under what circumstances are these writes triggered? And how can this problem be avoided?

Thanks!


Re: Re: flink 1.7.2 cluster exited abnormally

Posted by Yun ppp <yu...@hotmail.com>.
Thank you very much for the configuration reference; mine indeed needs work, since much of it is still at the defaults.
