Posted to user@flink.apache.org by ri...@sina.cn on 2016/11/29 11:43:27 UTC
flink-job-in-yarn,has max memory
Hi,
I have a Flink job that I build into a jar file with sbt assembly. I submit it to YARN and run it with the following command:
------------------------------------------------------------------------
/home/www/flink-1.1.1/bin/flink run \
-m yarn-cluster \
-yn 1 \
-ys 2 \
-yjm 4096 \
-ytm 4096 \
--class skRecomm.SkProRecommFlink \
--classpath file:///opt/cloudera/parcels/CDH/lib/hbase/hbase-client.jar \
--classpath file:///opt/cloudera/parcels/CDH/lib/hbase/hbase-protocol.jar \
--classpath file:///opt/cloudera/parcels/CDH/lib/hbase/hbase-common.jar \
--classpath file:///opt/cloudera/parcels/CDH/jars/htrace-core-3.1.0-incubating.jar \
--classpath file:///opt/cloudera/parcels/CDH/lib/hbase/lib/guava-12.0.1.jar \
/home/www/flink-mining/deploy/zx_article-7cffb87.jar
------------------------------------------------------------------------
The command is run by supervisor on a machine (*.*.*.22).

In flink/conf/flink-conf.yaml I set these parameters:
------------------------------------------
fs.hdfs.hadoopconf: /etc/hadoop/conf/
jobmanager.web.port: 8081
parallelism.default: 1
taskmanager.memory.preallocate: false
taskmanager.numberOfTaskSlots: 1
taskmanager.heap.mb: 512
jobmanager.heap.mb: 256
arallelism.default: 1
jobmanager.rpc.port: 6123
jobmanager.rpc.address: localhost
------------------------------------------
The job succeeds, and the following settings appear in the YARN monitor:
------------------------------------------
flink.base.dir.path /data1/yarn/nm/usercache/work/appcache/application_1472623395420_36719/container_e03_1472623395420_36719_01_000001
fs.hdfs.hadoopconf /etc/hadoop/conf/
jobmanager.heap.mb 256
jobmanager.rpc.address *.*.*.79 (not *.*.*.22; the taskmanager is on *.*.*.69)
jobmanager.rpc.port 32987
jobmanager.web.port 0
parallelism.default 1
recovery.zookeeper.path.namespace application_1472623395420_36719
taskmanager.heap.mb 512
taskmanager.memory.preallocate false
taskmanager.numberOfTaskSlots 1
-----------------------------------------------------
Overview
Data Port  All Slots  Free Slots  CPU Cores  Physical Memory  Free Memory  Flink Managed Memory
30471      2          0           32         189 GB           2.88 GB      1.96 GB
-----------------------------------------------------
Memory: JVM (Heap/Non-Heap)
Type      Committed  Initial  Maximum
Heap      2.92 GB    3.00 GB  2.92 GB
Non-Heap  53.4 MB    23.4 MB  130 MB
Total     2.97 GB    3.02 GB  3.04 GB
-----------------------------------------------------
Outside JVM
Type    Count  Used    Capacity
Direct  510    860 KB  860 KB
Mapped  0      0 B     0 B
-----------------------------------------------------
On machine (*.*.*.22) I find that pid 345 is using 2.36 GB of memory, and pid 345 is the job process that supervisor started.

I really do not understand why: the job runs in YARN, so why does it occupy so much memory on machine (*.*.*.22)? I only submitted the job from (*.*.*.22).
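One way to double-check what that process actually is, before attributing its memory to the job itself, is to read its command line and resident memory from /proc. This is a Linux-only sketch; pid 345 is the process number reported above, and here the function is demonstrated on the current process instead:

```python
# Linux-only sketch: read /proc to see a process's command line and resident
# memory, e.g. to confirm whether pid 345 is the local flink CLI client JVM.
import os

def proc_info(pid):
    with open(f"/proc/{pid}/cmdline", "rb") as f:
        # cmdline is NUL-separated; join the arguments with spaces.
        cmdline = f.read().replace(b"\x00", b" ").decode().strip()
    with open(f"/proc/{pid}/status") as f:
        # VmRSS is the resident set size in kB.
        rss_kb = next(int(line.split()[1]) for line in f
                      if line.startswith("VmRSS"))
    return cmdline, rss_kb

# Demonstrated on the current process; on the submitting machine one would
# call proc_info(345) instead.
cmdline, rss_kb = proc_info(os.getpid())
print(cmdline, rss_kb)
```

Note that the flink CLI client runs in its own JVM on the submitting machine, separate from the YARN containers, so it has its own heap settings and memory footprint.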
Thank you for answering my question.
Re: flink-job-in-yarn,has max memory
Posted by Robert Metzger <rm...@apache.org>.
Hi,
The TaskManager reports a total memory usage of 3 GB. That's fine, given
that you requested containers of size 4GB. Flink doesn't allocate all the
memory assigned to the container to the heap.
Are you running a batch or a streaming job?
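For reference, the ~3 GB heap above is roughly what the cutoff arithmetic predicts. In Flink's YARN mode, a safety cutoff is subtracted from the requested container memory before the JVM heap is sized; in the 1.1.x line this was governed by yarn.heap-cutoff-ratio and yarn.heap-cutoff-min (the defaults of 0.25 and 600 MB used below are an assumption for this version):

```python
# Sketch of how Flink 1.1 derives the container JVM heap, assuming the
# defaults yarn.heap-cutoff-ratio = 0.25 and yarn.heap-cutoff-min = 600 MB.
def yarn_heap_mb(container_mb, cutoff_ratio=0.25, cutoff_min_mb=600):
    # A cutoff is reserved for non-heap/JVM overhead; the remainder is heap.
    cutoff_mb = max(int(container_mb * cutoff_ratio), cutoff_min_mb)
    return container_mb - cutoff_mb

print(yarn_heap_mb(4096))  # 3072 MB, i.e. about the 3 GB heap reported above
```

So requesting -ytm 4096 and seeing a ~3 GB JVM heap in the TaskManager overview is the expected behavior, not a leak.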
On Tue, Nov 29, 2016 at 12:43 PM, <ri...@sina.cn> wrote:
> Hi,
> I have a Flink job that I build into a jar file with sbt assembly. I submit
> it to YARN and run it with the following command:
> ------------------------------------------------------------------------
> /home/www/flink-1.1.1/bin/flink run \
> -m yarn-cluster \
> -yn 1 \
> -ys 2 \
> -yjm 4096 \
> -ytm 4096 \
> --class skRecomm.SkProRecommFlink \
> --classpath file:///opt/cloudera/parcels/CDH/lib/hbase/hbase-client.jar \
> --classpath file:///opt/cloudera/parcels/CDH/lib/hbase/hbase-protocol.jar
> \
> --classpath file:///opt/cloudera/parcels/CDH/lib/hbase/hbase-common.jar \
> --classpath file:///opt/cloudera/parcels/CDH/jars/htrace-core-3.1.0-incubating.jar
> \
> --classpath file:///opt/cloudera/parcels/CDH/lib/hbase/lib/guava-12.0.1.jar
> \
> /home/www/flink-mining/deploy/zx_article-7cffb87.jar
> ------------------------------------------------------------
> -----------------------
> the command is run by supervisor on a machine (*.*.*.22),
> ----------------------------
> and in flink/conf/flink-conf.yaml I set these parameters:
> ------------------------------------------
> fs.hdfs.hadoopconf: /etc/hadoop/conf/
> jobmanager.web.port: 8081
> parallelism.default: 1
> taskmanager.memory.preallocate: false
> taskmanager.numberOfTaskSlots: 1
> taskmanager.heap.mb: 512
> jobmanager.heap.mb: 256
> arallelism.default: 1
> jobmanager.rpc.port: 6123
> jobmanager.rpc.address: localhost
>
> ------------------------------------------
> the job succeeds, and the following settings appear in the YARN monitor:
>
> flink.base.dir.path /data1/yarn/nm/usercache/work/appcache/application_
> 1472623395420_36719/container_e03_1472623395420_36719_01_000001
> fs.hdfs.hadoopconf /etc/hadoop/conf/
> jobmanager.heap.mb 256
> jobmanager.rpc.address *.*.*.79 (not *.*.*.22; the taskmanager is on
> *.*.*.69)
> jobmanager.rpc.port 32987
> jobmanager.web.port 0
> parallelism.default 1
> recovery.zookeeper.path.namespace application_1472623395420_36719
> taskmanager.heap.mb 512
> taskmanager.memory.preallocate false
> taskmanager.numberOfTaskSlots 1
>
> -----------------------------------------------------
> Overview
> Data Port All Slots Free Slots CPU Cores Physical Memory Free Memory Flink
> Managed Memory
> 30471 2 0 32 189 GB 2.88 GB
> 1.96 GB
> ------------------------------------------------------------
> -----------------------------------------------------------
> Memory
> JVM (Heap/Non-Heap)
> Type Committed Initial Maximum
> Heap 2.92 GB 3.00 GB 2.92 GB
> Non-Heap 53.4 MB 23.4 MB 130 MB
> Total 2.97 GB 3.02 GB 3.04 GB
> -----------------------------------------------------------------
> Outside JVM
> Type Count Used Capacity
> Direct 510 860 KB 860 KB
> Mapped 0 0 B 0 B
> -------------------------------------------------------------------
>
> On machine (*.*.*.22) I find that pid 345 is using 2.36 GB of memory, and
> pid 345 is the job process that supervisor started,
>
> I really do not understand why: the job runs in YARN, so why does it occupy
> so much memory on machine (*.*.*.22)? I only submitted the job from (*.*.*.22).
>
> Thank you for answering my question.
>
>