Posted to user@spark.apache.org by JoneZhang <jo...@gmail.com> on 2015/10/23 15:46:02 UTC

I don't understand what this sentence means: "7.1 GB of 7 GB physical memory used".

Here are the Spark configuration and the error logs for three runs:


================================================================
spark.dynamicAllocation.enabled     true
spark.shuffle.service.enabled       true
spark.dynamicAllocation.minExecutors        10
spark.executor.cores        1
spark.executor.memory       6G
spark.yarn.executor.memoryOverhead 1536
spark.driver.memory 2G
15/10/23 17:37:13 Reporter WARN org.apache.spark.deploy.yarn.YarnAllocator>>
Container killed by YARN for exceeding memory limits. 7.5 GB of 7.5 GB
physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
15/10/23 17:37:13 Reporter WARN org.apache.spark.deploy.yarn.YarnAllocator>>
Container killed by YARN for exceeding memory limits. 7.5 GB of 7.5 GB
physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
15/10/23 17:37:13 Reporter WARN org.apache.spark.deploy.yarn.YarnAllocator>>
Container killed by YARN for exceeding memory limits. 7.6 GB of 7.5 GB
physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.

================================================================
spark.dynamicAllocation.enabled     true
spark.shuffle.service.enabled       true
spark.dynamicAllocation.minExecutors        10
spark.executor.cores        2
spark.executor.memory       4G
spark.yarn.executor.memoryOverhead 2048
spark.driver.memory 2G
15/10/23 17:37:13 Reporter WARN org.apache.spark.deploy.yarn.YarnAllocator>>
Container killed by YARN for exceeding memory limits. 6.2 GB of 6 GB
physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
15/10/23 17:37:13 Reporter WARN org.apache.spark.deploy.yarn.YarnAllocator>>
Container killed by YARN for exceeding memory limits. 6.0 GB of 6 GB
physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
15/10/23 17:37:13 Reporter WARN org.apache.spark.deploy.yarn.YarnAllocator>>
Container killed by YARN for exceeding memory limits. 6.3 GB of 6 GB
physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.


================================================================
spark.dynamicAllocation.enabled     true
spark.shuffle.service.enabled       true
spark.dynamicAllocation.minExecutors        10
spark.executor.cores        2
spark.executor.memory       4G
spark.yarn.executor.memoryOverhead 3072
spark.driver.memory 2G
15/10/23 21:15:10 main INFO org.apache.spark.deploy.yarn.YarnAllocator>>
Will request 10 executor containers, each with 2 cores and 7168 MB memory
including 3072 MB overhead
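The 7168 MB figure in that log line is simply spark.executor.memory plus spark.yarn.executor.memoryOverhead. A quick sanity check of the arithmetic (Python, for illustration only):

```python
# Sanity check: YARN container request = executor heap + memoryOverhead (MB).
executor_memory_mb = 4 * 1024  # spark.executor.memory = 4G
overhead_mb = 3072             # spark.yarn.executor.memoryOverhead = 3072
container_mb = executor_memory_mb + overhead_mb
print(container_mb)  # 7168, matching "7168 MB memory including 3072 MB overhead"
```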
...
15/10/23 21:15:15 ContainerLauncher #1 INFO
org.apache.spark.deploy.yarn.ExecutorRunnable>> Setting up executor with
commands: List({{JAVA_HOME}}/bin/java, -server, -XX:OnOutOfMemoryError='kill
%p', -Xms4096m, -Xmx4096m,
'-Dlog4j.configuration=file:///data/home/sparkWithOutHive/conf/log4j.properties',
'-Dhive.spark.log.dir=/data/home/sparkWithOutHive/logs/',
'-Dlog4j.configuration=file:///data/home/sparkWithOutHive/conf/log4j.properties',
-Djava.io.tmpdir={{PWD}}/tmp, '-Dspark.driver.port=41100',
'-Dspark.history.ui.port=8080', '-Dspark.ui.port=0',
-Dspark.yarn.app.container.log.dir=<LOG_DIR>,
org.apache.spark.executor.CoarseGrainedExecutorBackend, --driver-url,
akka.tcp://sparkDriver@10.196.24.32:41100/user/CoarseGrainedScheduler,
--executor-id, 2, --hostname, 10.119.91.207, --cores, 2, --app-id,
application_1445484223147_0470, --user-class-path, file:$PWD/__app__.jar,
1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
...
15/10/23 21:07:54 Reporter WARN org.apache.spark.deploy.yarn.YarnAllocator>>
Container killed by YARN for exceeding memory limits. 7.1 GB of 7 GB
physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
15/10/23 21:07:54 Reporter WARN org.apache.spark.deploy.yarn.YarnAllocator>>
Container killed by YARN for exceeding memory limits. 7.0 GB of 7 GB
physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.



I have two questions:
1. The JVM is launched with -Xms4096m, -Xmx4096m. Why not -Xmx4096m + 3072m,
i.e. why isn't the overhead added to the heap size?
2. Does "7.1 GB of 7 GB physical memory used" mean 7.1 GB is being compared
with the 7 GB limit, or with something else?




--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/I-don-t-understand-what-this-sentence-means-7-1-GB-of-7-GB-physical-memory-used-tp25180.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org


Re: I don't understand what this sentence means: "7.1 GB of 7 GB physical memory used"

Posted by Sean Owen <so...@cloudera.com>.
Spark asked YARN to let an executor use 7 GB of memory, but the process used
more than that, so YARN killed it. In each case you can see that the executor
memory plus the overhead equals the YARN allocation requested. What's the
issue with that?
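To make the arithmetic concrete, here is a quick check (Python, for illustration) that in each of the three configurations above the executor heap plus the overhead equals the container limit YARN reports in the warning:

```python
# Per run: (spark.executor.memory in MB, spark.yarn.executor.memoryOverhead in MB).
configs = [
    (6 * 1024, 1536),  # run 1: killed at "7.5 GB of 7.5 GB"
    (4 * 1024, 2048),  # run 2: killed at "6.2 GB of 6 GB"
    (4 * 1024, 3072),  # run 3: killed at "7.1 GB of 7 GB"
]
for heap_mb, overhead_mb in configs:
    limit_gb = (heap_mb + overhead_mb) / 1024
    print(f"{limit_gb:.1f} GB container limit")
# Prints 7.5, 6.0 and 7.0 GB -- exactly the limits in the YARN warnings,
# so the reported usage (7.1 GB etc.) is compared against heap + overhead.
```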

On Fri, Oct 23, 2015 at 6:46 AM, JoneZhang <jo...@gmail.com> wrote:
> ...
