Posted to issues@spark.apache.org by "hotdog (JIRA)" <ji...@apache.org> on 2015/10/14 09:58:05 UTC
[jira] [Created] (SPARK-11101) pipe() operation OOM
hotdog created SPARK-11101:
------------------------------
Summary: pipe() operation OOM
Key: SPARK-11101
URL: https://issues.apache.org/jira/browse/SPARK-11101
Project: Spark
Issue Type: Bug
Components: Spark Core
Affects Versions: 1.4.1
Environment: spark on yarn
Reporter: hotdog
When using the pipe() operation on large data (~10 TB), the job always OOMs.
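For context, pipe() forks one external process per partition and streams each record through the command's stdin, reading its stdout lines back as the output RDD. A minimal standalone Python sketch of that per-partition mechanism (this is illustrative only, not Spark's implementation; the `tr` command is just an example):

```python
import subprocess

def pipe_partition(records, command):
    """Stream records through an external command, roughly as
    RDD.pipe() does per partition: write each record as a line to
    the child's stdin, collect its stdout lines as the output.

    Note: for simplicity this buffers the whole input and output in
    memory via communicate(); Spark instead feeds stdin from a
    separate writer thread so records stream through incrementally.
    """
    proc = subprocess.Popen(
        command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
    )
    out, _ = proc.communicate("\n".join(records) + "\n")
    return out.splitlines()

# Example: uppercase each record with the external `tr` command.
result = pipe_partition(["a", "b", "c"], ["tr", "a-z", "A-Z"])
# result == ["A", "B", "C"]
```

The child process's memory is outside the JVM heap, which is why pipe()-heavy jobs show up as off-heap (memoryOverhead) pressure on YARN rather than ordinary heap OOMs.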
My parameters:
executor-memory 16g
executor-cores 4
num-executors 400
spark.yarn.executor.memoryOverhead 8192
partition number: 60000
Does the pipe() operation use a lot of off-heap memory?
The log is:
killed by YARN for exceeding memory limits. 24.4 GB of 24 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
Should I continue boosting spark.yarn.executor.memoryOverhead, or is there a bug in the pipe() operation?
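For reference, a spark-submit invocation matching the parameters above, with the overhead raised as the YARN message suggests (the application jar, class, and paths are placeholders; the overhead value is only an illustrative next step, not a known fix):

```shell
# Sketch of the reported configuration on YARN, with a larger
# off-heap overhead allowance per executor. 16g heap + 12g overhead
# means YARN allows 28 GB per container before killing it.
spark-submit \
  --master yarn \
  --executor-memory 16g \
  --executor-cores 4 \
  --num-executors 400 \
  --conf spark.yarn.executor.memoryOverhead=12288 \
  --class com.example.PipeJob \
  app.jar hdfs:///input hdfs:///output
```

Since pipe() spawns an external child process per partition, the child's resident memory counts against the container's physical-memory limit but not the JVM heap, so memoryOverhead is the knob YARN's message points at.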
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org