Posted to dev@spark.apache.org by WangJianfei <wa...@otcaix.iscas.ac.cn> on 2016/09/28 14:03:24 UTC

Broadcast big dataset

Hi Devs,
 In my application, I broadcast a dataset (about 500 MB) to the
executors (100+), and I got a Java heap error:
Jmartad-7219.hadoop.jd.local:53591 (size: 4.0 MB, free: 3.3 GB)
16/09/28 15:56:48 INFO BlockManagerInfo: Added broadcast_9_piece19 in memory
on BJHC-Jmartad-9012.hadoop.jd.local:53197 (size: 4.0 MB, free: 3.3 GB)
16/09/28 15:56:49 INFO BlockManagerInfo: Added broadcast_9_piece8 in memory
on BJHC-Jmartad-84101.hadoop.jd.local:52044 (size: 4.0 MB, free: 3.3 GB)
16/09/28 15:56:58 INFO BlockManagerInfo: Removed broadcast_8_piece0 on
172.22.176.114:37438 in memory (size: 2.7 KB, free: 3.1 GB)
16/09/28 15:56:58 WARN TaskSetManager: Lost task 125.0 in stage 7.0 (TID
130, BJHC-Jmartad-9376.hadoop.jd.local): java.lang.OutOfMemoryError: Java
heap space
	at java.io.ObjectInputStream$HandleTable.grow(ObjectInputStream.java:3465)
	at java.io.ObjectInputStream$HandleTable.assign(ObjectInputStream.java:3271)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1789)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1706)

My configuration is 4 GB of driver memory. Any advice is appreciated.
Thank you!



--
View this message in context: http://apache-spark-developers-list.1001551.n3.nabble.com/Broadcast-big-dataset-tp19127.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe e-mail: dev-unsubscribe@spark.apache.org


Re: Broadcast big dataset

Posted by Takeshi Yamamuro <li...@gmail.com>.
Hi,

# I dropped dev and added user because this is more suitable for the
user mailing list.

I think you need to describe more about your environment,
e.g. Spark version, executor memory, and so on.

// maropu


On Wed, Sep 28, 2016 at 11:03 PM, WangJianfei <
wangjianfei15@otcaix.iscas.ac.cn> wrote:



-- 
---
Takeshi Yamamuro

Re: Broadcast big dataset

Posted by Anastasios Zouzias <zo...@gmail.com>.
Hey,

Is the driver running OOM? Try 8g for the driver memory. Speaking of which,
how do you estimate that your broadcast dataset is 500 MB?

Best,
Anastasios
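
One rough way to sanity-check a figure like "about 500 MB" (a sketch in plain Python, with a hypothetical stand-in for the real dataset; it approximates the serialized footprint by serializing a sample and extrapolating, which is not Spark's own accounting — inside the JVM, org.apache.spark.util.SizeEstimator.estimate is the closer tool):

```python
import pickle

# Hypothetical stand-in for the dataset being broadcast
rows = [(i, "value_%09d" % i) for i in range(100_000)]

# Serialize a small sample and extrapolate to the full dataset
sample = rows[:1_000]
sample_bytes = len(pickle.dumps(sample))
estimated_total_bytes = sample_bytes * (len(rows) // len(sample))
print("estimated serialized size: %.1f MB" % (estimated_total_bytes / 1e6))
```

Note that the Java serialization shown in the stack trace above typically inflates objects further than this, so the on-heap cost on each executor can be noticeably larger than the raw data size.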

On 29.09.2016 at 5:32 AM, "WangJianfei" <wangjianfei15@otcaix.iscas.ac.cn> wrote:


Re: Broadcast big dataset

Posted by WangJianfei <wa...@otcaix.iscas.ac.cn>.
First, thank you very much!
  My executor memory is also 4 GB, but my Spark version is 1.5. Could the
Spark version be the problem?




--
View this message in context: http://apache-spark-developers-list.1001551.n3.nabble.com/Broadcast-big-dataset-tp19127p19143.html


Re: Broadcast big dataset

Posted by Andrew Duffy <ro...@aduffy.org>.
Have you tried upping executor memory? There's a separate Spark conf for that: spark.executor.memory.
In general, driver configurations don't automatically apply to executors.
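
For reference, one way to set this at submit time (a sketch; the 8g value and the application jar name are placeholders to tune for your cluster):

```shell
# Driver and executor heaps are configured independently.
# --executor-memory is equivalent to setting spark.executor.memory.
spark-submit \
  --driver-memory 4g \
  --executor-memory 8g \
  your-app.jar
```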





On Wed, Sep 28, 2016 at 7:03 AM -0700, "WangJianfei" <wa...@otcaix.iscas.ac.cn> wrote: