Posted to dev@spark.apache.org by Jacek Laskowski <ja...@japila.pl> on 2016/09/06 20:32:36 UTC

df.groupBy('m).agg(sum('n)).show dies with 10^3 elements?

Hi,

I'm concerned about an OOME in local mode with a version of Spark built today:

scala> val intsMM = 1 to math.pow(10, 3).toInt
intsMM: scala.collection.immutable.Range.Inclusive = Range(1, 2, 3, 4,
5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57,
58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74,
75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106,
107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120,
121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134,
135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148,
149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162,
163, 164, 165, 166, 167, 168, 169, 1...
scala> val df = intsMM.toDF("n").withColumn("m", 'n % 2)
df: org.apache.spark.sql.DataFrame = [n: int, m: int]

scala> df.groupBy('m).agg(sum('n)).show
...
16/09/06 22:28:02 ERROR Executor: Exception in task 6.0 in stage 0.0 (TID 6)
java.lang.OutOfMemoryError: Unable to acquire 262144 bytes of memory, got 0
...
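
(For what it's worth, in local mode the executors share the driver JVM, so
if this were a plain memory shortage, relaunching with a bigger heap, e.g.
./bin/spark-shell --driver-memory 4g, should make it disappear; that's a
sanity check rather than a fix.)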

Please see https://gist.github.com/jaceklaskowski/906d62b830f6c967a7eee5f8eb6e9237
and let me know if I should file an issue. I don't think 10^3 elements
and groupBy should kill spark-shell.
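
For anyone who prefers to reproduce this outside spark-shell, here's a
minimal self-contained sketch of the same steps (the object and app names
are mine; it assumes a current Spark 2.x snapshot build on the classpath):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.sum

// Standalone version of the spark-shell session above.
object GroupBySumRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")  // same local mode as spark-shell
      .appName("groupBy-sum-repro")
      .getOrCreate()
    import spark.implicits._

    // 10^3 rows: n = 1..1000, m = n % 2
    val df = (1 to math.pow(10, 3).toInt).toDF("n").withColumn("m", $"n" % 2)
    df.groupBy($"m").agg(sum($"n")).show()

    spark.stop()
  }
}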

Regards,
Jacek Laskowski
----
https://medium.com/@jaceklaskowski/
Mastering Apache Spark 2.0 http://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski



Re: df.groupBy('m).agg(sum('n)).show dies with 10^3 elements?

Posted by Jacek Laskowski <ja...@japila.pl>.
Hi Josh,

Yes, that seems to be the issue. As I noted in the JIRA, just yesterday
(after I had sent the email) even queries as simple as the following
killed spark-shell:

Seq(1).toDF.groupBy('value).count.show

Hoping to see it get resolved soon. If there's anything I can do to help
you fix or reproduce the issue, let me know. I wish I knew how to write a
unit test for this. Where in the code should I look for inspiration?
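
My best guess so far, modelled on the aggregate suites under
sql/core/src/test, is a rough sketch along these lines (it assumes Spark's
own QueryTest / SharedSQLContext test harness; the suite name, test name
and expected rows are mine, and whether it actually triggers the OOME
probably depends on the memory settings of the test JVM):

import org.apache.spark.sql.{QueryTest, Row}
import org.apache.spark.sql.functions.sum
import org.apache.spark.sql.test.SharedSQLContext

// Rough sketch of a regression test for the groupBy/sum OOME.
class GroupBySumRegressionSuite extends QueryTest with SharedSQLContext {
  import testImplicits._

  test("groupBy + sum over 10^3 rows should not OOM") {
    val df = (1 to 1000).toDF("n").withColumn("m", $"n" % 2)
    // evens: 2 + 4 + ... + 1000 = 250500; odds: 1 + 3 + ... + 999 = 250000
    checkAnswer(
      df.groupBy($"m").agg(sum($"n")),
      Row(0, 250500L) :: Row(1, 250000L) :: Nil)
  }
}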

Regards,
Jacek Laskowski
----
https://medium.com/@jaceklaskowski/
Mastering Apache Spark 2.0 http://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski


On Tue, Sep 6, 2016 at 11:51 PM, Josh Rosen <jo...@databricks.com> wrote:
> I think that this is a simpler case of
> https://issues.apache.org/jira/browse/SPARK-17405. I'm going to comment on
> that ticket with your simpler reproduction.
>
> On Tue, Sep 6, 2016 at 1:32 PM Jacek Laskowski <ja...@japila.pl> wrote:
>> [...]
>



Re: df.groupBy('m).agg(sum('n)).show dies with 10^3 elements?

Posted by Josh Rosen <jo...@databricks.com>.
I think that this is a simpler case of
https://issues.apache.org/jira/browse/SPARK-17405. I'm going to comment on
that ticket with your simpler reproduction.

On Tue, Sep 6, 2016 at 1:32 PM Jacek Laskowski <ja...@japila.pl> wrote:

> [...]