Posted to user@spark.apache.org by Sabarish Sasidharan <sa...@manthan.com> on 2015/03/21 21:23:28 UTC

ArrayIndexOutOfBoundsException in ALS.trainImplicit

I am consistently running into this ArrayIndexOutOfBoundsException
when using trainImplicit. I have tried changing the number of partitions and
switching to JavaSerializer, but neither helps. This appears to be the same
issue as https://issues.apache.org/jira/browse/SPARK-3080. My lambda
is 0.01, rank is 5, iterations is 10 and alpha is 0.01. I am using 41
executors, each with 8GB, on a dataset of 48 million records.
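
For reference, the training call corresponding to these parameters looks roughly like this (a sketch only; the `ratings` RDD name and the wrapping method are assumptions, not from my actual job):

```scala
import org.apache.spark.mllib.recommendation.{ALS, Rating}
import org.apache.spark.rdd.RDD

// ratings: RDD[Rating] holding the ~48 million implicit-feedback records
// (name assumed for illustration)
def train(ratings: RDD[Rating]) =
  ALS.trainImplicit(ratings, rank = 5, iterations = 10,
    lambda = 0.01, alpha = 0.01)
```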

15/03/21 13:07:29 ERROR executor.Executor: Exception in task 12.0 in stage 2808.0 (TID 40575)
java.lang.ArrayIndexOutOfBoundsException: 692
        at org.apache.spark.mllib.recommendation.ALS$$anonfun$org$apache$spark$mllib$recommendation$ALS$$updateBlock$1.apply$mcVI$sp(ALS.scala:548)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
        at org.apache.spark.mllib.recommendation.ALS.org$apache$spark$mllib$recommendation$ALS$$updateBlock(ALS.scala:542)
        at org.apache.spark.mllib.recommendation.ALS$$anonfun$org$apache$spark$mllib$recommendation$ALS$$updateFeatures$2.apply(ALS.scala:510)
        at org.apache.spark.mllib.recommendation.ALS$$anonfun$org$apache$spark$mllib$recommendation$ALS$$updateFeatures$2.apply(ALS.scala:509)
        at org.apache.spark.rdd.MappedValuesRDD$$anonfun$compute$1.apply(MappedValuesRDD.scala:31)
        at org.apache.spark.rdd.MappedValuesRDD$$anonfun$compute$1.apply(MappedValuesRDD.scala:31)

How can I get around this issue?

Regards
Sab

-- 

Architect - Big Data
Ph: +91 99805 99458

Manthan Systems | *Company of the year - Analytics (2014 Frost and Sullivan
India ICT)*
+++

Re: ArrayIndexOutOfBoundsException in ALS.trainImplicit

Posted by Sabarish Sasidharan <sa...@manthan.com>.
My bad. This was an OutOfMemoryError disguised as something else.
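
In case it helps anyone hitting the same symptom: giving each executor more heap (or increasing the number of partitions so each task holds a smaller block of the factor matrices) is the usual fix. The flags below are purely illustrative values, and the class and jar names are placeholders, not my actual job:

```shell
# Illustrative only: larger executor heap than the 8g that failed.
spark-submit \
  --class com.example.TrainALS \
  --num-executors 41 \
  --executor-memory 12g \
  train-als.jar
```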

Regards
Sab


