Posted to dev@crunch.apache.org by "Nithin Asokan (JIRA)" <ji...@apache.org> on 2015/10/06 17:45:26 UTC

[jira] [Updated] (CRUNCH-568) Aggregators fail on SparkPipeline

     [ https://issues.apache.org/jira/browse/CRUNCH-568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nithin Asokan updated CRUNCH-568:
---------------------------------
    Summary: Aggregators fail on SparkPipeline  (was: Aggregators fails on SparkPipeline)

> Aggregators fail on SparkPipeline
> ---------------------------------
>
>                 Key: CRUNCH-568
>                 URL: https://issues.apache.org/jira/browse/CRUNCH-568
>             Project: Crunch
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 0.12.0
>            Reporter: Nithin Asokan
>
> Logging this based on a mailing list discussion:
> http://mail-archives.apache.org/mod_mbox/crunch-user/201510.mbox/%3CCANb5z2KBqxZng92ToFo0MdTk2fd8jtGTjZ85h1yUo_akaetcXg%40mail.gmail.com%3E
> Running a Crunch SparkPipeline with a FirstN aggregator results in a NullPointerException.
> Example to reproduce this:
> https://gist.github.com/nasokan/853ff80ce20ad7a78886
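> A minimal sketch of the failing scenario (not the gist above; the class name, paths, and local Spark master are placeholder assumptions):
> {code}
> import org.apache.crunch.PCollection;
> import org.apache.crunch.Pipeline;
> import org.apache.crunch.fn.Aggregators;
> import org.apache.crunch.impl.spark.SparkPipeline;
>
> public class FirstNOnSpark {
>   public static void main(String[] args) {
>     // Placeholder local master and app name; the issue was reported against SparkPipeline.
>     Pipeline pipeline = new SparkPipeline("local", "crunch-568-repro");
>
>     // Read any text input and apply a FirstN aggregator.
>     PCollection<String> lines = pipeline.readTextFile(args[0]);
>     PCollection<String> firstTen = lines.aggregate(Aggregators.FIRST_N(10));
>
>     // The NullPointerException in the trace below is thrown on the executors during the shuffle.
>     pipeline.writeTextFile(firstTen, args[1]);
>     pipeline.done();
>   }
> }
> {code}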
> Stack trace from the driver logs:
> {code}
> 15/10/05 16:02:33 WARN TaskSetManager: Lost task 3.0 in stage 0.0 (TID 0, 123.domain.xyz): java.lang.NullPointerException
>     at org.apache.crunch.impl.mr.run.UniformHashPartitioner.getPartition(UniformHashPartitioner.java:32)
>     at org.apache.crunch.impl.spark.fn.PartitionedMapOutputFunction.call(PartitionedMapOutputFunction.java:62)
>     at org.apache.crunch.impl.spark.fn.PartitionedMapOutputFunction.call(PartitionedMapOutputFunction.java:35)
>     at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1002)
>     at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1002)
>     at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>     at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>     at org.apache.spark.util.collection.ExternalSorter.spillToPartitionFiles(ExternalSorter.scala:366)
>     at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:211)
>     at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>     at org.apache.spark.scheduler.Task.run(Task.scala:64)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)