Posted to dev@hive.apache.org by "Brock Noland (JIRA)" <ji...@apache.org> on 2014/08/06 07:23:12 UTC

[jira] [Commented] (HIVE-7624) Reduce operator initialization failed when running multiple MR query on spark

    [ https://issues.apache.org/jira/browse/HIVE-7624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087264#comment-14087264 ] 

Brock Noland commented on HIVE-7624:
------------------------------------

[~lirui] In our sync-up you mentioned that values in the JobConf get overwritten for reduce work. While digging around I found that we need to clone the JobConf for each MapWork or ReduceWork so they don't overwrite each other. We should do this in the SparkPlanGenerator.generate methods:
{noformat}
    // the JobConf copy constructor creates an independent copy of jobConf
    JobConf newJobConf = new JobConf(jobConf);
{noformat}
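
For illustration, here is a minimal, self-contained sketch of the cloning idea (the class name and the property key below are made up for the example; this is not actual SparkPlanGenerator code). The JobConf copy constructor takes a snapshot of the settings, so each work's conf can then be mutated independently:

{noformat}
import org.apache.hadoop.mapred.JobConf;

public class PerWorkConfExample {
    public static void main(String[] args) {
        JobConf shared = new JobConf();
        shared.set("hive.example.key", "base");

        // One clone per work: the copy constructor snapshots the settings,
        // so mutating one work's conf doesn't leak into the others.
        JobConf mapWorkConf = new JobConf(shared);
        JobConf reduceWorkConf = new JobConf(shared);

        mapWorkConf.set("hive.example.key", "map-work");
        reduceWorkConf.set("hive.example.key", "reduce-work");

        System.out.println(shared.get("hive.example.key"));         // base
        System.out.println(mapWorkConf.get("hive.example.key"));    // map-work
        System.out.println(reduceWorkConf.get("hive.example.key")); // reduce-work
    }
}
{noformat}

Without the clone, every work would mutate the single shared JobConf and the last writer would win, which is exactly the overwriting we want to avoid.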



> Reduce operator initialization failed when running multiple MR query on spark
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-7624
>                 URL: https://issues.apache.org/jira/browse/HIVE-7624
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Rui Li
>
> The following error occurs when I try to run a query with multiple reduce works (M->R->R):
> {quote}
> 14/08/05 12:17:07 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 1)
> java.lang.RuntimeException: Reduce operator initialization failed
>         at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.configure(ExecReducer.java:170)
>         at org.apache.hadoop.hive.ql.exec.spark.HiveReduceFunction.call(HiveReduceFunction.java:53)
>         at org.apache.hadoop.hive.ql.exec.spark.HiveReduceFunction.call(HiveReduceFunction.java:31)
>         at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:164)
>         at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:164)
>         at org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:596)
>         at org.apache.spark.rdd.RDD$$anonfun$13.apply(RDD.scala:596)
>         at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>         at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>         at org.apache.spark.scheduler.Task.run(Task.scala:54)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> Caused by: java.lang.RuntimeException: cannot find field reducesinkkey0 from [0:_col0]
>         at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getStandardStructFieldRef(ObjectInspectorUtils.java:415)
>         at org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector.getStructFieldRef(StandardStructObjectInspector.java:147)
> …
> {quote}
> I suspect we're applying the reduce functions in the wrong order.



--
This message was sent by Atlassian JIRA
(v6.2#6252)