Posted to dev@hive.apache.org by "tom (Jira)" <ji...@apache.org> on 2020/05/19 09:57:00 UTC

[jira] [Created] (HIVE-23502) [Hive on Spark] return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask

tom created HIVE-23502:
--------------------------

             Summary: [Hive on Spark] return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
                 Key: HIVE-23502
                 URL: https://issues.apache.org/jira/browse/HIVE-23502
             Project: Hive
          Issue Type: Bug
         Environment: hadoop 2.7.2, hive 1.2.1, scala 2.9.x, spark 1.3.1
            Reporter: tom


Spark executor log (from the Spark UI):

 

20/05/19 17:07:11 INFO exec.Utilities: No plan file found: hdfs://mycluster/tmp/hive/root/a3b20597-61d1-47a9-86b1-dde289fded78/hive_2020-05-19_17-06-53_394_4024151029162597012-1/-mr-10003/c586ae6a-eefb-49fd-92b6-7593e57f0a93/map.xml
20/05/19 17:07:11 ERROR executor.Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.NullPointerException
 at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
 at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:437)
 at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:430)
 at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:587)
 at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
 at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
 at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
 at org.apache.spark.scheduler.Task.run(Task.scala:64)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
20/05/19 17:07:11 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 1
20/05/19 17:07:11 INFO executor.Executor: Running task 0.1 in stage 0.0 (TID 1)
20/05/19 17:07:11 INFO rdd.HadoopRDD: Input split: Paths:/user/hive/warehouse/orginfobig_fq/nd=2014/frcode=410503/fqdate=2014-01-01/part-m-00000:0+100 InputFormatClass: org.apache.hadoop.mapred.TextInputFormat

20/05/19 17:07:11 INFO exec.Utilities: No plan file found: hdfs://mycluster/tmp/hive/root/a3b20597-61d1-47a9-86b1-dde289fded78/hive_2020-05-19_17-06-53_394_4024151029162597012-1/-mr-10003/c586ae6a-eefb-49fd-92b6-7593e57f0a93/map.xml
20/05/19 17:07:11 ERROR executor.Executor: Exception in task 0.1 in stage 0.0 (TID 1)
java.lang.NullPointerException
 at org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
 at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:437)
 at org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:430)
 at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:587)
 at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:236)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:212)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
 at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
 at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
 at org.apache.spark.scheduler.Task.run(Task.scala:64)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
20/05/19 17:19:19 INFO storage.BlockManager: Removing broadcast 1
20/05/19 17:19:19 INFO storage.BlockManager: Removing block broadcast_1
20/05/19 17:19:19 INFO storage.MemoryStore: Block broadcast_1 of size 189144 dropped from memory (free 1665525606)
20/05/19 17:19:19 INFO storage.BlockManager: Removing block broadcast_1_piece0
20/05/19 17:19:19 INFO storage.MemoryStore: Block broadcast_1_piece0 of size 55965 dropped from memory (free 1665581571)
20/05/19 17:19:19 INFO storage.BlockManagerMaster: Updated info of block broadcast_1_piece0
20/05/19 17:19:19 INFO storage.BlockManager: Removing broadcast 0
20/05/19 17:19:19 INFO storage.BlockManager: Removing block broadcast_0
20/05/19 17:19:19 INFO storage.MemoryStore: Block broadcast_0 of size 1349884 dropped from memory (free 1666931455)
20/05/19 17:19:19 INFO storage.BlockManager: Removing block broadcast_0_piece0
20/05/19 17:19:19 INFO storage.MemoryStore: Block broadcast_0_piece0 of size 52726 dropped from memory (free 1666984181)
20/05/19 17:19:19 INFO storage.BlockManagerMaster: Updated info of block broadcast_0_piece0
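
For context (an interpretation of the log above, not a confirmed root cause): each task attempt first reports "No plan file found" for the serialized map.xml plan under the session scratch directory, and the NullPointerException in HiveInputFormat.init follows immediately, which suggests the plan lookup returned null and was then dereferenced without a check. The snippet below is not Hive source code; it is a minimal, simplified sketch of that chain, with made-up class and field names, showing how a missing plan file can surface as an NPE instead of a descriptive error.

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Not Hive source code: a stripped-down illustration of the failure chain
 * seen in the executor log. The real classes involved are
 * org.apache.hadoop.hive.ql.exec.Utilities and
 * org.apache.hadoop.hive.ql.io.HiveInputFormat; the names below are invented.
 */
public class PlanLookupSketch {

    /** Stand-in for the deserialized map.xml work plan (hypothetical type). */
    static class MapWork {
        Map<Path, List<String>> pathToAliases = new HashMap<>();
    }

    /**
     * Stand-in for the plan lookup: when the serialized plan cannot be read
     * it logs "No plan file found" and returns null instead of throwing.
     */
    static MapWork getMapWork(Path planFile) {
        if (!Files.exists(planFile)) {
            System.out.println("No plan file found: " + planFile);
            return null; // the null that surfaces later as an NPE
        }
        return new MapWork(); // deserialization omitted in this sketch
    }

    /**
     * Stand-in for the input-format initialization: it uses the returned
     * plan without a null check, so a missing plan file becomes a
     * NullPointerException here rather than a descriptive error.
     */
    static void init(Path planFile) {
        MapWork work = getMapWork(planFile);
        // Throws NullPointerException when the plan file was not found,
        // matching the stack trace above.
        System.out.println("aliases: " + work.pathToAliases.keySet());
    }

    public static void main(String[] args) {
        init(Paths.get("/tmp/does-not-exist/map.xml"));
    }
}

Running the sketch prints the same "No plan file found" line and then fails with a NullPointerException, mirroring what the executor log shows for both task attempts.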



--
This message was sent by Atlassian Jira
(v8.3.4#803005)