Posted to dev@hive.apache.org by "alton.jung (JIRA)" <ji...@apache.org> on 2014/09/04 12:25:52 UTC

[jira] [Created] (HIVE-7980) Hive on spark issue..

alton.jung created HIVE-7980:
--------------------------------

             Summary: Hive on spark issue..
                 Key: HIVE-7980
                 URL: https://issues.apache.org/jira/browse/HIVE-7980
             Project: Hive
          Issue Type: Bug
          Components: HiveServer2, Spark
    Affects Versions: spark-branch
         Environment: Test Environment is..

. hive 0.14.0(spark branch version)
. spark (http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/spark-assembly-1.1.0-SNAPSHOT-hadoop2.3.0.jar)
. hadoop 2.4.0 (yarn)
            Reporter: alton.jung
             Fix For: spark-branch


I followed this guide (https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started) and compiled Hive from the spark branch. In the next step I hit the error below.
(* When I ran the Hive query in Beeline, I used a simple query with "order by" to trigger the parallel work,
                   ex) select * from test where id = 1 order by id;
)
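
For context, a minimal Beeline session following that Getting Started guide would look roughly like this. This is a hedged sketch: the property names come from the guide, but the specific values (spark.master=yarn-cluster, the executor memory) are assumptions for the YARN 2.4.0 environment above, not settings confirmed by the reporter.

-- Session settings per the Hive on Spark "Getting Started" wiki page.
-- Values below are illustrative assumptions for a YARN deployment.
set hive.execution.engine=spark;
set spark.master=yarn-cluster;
set spark.executor.memory=512m;

-- Simple query with ORDER BY to force a shuffle (parallel) stage:
select * from test where id = 1 order by id;

If SparkContext initialization fails silently under these settings, a later call into it can surface as the NullPointerException shown in the log.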

[Error log]
2014-09-04 02:58:08,796 ERROR spark.SparkClient (SparkClient.java:execute(158)) - Error generating Spark Plan
java.lang.NullPointerException
	at org.apache.spark.SparkContext.defaultParallelism(SparkContext.scala:1262)
	at org.apache.spark.SparkContext.defaultMinPartitions(SparkContext.scala:1269)
	at org.apache.spark.SparkContext.hadoopRDD$default$5(SparkContext.scala:537)
	at org.apache.spark.api.java.JavaSparkContext.hadoopRDD(JavaSparkContext.scala:318)
	at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generateRDD(SparkPlanGenerator.java:160)
	at org.apache.hadoop.hive.ql.exec.spark.SparkPlanGenerator.generate(SparkPlanGenerator.java:88)
	at org.apache.hadoop.hive.ql.exec.spark.SparkClient.execute(SparkClient.java:156)
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.submit(SparkSessionImpl.java:52)
	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:77)
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:161)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:72)
2014-09-04 02:58:11,108 ERROR ql.Driver (SessionState.java:printError(696)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)