Posted to issues@hive.apache.org by "Qiuzhuang Lian (JIRA)" <ji...@apache.org> on 2015/11/12 09:36:10 UTC

[jira] [Commented] (HIVE-9970) Hive on spark

    [ https://issues.apache.org/jira/browse/HIVE-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001843#comment-15001843 ] 

Qiuzhuang Lian commented on HIVE-9970:
--------------------------------------

Hi Xuefu, 

I am using Spark trunk (version 1.6) + Hadoop 2.6.0 + Hive 1.2.1. When running HQL in the Hive CLI, we get the following error:

2015-11-12 16:31:17,245 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/12 16:31:17 ERROR Utils: uncaught error in thread SparkListenerBus, stopping SparkContext
2015-11-12 16:31:17,246 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - java.lang.AbstractMethodError
2015-11-12 16:31:17,246 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 	at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:62)
2015-11-12 16:31:17,246 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 	at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
2015-11-12 16:31:17,246 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 	at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
2015-11-12 16:31:17,246 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 	at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:55)

Any ideas?
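
For context: a java.lang.AbstractMethodError at this point usually means the
Hive Spark client was compiled against an older Spark listener API than the
one running on the cluster (Hive 1.2.1 predates the Spark 1.6 trunk, whose
listener interfaces gained methods). A minimal sketch of the mechanism, using
hypothetical class names rather than Spark's real API:

// Bus.java, version 1 -- what the listener implementation is compiled against
public interface Bus {
    void onEvent();
}

// HiveSide.java -- compiled once, against version 1 of Bus
public class HiveSide implements Bus {
    public void onEvent() { System.out.println("event handled"); }
}

// Main.java -- stands in for the newer runtime
public class Main {
    public static void main(String[] args) {
        Bus b = new HiveSide();
        b.onEvent();  // works: HiveSide has a body for this method
        // Now add "void onOtherEvent();" to Bus.java, recompile only Bus and
        // Main (with a call to b.onOtherEvent()), and rerun against the stale
        // HiveSide.class: the JVM throws java.lang.AbstractMethodError,
        // because HiveSide was never compiled with a body for the new method.
        // That is the same shape of failure as SparkListenerBus.onPostEvent
        // in the log above.
    }
}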

> Hive on spark
> -------------
>
>                 Key: HIVE-9970
>                 URL: https://issues.apache.org/jira/browse/HIVE-9970
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Amithsha
>
> Hi all,
> Recently I configured Spark 1.2.0; my environment is Hadoop 2.6.0 and
> Hive 1.1.0. I tried Hive on Spark, and while executing an INSERT INTO
> statement I got the following error.
> Query ID = hadoop2_20150313162828_8764adad-a8e4-49da-9ef5-35e4ebd6bc63
> Total jobs = 1
> Launching Job 1 out of 1
> In order to change the average load for a reducer (in bytes):
> set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
> set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
> set mapreduce.job.reduces=<number>
> Failed to execute spark task, with exception
> 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create
> spark client.)'
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.spark.SparkTask
> I have added the spark-assembly jar to Hive's lib directory, and have
> also registered it in the Hive console via the add jar command, followed
> by these steps:
> set spark.home=/opt/spark-1.2.1/;
> add jar /opt/spark-1.2.1/assembly/target/scala-2.10/spark-assembly-1.2.1-hadoop2.4.0.jar;
> set hive.execution.engine=spark;
> set spark.master=spark://xxxxxxx:7077;
> set spark.eventLog.enabled=true;
> set spark.executor.memory=512m;
> set spark.serializer=org.apache.spark.serializer.KryoSerializer;
> Can anyone suggest a fix?
> Thanks & Regards
> Amithsha
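
An aside on the reducer hints in the quoted log: Hive estimates the number of
reducers from the two knobs it prints there, roughly
min(hive.exec.reducers.max, ceil(input size /
hive.exec.reducers.bytes.per.reducer)). A small sketch of that arithmetic,
with illustrative values (not taken from this job):

// ReducerEstimate.java -- rough sketch of Hive's reducer-count estimate
public class ReducerEstimate {
    public static void main(String[] args) {
        long bytesPerReducer = 256L * 1024 * 1024; // hive.exec.reducers.bytes.per.reducer
        int  maxReducers     = 1009;               // hive.exec.reducers.max
        long totalInputBytes = 10L * 1024 * 1024 * 1024; // assume 10 GB of input

        long estimated = (totalInputBytes + bytesPerReducer - 1) / bytesPerReducer; // ceiling division
        long reducers  = Math.min(maxReducers, estimated);
        System.out.println(reducers); // prints 40 for these example numbers
    }
}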



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)