Posted to dev@hive.apache.org by "Amithsha (JIRA)" <ji...@apache.org> on 2015/03/16 05:39:38 UTC

[jira] [Created] (HIVE-9970) Hive on spark

Amithsha created HIVE-9970:
------------------------------

             Summary: Hive on spark
                 Key: HIVE-9970
                 URL: https://issues.apache.org/jira/browse/HIVE-9970
             Project: Hive
          Issue Type: Bug
            Reporter: Amithsha


Hi all,

Recently I configured Spark 1.2.0; my environment is Hadoop
2.6.0 and Hive 1.1.0. I have tried Hive on Spark, and while executing
an INSERT INTO I am getting the following error.

Query ID = hadoop2_20150313162828_8764adad-a8e4-49da-9ef5-35e4ebd6bc63
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Failed to execute spark task, with exception
'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create
spark client.)'
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.spark.SparkTask

I have added the spark-assembly jar to the Hive lib directory,
and also in the Hive console using the add jar command, followed by these steps:

set spark.home=/opt/spark-1.2.1/;

add jar /opt/spark-1.2.1/assembly/target/scala-2.10/spark-assembly-1.2.1-hadoop2.4.0.jar;

set hive.execution.engine=spark;

set spark.master=spark://xxxxxxx:7077;

set spark.eventLog.enabled=true;

set spark.executor.memory=512m;

set spark.serializer=org.apache.spark.serializer.KryoSerializer;
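
For reference, the equivalent persistent configuration in hive-site.xml would look roughly like the sketch below. It uses only the values from the session settings above (the spark.master host stays masked as in my setup); paths and the master URL would need to match the actual cluster.

  <!-- hive-site.xml: same settings as the per-session "set" commands above -->
  <property>
    <name>hive.execution.engine</name>
    <value>spark</value>
  </property>
  <property>
    <name>spark.master</name>
    <value>spark://xxxxxxx:7077</value>
  </property>
  <property>
    <name>spark.home</name>
    <value>/opt/spark-1.2.1</value>
  </property>
  <property>
    <name>spark.executor.memory</name>
    <value>512m</value>
  </property>
  <property>
    <name>spark.eventLog.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>spark.serializer</name>
    <value>org.apache.spark.serializer.KryoSerializer</value>
  </property>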

Can anyone suggest a fix?

Thanks & Regards
Amithsha




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)