Posted to issues@bigtop.apache.org by "Andrew (Jira)" <ji...@apache.org> on 2022/02/10 17:14:00 UTC

[jira] [Updated] (BIGTOP-3641) Hive on Spark error

     [ https://issues.apache.org/jira/browse/BIGTOP-3641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew updated BIGTOP-3641:
---------------------------
    Description: 
Hi! I've tried to launch the Hadoop stack in Docker in two ways:
 # successfully built _hdfs, yarn, mapreduce, hbase, hive, spark, zookeeper_ from the Bigtop master branch (version 3.1.0) and launched Docker from the local repo via the provisioner with all these components (sketched below)
 # same as the 1st approach, but with the Bigtop repo (version 3.0.0)
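
For reference, approach 1 looks roughly like this (a minimal sketch, assuming the stock docker provisioner under provisioner/docker; the config file name and instance count are placeholders, not my exact values):
{code:bash}
# Sketch only: stand up a three-node cluster with the Bigtop docker provisioner.
# The config file should point at the local repo and list the components built
# above (hdfs, yarn, mapreduce, hbase, hive, spark, zookeeper).
cd provisioner/docker
./docker-hadoop.sh -C config_centos-7.yaml --create 3

# Open a shell on the first container to reproduce the hive session below.
./docker-hadoop.sh --exec 1 bash
{code}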

In both cases everything else works fine, but Hive on Spark fails with an error:

 
{code:java}
hive> set hive.execution.engine=spark;
hive> select id, count(*) from default.test group by id;
Query ID = root_20220209133134_cf3aec7d-ee2e-4d38-b200-6d616020d4b6
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job failed with java.lang.ClassNotFoundException: oot_20220209133134_cf3aec7d-ee2e-4d38-b200-6d616020d4b6:1
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.{code}
I've also tried creating an HDFS dir with the Spark libs and specifying the config as was done in https://issues.apache.org/jira/browse/BIGTOP-3333 - it didn't help. Any ideas what is missing and how to fix it?
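
That workaround, as I understand it, amounts to roughly the following (a sketch only; the HDFS path, the local jar location, and the use of spark.yarn.jars are my assumptions, not necessarily exactly what BIGTOP-3333 prescribes):
{code:bash}
# Publish the Spark jars to HDFS so the containers running the Spark job
# can resolve them at runtime. Paths below are placeholders.
hdfs dfs -mkdir -p /spark-jars
hdfs dfs -put /usr/lib/spark/jars/*.jar /spark-jars/
{code}
and then pointing Hive on Spark at that directory before running the query:
{code}
hive> set spark.yarn.jars=hdfs:///spark-jars/*.jar;
hive> set hive.execution.engine=spark;
{code}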

 

  was:
Hi! I've tried to launch the Hadoop stack in Docker in two ways:
 # successfully built _hdfs, yarn, mapreduce, hbase, hive, spark, zookeeper_ from the Bigtop master branch (version 3.1.0) and launched Docker from the local repo via the provisioner with all these components
 # same as the 1st approach, but with the Bigtop repo (version 3.0.0)

In both cases everything else works fine, but Hive on Spark fails with an error:

 
{code:java}
hive> set hive.execution.engine=spark;
hive> select id, count(*) from default.test group by id;
Query ID = root_20220209133134_cf3aec7d-ee2e-4d38-b200-6d616020d4b6
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job failed with java.lang.ClassNotFoundException: oot_20220209133134_cf3aec7d-ee2e-4d38-b200-6d616020d4b6:1
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.{code}
I've also tried creating an HDFS dir with the Spark libs and specifying the config as was done in this ticket - it didn't help. Any ideas what is missing and how to fix it?

 


> Hive on Spark error
> -------------------
>
>                 Key: BIGTOP-3641
>                 URL: https://issues.apache.org/jira/browse/BIGTOP-3641
>             Project: Bigtop
>          Issue Type: Bug
>          Components: hive, spark
>    Affects Versions: 3.0.0, 3.1.0
>            Reporter: Andrew
>            Priority: Major
>
> Hi! I've tried to launch the Hadoop stack in Docker in two ways:
>  # successfully built _hdfs, yarn, mapreduce, hbase, hive, spark, zookeeper_ from the Bigtop master branch (version 3.1.0) and launched Docker from the local repo via the provisioner with all these components
>  # same as the 1st approach, but with the Bigtop repo (version 3.0.0)
> In both cases everything else works fine, but Hive on Spark fails with an error:
>  
> {code:java}
> hive> set hive.execution.engine=spark;
> hive> select id, count(*) from default.test group by id;
> Query ID = root_20220209133134_cf3aec7d-ee2e-4d38-b200-6d616020d4b6
> Total jobs = 1
> Launching Job 1 out of 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=<number>
> Job failed with java.lang.ClassNotFoundException: oot_20220209133134_cf3aec7d-ee2e-4d38-b200-6d616020d4b6:1
> FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.{code}
> I've also tried creating an HDFS dir with the Spark libs and specifying the config as was done in https://issues.apache.org/jira/browse/BIGTOP-3333 - it didn't help. Any ideas what is missing and how to fix it?
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)