Posted to issues@hive.apache.org by "xiongjun (JIRA)" <ji...@apache.org> on 2016/02/24 11:32:18 UTC
[jira] [Commented] (HIVE-11125) when i run a sql use hive on spark,
it seem like the hive cli finished, but the application is always running
[ https://issues.apache.org/jira/browse/HIVE-11125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15162787#comment-15162787 ]
xiongjun commented on HIVE-11125:
---------------------------------
Hi Xuefu Zhang, I find that the running application tends to block other applications if it is not killed.
Can I kill the running application after the query finishes?
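For reference, one way to stop the lingering YARN application by hand is the YARN CLI's `yarn application -kill` command, using the application ID that appears in the hive.log lines quoted below. A minimal sketch, assuming the standard YARN application ID format (note that Hive on Spark keeps the application alive on purpose to reuse it for later queries in the same session, so killing it trades away that reuse):

```python
import re

# YARN application IDs as they appear in the quoted hive.log lines,
# e.g. "application_1433328839160_0071".
APP_ID_RE = re.compile(r"application_\d+_\d+")

def find_yarn_app_id(log_line):
    """Return the first YARN application ID in a log line, or None."""
    m = APP_ID_RE.search(log_line)
    return m.group(0) if m else None

# Sample line copied from the hive.log excerpt in this issue.
line = ("15/06/26 18:12:26 main INFO org.apache.spark.deploy.yarn.Client>> "
        "Application report for application_1433328839160_0071 (state: RUNNING)")

app_id = find_yarn_app_id(line)
print(app_id)                              # application_1433328839160_0071
print("yarn application -kill " + app_id)  # command to stop the application
```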
> when i run a sql use hive on spark, it seem like the hive cli finished, but the application is always running
> -------------------------------------------------------------------------------------------------------------
>
> Key: HIVE-11125
> URL: https://issues.apache.org/jira/browse/HIVE-11125
> Project: Hive
> Issue Type: Bug
> Components: spark-branch
> Affects Versions: 1.2.0
> Environment: Hive1.2.0
> Spark1.3.1
> Hadoop2.5.1
> Reporter: JoneZhang
> Assignee: Xuefu Zhang
> Labels: TODOC-SPARK
>
> When I run a SQL query using Hive on Spark,
> the Hive CLI finishes:
> hive (default)> select count(id) from t1 where id>100;
> Query ID = mqq_20150626174732_9e18f0c9-7b56-46ab-bf90-3b66f1a51300
> Total jobs = 1
> Launching Job 1 out of 1
> In order to change the average load for a reducer (in bytes):
> set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
> set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
> set mapreduce.job.reduces=<number>
> Starting Spark Job = 7d34cb8c-eaad-4724-a99a-37e517db80d9
> Query Hive on Spark job[0] stages:
> 0
> 1
> Status: Running (Hive on Spark job[0])
> Job Progress Format
> CurrentTime StageId_StageAttemptId: SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount [StageCost]
> 2015-06-26 17:47:53,746 Stage-0_0: 0(+1)/5 Stage-1_0: 0/1
> 2015-06-26 17:47:56,771 Stage-0_0: 1(+0)/5 Stage-1_0: 0/1
> 2015-06-26 17:47:57,778 Stage-0_0: 4(+1)/5 Stage-1_0: 0/1
> 2015-06-26 17:47:59,791 Stage-0_0: 5/5 Finished Stage-1_0: 0(+1)/1
> 2015-06-26 17:48:00,797 Stage-0_0: 5/5 Finished Stage-1_0: 1/1 Finished
> Status: Finished successfully in 18.08 seconds
> OK
> 5
> Time taken: 28.512 seconds, Fetched: 1 row(s)
> But the application stays in the RUNNING state on the ResourceManager:
> User: mqq
> Name: Hive on Spark
> Application Type: SPARK
> Application Tags:
> State: RUNNING
> FinalStatus: UNDEFINED
> Started: 2015-06-26 17:47:38
> Elapsed: 24mins, 33sec
> Tracking URL: ApplicationMaster
> Diagnostics:
> The hive.log shows:
> 2015-06-26 18:12:26,878 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/06/26 18:12:26 main INFO org.apache.spark.deploy.yarn.Client>> Application report for application_1433328839160_0071 (state: RUNNING)
> 2015-06-26 18:12:27,879 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/06/26 18:12:27 main INFO org.apache.spark.deploy.yarn.Client>> Application report for application_1433328839160_0071 (state: RUNNING)
> 2015-06-26 18:12:28,880 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/06/26 18:12:28 main INFO org.apache.spark.deploy.yarn.Client>> Application report for application_1433328839160_0071 (state: RUNNING)
> ...
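The "Job Progress Format" lines quoted above follow the layout stated in the console output, CurrentTime StageId_StageAttemptId: SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount. A minimal parsing sketch, with the field names taken from that format description (the helper name is illustrative):

```python
import re

# One stage entry, e.g. "Stage-0_0: 0(+1)/5" or "Stage-1_0: 0/1".
# The "(+running-failed)" part is optional, matching the quoted output.
STAGE_RE = re.compile(
    r"Stage-(?P<stage>\d+)_(?P<attempt>\d+):\s+"
    r"(?P<succeeded>\d+)"
    r"(?:\(\+(?P<running>\d+)(?:-(?P<failed>\d+))?\))?"
    r"/(?P<total>\d+)"
)

def parse_progress(line):
    """Return a list of per-stage dicts from one progress line."""
    stages = []
    for m in STAGE_RE.finditer(line):
        stages.append({
            "stage": int(m.group("stage")),
            "attempt": int(m.group("attempt")),
            "succeeded": int(m.group("succeeded")),
            "running": int(m.group("running") or 0),
            "failed": int(m.group("failed") or 0),
            "total": int(m.group("total")),
        })
    return stages

# Line copied from the console output quoted in this issue.
line = "2015-06-26 17:47:53,746 Stage-0_0: 0(+1)/5 Stage-1_0: 0/1"
for stage in parse_progress(line):
    print(stage)
```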
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)