Posted to issues@hive.apache.org by "Satoshi Iijima (JIRA)" <ji...@apache.org> on 2016/11/01 14:01:58 UTC

[jira] [Updated] (HIVE-15101) Spark client can be stuck in RUNNING state

     [ https://issues.apache.org/jira/browse/HIVE-15101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Satoshi Iijima updated HIVE-15101:
----------------------------------
    Description: 
When a Hive-on-Spark job is executed in a YARN environment where an UNHEALTHY NodeManager exists, the Spark client can get stuck in the RUNNING state.

thread dump:
{code}
"008ee7b6-b083-4ac9-ae1c-b6097d9bf761 main" #1 prio=5 os_prio=0 tid=0x00007f14f4013800 nid=0x3855 in Object.wait() [0x00007f14fd9b1000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0x00000000f6615550> (a io.netty.util.concurrent.DefaultPromise)
	at java.lang.Object.wait(Object.java:502)
	at io.netty.util.concurrent.DefaultPromise.await(DefaultPromise.java:254)
	- locked <0x00000000f6615550> (a io.netty.util.concurrent.DefaultPromise)
	at io.netty.util.concurrent.DefaultPromise.await(DefaultPromise.java:32)
	at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:31)
	at org.apache.hive.spark.client.SparkClientImpl.<init>(SparkClientImpl.java:104)
	at org.apache.hive.spark.client.SparkClientFactory.createClient(SparkClientFactory.java:80)
	- locked <0x00000000f21b8e08> (a java.lang.Class for org.apache.hive.spark.client.SparkClientFactory)
	at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.createRemoteClient(RemoteHiveSparkClient.java:99)
	at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:95)
	at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:67)
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:62)
	at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:114)
	at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:136)
	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:89)
	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
	at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1858)
	at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1562)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1313)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1084)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1072)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:742)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
{code}
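
The thread is parked in an unbounded wait on a Netty DefaultPromise (AbstractFuture.get() with no timeout), so when YARN cannot start the remote driver the client never gets past RUNNING. Below is a minimal, self-contained sketch of a bounded wait that would fail fast instead; this is not the actual Hive code or the HIVE-15101 fix, and the class name and 90-second timeout are illustrative assumptions:

{code}
import java.util.concurrent.TimeUnit;

import io.netty.util.concurrent.DefaultPromise;
import io.netty.util.concurrent.ImmediateEventExecutor;
import io.netty.util.concurrent.Promise;

// Minimal sketch, not Hive source: shows why an unbounded wait on a Netty
// promise hangs forever, and how a bounded wait surfaces the failure instead.
public class BoundedDriverWait {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the "remote driver has connected" promise that the
        // Spark client blocks on in the stack trace above.
        Promise<Void> driverReady =
                new DefaultPromise<>(ImmediateEventExecutor.INSTANCE);

        // driverReady.await() / get() with no timeout reproduces the hang:
        // nothing ever completes the promise when YARN cannot run the driver.
        // Waiting with a timeout (90 seconds here is an arbitrary,
        // illustrative value) lets the client fail fast instead.
        boolean connected = driverReady.await(90, TimeUnit.SECONDS);
        if (!connected) {
            throw new IllegalStateException(
                "Timed out waiting for the remote Spark driver to connect");
        }
    }
}
{code}

In Hive itself such a timeout would presumably come from configuration rather than a hard-coded value.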

  was:When a Hive-on-Spark job is executed in a YARN environment where an UNHEALTHY NodeManager exists, the Spark client can get stuck in the RUNNING state.


> Spark client can be stuck in RUNNING state
> ------------------------------------------
>
>                 Key: HIVE-15101
>                 URL: https://issues.apache.org/jira/browse/HIVE-15101
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 2.0.0, 2.1.0
>            Reporter: Satoshi Iijima
>
> When a Hive-on-Spark job is executed in a YARN environment where an UNHEALTHY NodeManager exists, the Spark client can get stuck in the RUNNING state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)