Posted to user@hive.apache.org by naresh gundla <na...@gmail.com> on 2017/02/16 19:24:34 UTC

Need inputs on configuring hive timeout + hive on spark : Job hasn't been submitted after 61s. Aborting it.

Hello,


I am facing the issue "Job hasn't been submitted after 61s. Aborting it."
when running multiple Hive queries concurrently.

Details: (Hive on Spark)
I am using Spark dynamic allocation and the external shuffle service on YARN.
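For reference, a dynamic-allocation setup along these lines would typically look something like the following in spark-defaults.conf (the property values here are purely illustrative, not my actual settings):

```properties
# Enable dynamic executor allocation; it requires the external shuffle
# service so executors can be released without losing shuffle data
spark.dynamicAllocation.enabled=true
spark.shuffle.service.enabled=true
# Illustrative bounds; real values depend on cluster capacity
spark.dynamicAllocation.minExecutors=1
spark.dynamicAllocation.maxExecutors=20
```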

When one query is already using all of the resources in the cluster and a
new query is launched, the new query fails with this error in the Hive log:

2017-02-16 06:12:59,166 INFO  [main]: status.SparkJobMonitor
(RemoteSparkJobMonitor.java:startMonitor(67)) - Job hasn't been submitted
after 61s. Aborting it.
2017-02-16 06:12:59,166 ERROR [main]: status.SparkJobMonitor
(SessionState.java:printError(960)) - Status: SENT
2017-02-16 06:12:59,167 INFO  [main]: log.PerfLogger
(PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=SparkRunJob
start=1487254318158 end=1487254379167 duration=61009
from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor>
2017-02-16 06:12:59,183 ERROR [main]: ql.Driver
(SessionState.java:printError(960)) - FAILED: Execution Error, return code
2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
2017-02-16 06:12:59,184 INFO  [main]: log.PerfLogger
(PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=Driver.execute
start=1487254317999 end=1487254379184 duration=61185
from=org.apache.hadoop.hive.ql.Driver>
2017-02-16 06:12:59,184 INFO  [main]: log.PerfLogger
(PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=releaseLocks
from=org.apache.hadoop.hive.ql.Driver>
2017-02-16 06:12:59,184 INFO  [main]: log.PerfLogger
(PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=releaseLocks
start=1487254379184 end=1487254379184 duration=0
from=org.apache.hadoop.hive.ql.Driver>
2017-02-16 06:12:59,201 INFO  [main]: log.PerfLogger
(PerfLogger.java:PerfLogBegin(121)) - <PERFLOG method=releaseLocks
from=org.apache.hadoop.hive.ql.Driver>
2017-02-16 06:12:59,202 INFO  [main]: log.PerfLogger
(PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=releaseLocks
start=1487254379201 end=1487254379202 duration=1
from=org.apache.hadoop.hive.ql.Driver>

Is there a parameter I can configure so that a query waits until it gets
the required resources instead of failing?


Thanks,
Naresh

Re: Need inputs on configuring hive timeout + hive on spark : Job hasn't been submitted after 61s. Aborting it.

Posted by Ian Cook <ic...@cloudera.com>.
Naresh,

The properties hive.spark.job.monitor.timeout and
hive.spark.client.server.connect.timeout in hive-site.xml control the Hive
on Spark timeouts. Details at
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-Spark
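For example, raising both timeouts in hive-site.xml gives a query more time to wait for resources before Hive aborts it. A sketch (the 180s and 300000ms values below are only illustrative; pick values that suit your cluster):

```xml
<!-- How long Hive waits for the Spark job to be submitted before
     aborting; the default of 60s matches the "61s" in your log -->
<property>
  <name>hive.spark.job.monitor.timeout</name>
  <value>180s</value>
</property>
<!-- How long the Hive driver waits for the remote Spark driver to
     connect back (default 90000ms) -->
<property>
  <name>hive.spark.client.server.connect.timeout</name>
  <value>300000ms</value>
</property>
```

The monitor timeout should also be settable per session, e.g. SET hive.spark.job.monitor.timeout=180s; before running the query.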

Ian Cook
Cloudera

On Thu, Feb 16, 2017 at 2:24 PM, naresh gundla <na...@gmail.com>
wrote:

> [snip]