Posted to user@hive.apache.org by Mich Talebzadeh <mi...@peridale.co.uk> on 2015/04/23 17:19:17 UTC

org.apache.hadoop.hive.ql.lockmgr.LockException: No record of lock could be found, may have timed out

Hi all,

 

I am trying to do a direct load from an RDBMS into Hive (not using Sqoop).

 

It sends data in files of 9999 rows at a time.

 

Concurrency is enabled, and I am using an Oracle database as the metastore. Out of 300,000
rows, only 227,106 made it in.

 

hive> select count(1) from t;

 

Total MapReduce CPU Time Spent: 8 seconds 650 msec

OK

227106

Time taken: 22.858 seconds, Fetched: 1 row(s)
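
For reference, the concurrency and lock settings mentioned above can be echoed from the Hive CLI or an .hql script; SET with just a property name prints the current value and changes nothing. These are standard Hive properties, and the snippet is only a checklist of what to inspect, not output from my session:

-- must be true for Hive to take table/partition locks at all
SET hive.support.concurrency;
-- the stock lock manager is ZooKeeper-based, so these say which ensemble and port it talks to
SET hive.zookeeper.quorum;
SET hive.zookeeper.client.port;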

 

The successful loads go through as follows:

 

Loading data to table asehadoop.rs_temp__0xe9bf1e0_t

Table asehadoop.rs_temp__0xe9bf1e0_t stats: [numFiles=1, numRows=0,
totalSize=22361277, rawDataSize=0]

OK

Loading data to table asehadoop.rs_temp__0x2aaab40aade0_t

Loading data to table asehadoop.rs_temp__0x2aaab015d670_t

Table asehadoop.rs_temp__0x2aaab40aade0_t stats: [numFiles=1, numRows=0,
totalSize=22362397, rawDataSize=0]

OK

Query ID = hduser_20150423005858_6f616eb1-f0b3-4462-957d-a39ccc6393d9

Total jobs = 3

Launching Job 1 out of 3

Number of reduce tasks is set to 0 since there's no reduce operator

Table asehadoop.rs_temp__0x2aaab015d670_t stats: [numFiles=1, numRows=0,
totalSize=22363290, rawDataSize=0]

 

However, I am getting the following Hive error:

 

Query ID = hduser_20150423005858_6f1472d0-184d-42e9-a709-43cf63e709f0

Total jobs = 3

Launching Job 1 out of 3

Number of reduce tasks is set to 0 since there's no reduce operator

Starting Job = job_1429714224771_0041, Tracking URL =
http://rhes564:8088/proxy/application_1429714224771_0041/

Kill Command = /home/hduser/hadoop/hadoop-2.6.0/bin/hadoop job  -kill
job_1429714224771_0041

java.io.IOException: org.apache.hadoop.hive.ql.lockmgr.LockException: No
record of lock could be found, may have timed out
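
In case it is useful, the locks Hive currently knows about can be listed with SHOW LOCKS, and the properties below control how lock acquisition is retried on the client side. I am not certain these are the knobs behind this particular message, so treat this as a checklist of things to inspect rather than a fix; the defaults noted in the comments are my understanding of stock Hive, not values confirmed on this cluster:

-- list current locks; the EXTENDED form adds the lock mode and the query holding the lock
SHOW LOCKS;
SHOW LOCKS t EXTENDED;
-- attempts to acquire a lock before the statement fails (default 100)
SET hive.lock.numretries;
-- seconds to sleep between acquisition attempts (default 60)
SET hive.lock.sleep.between.retries;
-- attempts made when releasing a lock (default 10)
SET hive.unlock.numretries;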

 

 

Has anyone seen this error, please?

 

Thanks

 

 

Mich Talebzadeh

 

http://talebzadehmich.wordpress.com

 

Author of the books "A Practitioner's Guide to Upgrading to Sybase ASE 15",
ISBN 978-0-9563693-0-7. 

co-author "Sybase Transact SQL Guidelines Best Practices", ISBN
978-0-9759693-0-4

Publications due shortly:

Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and
Coherence Cache

Oracle and Sybase, Concepts and Contrasts, ISBN: 978-0-9563693-1-4, volume
one out shortly

 

NOTE: The information in this email is proprietary and confidential. This
message is for the designated recipient only, if you are not the intended
recipient, you should destroy it immediately. Any information in this
message shall not be understood as given or endorsed by Peridale Ltd, its
subsidiaries or their employees, unless expressly so stated. It is the
responsibility of the recipient to ensure that this email is virus free,
therefore neither Peridale Ltd, its subsidiaries nor their employees accept
any responsibility.

 


Re: org.apache.hadoop.hive.ql.lockmgr.LockException: No record of lock could be found, may have timed out

Posted by Mich Talebzadeh <mi...@peridale.co.uk>.
Hi,

I am using ZooKeeper.
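
On the Hive side, the ZooKeeper-related settings I would double-check are below; again, SET with just a property name only echoes the current value, and the comments describe stock behaviour as I understand it rather than anything confirmed here:

-- ZooKeeper client session timeout used by the lock manager, in milliseconds
SET hive.zookeeper.session.timeout;
-- parent znode under which Hive creates its lock nodes
SET hive.zookeeper.namespace;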

Thanks
Let your email find you with BlackBerry from Vodafone

-----Original Message-----
From: Alan Gates <al...@gmail.com>
Date: Thu, 23 Apr 2015 08:48:01 
To: <us...@hive.apache.org>
Reply-To: user@hive.apache.org
Subject: Re: org.apache.hadoop.hive.ql.lockmgr.LockException: No record of
 lock could be found, may have timed out

What lock or transaction manager are you using?

Alan.



Re: org.apache.hadoop.hive.ql.lockmgr.LockException: No record of lock could be found, may have timed out

Posted by Alan Gates <al...@gmail.com>.
What lock or transaction manager are you using?

Alan.
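
If it is not obvious from hive-site.xml, SET with just the property name will echo the current values from the CLI without changing anything:

-- DummyTxnManager (the non-ACID default) delegates locking to the class named by hive.lock.manager;
-- DbTxnManager does its own metastore-backed locking and ignores hive.lock.manager
SET hive.txn.manager;
SET hive.lock.manager;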
