Posted to user@nutch.apache.org by djames <dj...@supinfo.com> on 2007/01/28 12:04:33 UTC
Lease expired exception
Hello,
While parsing a 600,000-page fetch on a 5-box cluster, the job failed on 2 of the boxes with this error message:
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.dfs.LeaseExpiredException: No lease on /user/nutch/crawl/segments/20070127060350/crawl_parse/part-00001
    at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:448)
    at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:184)
    at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:243)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:469)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:159)
Has anyone run into this problem before, and can you suggest a solution?
--
View this message in context: http://www.nabble.com/Lease-expired-exception-tf3130730.html#a8674514
Sent from the Nutch - User mailing list archive at Nabble.com.
Re: Lease expired exception
Posted by Dennis Kubes <nu...@dragonflymc.com>.
We had a similar problem caused by the dates being off between machines, though I don't think that was the cause of this particular error. Run this Google search to find the threads discussing the lease-expired issue:
http://www.google.com/search?hl=en&q=%22Race+Condition+with+Renewing+Leases+and+RPC+Calls%22
That said, it is important to keep the time synchronized between the machines; other errors (mostly stalls) will occur if they are not synchronized.
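[Editor's note: the clock-sync point above can be checked quickly. The sketch below is an editorial addition, not from the original thread; it computes the skew between two epoch-second timestamps. The values are illustrative -- on a real cluster you would capture the remote value with something like `ssh <slave> date +%s`, where the slave hostname is yours to fill in.]

```shell
# Hedged sketch: compute clock skew between two boxes from epoch-second
# timestamps. The values below are invented for illustration; in practice,
# read them with `date +%s` locally and `ssh <slave> date +%s` remotely.
t_master=1169899200   # epoch seconds read on the master
t_slave=1169899440    # epoch seconds read on a slave (here: 4 minutes ahead)
skew=$((t_slave - t_master))
[ "$skew" -lt 0 ] && skew=$((0 - skew))   # absolute value
echo "skew=${skew}s"
# A skew of minutes, as here, is plenty to disturb lease-renewal timing;
# running ntpd (or at least a periodic ntpdate) on every node avoids it.
```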
Dennis Kubes
djames wrote:
> Thanks a lot for your response.
>
> I'm using Nutch 0.8.1.
> I will rebuild Hadoop with the patch...
> but I noticed something: I'm running the tasktrackers on different VMware
> instances, and their dates are not exactly the same, differing by three to
> five minutes. Could that be the cause of the bug?
Re: Lease expired exception
Posted by djames <dj...@supinfo.com>.
Thanks a lot for your response.
I'm using Nutch 0.8.1.
I will rebuild Hadoop with the patch...
but I noticed something: I'm running the tasktrackers on different VMware instances, and their dates are not exactly the same, differing by three to five minutes. Could that be the cause of the bug?
Re: Lease expired exception
Posted by Dennis Kubes <nu...@dragonflymc.com>.
There was some work done on this problem in Hadoop a while back, so my guess is that you are using a 0.8.x version of Nutch. Take a look at
HADOOP-563 in the Jira.
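[Editor's note: for readers who have not applied a Jira patch before, the generic workflow is sketched below as an editorial addition. It demonstrates GNU `patch` on a throwaway file; the file name and diff are invented stand-ins, not the actual HADOOP-563 patch, which you would download from the Jira issue, apply at the root of your Hadoop source tree, rebuild (Hadoop of that era built with ant), and then drop the new jar into nutch/lib.]

```shell
# Hedged sketch of the patch workflow. Everything here is a stand-in: the
# real patch file comes from the HADOOP-563 Jira issue and is applied
# against the Hadoop source tree, not this demo file.
workdir=$(mktemp -d)
cd "$workdir"
printf 'renew lease\n' > Demo.txt        # stands in for a source file
cat > fix.patch <<'EOF'
--- Demo.txt
+++ Demo.txt
@@ -1 +1 @@
-renew lease
+renew lease under lock
EOF
patch -p0 < fix.patch                    # same invocation as for a real patch
cat Demo.txt
# For the real fix (paths are assumptions):
#   cd <hadoop-src> && patch -p0 < HADOOP-563.patch && ant jar
# then copy the rebuilt hadoop jar into nutch/lib and redeploy.
```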
Dennis Kubes
djames wrote:
> Hello,
>
> While parsing a 600,000-page fetch on a 5-box cluster, the job failed on 2
> of the boxes with this error message:
>
> org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.dfs.LeaseExpiredException: No lease on /user/nutch/crawl/segments/20070127060350/crawl_parse/part-00001
>     at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:448)
>     at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:184)
>     at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>     at java.lang.reflect.Method.invoke(Unknown Source)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:243)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:469)
>     at org.apache.hadoop.ipc.Client$Connection.run(Client.java:159)
>
> Has anyone run into this problem before, and can you suggest a solution?