Posted to user@pig.apache.org by Gert Pfeifer <pf...@se.inf.tu-dresden.de> on 2008/07/02 10:49:52 UTC

upgrade problem (16.4 -> 17.1)

I did the upgrade now, but there might be a problem... or it just takes
a very long time.

The name node is running in safe mode, but there are no data nodes.

I tried this:
$ ./hadoop dfsadmin -upgradeProgress status

Upgrade for version -13 has been completed.
Upgrade is not finalized.

The datanodes say:
----------------------
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = dns-file-1/10.3.61.211
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.17.1
STARTUP_MSG:   build =
http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.17 -r
669344; compiled by 'hadoopqa' on Thu Jun 19 01:18:25 UTC 2008
************************************************************/
2008-07-02 10:34:56,154 INFO org.apache.hadoop.dfs.Storage: Recovering
storage directory /var/data/hadoop/dfs/data from previous upgrade.
2008-07-02 10:35:07,105 INFO org.apache.hadoop.dfs.Storage: Upgrading
storage directory /var/tmp/hadoop/dfs/data.
   old LV = -11; old CTime = 0.
   new LV = -13; new CTime = 1214984938285
----------------------

The namenode loops in the log with:
----------------------
The ratio of reported blocks 0.0000 has not reached the threshold
0.9990. Safe mode will be turned off automatically.
org.apache.hadoop.dfs.SafeModeException: Cannot renew lease for
DFSClient_-18804611. Name node is in safe mode.
----------------------
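For context on that namenode message: the namenode stays in safe mode until the fraction of reported blocks reaches the threshold (0.9990 here, which I believe is the default of dfs.safemode.threshold.pct). With no datanodes registered the ratio stays at 0.0, so it will never leave safe mode on its own. A minimal sketch of that exit check, using the numbers from the log:

```shell
# Sketch of the namenode's safe-mode exit test. The threshold comes
# from the log message above; reported=0 because no datanode has
# registered, and total=1000 is a hypothetical block count.
reported=0
total=1000
awk -v r="$reported" -v t="$total" 'BEGIN {
  printf "%s\n", ((r / t >= 0.9990) ? "leave safe mode" : "stay in safe mode")
}'
```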

Why is there no datanode connecting? Did I forget something? I did not
try to finalize the upgrade yet. I guess that wouldn't be a good idea.
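In case it helps anyone checking the same situation: while a datanode is still converting its block storage it logs the "Upgrading storage directory" message and does not register with the namenode. A quick way to see whether a node is still in that phase is to grep its log for the Storage messages. The snippet below writes a stand-in log excerpt so it is runnable as-is; the real log path is an assumption and will differ per install:

```shell
# Stand-in log excerpt; on a real node the file would be something
# like $HADOOP_HOME/logs/hadoop-*-datanode-*.log (path is a guess).
cat > /tmp/datanode.log <<'EOF'
2008-07-02 10:34:56,154 INFO org.apache.hadoop.dfs.Storage: Recovering storage directory /var/data/hadoop/dfs/data from previous upgrade.
2008-07-02 10:35:07,105 INFO org.apache.hadoop.dfs.Storage: Upgrading storage directory /var/tmp/hadoop/dfs/data.
EOF
# An "Upgrading storage directory" line with no later completion
# message means the node is still converting and will not connect yet.
grep -c 'Storage: Upgrading' /tmp/datanode.log
```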

My problem might be similar to this:
http://mail-archives.apache.org/mod_mbox/hadoop-core-user/200709.mbox/<fb...@mail.gmail.com>
 - but it doesn't give me a solution

Thanks for any hints,
Gert

Gert Pfeifer wrote:
> I'm using 16.4.
> 
> Gert
> 
> Olga Natkovich wrote:
>> The latest code works with Hadoop 17. What version do you have?
>>
>> Olga 
>>
>>> -----Original Message-----
>>> From: Gert Pfeifer [mailto:pfeifer@se.inf.tu-dresden.de] 
>>> Sent: Tuesday, July 01, 2008 11:01 AM
>>> To: pig-user@incubator.apache.org
>>> Subject: Re: connecting to cluster with bin/pig
>>>
>>> Hi,
>>> exectype is not touched, so it defaults to mapreduce (correct 
>>> me if I am
>>> wrong)
>>>
>>> My mistake was to put the hadoop-site.xml into the path. 
>>> Thank you very much!!!
>>>
>>> Now I found this:
>>> Protocol org.apache.hadoop.dfs.ClientProtocol version 
>>> mismatch. (client = 29, server = 23)
>>>
>>> Is there a compatibility mode, or do I have to update the server?
>>>
>>> Gert
>>>
>>> Alan Gates schrieb:
>>>> Are you setting exectype to 'mapreduce'?  It looks like you have it 
>>>> set to 'local'.  Also, make sure you include the directory 
>>> that your 
>>>> hadoop-site.xml is in in the class path, not the file itself.
>>>>
>>>> If neither of those resolve your issue, please post your whole 
>>>> pig.properties file.
>>>>
>>>> Alan.
>>>>
>>>> Gert Pfeifer wrote:
>>>>> Hi,
>>>>> I am trying to start pig for the first time, so here is a 
>>> beginner's 
>>>>> question.
>>>>>
>>>>> How do I tell the bin/pig shell script where the cluster 
>>> can be found?
>>>>> I used the conf/pig.properties as follows:
>>>>>
>>>>> # clustername, name of the hadoop jobtracker. If no port 
>>> is defined 
>>>>> port 50020 will be used.
>>>>> cluster=<ip address of the job tracker>
>>>>>
>>>>> Then I get a message:
>>>>> [main] INFO
>>>>> org.apache.pig.backend.hadoop.executionengine.HExecutionEngine  - 
>>>>> Connecting to hadoop file system at: file:///
>>>>>
>>>>> Then I get the grunt shell on the local file system, which is not 
>>>>> quite what I wanted.
>>>>>
>>>>> I also tried this:
>>>>> java -cp pig.jar:../../path/to/hadoop-site.xml  org.apache.pig.Main
>>>>>
>>>>> But I saw the same result. So how do I connect to the name 
>>> node and 
>>>>> the job tracker? I guess I need both, don't I?
>>>>>
>>>>> Thanks for any hints,
>>>>> Gert
>>>>>   
> 

-- 
Tel.:	+49 351 463 38795         Fax:	+49 351 463 39710
PGP: 0xA964DE41 - 012A D7A3 AF82 25BA 6FA2 DB14 FE11 1E26 A964 DE41

Visiting Address:
	Technische Universitaet Dresden
	Noethnitzer Str. 46, Room 3045
	D-01187 Dresden
	Germany
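On the classpath point from the quoted thread (the directory containing hadoop-site.xml must be on the classpath, not the XML file itself): a sketch of the difference, with hypothetical paths; the final command is echoed as a dry run:

```shell
# Hypothetical conf directory for illustration only.
CONF_DIR=/etc/hadoop/conf        # contains hadoop-site.xml

# Wrong: the file itself on the classpath; Java ignores it and Pig
# falls back to the local filesystem (file:///):
#   java -cp pig.jar:$CONF_DIR/hadoop-site.xml org.apache.pig.Main

# Right: the directory, so hadoop-site.xml is found as a classpath
# resource (drop the echo to actually run it):
echo java -cp pig.jar:$CONF_DIR org.apache.pig.Main
```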

Re: upgrade problem (16.4 -> 17.1)

Posted by Gert Pfeifer <pf...@se.inf.tu-dresden.de>.
It is running now. The thing is that the data nodes seem to convert
their data into a new format/version during the upgrade, and this took a
very long time. It would have been nice to see some hint in the log of
the data nodes, like "please wait while upgrading".
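For anyone following up: once the upgraded cluster has run long enough that you trust it, the upgrade still has to be finalized so the pre-upgrade copies of the blocks are reclaimed. A dry-run sketch (the command is echoed rather than executed; note that finalizing is irreversible, so there is no rollback to 0.16 afterwards):

```shell
# Dry run of the finalize step; remove 'echo' to execute for real.
# -finalizeUpgrade discards the pre-upgrade storage state, so the
# cluster can no longer be rolled back to the old version.
echo ./hadoop dfsadmin -finalizeUpgrade
```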

Gert

Olga Natkovich schrieb:
> Try and restart your cluster. If this does not help, post a message to
> core-user@hadoop.apache.org. 
> 
> Olga 
> 
>> -----Original Message-----
>> From: Gert Pfeifer [mailto:pfeifer@se.inf.tu-dresden.de] 
>> Sent: Wednesday, July 02, 2008 1:50 AM
>> To: pig-user@incubator.apache.org
>> Subject: upgrade problem (16.4 -> 17.1)
>>
>> I did the upgrade now, but there might be a problem... or it
>> just takes a very long time.
>>
>> The name node is running in safe mode, but there are no data nodes.
>>
>> I tried this:
>> $ ./hadoop dfsadmin -upgradeProgress status
>>
>> Upgrade for version -13 has been completed.
>> Upgrade is not finalized.
>>
>> The datanodes say:
>> ----------------------
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = dns-file-1/10.3.61.211
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 0.17.1
>> STARTUP_MSG:   build =
>> http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.
>> 17 -r 669344; compiled by 'hadoopqa' on Thu Jun 19 01:18:25 
>> UTC 2008 ************************************************************/
>> 2008-07-02 10:34:56,154 INFO org.apache.hadoop.dfs.Storage: 
>> Recovering storage directory /var/data/hadoop/dfs/data from 
>> previous upgrade.
>> 2008-07-02 10:35:07,105 INFO org.apache.hadoop.dfs.Storage: 
>> Upgrading storage directory /var/tmp/hadoop/dfs/data.
>>    old LV = -11; old CTime = 0.
>>    new LV = -13; new CTime = 1214984938285
>> ----------------------
>>
>> The namenode loops in the log with:
>> ----------------------
>> The ratio of reported blocks 0.0000 has not reached the 
>> threshold 0.9990. Safe mode will be turned off automatically.
>> org.apache.hadoop.dfs.SafeModeException: Cannot renew lease 
>> for DFSClient_-18804611. Name node is in safe mode.
>> ----------------------
>>
>> Why is there no datanode connecting? Did I forget something? 
>> I did not try to finalize the upgrade yet. I guess that 
>> wouldn't be a good idea.
>>
>> My problem might be similar to this:
>> http://mail-archives.apache.org/mod_mbox/hadoop-core-user/2007
>> 09.mbox/<fb...@mail.gmail.com>
>>  - but it doesn't give me a solution
>>
>> Thanks for any hints,
>> Gert
>>
>> Gert Pfeifer wrote:
>>> I'm using 16.4.
>>>
>>> Gert
>>>
>>> Olga Natkovich wrote:
>>>> The latest code works with Hadoop 17. What version do you have?
>>>>
>>>> Olga
>>>>
>>>>> -----Original Message-----
>>>>> From: Gert Pfeifer [mailto:pfeifer@se.inf.tu-dresden.de]
>>>>> Sent: Tuesday, July 01, 2008 11:01 AM
>>>>> To: pig-user@incubator.apache.org
>>>>> Subject: Re: connecting to cluster with bin/pig
>>>>>
>>>>> Hi,
>>>>> exectype is not touched, so it defaults to mapreduce 
>> (correct me if 
>>>>> I am
>>>>> wrong)
>>>>>
>>>>> My mistake was to put the hadoop-site.xml into the path. 
>>>>> Thank you very much!!!
>>>>>
>>>>> Now I found this:
>>>>> Protocol org.apache.hadoop.dfs.ClientProtocol version mismatch. 
>>>>> (client = 29, server = 23)
>>>>>
>>>>> Is there a compatibility mode, or do I have to update the server?
>>>>>
>>>>> Gert
>>>>>
>>>>> Alan Gates schrieb:
>>>>>> Are you setting exectype to 'mapreduce'?  It looks like 
>> you have it 
>>>>>> set to 'local'.  Also, make sure you include the directory
>>>>> that your
>>>>>> hadoop-site.xml is in in the class path, not the file itself.
>>>>>>
>>>>>> If neither of those resolve your issue, please post your whole 
>>>>>> pig.properties file.
>>>>>>
>>>>>> Alan.
>>>>>>
>>>>>> Gert Pfeifer wrote:
>>>>>>> Hi,
>>>>>>> I am trying to start pig for the first time, so here is a
>>>>> beginner's
>>>>>>> question.
>>>>>>>
>>>>>>> How do I tell the bin/pig shell script where the cluster
>>>>> can be found?
>>>>>>> I used the conf/pig.properties as follows:
>>>>>>>
>>>>>>> # clustername, name of the hadoop jobtracker. If no port
>>>>> is defined
>>>>>>> port 50020 will be used.
>>>>>>> cluster=<ip address of the job tracker>
>>>>>>>
>>>>>>> Then I get a message:
>>>>>>> [main] INFO
>>>>>>>
>> org.apache.pig.backend.hadoop.executionengine.HExecutionEngine  - 
>>>>>>> Connecting to hadoop file system at: file:///
>>>>>>>
>>>>>>> Then I get the grunt shell on the local file system, 
>> which is not 
>>>>>>> quite what I wanted.
>>>>>>>
>>>>>>> I also tried this:
>>>>>>> java -cp pig.jar:../../path/to/hadoop-site.xml  
>>>>>>> org.apache.pig.Main
>>>>>>>
>>>>>>> But I saw the same result. So how do I connect to the name
>>>>> node and
>>>>>>> the job tracker? I guess I need both, don't I?
>>>>>>>
>>>>>>> Thanks for any hints,
>>>>>>> Gert
>>>>>>>   
>>


RE: upgrade problem (16.4 -> 17.1)

Posted by Olga Natkovich <ol...@yahoo-inc.com>.
Try and restart your cluster. If this does not help, post a message to
core-user@hadoop.apache.org. 

Olga 

> -----Original Message-----
> From: Gert Pfeifer [mailto:pfeifer@se.inf.tu-dresden.de] 
> Sent: Wednesday, July 02, 2008 1:50 AM
> To: pig-user@incubator.apache.org
> Subject: upgrade problem (16.4 -> 17.1)
> 
> I did the upgrade now, but there might be a problem... or it
> just takes a very long time.
> 
> The name node is running in safe mode, but there are no data nodes.
> 
> I tried this:
> $ ./hadoop dfsadmin -upgradeProgress status
> 
> Upgrade for version -13 has been completed.
> Upgrade is not finalized.
> 
> The datanodes say:
> ----------------------
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = dns-file-1/10.3.61.211
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.17.1
> STARTUP_MSG:   build =
> http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.
> 17 -r 669344; compiled by 'hadoopqa' on Thu Jun 19 01:18:25 
> UTC 2008 ************************************************************/
> 2008-07-02 10:34:56,154 INFO org.apache.hadoop.dfs.Storage: 
> Recovering storage directory /var/data/hadoop/dfs/data from 
> previous upgrade.
> 2008-07-02 10:35:07,105 INFO org.apache.hadoop.dfs.Storage: 
> Upgrading storage directory /var/tmp/hadoop/dfs/data.
>    old LV = -11; old CTime = 0.
>    new LV = -13; new CTime = 1214984938285
> ----------------------
> 
> The namenode loops in the log with:
> ----------------------
> The ratio of reported blocks 0.0000 has not reached the 
> threshold 0.9990. Safe mode will be turned off automatically.
> org.apache.hadoop.dfs.SafeModeException: Cannot renew lease 
> for DFSClient_-18804611. Name node is in safe mode.
> ----------------------
> 
> Why is there no datanode connecting? Did I forget something? 
> I did not try to finalize the upgrade yet. I guess that 
> wouldn't be a good idea.
> 
> My problem might be similar to this:
> http://mail-archives.apache.org/mod_mbox/hadoop-core-user/2007
> 09.mbox/<fb...@mail.gmail.com>
>  - but it doesn't give me a solution
> 
> Thanks for any hints,
> Gert
> 
> Gert Pfeifer wrote:
> > I'm using 16.4.
> > 
> > Gert
> > 
> > Olga Natkovich wrote:
> >> The latest code works with Hadoop 17. What version do you have?
> >>
> >> Olga
> >>
> >>> -----Original Message-----
> >>> From: Gert Pfeifer [mailto:pfeifer@se.inf.tu-dresden.de]
> >>> Sent: Tuesday, July 01, 2008 11:01 AM
> >>> To: pig-user@incubator.apache.org
> >>> Subject: Re: connecting to cluster with bin/pig
> >>>
> >>> Hi,
> >>> exectype is not touched, so it defaults to mapreduce 
> (correct me if 
> >>> I am
> >>> wrong)
> >>>
> >>> My mistake was to put the hadoop-site.xml into the path. 
> >>> Thank you very much!!!
> >>>
> >>> Now I found this:
> >>> Protocol org.apache.hadoop.dfs.ClientProtocol version mismatch. 
> >>> (client = 29, server = 23)
> >>>
> >>> Is there a compatibility mode, or do I have to update the server?
> >>>
> >>> Gert
> >>>
> >>> Alan Gates schrieb:
> >>>> Are you setting exectype to 'mapreduce'?  It looks like 
> you have it 
> >>>> set to 'local'.  Also, make sure you include the directory
> >>> that your
> >>>> hadoop-site.xml is in in the class path, not the file itself.
> >>>>
> >>>> If neither of those resolve your issue, please post your whole 
> >>>> pig.properties file.
> >>>>
> >>>> Alan.
> >>>>
> >>>> Gert Pfeifer wrote:
> >>>>> Hi,
> >>>>> I am trying to start pig for the first time, so here is a
> >>> beginner's
> >>>>> question.
> >>>>>
> >>>>> How do I tell the bin/pig shell script where the cluster
> >>> can be found?
> >>>>> I used the conf/pig.properties as follows:
> >>>>>
> >>>>> # clustername, name of the hadoop jobtracker. If no port
> >>> is defined
> >>>>> port 50020 will be used.
> >>>>> cluster=<ip address of the job tracker>
> >>>>>
> >>>>> Then I get a message:
> >>>>> [main] INFO
> >>>>> 
> org.apache.pig.backend.hadoop.executionengine.HExecutionEngine  - 
> >>>>> Connecting to hadoop file system at: file:///
> >>>>>
> >>>>> Then I get the grunt shell on the local file system, 
> which is not 
> >>>>> quite what I wanted.
> >>>>>
> >>>>> I also tried this:
> >>>>> java -cp pig.jar:../../path/to/hadoop-site.xml  
> >>>>> org.apache.pig.Main
> >>>>>
> >>>>> But I saw the same result. So how do I connect to the name
> >>> node and
> >>>>> the job tracker? I guess I need both, don't I?
> >>>>>
> >>>>> Thanks for any hints,
> >>>>> Gert
> >>>>>   
> > 
> 
> -- 
> Tel.:	+49 351 463 38795         Fax:	+49 351 463 39710
> PGP: 0xA964DE41 - 012A D7A3 AF82 25BA 6FA2 DB14 FE11 1E26 A964 DE41
> 
> Visiting Address:
> 	Technische Universitaet Dresden
> 	Noethnitzer Str. 46, Room 3045
> 	D-01187 Dresden
> 	Germany
>