Posted to hdfs-user@hadoop.apache.org by yogesh dhari <yo...@live.com> on 2012/11/23 02:55:29 UTC

HADOOP UPGRADE ERROR

Hi All,

I am trying to upgrade hadoop-0.20.2 to hadoop-1.0.4.
I used the command

hadoop namenode -upgrade

After that, when I start the cluster with

start-all.sh

the TaskTracker (TT) and DataNode (DN) don't start.

1) Log file of the TT:

2012-11-23 07:15:54,399 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:yogesh cause:java.io.IOException: Call to localhost/127.0.0.1:9001 failed on local exception: java.io.IOException: Connection reset by peer
2012-11-23 07:15:54,400 ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because java.io.IOException: Call to localhost/127.0.0.1:9001 failed on local exception: java.io.IOException: Connection reset by peer
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
    at org.apache.hadoop.ipc.Client.call(Client.java:1075)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)



2) Log file of the DN:


2012-11-23 07:07:57,095 INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot access storage directory /opt/hadoop_newdata_dirr
2012-11-23 07:07:57,096 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /opt/hadoop_newdata_dirr does not exist.
2012-11-23 07:07:57,199 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: All specified directories are not accessible or do not exist.
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:139)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)



Although /opt/hadoop_new_dirr exists with file permissions 755.


Please suggest.

Thanks & Regards
Yogesh Kumar




Re: HADOOP UPGRADE ERROR

Posted by Marcos Ortiz <ml...@uci.cu>.
On 11/22/2012 08:55 PM, yogesh dhari wrote:
> Hi All,
>
> I am trying to upgrade hadoop-0.20.2 to hadoop-1.0.4.
> I used the command
>
> *hadoop namenode -upgrade*
>
> After that, when I start the cluster with
>
> *start-all.sh*
>
> the TT and DN don't start.
Which steps did you follow to perform the upgrade process?
In Tom White's "Hadoop: The Definitive Guide", Chapter 10 has a great
section dedicated to upgrades, where he describes the basic procedure
(sketched as shell commands after the list):

1. Make sure that any previous upgrade is finalized before proceeding
with another upgrade.

2. Shut down MapReduce, and kill any orphaned task processes on the
tasktrackers.

3. Shut down HDFS, and back up the namenode directories.

4. Install new versions of Hadoop HDFS and MapReduce on the cluster and
on clients.

5. Start HDFS with the -upgrade option:

    $NEW_HADOOP_INSTALL/bin/start-dfs.sh -upgrade

6. Wait until the upgrade is complete:

    $NEW_HADOOP_INSTALL/bin/hadoop dfsadmin -upgradeProgress status

7. Perform some sanity checks on HDFS.

8. Start MapReduce.

9. Roll back or finalize the upgrade (optional):

    $NEW_HADOOP_INSTALL/bin/hadoop dfsadmin -finalizeUpgrade
    $NEW_HADOOP_INSTALL/bin/hadoop dfsadmin -upgradeProgress status
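Putting those steps together as one shell session (a sketch only, not the
book's verbatim commands; $OLD_HADOOP_INSTALL and $NEW_HADOOP_INSTALL are
placeholders for your old and new install directories, and the backup
paths are made up for illustration):

    # 1. Confirm no earlier upgrade is still pending
    $OLD_HADOOP_INSTALL/bin/hadoop dfsadmin -upgradeProgress status

    # 2-3. Shut down MapReduce, then HDFS
    $OLD_HADOOP_INSTALL/bin/stop-mapred.sh
    $OLD_HADOOP_INSTALL/bin/stop-dfs.sh

    # 3. Back up the namenode metadata (whatever dfs.name.dir points at;
    #    /opt/hadoop_name_dir is just an example path)
    tar czf /opt/namenode_backup.tar.gz -C /opt hadoop_name_dir

    # 5. Start HDFS from the NEW install with the -upgrade option
    $NEW_HADOOP_INSTALL/bin/start-dfs.sh -upgrade

    # 6. Poll this until it reports the upgrade is complete
    $NEW_HADOOP_INSTALL/bin/hadoop dfsadmin -upgradeProgress status

    # 7. Sanity checks on HDFS
    $NEW_HADOOP_INSTALL/bin/hadoop fsck /
    $NEW_HADOOP_INSTALL/bin/hadoop fs -ls /

    # 8. Start MapReduce
    $NEW_HADOOP_INSTALL/bin/start-mapred.sh

    # 9. Only when you are satisfied -- rollback is impossible afterwards
    $NEW_HADOOP_INSTALL/bin/hadoop dfsadmin -finalizeUpgrade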


>
> 1) Log file of the TT:
>
> 2012-11-23 07:15:54,399 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:yogesh cause:java.io.IOException: Call to localhost/127.0.0.1:9001 failed on local exception: java.io.IOException: Connection reset by peer
> 2012-11-23 07:15:54,400 ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because java.io.IOException: Call to localhost/127.0.0.1:9001 failed on local exception: java.io.IOException: Connection reset by peer
>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>
Which mode do you have enabled in your Hadoop cluster (standalone,
pseudo-distributed, or fully distributed)?
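That TT error only says it cannot reach the JobTracker on
localhost:9001, so the first thing to check is whether the JobTracker
actually came up. A quick diagnostic sketch (assuming Linux and a
pseudo-distributed setup; $NEW_HADOOP_INSTALL is a placeholder, and the
grep assumes the <value> element sits on the line after <name>):

    # Is the JobTracker JVM running at all?
    jps | grep JobTracker

    # Is anything listening on the JobTracker RPC port?
    netstat -tln | grep 9001

    # What does the TT's configuration say it should connect to?
    grep -A1 mapred.job.tracker $NEW_HADOOP_INSTALL/conf/mapred-site.xml

    # The JobTracker log usually says why it exited (for example,
    # HDFS still mid-upgrade or stuck in safe mode)
    tail -50 $NEW_HADOOP_INSTALL/logs/hadoop-*-jobtracker-*.log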
>
>
> 2) Log file of the DN:
>
> 2012-11-23 07:07:57,095 INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot access storage directory /opt/hadoop_newdata_dirr
> 2012-11-23 07:07:57,096 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /opt/hadoop_newdata_dirr does not exist.
> 2012-11-23 07:07:57,199 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: All specified directories are not accessible or do not exist.
>     at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:139)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
>
>
>
> Although /opt/hadoop_new_dirr exists with file permissions 755.
Does the user yogesh have all privileges on that directory?
Also note that /opt/hadoop_new_dirr is not the same as
/opt/hadoop_newdata_dirr, which is the directory the DataNode log
complains about.
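A sketch of how to check this (dfs.data.dir is the relevant property in
hdfs-site.xml; the grep assumes <value> sits on the line after <name>,
$NEW_HADOOP_INSTALL is a placeholder, and yogesh is assumed to be the
user the DataNode runs as):

    # Which directory is the DataNode configured to use?
    grep -A1 dfs.data.dir $NEW_HADOOP_INSTALL/conf/hdfs-site.xml

    # Do both spellings of the path exist, and who owns them?
    ls -ld /opt/hadoop_newdata_dirr /opt/hadoop_new_dirr

    # If the configured path is missing, create it and hand it to the
    # user that runs the DataNode (assumed here to be yogesh)
    sudo mkdir -p /opt/hadoop_newdata_dirr
    sudo chown yogesh:yogesh /opt/hadoop_newdata_dirr
    sudo chmod 755 /opt/hadoop_newdata_dirr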


>
> Please suggest.
>
> Thanks & Regards
> Yogesh Kumar

-- 

Marcos Luis Ortíz Valmaseda
about.me/marcosortiz <http://about.me/marcosortiz>
@marcosluis2186 <http://twitter.com/marcosluis2186>



