Posted to mapreduce-user@hadoop.apache.org by Max Schmidt <ma...@datapath.io> on 2016/03/31 12:09:28 UTC
How to move a namenode to a new host properly
Hi there,
what are the correct steps to move a primary Hadoop DFS namenode from
one host to another?
I am using Hadoop 2.7.1 on Ubuntu 14.04.3 LTS (without YARN).
Steps done:
* Copied the whole hadoop directory to the new host
* Set the new master in $hadoop_home/etc/hadoop/master
* Updated the fs.default.name property in $hadoop_home/etc/hadoop/core-site.xml
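For reference, in Hadoop 2.x the fs.default.name key is deprecated in favor of fs.defaultFS (the old name still works as an alias). A core-site.xml entry pointing at the new namenode would look roughly like this; the hostname and port below are placeholders, not values from this cluster:

```xml
<configuration>
  <property>
    <!-- fs.defaultFS is the current key; fs.default.name is its
         deprecated alias in Hadoop 2.x. -->
    <name>fs.defaultFS</name>
    <!-- Placeholder host/port: use the new namenode's hostname and
         whatever RPC port the cluster already uses. -->
    <value>hdfs://new-namenode-host:8020</value>
  </property>
</configuration>
```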
* Formatted the new namenode with the ClusterID of the old namenode:
$hadoop_home/bin/hadoop namenode -format -clusterId $CLUSTER_ID
(I removed the slaves from the config just to be sure that none of the
slaves are affected; maybe that is a problem?)
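One way to get a clusterID that actually matches the datanodes is to read it out of an existing datanode's current/VERSION file instead of typing it by hand. This is a sketch only: the VERSION file below is a mock-up (its clusterID is taken from the log further down, the other fields are illustrative), and /storage/data is assumed from the log message to be the dfs.datanode.data.dir.

```shell
# Mock-up of a datanode VERSION file; on a real datanode it lives under
# the data directory, e.g. /storage/data/current/VERSION.
cat > VERSION <<'EOF'
storageID=DS-example
clusterID=CID-af87cb62-d806-41d6-9638-e9e559dd3ed7
cTime=0
storageType=DATA_NODE
layoutVersion=-56
EOF

# Extract the clusterID field from the file.
CLUSTER_ID=$(grep '^clusterID=' VERSION | cut -d= -f2)
echo "$CLUSTER_ID"

# The new namenode could then be formatted with the matching ID:
#   $hadoop_home/bin/hdfs namenode -format -clusterId "$CLUSTER_ID"
```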
The problem is that the datanodes still don't come up, because of a
clusterID mismatch:
2016-03-30 16:20:28,718 WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /storage/data: namenode clusterID = CID-c19c691d-10da-4449-a7b6-c953465ce237; datanode clusterID = CID-af87cb62-d806-41d6-9638-e9e559dd3ed7
2016-03-30 16:20:28,718 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to XXXXXXXXXXXXXX. Exiting. java.io.IOException: All specified directories are failed to load.
Any suggestions? Do I have to add the BlockPool-ID as well?
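For context on the block-pool question: the block-pool ID is recorded alongside the clusterID in the namenode's own current/VERSION file, so it travels with the namenode metadata rather than being passed on the command line. A namenode-side VERSION file looks roughly like this (all values below are illustrative, not taken from this cluster, except the clusterID reused from the log):

```
namespaceID=123456789
clusterID=CID-af87cb62-d806-41d6-9638-e9e559dd3ed7
cTime=0
storageType=NAME_NODE
blockpoolID=BP-1234567890-127.0.0.1-1400000000000
layoutVersion=-63
```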
---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org