Posted to common-user@hadoop.apache.org by Leo Alekseyev <dn...@gmail.com> on 2010/08/19 00:04:43 UTC

HDFS got messed up after changing dfs.name.dir in configs -- how to fix?

We are running Hadoop from Cloudera (CDH3b2), and we recently
streamlined some of our configuration management. One of the changes
relocated dfs.name.dir to a new location.
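
For reference, the change was essentially the following; the paths here are
placeholders, not our real ones:

    # New value of dfs.name.dir in conf/hdfs-site.xml (the old value was along
    # the lines of /data/0/dfs/nn; both paths are placeholders).
    $ grep -A 1 '<name>dfs.name.dir</name>' conf/hdfs-site.xml
      <name>dfs.name.dir</name>
      <value>/data/1/dfs/nn</value>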

Upon cluster restart, we see the following:

1) The namenode remains stuck in safe mode, reporting:
The reported blocks 0 needs additional 656829 blocks to reach the
threshold 0.9990 of total blocks 657487. Safe mode will be turned off
automatically.

2) All block data on the datanodes has been moved into
${dfs.data.dir}/toBeDeleted (see the checks we ran, below).
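
What we have checked so far (the data directory path below is a placeholder
for one of our ${dfs.data.dir} entries):

    $ hadoop dfsadmin -safemode get
    Safe mode is ON
    $ hadoop dfsadmin -report                                 # datanodes check in, but report ~0 blocks
    $ find /data/1/dfs/dn/toBeDeleted -name 'blk_*' | wc -l   # the block files themselves look intact here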

We have complete archives of the old fsimage and edits files.
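
The archived copy of the old name directory looks like an ordinary checkpoint
to us, roughly as follows (the path is a placeholder):

    $ ls /backup/dfs/name/current
    VERSION  edits  fsimage  fstime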

What is the best way to put the block data back in the correct place on
the datanodes and have them report the correct number of blocks to the
namenode?

In addition, what was our mistake here? Should we have copied the old
${dfs.name.dir} contents to the new location before pointing the config
at it?
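
Our current guess at the procedure we should have followed is sketched below
(the paths and the daemon-control commands are placeholders for our setup).
Is this right, or is there more to it?

    # Sketch of what we now think the relocation should have looked like;
    # /data/0/dfs/nn = old dfs.name.dir, /data/1/dfs/nn = new one (placeholders).
    $ bin/hadoop-daemon.sh stop namenode        # stop the namenode first
    $ cp -a /data/0/dfs/nn/. /data/1/dfs/nn/    # copy the existing image/edits to the new location
    # ...then point dfs.name.dir at /data/1/dfs/nn in hdfs-site.xml and restart:
    $ bin/hadoop-daemon.sh start namenode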