Posted to hdfs-user@hadoop.apache.org by Sergey Bartunov <sb...@gmail.com> on 2011/07/02 12:42:17 UTC

Problems with namenode on pseudo-distributed configuration

Hello. I'm running hadoop 0.20.203 in pseudo-distributed mode with the
standard configuration from
http://hadoop.apache.org/common/docs/stable/single_node_setup.html
Everything worked fine, but today I booted my Ubuntu 10.10 machine and
suddenly got "Could only be replicated to 0 nodes instead of 1". I
found several workarounds on the internet and just removed all hadoop
directories, re-formatted the namenode, and restarted hadoop.
After that I could upload several files to HDFS, but all my code stopped working.
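
In case the exact steps matter, this is roughly what I did (a sketch
of the commands; /tmp/hadoop-sbos is just my guess at the default
hadoop.tmp.dir location for my user, adjust if yours is overridden):

    bin/stop-all.sh
    rm -rf /tmp/hadoop-sbos    # default hadoop.tmp.dir is /tmp/hadoop-${user.name}
    bin/hadoop namenode -format
    bin/start-all.sh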

I have always opened files on HDFS with code like this:

          FileSystem inputFS = FileSystem.get(URI.create(input), configuration);
          Path inputPath = new Path(input);
          inputFS.open(inputPath); // ...and then do some things with inputFS

Relative paths to files on HDFS, e.g. "some/data", used to be
resolved as HDFS paths (to "/user/sbos/some/data"), but now inputFS
turns out to be a LocalFileSystem.
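
For what it's worth, here is a minimal check (just a sketch) that
prints which filesystem the default configuration resolves to. With a
working single-node setup it should print hdfs://localhost:9000 and
DistributedFileSystem; on my machine it now reports the local
filesystem defaults:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class FsCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // fs.default.name comes from core-site.xml; if that file is
            // not found on the classpath, Hadoop falls back to file:///
            System.out.println(conf.get("fs.default.name"));
            FileSystem fs = FileSystem.get(conf);
            // DistributedFileSystem for HDFS, LocalFileSystem for file:///
            System.out.println(fs.getClass().getName());
        }
    }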

Could someone advise me how to force hadoop to treat my paths as HDFS
paths, or just help me understand what happened?
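
P.S. The only workaround I can think of is to hard-code fully
qualified URIs or the default filesystem, something like the sketch
below (hdfs://localhost:9000 is the namenode address from the
single-node guide, so adjust it if yours differs), but I'd prefer to
understand why relative paths stopped resolving to HDFS:

    // Sketch: force HDFS explicitly instead of relying on the default
    // filesystem resolution from core-site.xml.
    configuration.set("fs.default.name", "hdfs://localhost:9000");
    FileSystem inputFS = FileSystem.get(
        URI.create("hdfs://localhost:9000/user/sbos/some/data"), configuration);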