Posted to common-user@hadoop.apache.org by Eddie C <ed...@gmail.com> on 2008/02/26 05:59:13 UTC

hadoop access from outside the file cluster

I have successfully set up a two-node Hadoop cluster on two VMware
machines and have run the wordcount example. Beyond map-reduce
programs, I am interested in using Hadoop as a general-purpose
filesystem. I loaded all the jar files onto my Windows machine,
created a NetBeans project, and tried to run the following code:

Configuration conf = new Configuration();
URI i = URI.create("hdfs://192.168.220.200:54310");
FileSystem fs = FileSystem.get(i,conf);

Exception in thread "main" java.io.IOException: Login failed: Cannot
run program "whoami": CreateProcess error=2, The system cannot find
the file specified
        at org.apache.hadoop.dfs.DFSClient.createNamenode(DFSClient.java:124)
        at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:143)
        at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:65)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:166)
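
In case it is useful, here is roughly the full class I'm running from
NetBeans. The class name HdfsTest and the exists() check at the end are
only placeholders for what I'd eventually like to do; the address is the
namenode on my VMware cluster.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsTest {
    public static void main(String[] args) throws Exception {
        // Point a plain Java client at the namenode on the cluster
        Configuration conf = new Configuration();
        URI i = URI.create("hdfs://192.168.220.200:54310");
        FileSystem fs = FileSystem.get(i, conf); // this is the line that throws the whoami error

        // Once the connection works, something simple like checking
        // that the root of the distributed filesystem is reachable
        System.out.println("root exists: " + fs.exists(new Path("/")));
    }
}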

I have found examples of people using Cygwin, but I am unsure if that
is what I want. I am not interested in running a namenode on Windows,
just accessing the distributed filesystem. Can that be done?

Also, is this just a Windows issue of there being no whoami? I notice
the tutorial runs the wordcount example like so: 'bin/hadoop jar
example.jar wordcount file1 file2'. Does this mean that any process
wishing to access the system must be run from inside Hadoop, like
... bin/hadoop myprogram.jar?
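
What I was hoping for is to launch my client as an ordinary Java
program with the Hadoop jars on the classpath, rather than through
bin/hadoop. Something like the line below (on Windows); the jar names
are just guesses on my part, not something I have working.

java -classpath hadoop-0.16.0-core.jar;commons-logging.jar;log4j.jar;. HdfsTest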

Sorry for an RTFM question. I did read around considerably, but it
seems like not many are accessing HDFS this way (maybe because it
cannot be done).