Posted to common-user@hadoop.apache.org by Ryan Wang <ry...@gmail.com> on 2007/12/01 05:20:37 UTC

Re: Any one can tell me about how to write to HDFS?

Hope this version can attract others' attention.

Hadoop Version:  0.15.0
JDK version: Sun JDK 6.0.3
Platform: Ubuntu 7.10
IDE:   Eclipse 3.2
Code:
public class HadoopWrite {

    /**
     * @param args
     */
    public static void main(String[] args) throws IOException{
        Configuration dfsconf = new Configuration();
        FileSystem dfs;
        dfs = FileSystem.get(dfsconf);
        Path inFile = new Path("/nutch/out");
        Path outFile = new Path("ryan/test");
        dfs.copyFromLocalFile(inFile, outFile);

    }

}

Exception is below:

Exception in thread "main" org.apache.hadoop.ipc.RemoteException:
java.io.IOException: DIR* NameSystem.startFile: Unable to add file to
namespace.
    at org.apache.hadoop.dfs.FSNamesystem.startFileInternal(FSNamesystem.java:931)
    at org.apache.hadoop.dfs.FSNamesystem.startFile(FSNamesystem.java:806)
    at org.apache.hadoop.dfs.NameNode.create(NameNode.java:276)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:379)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:596)

    at org.apache.hadoop.ipc.Client.call(Client.java:482)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:184)
    at org.apache.hadoop.dfs.$Proxy0.create(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at org.apache.hadoop.dfs.$Proxy0.create(Unknown Source)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:1432)
    at org.apache.hadoop.dfs.DFSClient.create(DFSClient.java:376)
    at org.apache.hadoop.dfs.DistributedFileSystem.create(DistributedFileSystem.java:121)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:353)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:260)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:139)
    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:826)
    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:814)
    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:795)
    at edu.insun.HadoopWrite.main(HadoopWrite.java:20)
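
Reading the trace: the copy reaches DFSClient.create() and the NameNode
rejects it with "Unable to add file to namespace", so the client is talking
to HDFS but the destination path cannot be created. A minimal diagnostic
sketch, assuming the same 0.15 API as the code above (the class-name check
and the mkdirs call are additions for illustration, not part of the original
program):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HadoopWriteCheck {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // If hadoop-site.xml is not on the classpath, fs.default.name
        // falls back to the default and this prints a local filesystem
        // class instead of org.apache.hadoop.dfs.DistributedFileSystem.
        System.out.println(fs.getClass().getName());

        // Try creating the destination's parent directory explicitly, to
        // rule out a missing or unwritable parent as the cause.
        Path outFile = new Path("ryan/test");
        System.out.println("mkdirs: " + fs.mkdirs(outFile.getParent()));
        System.out.println("exists: " + fs.exists(outFile.getParent()));
    }
}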

hadoop-site.xml:

> <configuration>
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://node01:9000</value>
>   <description>
>     The name of the default file system. Either the literal string
>     "local" or a host:port for NDFS.
>   </description>
> </property>
>
> <property>
>   <name>mapred.job.tracker</name>
>   <value>node01:9001</value>
>   <description>
>     The host and port that the MapReduce job tracker runs at. If
>     "local", then jobs are run in-process as a single map and
>     reduce task.
>   </description>
> </property>
>
> <property>
>   <name>mapred.map.tasks</name>
>   <value>4</value>
>   <description>
>     Set the number of map tasks to the number of slave hosts.
>   </description>
> </property>
>
> <property>
>   <name>mapred.reduce.tasks</name>
>   <value>4</value>
>   <description>
>     Set the number of reduce tasks to the number of slave hosts.
>   </description>
> </property>
>
> <property>
>   <name>dfs.name.dir</name>
>   <value>/nutch/hdfs/name</value>
> </property>
>
> <property>
>   <name>dfs.data.dir</name>
>   <value>/nutch/hdfs/data</value>
> </property>
>
> <property>
>   <name>mapred.system.dir</name>
>   <value>/nutch/hdfs/mapreduce/system</value>
> </property>
>
> <property>
>   <name>mapred.local.dir</name>
>   <value>/nutch/hdfs/mapreduce/local</value>
> </property>
>
> <property>
>   <name>dfs.replication</name>
>   <value>1</value>
> </property>
>
> </configuration>
>
>
>
> On Nov 30, 2007 10:57 PM, Arun C Murthy <arunc@yahoo-inc.com> wrote:
>
> > Ryan,
> >
> > On Fri, Nov 30, 2007 at 10:48:30PM +0800, Ryan Wang wrote:
> > >Hi,
> > >I can communicate with the file system via shell commands, and it works
> > >correctly.
> > >But when I try to write a program that writes a file to the file system,
> > >it fails.
> > >
> >
> > Could you provide more info on the errors, your configuration,
> > Hadoop version, etc.?
> >
> > http://wiki.apache.org/lucene-hadoop/Help
> >
> > Arun
> > >public class HadoopDFSFileReadWrite {
> > >
> > >
> > >    public static void main(String[] argv) throws IOException {
> > >
> > >        Configuration dfsconf = new Configuration();
> > >        FileSystem dfs = FileSystem.get(dfsconf);
> > >
> > >        Path inFile = new Path(argv[0]);
> > >        Path outFile = new Path(argv[1]);
> > >
> > >        dfs.copyFromLocalFile(inFile, outFile);
> > >    }
> > >}
> > >
> > >argv[0]=nutch/search/bin/javalibTest.tar.gz argv[1]=ryan/test.tar.gz
> > >The program writes javalibTest.tar.gz to the project's
> > >dir/ryan/test.tar.gz on the local filesystem instead of HDFS.
> > >I also placed the modified hadoop-site.xml in the project's path.
> > >I don't know why this happens; could anyone help me out?
> > >
> > >Thanks
> > >Ryan
> >
>
>

Re: Any one can tell me about how to write to HDFS?

Posted by Ryan Wang <ry...@gmail.com>.
I got it working.
When programming on a client node:
(1) Place the configuration file hadoop-site.xml on the project's classpath.
In Eclipse, for example, put hadoop-site.xml in the project's src directory.
(2) It seems DFS has directory access control; this needs testing by others.
(3) The FS shell's paths behave a little strangely: destinations starting
with '/' did not work for me (/test/test.out failed), while relative paths
like test/test.out were OK. A corrected version of the program is sketched
just below.
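
A minimal sketch folding in these points, assuming the same 0.15 setup as
above (the comments are editorial; whether a relative or an absolute
destination works may depend on directory access control, per points (2)
and (3)):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HadoopWrite {
    public static void main(String[] args) throws IOException {
        // Point (1): hadoop-site.xml must be on the classpath so that
        // fs.default.name points at HDFS rather than the local filesystem.
        Configuration dfsconf = new Configuration();
        FileSystem dfs = FileSystem.get(dfsconf);

        Path inFile = new Path("/nutch/out");  // local source file
        // Point (3): a relative destination, which HDFS resolves against
        // the user's working directory (/user/<username>). Raghu's absolute
        // form, new Path("/ryan/test"), is the alternative to try if this
        // fails.
        Path outFile = new Path("ryan/test");

        dfs.copyFromLocalFile(inFile, outFile);
    }
}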

Thanks, all!
It's a pleasure to learn Hadoop!

Ryan

On Dec 1, 2007 12:55 PM, dhruba Borthakur <dh...@yahoo-inc.com> wrote:

> There is a dfs shell utility to do this too. It can be used as follows:
>
> bin/hadoop dfs -copyFromLocal <localfile> <dfs path>
>
> Please see if this works on your cluster. If it does, then please compare
> the method org.apache.hadoop.fs.FsShell.copyFromLocal() with your code.
>
> Thanks,
> dhruba
>
> -----Original Message-----
> From: Raghu Angadi [mailto:rangadi@yahoo-inc.com]
> Sent: Friday, November 30, 2007 8:49 PM
> To: hadoop-user@lucene.apache.org
> Subject: Re: Any one can tell me about how to write to HDFS?
>
>
> Try 'Path outFile = new Path("/ryan/test");'
> Also check whether there are any useful messages in the NameNode log.
>
> Raghu.
>
> Ryan Wang wrote:
> > Hope this version can attract others' attention.
> >
> > Hadoop Version:  0.15.0
> > JDK version: Sun JDK 6.0.3
> > Platform: Ubuntu 7.10
> > IDE:   Eclipse 3.2
> > Code :
> > public class HadoopWrite {
> >
> >     /**
> >      * @param args
> >      */
> >     public static void main(String[] args) throws IOException{
> >         Configuration dfsconf = new Configuration();
> >         FileSystem dfs;
> >         dfs = FileSystem.get(dfsconf);
> >         Path inFile = new Path("/nutch/out");
> >         Path outFile = new Path("ryan/test");
> >         dfs.copyFromLocalFile(inFile, outFile);
> >
> >     }
> >
> > }
>

RE: Any one can tell me about how to write to HDFS?

Posted by dhruba Borthakur <dh...@yahoo-inc.com>.
There is a dfs shell utility to do this too. It can be used as follows:

bin/hadoop dfs -copyFromLocal <localfile> <dfs path>

Please see if this works on your cluster. If it does, then please compare
the method org.apache.hadoop.fs.FsShell.copyFromLocal() with your code.
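
For example, with the local file and destination from the original post
(assuming /nutch/out exists on the local filesystem and the destination
directory is writable), the invocation would look something like:

bin/hadoop dfs -copyFromLocal /nutch/out ryan/test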

Thanks,
dhruba 

-----Original Message-----
From: Raghu Angadi [mailto:rangadi@yahoo-inc.com] 
Sent: Friday, November 30, 2007 8:49 PM
To: hadoop-user@lucene.apache.org
Subject: Re: Any one can tell me about how to write to HDFS?


Try 'Path outFile = new Path("/ryan/test");'
Also check whether there are any useful messages in the NameNode log.

Raghu.

Ryan Wang wrote:
> Hope this version can attract others' attention.
> 
> Hadoop Version:  0.15.0
> JDK version: Sun JDK 6.0.3
> Platform: Ubuntu 7.10
> IDE:   Eclipse 3.2
> Code :
> public class HadoopWrite {
> 
>     /**
>      * @param args
>      */
>     public static void main(String[] args) throws IOException{
>         Configuration dfsconf = new Configuration();
>         FileSystem dfs;
>         dfs = FileSystem.get(dfsconf);
>         Path inFile = new Path("/nutch/out");
>         Path outFile = new Path("ryan/test");
>         dfs.copyFromLocalFile(inFile, outFile);
> 
>     }
> 
> }

Re: Any one can tell me about how to write to HDFS?

Posted by Raghu Angadi <ra...@yahoo-inc.com>.
Try 'Path outFile = new Path("/ryan/test");'
Also check whether there are any useful messages in the NameNode log.
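
(On a typical 0.15 installation the NameNode log usually lives under the
logs directory of the Hadoop install, in a file named along the lines of
hadoop-<user>-namenode-<host>.log; the exact location depends on your
setup.)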

Raghu.

Ryan Wang wrote:
> Hope this version can attract others' attention.
> 
> Hadoop Version:  0.15.0
> JDK version: Sun JDK 6.0.3
> Platform: Ubuntu 7.10
> IDE:   Eclipse 3.2
> Code :
> public class HadoopWrite {
> 
>     /**
>      * @param args
>      */
>     public static void main(String[] args) throws IOException{
>         Configuration dfsconf = new Configuration();
>         FileSystem dfs;
>         dfs = FileSystem.get(dfsconf);
>         Path inFile = new Path("/nutch/out");
>         Path outFile = new Path("ryan/test");
>         dfs.copyFromLocalFile(inFile, outFile);
> 
>     }
> 
> }