Posted to common-user@hadoop.apache.org by Richard Zhang <ri...@gmail.com> on 2010/12/08 22:37:58 UTC

urgent, error: java.io.IOException: Cannot create directory

Hi Guys:
I am just installing Hadoop 0.21.0 on a single-node cluster.
I encounter the following error when I run bin/hadoop namenode -format:

10/12/08 16:27:22 ERROR namenode.NameNode:
java.io.IOException: Cannot create directory
/your/path/to/hadoop/tmp/dir/hadoop-hadoop/dfs/name/current
        at
org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:312)
        at
org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1425)
        at
org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1444)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1242)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1348)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)


Below is my core-site.xml

<configuration>
<!-- In: conf/core-site.xml -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/your/path/to/hadoop/tmp/dir/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
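
As far as I understand, dfs.name.dir defaults to ${hadoop.tmp.dir}/dfs/name, and ${user.name} expands to whichever user runs the command, so formatting as user hadoop needs roughly the following to succeed (untested sketch, reusing the placeholder path above):

# the directory the namenode tries to create/clear during "namenode -format"
sudo -u hadoop mkdir -p /your/path/to/hadoop/tmp/dir/hadoop-hadoop/dfs/name
ls -ld /your/path/to/hadoop/tmp/dir/hadoop-hadoop/dfs/name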


Below is my hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<!-- In: conf/hdfs-site.xml -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is
created.
  The default is used if replication is not specified in create time.
  </description>
</property>

</configuration>
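
In case it matters, the replication factor can also be adjusted per file later from the shell once HDFS is up; a rough example (the file path is made up):

bin/hadoop fs -setrep 1 /user/hadoop/example.txt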


below is my mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<!-- In: conf/mapred-site.xml -->
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>

</configuration>
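
For later: once the namenode is formatted and the daemons are up, this jobtracker setting can be sanity-checked by running one of the bundled examples; a rough sketch (the examples jar name varies by release, hence the wildcard):

bin/hadoop jar hadoop-*examples*.jar pi 2 10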


Thanks.
Richard

Re: urgent, error: java.io.IOException: Cannot create directory

Posted by Richard Zhang <ri...@gmail.com>.
oh, sorry. I corrected that typo
hadoop$ ls tmp/dir/hadoop-hadoop/dfs/name/current -l
total 0
hadoop$ ls tmp/dir/hadoop-hadoop/dfs/name -l
total 4
drwxr-xr-x 2 hadoop hadoop 4096 2010-12-08 22:17 current

Even if I remove the tmp directory I manually created and set the whole Hadoop package to 777, then run hadoop again, it still fails the same way.
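
A rough way to narrow down which path component is blocking the create, assuming sudo is available (the path is the placeholder from my first mail):

namei -l /your/path/to/hadoop/tmp/dir/hadoop-hadoop/dfs/name    # owner and mode of every component
sudo -u hadoop mkdir -p /your/path/to/hadoop/tmp/dir/hadoop-hadoop/dfs/name/current    # the same mkdir the format step needs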

Richard.

On Wed, Dec 8, 2010 at 7:55 PM, Konstantin Boudnik <co...@apache.org> wrote:

> Yeah, I figured that match. What I was referring to is the ending of the
> paths:
> .../hadoop-hadoop/dfs/name/current
> .../hadoop-hadoop/dfs/hadoop
> They are different
> --
>   Take care,
> Konstantin (Cos) Boudnik

Re: urgent, error: java.io.IOException: Cannot create directory

Posted by Konstantin Boudnik <co...@apache.org>.
Yeah, I figured as much. What I was referring to is the ending of the paths:
.../hadoop-hadoop/dfs/name/current
.../hadoop-hadoop/dfs/hadoop
They are different
--
  Take care,
Konstantin (Cos) Boudnik



On Wed, Dec 8, 2010 at 15:55, Richard Zhang <ri...@gmail.com> wrote:
> Hi:
> "/your/path/to/hadoop"  represents the location where hadoop is installed.
> BTW, I believe this is a file writing permission problem. Because I use the
> same *-site.xml setting to install with root and it works.
> But when I use the dedicated user hadoop, it always introduces this problem.
>
> But I do created manually the directory path and grant with 755.
> Weird....
> Richard.

Re: urgent, error: java.io.IOException: Cannot create directory

Posted by Richard Zhang <ri...@gmail.com>.
Hi:
"/your/path/to/hadoop"  represents the location where hadoop is installed.
BTW, I believe this is a file-writing permission problem, because I used the same *-site.xml settings to install as root and it worked.
But when I use the dedicated user hadoop, it always runs into this problem.

But I did manually create the directory path and granted it 755.
Weird....
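
One guess, since the root install worked: if any of those directories were created while running as root, mode 755 still blocks user hadoop from writing into them. An untested sketch of a fix (placeholder path again):

sudo chown -R hadoop:hadoop /your/path/to/hadoop/tmp/dir
sudo -u hadoop bin/hadoop namenode -format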
Richard.

On Wed, Dec 8, 2010 at 6:51 PM, Konstantin Boudnik <co...@apache.org> wrote:

> it seems that you are looking at 2 different directories:
>
> first post: /your/path/to/hadoop/tmp/dir/hadoop-hadoop/dfs/name/current
> second: ls -l tmp/dir/hadoop-hadoop/dfs/hadoop
> --
>   Take care,
> Konstantin (Cos) Boudnik

Re: urgent, error: java.io.IOException: Cannot create directory

Posted by Konstantin Boudnik <co...@apache.org>.
it seems that you are looking at 2 different directories:

first post: /your/path/to/hadoop/tmp/dir/hadoop-hadoop/dfs/name/current
second: ls -l tmp/dir/hadoop-hadoop/dfs/hadoop
--
  Take care,
Konstantin (Cos) Boudnik



On Wed, Dec 8, 2010 at 14:19, Richard Zhang <ri...@gmail.com> wrote:
> would that be the reason that 54310 port is not open?
> I just used
> * iptables -A INPUT -p tcp --dport 54310 -j ACCEPT
> to open the port.
> But it seems the same erorr exists.
> Richard

Re: urgent, error: java.io.IOException: Cannot create directory

Posted by Richard Zhang <ri...@gmail.com>.
Would the reason be that port 54310 is not open?
I just used
iptables -A INPUT -p tcp --dport 54310 -j ACCEPT
to open the port.
But it seems the same error still exists.
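
Though as far as I know, "namenode -format" only touches the local disk, so the firewall rule probably should not matter at this stage; the port would only be worth checking once the namenode is actually running, e.g.:

netstat -tlnp | grep 54310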
Richard
On Wed, Dec 8, 2010 at 4:56 PM, Richard Zhang <ri...@gmail.com>wrote:

> Hi James:
> I verified that I have the following permission set for the path:
>
> ls -l tmp/dir/hadoop-hadoop/dfs/hadoop
> total 4
> drwxr-xr-x 2 hadoop hadoop 4096 2010-12-08 15:56 current
> Thanks.
> Richard

Re: urgent, error: java.io.IOException: Cannot create directory

Posted by Richard Zhang <ri...@gmail.com>.
Hi James:
I verified that I have the following permissions set for the path:

ls -l tmp/dir/hadoop-hadoop/dfs/hadoop
total 4
drwxr-xr-x 2 hadoop hadoop 4096 2010-12-08 15:56 current
Thanks.
Richard


On Wed, Dec 8, 2010 at 4:50 PM, james warren <ja...@rockyou.com> wrote:

> Hi Richard -
>
> First thing that comes to mind is a permissions issue.  Can you verify that
> your directories along the desired namenode path are writable by the
> appropriate user(s)?
>
> HTH,
> -James

Re: urgent, error: java.io.IOException: Cannot create directory

Posted by james warren <ja...@rockyou.com>.
Hi Richard -

First thing that comes to mind is a permissions issue.  Can you verify that
your directories along the desired namenode path are writable by the
appropriate user(s)?
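
A rough way to check, using the placeholder path from the original mail:

ls -ld /your /your/path /your/path/to /your/path/to/hadoop /your/path/to/hadoop/tmp /your/path/to/hadoop/tmp/dir
sudo -u hadoop touch /your/path/to/hadoop/tmp/dir/write-test && echo writable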

HTH,
-James

On Wed, Dec 8, 2010 at 1:37 PM, Richard Zhang <ri...@gmail.com>wrote:

> Hi Guys:
> I am just installation the hadoop 0.21.0 in a single node cluster.
> I encounter the following error when I run bin/hadoop namenode -format
>
> 10/12/08 16:27:22 ERROR namenode.NameNode:
> java.io.IOException: Cannot create directory
> /your/path/to/hadoop/tmp/dir/hadoop-hadoop/dfs/name/current