Posted to user@hadoop.apache.org by "Smith, Joshua D." <Jo...@gd-ais.com> on 2013/08/26 21:18:04 UTC

HDFS Startup Failure due to dfs.namenode.rpc-address and Shared Edits Directory

When I try to start HDFS I get an error in the log that says...

org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: Invalid configuration: a shared edits dir must not be specified if HA is not enabled.

I have the following properties configured as per page 12 of the CDH4 High Availability Guide...
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/PDF/CDH4-High-Availability-Guide.pdf

<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>nn.domain:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>snn.domain:8020</value>
</property>

When I look at the Hadoop source code that generates the error message, I can see that it's failing because it's looking for dfs.namenode.rpc-address without the suffix. I'm assuming the suffix gets stripped at some point before the property is looked up, so maybe I have the suffix wrong?

In any case, I can't get HDFS to start: it's looking for a property that I don't have in the truncated form, and it doesn't seem to be finding the form of it with the suffix. Any assistance would be most appreciated.
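[Editor's note: this error typically means HA was never detected. The NameNode decides whether HA is enabled from dfs.nameservices and dfs.ha.namenodes.<nameservice>; only then does it consult the suffixed rpc-address keys. If those two properties are missing but a shared edits dir is set, it throws exactly this IOException. A minimal hdfs-site.xml sketch, reusing the nn1/nn2 values from the post; the JournalNode hostnames are placeholders, not values from this thread:]

```xml
<!-- Sketch only: HA detection requires the first two properties.
     Without them, the suffixed rpc-address keys are never read and a
     configured shared edits dir triggers the IOException above.
     jn1/jn2/jn3.domain are placeholder JournalNode hostnames. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn.domain:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>snn.domain:8020</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1.domain:8485;jn2.domain:8485;jn3.domain:8485/mycluster</value>
</property>
```

[core-site.xml would then point fs.defaultFS at hdfs://mycluster rather than at a single NameNode host.]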

Thanks,
Josh

RE: HDFS Startup Failure due to dfs.namenode.rpc-address and Shared Edits Directory

Posted by "Smith, Joshua D." <Jo...@gd-ais.com>.
Harsh-

Yes, I intend to use HA. That's what I'm trying to configure right now.

Unfortunately I cannot share my complete configuration files. They're on a disconnected network. Are there any configuration items that you'd like me to post my settings for?

The deployment is CDH 4.3 on a brand new cluster. There are 3 master nodes (NameNode, StandbyNameNode, JobTracker/ResourceManager) and 7 slave nodes. Each of the master nodes is configured to be a ZooKeeper node as well as a JournalNode. The HA configuration that I'm striving toward is automatic failover with ZooKeeper.

Does that help?
Josh

-----Original Message-----
From: Harsh J [mailto:harsh@cloudera.com] 
Sent: Monday, August 26, 2013 6:11 PM
To: <us...@hadoop.apache.org>
Subject: Re: HDFS Startup Failure due to dfs.namenode.rpc-address and Shared Edits Directory

It is not quite clear from your post, so a Q: Do you intend to use HA or not?

Can you share your complete core-site.xml and hdfs-site.xml along with a brief note on the deployment?

On Tue, Aug 27, 2013 at 12:48 AM, Smith, Joshua D.
<Jo...@gd-ais.com> wrote:
> When I try to start HDFS I get an error in the log that says...
>
>
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.
>
> java.io.IOException: Invalid configuration: a shared edits dir must 
> not be specified if HA is not enabled.
>
>
>
> I have the following properties configured as per page 12 of the CDH4 
> High Availability Guide...
>
> http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/PDF/CDH4-High-Availability-Guide.pdf
>
>
>
> <property>
>
> <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>
> <value>nn.domain:8020</value>
>
> </property>
>
> <property>
>
> <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>
> <value>snn.domain:8020</value>
>
> </property>
>
>
>
> When I look at the Hadoop source code that generates the error message 
> I can see that it's failing because it's looking for 
> dfs.namenode.rpc-address without the suffix. I'm assuming that the 
> suffix gets lopped off at some point before it gets pulled up and the 
> property is checked for, so maybe I have the suffix wrong?
>
>
>
> In any case I can't get HDFS to start because it's looking for a 
> property that I don't have in the truncated form and it doesn't seem to 
> be finding the form of it with the suffix. Any assistance would be most appreciated.
>
>
>
> Thanks,
>
> Josh



--
Harsh J

Re: HDFS Startup Failure due to dfs.namenode.rpc-address and Shared Edits Directory

Posted by Harsh J <ha...@cloudera.com>.
It is not quite clear from your post, so a Q: Do you intend to use HA or not?

Can you share your complete core-site.xml and hdfs-site.xml along with
a brief note on the deployment?

On Tue, Aug 27, 2013 at 12:48 AM, Smith, Joshua D.
<Jo...@gd-ais.com> wrote:
> When I try to start HDFS I get an error in the log that says…
>
>
>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
>
> java.io.IOException: Invalid configuration: a shared edits dir must not be
> specified if HA is not enabled.
>
>
>
> I have the following properties configured as per page 12 of the CDH4 High
> Availability Guide…
>
> http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/PDF/CDH4-High-Availability-Guide.pdf
>
>
>
> <property>
>
> <name>dfs.namenode.rpc-address.mycluster.nn1</name>
>
> <value>nn.domain:8020</value>
>
> </property>
>
> <property>
>
> <name>dfs.namenode.rpc-address.mycluster.nn2</name>
>
> <value>snn.domain:8020</value>
>
> </property>
>
>
>
> When I look at the Hadoop source code that generates the error message I can
> see that it’s failing because it’s looking for dfs.namenode.rpc-address
> without the suffix. I’m assuming that the suffix gets lopped off at some
> point before it gets pulled up and the property is checked for, so maybe I
> have the suffix wrong?
>
>
>
> In any case I can’t get HDFS to start because it’s looking for a property
> that I don’t have in the truncated form and it doesn’t seem to be finding the
> form of it with the suffix. Any assistance would be most appreciated.
>
>
>
> Thanks,
>
> Josh



-- 
Harsh J