Posted to hdfs-user@hadoop.apache.org by Vishnu Viswanath <vi...@gmail.com> on 2013/12/25 14:31:50 UTC
DataNode not starting in slave machine
Hi,
I am getting this error while starting the datanode in my slave system.
I read JIRA HDFS-2515 <https://issues.apache.org/jira/browse/HDFS-2515>;
it says this happens when Hadoop picks up the wrong conf file.
13/12/24 15:57:14 INFO impl.MetricsConfig: loaded properties from
hadoop-metrics2.properties
13/12/24 15:57:14 INFO impl.MetricsSourceAdapter: MBean for source
MetricsSystem,sub=Stats registered.
13/12/24 15:57:14 INFO impl.MetricsSystemImpl: Scheduled snapshot period at
10 second(s).
13/12/24 15:57:14 INFO impl.MetricsSystemImpl: DataNode metrics system
started
13/12/24 15:57:15 INFO impl.MetricsSourceAdapter: MBean for source ugi
registered.
13/12/24 15:57:15 WARN impl.MetricsSystemImpl: Source name ugi already
exists!
13/12/24 15:57:15 ERROR datanode.DataNode:
java.lang.IllegalArgumentException: Does not contain a valid host:port
authority: file:///
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:359)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
But how do I check which conf file Hadoop is using? And how do I set it?
These are my configurations:
core-site.xml
------------------
<configuration>
<property>
<name>fs.defualt.name</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/vishnu/hadoop-tmp</value>
</property>
</configuration>
hdfs-site.xml
--------------------
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
mapred-site.xml
--------------------
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master:9001</value>
</property>
</configuration>
Any help is appreciated.
Re: DataNode not starting in slave machine
Posted by Vishnu Viswanath <vi...@gmail.com>.
Ohh! I didn't see that :(
On Wed, Dec 25, 2013 at 9:38 PM, Chris Mawata <ch...@gmail.com> wrote:
> Spelling of 'default' is probably the issue.
> Chris
Re: DataNode not starting in slave machine
Posted by Chris Mawata <ch...@gmail.com>.
Spelling of 'default' is probably the issue.
Chris
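For reference, the corrected property block in core-site.xml would look like this (only the spelling changes; everything else stays as posted):

```xml
<!-- core-site.xml: the property name must be spelled exactly
     fs.default.name; a misspelled name is silently ignored and
     clients fall back to the built-in default, file:/// -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>
```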
Re: DataNode not starting in slave machine
Posted by Vishnu Viswanath <vi...@gmail.com>.
Thanks, everyone.
I downloaded hadoop-1.2.1 again, set up all the conf files, and now it
works fine.
I don't know why it didn't work the first time; the properties I set
now are exactly the same as before.
Regards
Vishnu
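For anyone hitting this later: a quick way to see which conf directory the 1.x start scripts will use. This is just the standard resolution order of hadoop-config.sh (the install path below is an assumption for illustration):

```shell
# hadoop-config.sh uses HADOOP_CONF_DIR if it is set in the environment,
# and falls back to $HADOOP_HOME/conf otherwise.
HADOOP_HOME=/usr/local/hadoop-1.2.1   # example install location (assumption)
echo "${HADOOP_CONF_DIR:-$HADOOP_HOME/conf}"

# You can also pass an explicit directory when starting a daemon:
#   bin/hadoop-daemon.sh --config /path/to/conf start datanode
```

If the slave was started with a different HADOOP_CONF_DIR than you expected, the daemon reads a different set of *-site.xml files than the ones you edited.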
On Wed, Dec 25, 2013 at 8:20 PM, Shekhar Sharma <sh...@gmail.com> wrote:
> It is running on local file system file:///
Re: DataNode not starting in slave machine
Posted by Shekhar Sharma <sh...@gmail.com>.
It is running on the local file system (file:///).
Regards,
Som Shekhar Sharma
+91-8197243810
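One way to catch this kind of silent typo is to grep the conf file for the exact property name. A small helper sketch (not part of Hadoop, just an illustration):

```shell
# Check that a core-site.xml really defines fs.default.name; a typo in the
# property name (e.g. fs.defualt.name) means the setting is ignored and
# clients fall back to the built-in default, file:///
check_fs_default() {
  if grep -q '<name>fs.default.name</name>' "$1" 2>/dev/null; then
    echo "fs.default.name is set"
  else
    echo "fs.default.name missing or misspelled: falling back to file:///"
  fi
}

# Demo against a config containing the typo from this thread:
printf '<configuration><property><name>fs.defualt.name</name><value>hdfs://master:9000</value></property></configuration>\n' > /tmp/core-site.xml
check_fs_default /tmp/core-site.xml
```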
On Wed, Dec 25, 2013 at 7:01 PM, Vishnu Viswanath
<vi...@gmail.com> wrote:
> Hi,
>
> I am getting this error while starting the datanode in my slave system.
>
> I read the JIRA HDFS-2515, it says it is because hadoop is using wrong conf
> file.
>
> 13/12/24 15:57:14 INFO impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 13/12/24 15:57:14 INFO impl.MetricsSourceAdapter: MBean for source
> MetricsSystem,sub=Stats registered.
> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: Scheduled snapshot period at
> 10 second(s).
> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: DataNode metrics system
> started
> 13/12/24 15:57:15 INFO impl.MetricsSourceAdapter: MBean for source ugi
> registered.
> 13/12/24 15:57:15 WARN impl.MetricsSystemImpl: Source name ugi already
> exists!
> 13/12/24 15:57:15 ERROR datanode.DataNode:
> java.lang.IllegalArgumentException: Does not contain a valid host:port
> authority: file:///
> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:359)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
>
> But how do I check which conf file Hadoop is using? Or how do I set it?
>
> These are my configurations:
>
> core-site.xml
> ------------------
> <configuration>
> <property>
> <name>fs.defualt.name</name>
> <value>hdfs://master:9000</value>
> </property>
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/home/vishnu/hadoop-tmp</value>
> </property>
> </configuration>
>
> hdfs-site.xml
> --------------------
> <configuration>
> <property>
> <name>dfs.replication</name>
> <value>2</value>
> </property>
> </configuration>
>
> mapred-site.xml
> --------------------
> <configuration>
> <property>
> <name>mapred.job.tracker</name>
> <value>master:9001</value>
> </property>
> </configuration>
>
> any help,
>
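Shekhar's one-liner points at the root cause: because the property name in the posted core-site.xml is misspelled (`fs.defualt.name`), Hadoop never sees `fs.default.name` and falls back to its built-in default, `file:///`, which then fails the host:port parse in NetUtils.createSocketAddr. A minimal shell sketch of spotting the misspelling (the file path and contents below are illustrative, not the poster's actual files):

```shell
# Write a sample core-site.xml reproducing the misspelling from the thread.
# /tmp/core-site-sample.xml is a throwaway path used only for illustration.
cat > /tmp/core-site-sample.xml <<'EOF'
<configuration>
<property>
<name>fs.defualt.name</name>
<value>hdfs://master:9000</value>
</property>
</configuration>
EOF

# The correctly spelled key is absent, so Hadoop would fall back to file:///
if grep -q '<name>fs.default.name</name>' /tmp/core-site-sample.xml; then
  echo "fs.default.name is set"
else
  echo "fs.default.name missing: DataNode falls back to file:///"
fi
```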
Re: DataNode not starting in slave machine
Posted by Chris Mawata <ch...@gmail.com>.
Spelling of 'default' is probably the issue.
Chris
On Dec 25, 2013 7:32 AM, "Vishnu Viswanath" <vi...@gmail.com>
wrote:
> Hi,
>
> I am getting this error while starting the datanode in my slave system.
>
> I read the JIRA HDFS-2515<https://issues.apache.org/jira/browse/HDFS-2515>,
> it says it is because hadoop is using wrong conf file.
>
> 13/12/24 15:57:14 INFO impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 13/12/24 15:57:14 INFO impl.MetricsSourceAdapter: MBean for source
> MetricsSystem,sub=Stats registered.
> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: Scheduled snapshot period
> at 10 second(s).
> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: DataNode metrics system
> started
> 13/12/24 15:57:15 INFO impl.MetricsSourceAdapter: MBean for source ugi
> registered.
> 13/12/24 15:57:15 WARN impl.MetricsSystemImpl: Source name ugi already
> exists!
> 13/12/24 15:57:15 ERROR datanode.DataNode:
> java.lang.IllegalArgumentException: Does not contain a valid host:port
> authority: file:///
> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:359)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
>
> But how do I check which conf file Hadoop is using? Or how do I set it?
>
> These are my configurations:
>
> core-site.xml
> ------------------
> <configuration>
> <property>
> <name>fs.defualt.name</name>
> <value>hdfs://master:9000</value>
> </property>
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/home/vishnu/hadoop-tmp</value>
> </property>
> </configuration>
>
> hdfs-site.xml
> --------------------
> <configuration>
> <property>
> <name>dfs.replication</name>
> <value>2</value>
> </property>
> </configuration>
>
> mapred-site.xml
> --------------------
> <configuration>
> <property>
> <name>mapred.job.tracker</name>
> <value>master:9001</value>
> </property>
> </configuration>
>
> any help,
>
>
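If Chris's diagnosis is right, the fix is a one-character spelling correction in core-site.xml. A sketch of the corrected file, keeping the poster's host, port, and tmp dir exactly as given:

```xml
<configuration>
  <property>
    <!-- "defualt" corrected to "default"; with the key misspelled,
         Hadoop silently ignores it and uses the built-in file:/// -->
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/vishnu/hadoop-tmp</value>
  </property>
</configuration>
```

The same file must be corrected on every node, master and slaves alike.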
Re: DataNode not starting in slave machine
Posted by Azuryy <az...@gmail.com>.
Did you add master in the hosts?
Sent from my iPhone5s
> On 2013年12月25日, at 22:11, Vishnu Viswanath <vi...@gmail.com> wrote:
>
> Made that change. Still the same error.
>
> And why should fs.default.name be set to file:///? I am not running in pseudo-distributed mode. I have two systems: one is the master and the other is the slave.
>
> Vishnu Viswanath
>
>> On 25-Dec-2013, at 19:35, kishore alajangi <al...@gmail.com> wrote:
>>
>> Replace hdfs:// to file:/// in fs.default.name property.
>>
>>
>>> On Wed, Dec 25, 2013 at 7:01 PM, Vishnu Viswanath <vi...@gmail.com> wrote:
>>> Hi,
>>>
>>> I am getting this error while starting the datanode in my slave system.
>>>
>>> I read the JIRA HDFS-2515, it says it is because hadoop is using wrong conf file.
>>>
>>> 13/12/24 15:57:14 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
>>> 13/12/24 15:57:14 INFO impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
>>> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
>>> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: DataNode metrics system started
>>> 13/12/24 15:57:15 INFO impl.MetricsSourceAdapter: MBean for source ugi registered.
>>> 13/12/24 15:57:15 WARN impl.MetricsSystemImpl: Source name ugi already exists!
>>> 13/12/24 15:57:15 ERROR datanode.DataNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
>>> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>>> at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
>>> at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
>>> at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:359)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
>>>
>>> But how do I check which conf file Hadoop is using? Or how do I set it?
>>>
>>> These are my configurations:
>>>
>>> core-site.xml
>>> ------------------
>>> <configuration>
>>> <property>
>>> <name>fs.defualt.name</name>
>>> <value>hdfs://master:9000</value>
>>> </property>
>>>
>>> <property>
>>> <name>hadoop.tmp.dir</name>
>>> <value>/home/vishnu/hadoop-tmp</value>
>>> </property>
>>> </configuration>
>>>
>>> hdfs-site.xml
>>> --------------------
>>> <configuration>
>>> <property>
>>> <name>dfs.replication</name>
>>> <value>2</value>
>>> </property>
>>> </configuration>
>>>
>>> mapred-site.xml
>>> --------------------
>>> <configuration>
>>> <property>
>>> <name>mapred.job.tracker</name>
>>> <value>master:9001</value>
>>> </property>
>>> </configuration>
>>>
>>> any help,
>>
>>
>>
>> --
>> Thanks,
>> Kishore.
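Azuryy's question targets name resolution: the value hdfs://master:9000 only works if every node can resolve the name `master`. A hedged shell sketch of that check (the sample hosts file and the 192.168.1.x addresses below are made up for illustration; on a real node you would inspect /etc/hosts or run `getent hosts master`):

```shell
# Sample /etc/hosts-style file; the addresses are illustrative only.
cat > /tmp/hosts-sample <<'EOF'
127.0.0.1    localhost
192.168.1.10 master
192.168.1.11 slave1
EOF

# Look up which IP the name "master" maps to, as the resolver would.
awk '$2 == "master" {print $1}' /tmp/hosts-sample
```

If the lookup prints nothing on a slave, the DataNode cannot reach the NameNode by that name regardless of the XML configs.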
Re: DataNode not starting in slave machine
Posted by kishore alajangi <al...@gmail.com>.
change mapred.job.tracker property to http://master:9101 in mapred-site.xml
On Wed, Dec 25, 2013 at 7:41 PM, Vishnu Viswanath <
vishnu.viswanath25@gmail.com> wrote:
> Made that change. Still the same error.
>
> And why should fs.default.name be set to file:///? I am not running in
> pseudo-distributed mode. I have two systems: one is the master and the
> other is the slave.
>
> Vishnu Viswanath
>
> On 25-Dec-2013, at 19:35, kishore alajangi <al...@gmail.com>
> wrote:
>
> Replace hdfs:// to file:/// in fs.default.name property.
>
>
> On Wed, Dec 25, 2013 at 7:01 PM, Vishnu Viswanath <
> vishnu.viswanath25@gmail.com> wrote:
>
>> Hi,
>>
>> I am getting this error while starting the datanode in my slave system.
>>
>> I read the JIRA HDFS-2515<https://issues.apache.org/jira/browse/HDFS-2515>,
>> it says it is because hadoop is using wrong conf file.
>>
>> 13/12/24 15:57:14 INFO impl.MetricsConfig: loaded properties from
>> hadoop-metrics2.properties
>> 13/12/24 15:57:14 INFO impl.MetricsSourceAdapter: MBean for source
>> MetricsSystem,sub=Stats registered.
>> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: Scheduled snapshot period
>> at 10 second(s).
>> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: DataNode metrics system
>> started
>> 13/12/24 15:57:15 INFO impl.MetricsSourceAdapter: MBean for source ugi
>> registered.
>> 13/12/24 15:57:15 WARN impl.MetricsSystemImpl: Source name ugi already
>> exists!
>> 13/12/24 15:57:15 ERROR datanode.DataNode:
>> java.lang.IllegalArgumentException: Does not contain a valid host:port
>> authority: file:///
>> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:359)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
>>
>> But how do I check which conf file Hadoop is using? Or how do I set it?
>>
>> These are my configurations:
>>
>> core-site.xml
>> ------------------
>> <configuration>
>> <property>
>> <name>fs.defualt.name</name>
>> <value>hdfs://master:9000</value>
>> </property>
>>
>> <property>
>> <name>hadoop.tmp.dir</name>
>> <value>/home/vishnu/hadoop-tmp</value>
>> </property>
>> </configuration>
>>
>> hdfs-site.xml
>> --------------------
>> <configuration>
>> <property>
>> <name>dfs.replication</name>
>> <value>2</value>
>> </property>
>> </configuration>
>>
>> mapred-site.xml
>> --------------------
>> <configuration>
>> <property>
>> <name>mapred.job.tracker</name>
>> <value>master:9001</value>
>> </property>
>> </configuration>
>>
>> any help,
>>
>>
>
>
> --
> Thanks,
> Kishore.
>
>
--
Thanks,
Kishore.
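As for the question that keeps coming back in this thread, "how do I check which conf file Hadoop is using": the Hadoop 1.x launcher scripts read the directory named by HADOOP_CONF_DIR if it is exported, and otherwise fall back to the conf/ directory under the install. A sketch of that resolution (the /usr/local/hadoop path is an assumed install location, not taken from the thread):

```shell
# Resolve the configuration directory the way the launcher scripts do:
# an exported HADOOP_CONF_DIR wins, otherwise $HADOOP_HOME/conf is used.
HADOOP_HOME=${HADOOP_HOME:-/usr/local/hadoop}   # assumed install path
CONF_DIR=${HADOOP_CONF_DIR:-$HADOOP_HOME/conf}
echo "Effective conf dir: $CONF_DIR"
```

Listing that directory on the slave shows which core-site.xml the DataNode will actually load.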
Re: DataNode not starting in slave machine
Posted by Azuryy <az...@gmail.com>.
Did you add master in the hosts?
Sent from my iPhone5s
> On 2013年12月25日, at 22:11, Vishnu Viswanath <vi...@gmail.com> wrote:
>
> Made that change . Still the same error.
>
> And why should fs.default.name set to file:/// ? I am not running in pseudo-distributed mode. I am having two systems one is master and the other is slave.
>
> Vishnu Viswanath
>
>> On 25-Dec-2013, at 19:35, kishore alajangi <al...@gmail.com> wrote:
>>
>> Replace hdfs:// to file:/// in fs.default.name property.
>>
>>
>>> On Wed, Dec 25, 2013 at 7:01 PM, Vishnu Viswanath <vi...@gmail.com> wrote:
>>> Hi,
>>>
>>> I am getting this error while starting the datanode in my slave system.
>>>
>>> I read the JIRA HDFS-2515, it says it is because hadoop is using wrong conf file.
>>>
>>> 13/12/24 15:57:14 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
>>> 13/12/24 15:57:14 INFO impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
>>> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
>>> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: DataNode metrics system started
>>> 13/12/24 15:57:15 INFO impl.MetricsSourceAdapter: MBean for source ugi registered.
>>> 13/12/24 15:57:15 WARN impl.MetricsSystemImpl: Source name ugi already exists!
>>> 13/12/24 15:57:15 ERROR datanode.DataNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
>>> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>>> at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
>>> at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
>>> at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:359)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
>>>
>>> But how do i check which conf file hadoop is using? or how do i set it?
>>>
>>> These are my configurations:
>>>
>>> core-site.xml
>>> ------------------
>>> <configuration>
>>> <property>
>>> <name>fs.defualt.name</name>
>>> <value>hdfs://master:9000</value>
>>> </property>
>>>
>>> <property>
>>> <name>hadoop.tmp.dir</name>
>>> <value>/home/vishnu/hadoop-tmp</value>
>>> </property>
>>> </configuration>
>>>
>>> hdfs-site.xml
>>> --------------------
>>> <configuration>
>>> <property>
>>> <name>dfs.replication</name>
>>> <value>2</value>
>>> </property>
>>> </configuration>
>>>
>>> mared-site.xml
>>> --------------------
>>> <configuration>
>>> <property>
>>> <name>mapred.job.tracker</name>
>>> <value>master:9001</value>
>>> </property>
>>> </configuration>
>>>
>>> any help,
>>
>>
>>
>> --
>> Thanks,
>> Kishore.
Re: DataNode not starting in slave machine
Posted by Azuryy <az...@gmail.com>.
Did you add master in the hosts?
Sent from my iPhone5s
> On 2013年12月25日, at 22:11, Vishnu Viswanath <vi...@gmail.com> wrote:
>
> Made that change . Still the same error.
>
> And why should fs.default.name set to file:/// ? I am not running in pseudo-distributed mode. I am having two systems one is master and the other is slave.
>
> Vishnu Viswanath
>
>> On 25-Dec-2013, at 19:35, kishore alajangi <al...@gmail.com> wrote:
>>
>> Replace hdfs:// to file:/// in fs.default.name property.
>>
>>
>>> On Wed, Dec 25, 2013 at 7:01 PM, Vishnu Viswanath <vi...@gmail.com> wrote:
>>> Hi,
>>>
>>> I am getting this error while starting the datanode in my slave system.
>>>
>>> I read the JIRA HDFS-2515, it says it is because hadoop is using wrong conf file.
>>>
>>> 13/12/24 15:57:14 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
>>> 13/12/24 15:57:14 INFO impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
>>> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
>>> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: DataNode metrics system started
>>> 13/12/24 15:57:15 INFO impl.MetricsSourceAdapter: MBean for source ugi registered.
>>> 13/12/24 15:57:15 WARN impl.MetricsSystemImpl: Source name ugi already exists!
>>> 13/12/24 15:57:15 ERROR datanode.DataNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
>>> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>>> at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
>>> at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
>>> at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:359)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
>>> at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
>>>
>>> But how do I check which conf file Hadoop is using? Or how do I set it?
>>>
>>> These are my configurations:
>>>
>>> core-site.xml
>>> ------------------
>>> <configuration>
>>> <property>
>>> <name>fs.defualt.name</name>
>>> <value>hdfs://master:9000</value>
>>> </property>
>>>
>>> <property>
>>> <name>hadoop.tmp.dir</name>
>>> <value>/home/vishnu/hadoop-tmp</value>
>>> </property>
>>> </configuration>
>>>
>>> hdfs-site.xml
>>> --------------------
>>> <configuration>
>>> <property>
>>> <name>dfs.replication</name>
>>> <value>2</value>
>>> </property>
>>> </configuration>
>>>
>>> mapred-site.xml
>>> --------------------
>>> <configuration>
>>> <property>
>>> <name>mapred.job.tracker</name>
>>> <value>master:9001</value>
>>> </property>
>>> </configuration>
>>>
>>> Any help is appreciated,
>>
>>
>>
>> --
>> Thanks,
>> Kishore.
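On the question above of which conf file Hadoop is using: a Hadoop 1.x daemon reads core-site.xml from the directory in HADOOP_CONF_DIR, falling back to the conf directory under HADOOP_HOME. The sketch below locates the directory a node will actually use; the /usr/lib/hadoop fallback path is an assumption for illustration, not something stated in the thread.

```shell
# Hadoop 1.x daemons pick up core-site.xml from HADOOP_CONF_DIR,
# falling back to $HADOOP_HOME/conf. The /usr/lib/hadoop default
# below is an assumed install path -- adjust for your layout.
conf_dir="${HADOOP_CONF_DIR:-${HADOOP_HOME:-/usr/lib/hadoop}/conf}"
echo "conf dir in use: $conf_dir"
# Confirm the file the daemon will actually read exists there:
ls "$conf_dir/core-site.xml" 2>/dev/null || echo "no core-site.xml in $conf_dir"
```

If the slave resolves a different (or empty) conf directory than the master, the DataNode falls back to the built-in default of file:/// for the filesystem URI, which is exactly the "Does not contain a valid host:port authority: file:///" failure in the stack trace.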
Re: DataNode not starting in slave machine
Posted by Azuryy <az...@gmail.com>.
Did you add master to the hosts file?
Sent from my iPhone5s
Re: DataNode not starting in slave machine
Posted by kishore alajangi <al...@gmail.com>.
Replace hdfs:// with file:/// in the fs.default.name property.
--
Thanks,
Kishore.
Re: DataNode not starting in slave machine
Posted by Chris Mawata <ch...@gmail.com>.
The spelling of 'default' is probably the issue: your core-site.xml has fs.defualt.name, which Hadoop ignores as an unknown property.
Chris
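With the spelling corrected, the core-site.xml from the original post would read as follows; the values are taken from the thread, and only the property name changes:

```xml
<configuration>
  <property>
    <!-- was misspelled fs.defualt.name, which Hadoop silently ignores,
         leaving the default file:/// that the stack trace complains about -->
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/vishnu/hadoop-tmp</value>
  </property>
</configuration>
```

The same file needs to be in place on both master and slave; after fixing it, restart the DataNode.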
Re: DataNode not starting in slave machine
Posted by Shekhar Sharma <sh...@gmail.com>.
It is running on the local file system (file:///), which means no valid fs.default.name was picked up from the configuration.
Regards,
Som Shekhar Sharma
+91-8197243810
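The spelling diagnosis above can be confirmed mechanically by grepping the config for the exact key. The snippet below embeds a sample file reproducing the property name from the post; on a real node, point conf at the actual core-site.xml in your Hadoop conf directory instead.

```shell
# Detect a misspelled filesystem key in core-site.xml. The embedded
# sample reproduces the fs.defualt.name typo quoted in the thread; on
# a real node, set conf to your actual core-site.xml path instead.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<configuration>
<property><name>fs.defualt.name</name><value>hdfs://master:9000</value></property>
</configuration>
EOF
if grep -q '<name>fs\.default\.name</name>' "$conf"; then
  echo "fs.default.name is present"
else
  echo "fs.default.name missing; fs.* keys actually defined:"
  grep -o '<name>fs\.[^<]*</name>' "$conf"
fi
rm -f "$conf"
```

For the sample above, the check reports fs.default.name as missing and lists the misspelled key, which is what the DataNode effectively sees at startup.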
Re: DataNode not starting in slave machine
Posted by Chris Mawata <ch...@gmail.com>.
Spelling of 'default' is probably the issue.
Chris
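To spell out Chris's point: the property name in the core-site.xml above is misspelled ("defualt"), so Hadoop falls back to the default file:/// filesystem, which produces exactly the "Does not contain a valid host:port authority: file:///" error. For reference, a corrected core-site.xml keeping the original host/port and tmp dir would be:

```xml
<configuration>
  <property>
    <!-- must be spelled fs.default.name (fs.defaultFS on Hadoop 2.x+) -->
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/vishnu/hadoop-tmp</value>
  </property>
</configuration>
```

The same file must be deployed to every slave, since each datanode reads its own copy.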