Posted to mapreduce-user@hadoop.apache.org by Dan Dong <do...@gmail.com> on 2014/12/12 23:13:27 UTC

Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Hi,
  I installed Hadoop 2.6.0 on my cluster with 2 nodes, and I got the following
error when I run:
$ hadoop dfsadmin -report
FileSystem file:/// is not a distributed file system

What does this mean? I have already set it in core-site.xml:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master-node:9000</value>
</property>

and in hdfs-site.xml:
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
  <final>true</final>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>
  <final>true</final>
</property>

The Java processes running on the master are:
10479 SecondaryNameNode
10281 NameNode
10628 ResourceManager

and on the slave:
22870 DataNode
22991 NodeManager

Any hints? Thanks!

Cheers,
Dan
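
The message means the dfsadmin client resolved fs.defaultFS to Hadoop's
built-in default, file:///, instead of the hdfs:// URI configured above,
i.e. it almost certainly read a different (or empty) core-site.xml than the
one shown. As a quick illustration, here is a small sketch (the default_fs
helper is hypothetical, not part of Hadoop) that reports which value a given
core-site.xml actually provides:

```python
import xml.etree.ElementTree as ET

def default_fs(core_site_xml):
    """Return the fs.defaultFS value from a core-site.xml document,
    or Hadoop's built-in default 'file:///' if it is not set."""
    root = ET.fromstring(core_site_xml)
    for prop in root.iter("property"):
        if prop.findtext("name") == "fs.defaultFS":
            return prop.findtext("value", default="file:///")
    return "file:///"

sample = """
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master-node:9000</value>
  </property>
</configuration>
"""

fs = default_fs(sample)
print(fs)                        # hdfs://master-node:9000
print(fs.startswith("hdfs://"))  # True
```

Running this against the core-site.xml that the client actually loads (the
one under $HADOOP_CONF_DIR) should print an hdfs:// URI; seeing file:///
there would explain the dfsadmin error.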

Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Posted by Dan Dong <do...@gmail.com>.
I installed Hadoop by untarring hadoop-2.6.0.tar.gz; I will check
further. Thanks.

2014-12-16 14:39 GMT-06:00 Jiayu Ji <ji...@gmail.com>:
>
> The cluster is running Hadoop 2.x, while your client side is under Hadoop
> 1.x.
>
> I would guess you installed 1.x on your client machine before, and
> your env variable is still pointing to it.
>
> On Tue, Dec 16, 2014 at 9:31 AM, Dan Dong <do...@gmail.com> wrote:
>>
>> Thanks, the error now changes to the following:
>> $ hadoop dfsadmin -report
>> report: Server IPC version 9 cannot communicate with client version 4
>>
>> It's not clear which server and which client are conflicting. All Hadoop
>> components come from the hadoop-2.6.0.tar.gz package, so what's going wrong?
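
For what it's worth, the numbers in that message identify the two sides:
Hadoop's RPC protocol version was bumped across release lines, so they can
be decoded roughly as follows (the mapping below is an approximation
assembled from release history, not an official table):

```python
# Rough mapping of Hadoop RPC ("IPC") protocol numbers to release lines.
# The boundaries are an approximation, not an official Hadoop table.
IPC_VERSIONS = {
    4: "Hadoop 0.20.x / 1.x",
    7: "Hadoop 0.23.x / 2.0.x-alpha",
    9: "Hadoop 2.2.0 and later",
}

def explain(server, client):
    """Describe both sides of a 'Server IPC version X cannot
    communicate with client version Y' error."""
    s = IPC_VERSIONS.get(server, "unknown release")
    c = IPC_VERSIONS.get(client, "unknown release")
    return (f"server speaks IPC {server} ({s}); "
            f"client speaks IPC {client} ({c})")

print(explain(9, 4))
# server speaks IPC 9 (Hadoop 2.2.0 and later); client speaks IPC 4 (Hadoop 0.20.x / 1.x)
```

So here the 2.6.0 NameNode is being contacted by a hadoop command from an
old 1.x install, which matches the diagnosis above.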
>>
>> Cheers,
>> Dan
>>
>>
>> 2014-12-15 22:30 GMT-06:00 Susheel Kumar Gadalay <sk...@gmail.com>:
>>
>>> Give the complete hostname with the domain name, not just master-node.
>>>
>>> <property>
>>>   <name>fs.defaultFS</name>
>>>   <value>hdfs://master-node.domain.name:9000</value>
>>> </property>
>>>
>>> Otherwise, give the IP address.
>>>
>>>
>>> On 12/16/14, Dan Dong <do...@gmail.com> wrote:
>>> > Hi, Johny,
>>> >   Yes, they have been turned off from the beginning. I guess the problem
>>> > is still in the conf files; it would be helpful if some example *.xml
>>> > could be shown.
>>> >
>>> >   Cheers,
>>> >   Dan
>>> >
>>> >
>>> > 2014-12-15 12:24 GMT-06:00 johny casanova <pc...@outlook.com>:
>>> >>
>>> >> do you have selinux and iptables turned off?
>>> >>
>>> >>  ------------------------------
>>> >> Date: Mon, 15 Dec 2014 09:54:41 -0600
>>> >> Subject: Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed
>>> file
>>> >> system"
>>> >> From: dongdan39@gmail.com
>>> >> To: user@hadoop.apache.org
>>> >>
>>> >>
>>> >>   Found in the log file:
>>> >> 2014-12-12 15:51:10,434 ERROR
>>> >> org.apache.hadoop.hdfs.server.namenode.NameNode:
>>> >> java.lang.IllegalArgumentException: Does not contain a valid host:port
>>> >> authority: file:///
>>> >>         at
>>> >> org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>>> >>         at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
>>> >>         at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
>>> >>         at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
>>> >>         at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
>>> >>         at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
>>> >>         at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
>>> >>
>>> >> But I have set it in core-site.xml already:
>>> >> <property>
>>> >>   <name>fs.defaultFS</name>
>>> >>   <value>hdfs://master-node:9000</value>
>>> >> </property>
>>> >>
>>> >> Other settings:
>>> >> $ cat mapred-site.xml
>>> >> <configuration>
>>> >> <property>
>>> >> <name>mapred.job.tracker</name>
>>> >> <value>master-node:9002</value>
>>> >> </property>
>>> >> <property>
>>> >> <name>mapreduce.jobhistory.address</name>
>>> >> <value>master-node:10020</value>
>>> >> </property>
>>> >> <property>
>>> >> <name>mapreduce.jobhistory.webapp.address</name>
>>> >> <value>master-node:19888</value>
>>> >> </property>
>>> >> </configuration>
>>> >>
>>> >> $ cat yarn-site.xml
>>> >> <configuration>
>>> >>
>>> >> <!-- Site specific YARN configuration properties -->
>>> >> <property>
>>> >>    <name>mapreduce.framework.name</name>
>>> >>    <value>yarn</value>
>>> >> </property>
>>> >> <property>
>>> >>    <name>yarn.resourcemanager.address</name>
>>> >>    <value>master-node:18040</value>
>>> >> </property>
>>> >> <property>
>>> >>    <name>yarn.resourcemanager.scheduler.address</name>
>>> >>    <value>master-node:18030</value>
>>> >> </property>
>>> >> <property>
>>> >>    <name>yarn.resourcemanager.webapp.address</name>
>>> >>    <value>master-node:18088</value>
>>> >> </property>
>>> >> <property>
>>> >>    <name>yarn.resourcemanager.resource-tracker.address</name>
>>> >>    <value>master-node:18025</value>
>>> >> </property>
>>> >> <property>
>>> >>    <name>yarn.resourcemanager.admin.address</name>
>>> >>    <value>master-node:18141</value>
>>> >> </property>
>>> >> <property>
>>> >>    <name>yarn.nodemanager.aux-services</name>
>>> >>    <value>mapreduce_shuffle</value>
>>> >> </property>
>>> >> <property>
>>> >>    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>>> >>    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>>> >> </property>
>>> >> </configuration>
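
One thing worth noting about the files above: mapred.job.tracker is a
Hadoop 1.x (MRv1) property that YARN ignores, and mapreduce.framework.name
is a MapReduce setting conventionally placed in mapred-site.xml rather than
yarn-site.xml, where some clients may not pick it up. A minimal Hadoop 2.x
mapred-site.xml along those lines would be:

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```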
>>> >>
>>> >> Cheers,
>>> >> Dan
>>> >>
>>> >>
>>> >> 2014-12-15 9:17 GMT-06:00 Dan Dong <do...@gmail.com>:
>>> >>
>>> >> Thank you all, but it is still the same after changing file:/ to
>>> >> file://, and HADOOP_CONF_DIR already points to the correct location:
>>> >> $ echo $HADOOP_CONF_DIR
>>> >> /home/dong/import/hadoop-2.6.0/etc/hadoop
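
A correct HADOOP_CONF_DIR still does not rule out a stale hadoop binary
from an older install shadowing the 2.6.0 one earlier on the PATH. A small
illustrative helper (hadoop_installs is a made-up name, not a Hadoop tool)
that lists every directory on a PATH-style string containing an executable
called hadoop:

```python
import os

def hadoop_installs(path):
    """Return every directory on a PATH-style string that contains an
    executable named 'hadoop'. More than one hit usually means an old
    install can shadow the new one."""
    hits = []
    for d in path.split(os.pathsep):
        exe = os.path.join(d, "hadoop")
        if os.path.isfile(exe) and os.access(exe, os.X_OK):
            hits.append(d)
    return hits

# The first entry is the one the shell would actually run.
print(hadoop_installs(os.environ.get("PATH", "")))
```

If this prints more than one directory, `hadoop version` will tell you
which install wins.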
>>> >>
>>> >>
>>> >> 2014-12-15 8:57 GMT-06:00 johny casanova <pc...@outlook.com>:
>>> >>
>>> >>  Don't you have to use file:// instead of just one /?
>>> >>
>>> >>  ------------------------------
>>> >> From: brahmareddy.battula@huawei.com
>>> >> To: user@hadoop.apache.org
>>> >> Subject: RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed
>>> file
>>> >> system"
>>> >> Date: Sat, 13 Dec 2014 05:48:18 +0000
>>> >>
>>> >>
>>> >> Hi Dong,
>>> >>
>>> >> HADOOP_CONF_DIR might be referring to the default location. You can export
>>> >> HADOOP_CONF_DIR to point to where the following configuration files are
>>> >> present.
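
For reference, Hadoop 2.x launcher scripts resolve the configuration
directory in roughly this order: an explicitly exported HADOOP_CONF_DIR
wins, otherwise $HADOOP_HOME/etc/hadoop is assumed. A simplified sketch of
that resolution (not the actual hadoop-config.sh logic):

```python
import os

def effective_conf_dir(env):
    """Approximate how Hadoop 2.x scripts pick the config directory:
    an explicit HADOOP_CONF_DIR wins; otherwise fall back to
    $HADOOP_HOME/etc/hadoop. Simplified sketch only."""
    if env.get("HADOOP_CONF_DIR"):
        return env["HADOOP_CONF_DIR"]
    home = env.get("HADOOP_HOME", "")
    return os.path.join(home, "etc", "hadoop") if home else None

print(effective_conf_dir({"HADOOP_HOME": "/home/dong/hadoop-2.6.0"}))
# /home/dong/hadoop-2.6.0/etc/hadoop
```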
>>> >>
>>> >> Thanks & Regards
>>> >> Brahma Reddy Battula
>>> >>
>>> >>
>>> >>  ------------------------------
>>> >> *From:* Dan Dong [dongdan39@gmail.com]
>>> >> *Sent:* Saturday, December 13, 2014 3:43 AM
>>> >> *To:* user@hadoop.apache.org
>>> >> *Subject:* Hadoop 2.6.0: "FileSystem file:/// is not a distributed
>>> file
>>> >> system"
>>> >>
>>> >>     Hi,
>>> >>   I installed Hadoop 2.6.0 on my cluster with 2 nodes, and I got the
>>> >> following error when I run:
>>> >> $ hadoop dfsadmin -report
>>> >> FileSystem file:/// is not a distributed file system
>>> >>
>>> >> What does this mean? I have already set it in core-site.xml:
>>> >> <property>
>>> >>   <name>fs.defaultFS</name>
>>> >>   <value>hdfs://master-node:9000</value>
>>> >> </property>
>>> >>
>>> >> and in hdfs-site.xml:
>>> >> <property>
>>> >>   <name>dfs.namenode.name.dir</name>
>>> >>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
>>> >>   <final>true</final>
>>> >> </property>
>>> >> <property>
>>> >>   <name>dfs.datanode.data.dir</name>
>>> >>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>
>>> >>   <final>true</final>
>>> >> </property>
>>> >>
>>> >> The Java processes running on the master are:
>>> >> 10479 SecondaryNameNode
>>> >> 10281 NameNode
>>> >> 10628 ResourceManager
>>> >>
>>> >> and on the slave:
>>> >> 22870 DataNode
>>> >> 22991 NodeManager
>>> >>
>>> >> Any hints? Thanks!
>>> >>
>>> >> Cheers,
>>> >> Dan
>>> >>
>>> >>
>>> >>
>>> >
>>>
>>
>
> --
> Regards,
>
> Jiayu (James) Ji,
>
> Cell: (312)823-7393
>
>

Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Posted by Jiayu Ji <ji...@gmail.com>.
The cluster is running Hadoop 2.x, while your client side is under Hadoop
1.x.

I would guess you installed 1.x on your client machine before, and your
env variable is still pointing to it.

On Tue, Dec 16, 2014 at 9:31 AM, Dan Dong <do...@gmail.com> wrote:
>
> Thanks, the error now changes to the following:
> $ hadoop dfsadmin -report
> report: Server IPC version 9 cannot communicate with client version 4
>
> It's not clear which server and which client are conflicting. All Hadoop
> components come from the hadoop-2.6.0.tar.gz package, so what's going wrong?
>
> Cheers,
> Dan
>
>
> 2014-12-15 22:30 GMT-06:00 Susheel Kumar Gadalay <sk...@gmail.com>:
>
>> Give the complete hostname with the domain name, not just master-node.
>>
>> <property>
>>   <name>fs.defaultFS</name>
>>   <value>hdfs://master-node.domain.name:9000</value>
>> </property>
>>
>> Otherwise, give the IP address.
>>
>>
>> On 12/16/14, Dan Dong <do...@gmail.com> wrote:
>> > Hi, Johny,
>> >   Yes, they have been turned off from the beginning. I guess the problem
>> > is still in the conf files; it would be helpful if some example *.xml
>> > could be shown.
>> >
>> >   Cheers,
>> >   Dan
>> >
>> >
>> > 2014-12-15 12:24 GMT-06:00 johny casanova <pc...@outlook.com>:
>> >>
>> >> do you have selinux and iptables turned off?
>> >>
>> >>  ------------------------------
>> >> Date: Mon, 15 Dec 2014 09:54:41 -0600
>> >> Subject: Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed
>> file
>> >> system"
>> >> From: dongdan39@gmail.com
>> >> To: user@hadoop.apache.org
>> >>
>> >>
>> >>   Found in the log file:
>> >> 2014-12-12 15:51:10,434 ERROR
>> >> org.apache.hadoop.hdfs.server.namenode.NameNode:
>> >> java.lang.IllegalArgumentException: Does not contain a valid host:port
>> >> authority: file:///
>> >>         at
>> >> org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>> >>         at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
>> >>         at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
>> >>         at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
>> >>         at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
>> >>         at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
>> >>         at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
>> >>
>> >> But I have set it in core-site.xml already:
>> >> <property>
>> >>   <name>fs.defaultFS</name>
>> >>   <value>hdfs://master-node:9000</value>
>> >> </property>
>> >>
>> >> Other settings:
>> >> $ cat mapred-site.xml
>> >> <configuration>
>> >> <property>
>> >> <name>mapred.job.tracker</name>
>> >> <value>master-node:9002</value>
>> >> </property>
>> >> <property>
>> >> <name>mapreduce.jobhistory.address</name>
>> >> <value>master-node:10020</value>
>> >> </property>
>> >> <property>
>> >> <name>mapreduce.jobhistory.webapp.address</name>
>> >> <value>master-node:19888</value>
>> >> </property>
>> >> </configuration>
>> >>
>> >> $ cat yarn-site.xml
>> >> <configuration>
>> >>
>> >> <!-- Site specific YARN configuration properties -->
>> >> <property>
>> >>    <name>mapreduce.framework.name</name>
>> >>    <value>yarn</value>
>> >> </property>
>> >> <property>
>> >>    <name>yarn.resourcemanager.address</name>
>> >>    <value>master-node:18040</value>
>> >> </property>
>> >> <property>
>> >>    <name>yarn.resourcemanager.scheduler.address</name>
>> >>    <value>master-node:18030</value>
>> >> </property>
>> >> <property>
>> >>    <name>yarn.resourcemanager.webapp.address</name>
>> >>    <value>master-node:18088</value>
>> >> </property>
>> >> <property>
>> >>    <name>yarn.resourcemanager.resource-tracker.address</name>
>> >>    <value>master-node:18025</value>
>> >> </property>
>> >> <property>
>> >>    <name>yarn.resourcemanager.admin.address</name>
>> >>    <value>master-node:18141</value>
>> >> </property>
>> >> <property>
>> >>    <name>yarn.nodemanager.aux-services</name>
>> >>    <value>mapreduce_shuffle</value>
>> >> </property>
>> >> <property>
>> >>    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>> >>    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>> >> </property>
>> >> </configuration>
>> >>
>> >> Cheers,
>> >> Dan
>> >>
>> >>
>> >> 2014-12-15 9:17 GMT-06:00 Dan Dong <do...@gmail.com>:
>> >>
>> >> Thank you all, but still the same after change file:/ to file://, and
>> >> HADOOP_CONF_DIR points to the correct position already:
>> >> $ echo $HADOOP_CONF_DIR
>> >> /home/dong/import/hadoop-2.6.0/etc/hadoop
>> >>
>> >>
>> >> 2014-12-15 8:57 GMT-06:00 johny casanova <pc...@outlook.com>:
>> >>
>> >>  Don't you have to use file:// instead of just one /?
>> >>
>> >>  ------------------------------
>> >> From: brahmareddy.battula@huawei.com
>> >> To: user@hadoop.apache.org
>> >> Subject: RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed
>> file
>> >> system"
>> >> Date: Sat, 13 Dec 2014 05:48:18 +0000
>> >>
>> >>
>> >> Hi Dong,
>> >>
>> >> HADOOP_CONF_DIR might be referring to default..you can export
>> >> HADOOP_CONF_DIR where following configuration files are present..
>> >>
>> >> Thanks & Regards
>> >> Brahma Reddy Battula
>> >>
>> >>
>> >>  ------------------------------
>> >> *From:* Dan Dong [dongdan39@gmail.com]
>> >> *Sent:* Saturday, December 13, 2014 3:43 AM
>> >> *To:* user@hadoop.apache.org
>> >> *Subject:* Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
>> >> system"
>> >>
>> >>     Hi,
>> >>   I installed Hadoop2.6.0 on my cluster with 2 nodes, I got the
>> following
>> >> error when I run:
>> >> $hadoop dfsadmin -report
>> >> FileSystem file:/// is not a distributed file system
>> >>
>> >> What this mean? I have set it in core-site.xml already:
>> >> <property>
>> >>   <name>fs.defaultFS</name>
>> >>   <value>hdfs://master-node:9000</value>
>> >> </property>
>> >>
>> >> and in hdfs-site.xml:
>> >> <property>
>> >>   <name>dfs.namenode.name.dir</name>
>> >>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
>> >>   <final>true</final>
>> >> </property>
>> >> <property>
>> >>   <name>dfs.dataname.data.dir</name>
>> >>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>
>> >>   <final>true</final>
>> >> </property>
>> >>
>> >> The java process are running on master as:
>> >> 10479 SecondaryNameNode
>> >> 10281 NameNode
>> >> 10628 ResourceManager
>> >>
>> >> and on slave:
>> >> 22870 DataNode
>> >> 22991 NodeManager
>> >>
>> >> Any hints? Thanks!
>> >>
>> >> Cheers,
>> >> Dan
>> >>
>> >>
>> >>
>> >
>>
>

-- 
Regards,

Jiayu (James) Ji,

Cell: (312)823-7393

Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Posted by Dan Dong <do...@gmail.com>.
Thanks, the error now changes to the following:
$ hadoop dfsadmin -report
report: Server IPC version 9 cannot communicate with client version 4

It's not clear which server and which client are in conflict. All Hadoop
components come from the same hadoop-2.6.0.tar.gz package, so what's going wrong?
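
For what it's worth, that IPC pairing (server version 9, client version 4) corresponds to a 2.x server being contacted by a 1.x client, so some 1.x jar is likely still on the client's classpath. A rough way to classify what the client loads — the jar path in the example is hypothetical, and on a real client you would feed in the output of `hadoop classpath`:

```shell
# classify_client: report whether a client classpath string carries
# Hadoop 1.x jars (hadoop-core-1.*) or 2.x jars (hadoop-common-2.*).
# First matching pattern wins.
classify_client() {
  case "$1" in
    *hadoop-core-1.*)   echo "Hadoop 1.x client jars" ;;
    *hadoop-common-2.*) echo "Hadoop 2.x client jars" ;;
    *)                  echo "no Hadoop jars found" ;;
  esac
}

# On the real client: classify_client "$(hadoop classpath)"
# Hypothetical stale-client example:
classify_client "/opt/old-hadoop/hadoop-core-1.0.4.jar:/opt/old-hadoop/lib"
# → Hadoop 1.x client jars
```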

Cheers,
Dan


2014-12-15 22:30 GMT-06:00 Susheel Kumar Gadalay <sk...@gmail.com>:
>
> Give complete hostname with domain name not just master-node.
>
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://master-node.domain.name:9000</value>
> </property>
>
> Else give IP address also
>
>
> On 12/16/14, Dan Dong <do...@gmail.com> wrote:
> > Hi, Johny,
> >   Yes, they have been turned off from the beginning. Guess the problem is
> > still in the conf files, it would be helpful if some example *.xml could
> be
> > shown.
> >
> >   Cheers,
> >   Dan
> >
> >
> > 2014-12-15 12:24 GMT-06:00 johny casanova <pc...@outlook.com>:
> >>
> >> do you have selinux and iptables turned off?
> >>
> >>  ------------------------------
> >> Date: Mon, 15 Dec 2014 09:54:41 -0600
> >> Subject: Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed
> file
> >> system"
> >> From: dongdan39@gmail.com
> >> To: user@hadoop.apache.org
> >>
> >>
> >>   Found in the log file:
> >> 2014-12-12 15:51:10,434 ERROR
> >> org.apache.hadoop.hdfs.server.namenode.NameNode:
> >> java.lang.IllegalArgumentException: Does not contain a valid host:port
> >> authority: file:///
> >>         at
> >> org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
> >>         at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
> >>         at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
> >>         at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
> >>         at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
> >>         at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
> >>         at
> >> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
> >>
> >> But I have set it in core-site.xml already:
> >> <property>
> >>   <name>fs.defaultFS</name>
> >>   <value>hdfs://master-node:9000</value>
> >> </property>
> >>
> >> Other settings:
> >> $ cat mapred-site.xml
> >> <configuration>
> >> <property>
> >> <name>mapred.job.tracker</name>
> >> <value>master-node:9002</value>
> >> </property>
> >> <property>
> >> <name>mapreduce.jobhistory.address</name>
> >> <value>master-node:10020</value>
> >> </property>
> >> <property>
> >> <name>mapreduce.jobhistory.webapp.address</name>
> >> <value>master-node:19888</value>
> >> </property>
> >> </configuration>
> >>
> >> $ cat yarn-site.xml
> >> <configuration>
> >>
> >> <!-- Site specific YARN configuration properties -->
> >> <property>
> >>    <name>mapreduce.framework.name</name>
> >>    <value>yarn</value>
> >> </property>
> >> <property>
> >>    <name>yarn.resourcemanager.address</name>
> >>    <value>master-node:18040</value>
> >> </property>
> >> <property>
> >>    <name>yarn.resourcemanager.scheduler.address</name>
> >>    <value>master-node:18030</value>
> >> </property>
> >> <property>
> >>    <name>yarn.resourcemanager.webapp.address</name>
> >>    <value>master-node:18088</value>
> >> </property>
> >> <property>
> >>    <name>yarn.resourcemanager.resource-tracker.address</name>
> >>    <value>master-node:18025</value>
> >> </property>
> >> <property>
> >>    <name>yarn.resourcemanager.admin.address</name>
> >>    <value>master-node:18141</value>
> >> </property>
> >> <property>
> >>    <name>yarn.nodemanager.aux-services</name>
> >>    <value>mapreduce_shuffle</value>
> >> </property>
> >> <property>
> >>    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
> >>    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
> >> </property>
> >> </configuration>
> >>
> >> Cheers,
> >> Dan
> >>
> >>
> >> 2014-12-15 9:17 GMT-06:00 Dan Dong <do...@gmail.com>:
> >>
> >> Thank you all, but still the same after change file:/ to file://, and
> >> HADOOP_CONF_DIR points to the correct position already:
> >> $ echo $HADOOP_CONF_DIR
> >> /home/dong/import/hadoop-2.6.0/etc/hadoop
> >>
> >>
> >> 2014-12-15 8:57 GMT-06:00 johny casanova <pc...@outlook.com>:
> >>
> >>  Don't you have to use file:// instead of just one /?
> >>
> >>  ------------------------------
> >> From: brahmareddy.battula@huawei.com
> >> To: user@hadoop.apache.org
> >> Subject: RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed
> file
> >> system"
> >> Date: Sat, 13 Dec 2014 05:48:18 +0000
> >>
> >>
> >> Hi Dong,
> >>
> >> HADOOP_CONF_DIR might be referring to default..you can export
> >> HADOOP_CONF_DIR where following configuration files are present..
> >>
> >> Thanks & Regards
> >> Brahma Reddy Battula
> >>
> >>
> >>  ------------------------------
> >> *From:* Dan Dong [dongdan39@gmail.com]
> >> *Sent:* Saturday, December 13, 2014 3:43 AM
> >> *To:* user@hadoop.apache.org
> >> *Subject:* Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
> >> system"
> >>
> >>     Hi,
> >>   I installed Hadoop2.6.0 on my cluster with 2 nodes, I got the
> following
> >> error when I run:
> >> $hadoop dfsadmin -report
> >> FileSystem file:/// is not a distributed file system
> >>
> >> What this mean? I have set it in core-site.xml already:
> >> <property>
> >>   <name>fs.defaultFS</name>
> >>   <value>hdfs://master-node:9000</value>
> >> </property>
> >>
> >> and in hdfs-site.xml:
> >> <property>
> >>   <name>dfs.namenode.name.dir</name>
> >>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
> >>   <final>true</final>
> >> </property>
> >> <property>
> >>   <name>dfs.dataname.data.dir</name>
> >>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>
> >>   <final>true</final>
> >> </property>
> >>
> >> The java process are running on master as:
> >> 10479 SecondaryNameNode
> >> 10281 NameNode
> >> 10628 ResourceManager
> >>
> >> and on slave:
> >> 22870 DataNode
> >> 22991 NodeManager
> >>
> >> Any hints? Thanks!
> >>
> >> Cheers,
> >> Dan
> >>
> >>
> >>
> >
>

> >> </property>
> >> <property>
> >> <name>mapreduce.jobhistory.address</name>
> >> <value>master-node:10020</value>
> >> </property>
> >> <property>
> >> <name>mapreduce.jobhistory.webapp.address</name>
> >> <value>master-node:19888</value>
> >> </property>
> >> </configuration>
> >>
> >> $ cat yarn-site.xml
> >> <configuration>
> >>
> >> <!-- Site specific YARN configuration properties -->
> >> <property>
> >>    <name>mapreduce.framework.name</name>
> >>    <value>yarn</value>
> >> </property>
> >> <property>
> >>    <name>yarn.resourcemanager.address</name>
> >>    <value>master-node:18040</value>
> >> </property>
> >> <property>
> >>    <name>yarn.resourcemanager.scheduler.address</name>
> >>    <value>master-node:18030</value>
> >> </property>
> >> <property>
> >>    <name>yarn.resourcemanager.webapp.address</name>
> >>    <value>master-node:18088</value>
> >> </property>
> >> <property>
> >>    <name>yarn.resourcemanager.resource-tracker.address</name>
> >>    <value>master-node:18025</value>
> >> </property>
> >> <property>
> >>    <name>yarn.resourcemanager.admin.address</name>
> >>    <value>master-node:18141</value>
> >> </property>
> >> <property>
> >>    <name>yarn.nodemanager.aux-services</name>
> >>    <value>mapreduce_shuffle</value>
> >> </property>
> >> <property>
> >>    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
> >>    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
> >> </property>
> >> </configuration>
> >>
> >> Cheers,
> >> Dan
> >>
> >>
> >> 2014-12-15 9:17 GMT-06:00 Dan Dong <do...@gmail.com>:
> >>
> >> Thank you all, but still the same after change file:/ to file://, and
> >> HADOOP_CONF_DIR points to the correct position already:
> >> $ echo $HADOOP_CONF_DIR
> >> /home/dong/import/hadoop-2.6.0/etc/hadoop
> >>
> >>
> >> 2014-12-15 8:57 GMT-06:00 johny casanova <pc...@outlook.com>:
> >>
> >>  Don't you have to use file:// instead of just one /?
> >>
> >>  ------------------------------
> >> From: brahmareddy.battula@huawei.com
> >> To: user@hadoop.apache.org
> >> Subject: RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed
> file
> >> system"
> >> Date: Sat, 13 Dec 2014 05:48:18 +0000
> >>
> >>
> >> Hi Dong,
> >>
> >> HADOOP_CONF_DIR might be referring to default..you can export
> >> HADOOP_CONF_DIR where following configuration files are present..
> >>
> >> Thanks & Regards
> >> Brahma Reddy Battula
> >>
> >>
> >>  ------------------------------
> >> *From:* Dan Dong [dongdan39@gmail.com]
> >> *Sent:* Saturday, December 13, 2014 3:43 AM
> >> *To:* user@hadoop.apache.org
> >> *Subject:* Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
> >> system"
> >>
> >>     Hi,
> >>   I installed Hadoop2.6.0 on my cluster with 2 nodes, I got the
> following
> >> error when I run:
> >> $hadoop dfsadmin -report
> >> FileSystem file:/// is not a distributed file system
> >>
> >> What this mean? I have set it in core-site.xml already:
> >> <property>
> >>   <name>fs.defaultFS</name>
> >>   <value>hdfs://master-node:9000</value>
> >> </property>
> >>
> >> and in hdfs-site.xml:
> >> <property>
> >>   <name>dfs.namenode.name.dir</name>
> >>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
> >>   <final>true</final>
> >> </property>
> >> <property>
> >>   <name>dfs.dataname.data.dir</name>
> >>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>
> >>   <final>true</final>
> >> </property>
> >>
> >> The java process are running on master as:
> >> 10479 SecondaryNameNode
> >> 10281 NameNode
> >> 10628 ResourceManager
> >>
> >> and on slave:
> >> 22870 DataNode
> >> 22991 NodeManager
> >>
> >> Any hints? Thanks!
> >>
> >> Cheers,
> >> Dan
> >>
> >>
> >>
> >
>

Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Posted by Susheel Kumar Gadalay <sk...@gmail.com>.
Give the complete hostname, including the domain name, not just master-node.

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master-node.domain.name:9000</value>
</property>

Otherwise, give the IP address instead.
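A quick way to check this advice is to confirm that the name used in fs.defaultFS resolves the same way on every node. The sketch below uses localhost so it is self-contained; substitute the cluster's actual master hostname (master-node is just the name from this thread).

```shell
# Check that a hostname resolves, and see what it resolves to. A name
# that resolves only on some nodes (or not at all) is a common cause of
# NameNode/DataNode connection problems.
name=localhost            # replace with the real master hostname
entry=$(getent hosts "$name" | head -n 1)
if [ -n "$entry" ]; then
    echo "$name resolves to: $entry"
else
    echo "$name does not resolve; use the FQDN or IP in fs.defaultFS"
fi
# 'hostname -f' run on the master prints the FQDN to put in fs.defaultFS.
```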


On 12/16/14, Dan Dong <do...@gmail.com> wrote:
> Hi, Johny,
>   Yes, they have been turned off from the beginning. Guess the problem is
> still in the conf files, it would be helpful if some example *.xml could be
> shown.
>
>   Cheers,
>   Dan
>
>
> 2014-12-15 12:24 GMT-06:00 johny casanova <pc...@outlook.com>:
>>
>> do you have selinux and iptables turned off?
>>
>>  ------------------------------
>> Date: Mon, 15 Dec 2014 09:54:41 -0600
>> Subject: Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
>> system"
>> From: dongdan39@gmail.com
>> To: user@hadoop.apache.org
>>
>>
>>   Found in the log file:
>> 2014-12-12 15:51:10,434 ERROR
>> org.apache.hadoop.hdfs.server.namenode.NameNode:
>> java.lang.IllegalArgumentException: Does not contain a valid host:port
>> authority: file:///
>>         at
>> org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
>>
>> But I have set it in core-site.xml already:
>> <property>
>>   <name>fs.defaultFS</name>
>>   <value>hdfs://master-node:9000</value>
>> </property>
>>
>> Other settings:
>> $ cat mapred-site.xml
>> <configuration>
>> <property>
>> <name>mapred.job.tracker</name>
>> <value>master-node:9002</value>
>> </property>
>> <property>
>> <name>mapreduce.jobhistory.address</name>
>> <value>master-node:10020</value>
>> </property>
>> <property>
>> <name>mapreduce.jobhistory.webapp.address</name>
>> <value>master-node:19888</value>
>> </property>
>> </configuration>
>>
>> $ cat yarn-site.xml
>> <configuration>
>>
>> <!-- Site specific YARN configuration properties -->
>> <property>
>>    <name>mapreduce.framework.name</name>
>>    <value>yarn</value>
>> </property>
>> <property>
>>    <name>yarn.resourcemanager.address</name>
>>    <value>master-node:18040</value>
>> </property>
>> <property>
>>    <name>yarn.resourcemanager.scheduler.address</name>
>>    <value>master-node:18030</value>
>> </property>
>> <property>
>>    <name>yarn.resourcemanager.webapp.address</name>
>>    <value>master-node:18088</value>
>> </property>
>> <property>
>>    <name>yarn.resourcemanager.resource-tracker.address</name>
>>    <value>master-node:18025</value>
>> </property>
>> <property>
>>    <name>yarn.resourcemanager.admin.address</name>
>>    <value>master-node:18141</value>
>> </property>
>> <property>
>>    <name>yarn.nodemanager.aux-services</name>
>>    <value>mapreduce_shuffle</value>
>> </property>
>> <property>
>>    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>>    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>> </property>
>> </configuration>
>>
>> Cheers,
>> Dan
>>
>>
>> 2014-12-15 9:17 GMT-06:00 Dan Dong <do...@gmail.com>:
>>
>> Thank you all, but still the same after change file:/ to file://, and
>> HADOOP_CONF_DIR points to the correct position already:
>> $ echo $HADOOP_CONF_DIR
>> /home/dong/import/hadoop-2.6.0/etc/hadoop
>>
>>
>> 2014-12-15 8:57 GMT-06:00 johny casanova <pc...@outlook.com>:
>>
>>  Don't you have to use file:// instead of just one /?
>>
>>  ------------------------------
>> From: brahmareddy.battula@huawei.com
>> To: user@hadoop.apache.org
>> Subject: RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
>> system"
>> Date: Sat, 13 Dec 2014 05:48:18 +0000
>>
>>
>> Hi Dong,
>>
>> HADOOP_CONF_DIR might be referring to default..you can export
>> HADOOP_CONF_DIR where following configuration files are present..
>>
>> Thanks & Regards
>> Brahma Reddy Battula
>>
>>
>>  ------------------------------
>> *From:* Dan Dong [dongdan39@gmail.com]
>> *Sent:* Saturday, December 13, 2014 3:43 AM
>> *To:* user@hadoop.apache.org
>> *Subject:* Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
>> system"
>>
>>     Hi,
>>   I installed Hadoop2.6.0 on my cluster with 2 nodes, I got the following
>> error when I run:
>> $hadoop dfsadmin -report
>> FileSystem file:/// is not a distributed file system
>>
>> What this mean? I have set it in core-site.xml already:
>> <property>
>>   <name>fs.defaultFS</name>
>>   <value>hdfs://master-node:9000</value>
>> </property>
>>
>> and in hdfs-site.xml:
>> <property>
>>   <name>dfs.namenode.name.dir</name>
>>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
>>   <final>true</final>
>> </property>
>> <property>
>>   <name>dfs.dataname.data.dir</name>
>>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>
>>   <final>true</final>
>> </property>
>>
>> The java process are running on master as:
>> 10479 SecondaryNameNode
>> 10281 NameNode
>> 10628 ResourceManager
>>
>> and on slave:
>> 22870 DataNode
>> 22991 NodeManager
>>
>> Any hints? Thanks!
>>
>> Cheers,
>> Dan
>>
>>
>>
>

>> $hadoop dfsadmin -report
>> FileSystem file:/// is not a distributed file system
>>
>> What this mean? I have set it in core-site.xml already:
>> <property>
>>   <name>fs.defaultFS</name>
>>   <value>hdfs://master-node:9000</value>
>> </property>
>>
>> and in hdfs-site.xml:
>> <property>
>>   <name>dfs.namenode.name.dir</name>
>>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
>>   <final>true</final>
>> </property>
>> <property>
>>   <name>dfs.dataname.data.dir</name>
>>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>
>>   <final>true</final>
>> </property>
>>
>> The java process are running on master as:
>> 10479 SecondaryNameNode
>> 10281 NameNode
>> 10628 ResourceManager
>>
>> and on slave:
>> 22870 DataNode
>> 22991 NodeManager
>>
>> Any hints? Thanks!
>>
>> Cheers,
>> Dan
>>
>>
>>
>
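
The NameNode log quoted above ("Does not contain a valid host:port authority:
file:///") means the daemon never saw the fs.defaultFS override and fell back
to Hadoop's built-in default of file:///. A rough Python sketch of that lookup
behaviour (illustrative only, not Hadoop's actual code; the sample XML mirrors
the snippets in this thread):

```python
import xml.etree.ElementTree as ET

BUILTIN_DEFAULT_FS = "file:///"  # what Hadoop uses when no override is loaded

def lookup(conf_xml, key):
    """Return the value of `key` from a *-site.xml string, or None if absent."""
    root = ET.fromstring(conf_xml)
    for prop in root.findall("property"):
        if prop.findtext("name") == key:
            return prop.findtext("value")
    return None

core_site = """<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master-node:9000</value>
  </property>
</configuration>"""

# core-site.xml found and parsed: the HDFS URI wins.
print(lookup(core_site, "fs.defaultFS") or BUILTIN_DEFAULT_FS)

# core-site.xml not on the classpath (e.g. wrong HADOOP_CONF_DIR): the lookup
# misses and the built-in file:/// default is used, which is exactly what the
# "not a distributed file system" error reports.
print(lookup("<configuration/>", "fs.defaultFS") or BUILTIN_DEFAULT_FS)
```

So the question is less "is the property set?" than "is the file containing it
actually the one the daemons and the client load?".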

Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Posted by Susheel Kumar Gadalay <sk...@gmail.com>.
Give the complete hostname with its domain name, not just master-node:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master-node.domain.name:9000</value>
</property>

Otherwise, give the IP address instead.


On 12/16/14, Dan Dong <do...@gmail.com> wrote:
> Hi, Johny,
>   Yes, they have been turned off from the beginning. Guess the problem is
> still in the conf files, it would be helpful if some example *.xml could be
> shown.
>
>   Cheers,
>   Dan
>
>
> 2014-12-15 12:24 GMT-06:00 johny casanova <pc...@outlook.com>:
>>
>> do you have selinux and iptables turned off?
>>
>>  ------------------------------
>> Date: Mon, 15 Dec 2014 09:54:41 -0600
>> Subject: Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
>> system"
>> From: dongdan39@gmail.com
>> To: user@hadoop.apache.org
>>
>>
>>   Found in the log file:
>> 2014-12-12 15:51:10,434 ERROR
>> org.apache.hadoop.hdfs.server.namenode.NameNode:
>> java.lang.IllegalArgumentException: Does not contain a valid host:port
>> authority: file:///
>>         at
>> org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
>>
>> But I have set it in core-site.xml already:
>> <property>
>>   <name>fs.defaultFS</name>
>>   <value>hdfs://master-node:9000</value>
>> </property>
>>
>> Other settings:
>> $ cat mapred-site.xml
>> <configuration>
>> <property>
>> <name>mapred.job.tracker</name>
>> <value>master-node:9002</value>
>> </property>
>> <property>
>> <name>mapreduce.jobhistory.address</name>
>> <value>master-node:10020</value>
>> </property>
>> <property>
>> <name>mapreduce.jobhistory.webapp.address</name>
>> <value>master-node:19888</value>
>> </property>
>> </configuration>
>>
>> $ cat yarn-site.xml
>> <configuration>
>>
>> <!-- Site specific YARN configuration properties -->
>> <property>
>>    <name>mapreduce.framework.name</name>
>>    <value>yarn</value>
>> </property>
>> <property>
>>    <name>yarn.resourcemanager.address</name>
>>    <value>master-node:18040</value>
>> </property>
>> <property>
>>    <name>yarn.resourcemanager.scheduler.address</name>
>>    <value>master-node:18030</value>
>> </property>
>> <property>
>>    <name>yarn.resourcemanager.webapp.address</name>
>>    <value>master-node:18088</value>
>> </property>
>> <property>
>>    <name>yarn.resourcemanager.resource-tracker.address</name>
>>    <value>master-node:18025</value>
>> </property>
>> <property>
>>    <name>yarn.resourcemanager.admin.address</name>
>>    <value>master-node:18141</value>
>> </property>
>> <property>
>>    <name>yarn.nodemanager.aux-services</name>
>>    <value>mapreduce_shuffle</value>
>> </property>
>> <property>
>>    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>>    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>> </property>
>> </configuration>
>>
>> Cheers,
>> Dan
>>
>>
>> 2014-12-15 9:17 GMT-06:00 Dan Dong <do...@gmail.com>:
>>
>> Thank you all, but still the same after change file:/ to file://, and
>> HADOOP_CONF_DIR points to the correct position already:
>> $ echo $HADOOP_CONF_DIR
>> /home/dong/import/hadoop-2.6.0/etc/hadoop
>>
>>
>> 2014-12-15 8:57 GMT-06:00 johny casanova <pc...@outlook.com>:
>>
>>  Don't you have to use file:// instead of just one /?
>>
>>  ------------------------------
>> From: brahmareddy.battula@huawei.com
>> To: user@hadoop.apache.org
>> Subject: RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
>> system"
>> Date: Sat, 13 Dec 2014 05:48:18 +0000
>>
>>
>> Hi Dong,
>>
>> HADOOP_CONF_DIR might be referring to default..you can export
>> HADOOP_CONF_DIR where following configuration files are present..
>>
>> Thanks & Regards
>> Brahma Reddy Battula
>>
>>
>>  ------------------------------
>> *From:* Dan Dong [dongdan39@gmail.com]
>> *Sent:* Saturday, December 13, 2014 3:43 AM
>> *To:* user@hadoop.apache.org
>> *Subject:* Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
>> system"
>>
>>     Hi,
>>   I installed Hadoop2.6.0 on my cluster with 2 nodes, I got the following
>> error when I run:
>> $hadoop dfsadmin -report
>> FileSystem file:/// is not a distributed file system
>>
>> What this mean? I have set it in core-site.xml already:
>> <property>
>>   <name>fs.defaultFS</name>
>>   <value>hdfs://master-node:9000</value>
>> </property>
>>
>> and in hdfs-site.xml:
>> <property>
>>   <name>dfs.namenode.name.dir</name>
>>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
>>   <final>true</final>
>> </property>
>> <property>
>>   <name>dfs.dataname.data.dir</name>
>>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>
>>   <final>true</final>
>> </property>
>>
>> The java process are running on master as:
>> 10479 SecondaryNameNode
>> 10281 NameNode
>> 10628 ResourceManager
>>
>> and on slave:
>> 22870 DataNode
>> 22991 NodeManager
>>
>> Any hints? Thanks!
>>
>> Cheers,
>> Dan
>>
>>
>>
>
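
One more thing worth checking in the hdfs-site.xml quoted in this thread: the
key dfs.dataname.data.dir looks like a typo for dfs.datanode.data.dir. Hadoop
silently ignores property names it does not recognize, so a misspelled key
simply has no effect. A hedged sketch of a sanity check (the known-key set here
is a tiny illustrative subset, not Hadoop's full list):

```python
# Tiny illustrative subset of valid Hadoop 2.x property names (not exhaustive).
KNOWN_KEYS = {
    "fs.defaultFS",
    "dfs.namenode.name.dir",
    "dfs.datanode.data.dir",
}

def unknown_keys(props):
    """Return configured keys that Hadoop would silently ignore."""
    return sorted(k for k in props if k not in KNOWN_KEYS)

hdfs_site = {
    "dfs.namenode.name.dir": "file:/home/dong/hadoop-2.6.0-dist/dfs/name",
    "dfs.dataname.data.dir": "file:/home/dong/hadoop-2.6.0-dist/dfs/data",
}

print(unknown_keys(hdfs_site))  # flags the likely dfs.datanode.data.dir typo
```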

Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Posted by Dan Dong <do...@gmail.com>.
Hi, Johny,
  Yes, they have been turned off from the beginning. I guess the problem is
still in the conf files; it would be helpful if some example *.xml files could
be shown.

  Cheers,
  Dan


2014-12-15 12:24 GMT-06:00 johny casanova <pc...@outlook.com>:
>
> do you have selinux and iptables turned off?
>
>  ------------------------------
> Date: Mon, 15 Dec 2014 09:54:41 -0600
> Subject: Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
> system"
> From: dongdan39@gmail.com
> To: user@hadoop.apache.org
>
>
>   Found in the log file:
> 2014-12-12 15:51:10,434 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode:
> java.lang.IllegalArgumentException: Does not contain a valid host:port
> authority: file:///
>         at
> org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
>
> But I have set it in core-site.xml already:
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://master-node:9000</value>
> </property>
>
> Other settings:
> $ cat mapred-site.xml
> <configuration>
> <property>
> <name>mapred.job.tracker</name>
> <value>master-node:9002</value>
> </property>
> <property>
> <name>mapreduce.jobhistory.address</name>
> <value>master-node:10020</value>
> </property>
> <property>
> <name>mapreduce.jobhistory.webapp.address</name>
> <value>master-node:19888</value>
> </property>
> </configuration>
>
> $ cat yarn-site.xml
> <configuration>
>
> <!-- Site specific YARN configuration properties -->
> <property>
>    <name>mapreduce.framework.name</name>
>    <value>yarn</value>
> </property>
> <property>
>    <name>yarn.resourcemanager.address</name>
>    <value>master-node:18040</value>
> </property>
> <property>
>    <name>yarn.resourcemanager.scheduler.address</name>
>    <value>master-node:18030</value>
> </property>
> <property>
>    <name>yarn.resourcemanager.webapp.address</name>
>    <value>master-node:18088</value>
> </property>
> <property>
>    <name>yarn.resourcemanager.resource-tracker.address</name>
>    <value>master-node:18025</value>
> </property>
> <property>
>    <name>yarn.resourcemanager.admin.address</name>
>    <value>master-node:18141</value>
> </property>
> <property>
>    <name>yarn.nodemanager.aux-services</name>
>    <value>mapreduce_shuffle</value>
> </property>
> <property>
>    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
> </property>
> </configuration>
>
> Cheers,
> Dan
>
>
> 2014-12-15 9:17 GMT-06:00 Dan Dong <do...@gmail.com>:
>
> Thank you all, but still the same after change file:/ to file://, and
> HADOOP_CONF_DIR points to the correct position already:
> $ echo $HADOOP_CONF_DIR
> /home/dong/import/hadoop-2.6.0/etc/hadoop
>
>
> 2014-12-15 8:57 GMT-06:00 johny casanova <pc...@outlook.com>:
>
>  Don't you have to use file:// instead of just one /?
>
>  ------------------------------
> From: brahmareddy.battula@huawei.com
> To: user@hadoop.apache.org
> Subject: RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
> system"
> Date: Sat, 13 Dec 2014 05:48:18 +0000
>
>
> Hi Dong,
>
> HADOOP_CONF_DIR might be referring to default..you can export
> HADOOP_CONF_DIR where following configuration files are present..
>
> Thanks & Regards
> Brahma Reddy Battula
>
>
>  ------------------------------
> *From:* Dan Dong [dongdan39@gmail.com]
> *Sent:* Saturday, December 13, 2014 3:43 AM
> *To:* user@hadoop.apache.org
> *Subject:* Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
> system"
>
>     Hi,
>   I installed Hadoop2.6.0 on my cluster with 2 nodes, I got the following
> error when I run:
> $hadoop dfsadmin -report
> FileSystem file:/// is not a distributed file system
>
> What this mean? I have set it in core-site.xml already:
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://master-node:9000</value>
> </property>
>
> and in hdfs-site.xml:
> <property>
>   <name>dfs.namenode.name.dir</name>
>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
>   <final>true</final>
> </property>
> <property>
>   <name>dfs.dataname.data.dir</name>
>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>
>   <final>true</final>
> </property>
>
> The java process are running on master as:
> 10479 SecondaryNameNode
> 10281 NameNode
> 10628 ResourceManager
>
> and on slave:
> 22870 DataNode
> 22991 NodeManager
>
> Any hints? Thanks!
>
> Cheers,
> Dan
>
>
>
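
The follow-up error reported later in this thread, "Server IPC version 9
cannot communicate with client version 4", points the same way: IPC version 9
is the wire protocol of Hadoop 2.x daemons, while version 4 is a Hadoop 1.x
client, so the hadoop binary on the client's PATH is likely from an old 1.x
install. A hedged sketch of that decoding (the version table is an assumption
from commonly reported pairings, not an official Hadoop mapping):

```python
# Assumed mapping of Hadoop RPC protocol versions to release lines; the 4 and 9
# entries match the error messages commonly reported on this list.
IPC_VERSIONS = {4: "Hadoop 1.x", 9: "Hadoop 2.x"}

def explain(server_v, client_v):
    """Render an IPC version-mismatch error as a release-line hint."""
    return ("server speaks IPC v%d (%s), client speaks IPC v%d (%s)"
            % (server_v, IPC_VERSIONS.get(server_v, "unknown"),
               client_v, IPC_VERSIONS.get(client_v, "unknown")))

print(explain(9, 4))
```

Checking `which hadoop` and `hadoop version` on the client machine should show
whether an older install shadows the 2.6.0 one.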

RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Posted by johny casanova <pc...@outlook.com>.
do you have selinux and iptables turned off?
 



Date: Mon, 15 Dec 2014 09:54:41 -0600
Subject: Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"
From: dongdan39@gmail.com
To: user@hadoop.apache.org

Found in the log file:
2014-12-12 15:51:10,434 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
        at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

But I have set it in core-site.xml already:
<property>  
  <name>fs.defaultFS</name>  
  <value>hdfs://master-node:9000</value>  
</property>

Other settings:
$ cat mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master-node:9002</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master-node:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master-node:19888</value>
</property>
</configuration>


$ cat yarn-site.xml
<configuration>

<!-- Site specific YARN configuration properties -->
<property>  
   <name>mapreduce.framework.name</name>  
   <value>yarn</value>  
</property>  
<property>  
   <name>yarn.resourcemanager.address</name>  
   <value>master-node:18040</value>  
</property>  
<property>  
   <name>yarn.resourcemanager.scheduler.address</name>  
   <value>master-node:18030</value>  
</property>  
<property>  
   <name>yarn.resourcemanager.webapp.address</name>  
   <value>master-node:18088</value>  
</property>  
<property>  
   <name>yarn.resourcemanager.resource-tracker.address</name>  
   <value>master-node:18025</value>  
</property>  
<property>  
   <name>yarn.resourcemanager.admin.address</name>  
   <value>master-node:18141</value>  
</property>  
<property>  
   <name>yarn.nodemanager.aux-services</name>  
   <value>mapreduce_shuffle</value>  
</property>  
<property>  
   <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>  
   <value>org.apache.hadoop.mapred.ShuffleHandler</value>  
</property>  
</configuration>

Cheers,
Dan




2014-12-15 9:17 GMT-06:00 Dan Dong <do...@gmail.com>:

Thank you all, but still the same after change file:/ to file://, and HADOOP_CONF_DIR points to the correct position already:
$ echo $HADOOP_CONF_DIR 
/home/dong/import/hadoop-2.6.0/etc/hadoop




2014-12-15 8:57 GMT-06:00 johny casanova <pc...@outlook.com>:




Don't you have to use file:// instead of just one /?
 



From: brahmareddy.battula@huawei.com
To: user@hadoop.apache.org
Subject: RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"
Date: Sat, 13 Dec 2014 05:48:18 +0000




Hi Dong,

HADOOP_CONF_DIR might be referring to default..you can export HADOOP_CONF_DIR where following configuration files are present..



Thanks & Regards
Brahma Reddy Battula






From: Dan Dong [dongdan39@gmail.com]
Sent: Saturday, December 13, 2014 3:43 AM
To: user@hadoop.apache.org
Subject: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"










Hi, 
  I installed Hadoop 2.6.0 on my cluster with 2 nodes, and I got the following error when I run:
$hadoop dfsadmin -report
FileSystem file:/// is not a distributed file system

What does this mean? I have already set it in core-site.xml:
<property>  
  <name>fs.defaultFS</name>  
  <value>hdfs://master-node:9000</value>  
</property> 

and in hdfs-site.xml:
<property>   
  <name>dfs.namenode.name.dir</name>   
  <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>   
  <final>true</final>  
</property>   
<property>   
  <name>dfs.dataname.data.dir</name>   
  <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>   
  <final>true</final>  
</property>  
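One thing worth double-checking in the hdfs-site.xml above: Hadoop 2.x does not define a property named `dfs.dataname.data.dir`; the DataNode directory property is `dfs.datanode.data.dir`, so the second entry is likely a typo that Hadoop silently ignores (falling back to the default under hadoop.tmp.dir). A corrected sketch:

```xml
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/dong/hadoop-2.6.0-dist/dfs/data</value>
  <final>true</final>
</property>
```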

The Java processes running on the master are:
10479 SecondaryNameNode
10281 NameNode
10628 ResourceManager

and on slave:
22870 DataNode
22991 NodeManager

Any hints? Thanks!

Cheers,
Dan




 		 	   		  

RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Posted by johny casanova <pc...@outlook.com>.
do you have selinux and iptables turned off?
 



Date: Mon, 15 Dec 2014 09:54:41 -0600
Subject: Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"
From: dongdan39@gmail.com
To: user@hadoop.apache.org






Found in the log file:
2014-12-12 15:51:10,434 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
        at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

But I have set it in core-site.xml already:
<property>  
  <name>fs.defaultFS</name>  
  <value>hdfs://master-node:9000</value>  
</property>

Other settings:
$ cat mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master-node:9002</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master-node:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master-node:19888</value>
</property>
</configuration>


$ cat yarn-site.xml
<configuration>

<!-- Site specific YARN configuration properties -->
<property>  
   <name>mapreduce.framework.name</name>  
   <value>yarn</value>  
</property>  
<property>  
   <name>yarn.resourcemanager.address</name>  
   <value>master-node:18040</value>  
</property>  
<property>  
   <name>yarn.resourcemanager.scheduler.address</name>  
   <value>master-node:18030</value>  
</property>  
<property>  
   <name>yarn.resourcemanager.webapp.address</name>  
   <value>master-node:18088</value>  
</property>  
<property>  
   <name>yarn.resourcemanager.resource-tracker.address</name>  
   <value>master-node:18025</value>  
</property>  
<property>  
   <name>yarn.resourcemanager.admin.address</name>  
   <value>master-node:18141</value>  
</property>  
<property>  
   <name>yarn.nodemanager.aux-services</name>  
   <value>mapreduce_shuffle</value>  
</property>  
<property>  
   <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>  
   <value>org.apache.hadoop.mapred.ShuffleHandler</value>  
</property>  
</configuration>

Cheers,
Dan




2014-12-15 9:17 GMT-06:00 Dan Dong <do...@gmail.com>:

Thank you all, but still the same after change file:/ to file://, and HADOOP_CONF_DIR points to the correct position already:
$ echo $HADOOP_CONF_DIR 
/home/dong/import/hadoop-2.6.0/etc/hadoop




2014-12-15 8:57 GMT-06:00 johny casanova <pc...@outlook.com>:




Don't you have to use file:// instead of just one /?
 



From: brahmareddy.battula@huawei.com
To: user@hadoop.apache.org
Subject: RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"
Date: Sat, 13 Dec 2014 05:48:18 +0000




Hi Dong,

HADOOP_CONF_DIR might be referring to default..you can export HADOOP_CONF_DIR where following configuration files are present..



Thanks & Regards
Brahma Reddy Battula






From: Dan Dong [dongdan39@gmail.com]
Sent: Saturday, December 13, 2014 3:43 AM
To: user@hadoop.apache.org
Subject: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"










Hi, 
  I installed Hadoop2.6.0 on my cluster with 2 nodes, I got the following error when I run:
$hadoop dfsadmin -report
FileSystem file:/// is not a distributed file system

What this mean? I have set it in core-site.xml already:
<property>  
  <name>fs.defaultFS</name>  
  <value>hdfs://master-node:9000</value>  
</property> 

and in hdfs-site.xml:
<property>   
  <name>dfs.namenode.name.dir</name>   
  <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>   
  <final>true</final>  
</property>   
<property>   
  <name>dfs.dataname.data.dir</name>   
  <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>   
  <final>true</final>  
</property>  

The java process are running on master as:
10479 SecondaryNameNode
10281 NameNode
10628 ResourceManager

and on slave:
22870 DataNode
22991 NodeManager

Any hints? Thanks!

Cheers,
Dan




 		 	   		  

Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Posted by Dan Dong <do...@gmail.com>.
Found in the log file:
2014-12-12 15:51:10,434 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode:
java.lang.IllegalArgumentException: Does not contain a valid host:port
authority: file:///
        at
org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

But I have set it in core-site.xml already:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master-node:9000</value>
</property>
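The `IllegalArgumentException: Does not contain a valid host:port authority: file:///` above means the NameNode resolved fs.defaultFS to the built-in default `file:///` (i.e. it never saw the core-site.xml value) and then rejected it, because a NameNode address must carry a host:port authority. The real check lives in Java, in NetUtils.createSocketAddr; the following Python sketch is only an illustration of why `file:///` fails while `hdfs://master-node:9000` passes:

```python
from urllib.parse import urlparse

def has_host_port_authority(uri, default_port=None):
    """Rough analogue of the host:port validation the NameNode performs."""
    parsed = urlparse(uri)
    # file:/// has an empty network location, so no host:port can be derived.
    if not parsed.hostname:
        return False
    # A usable address needs an explicit port or a known default.
    return parsed.port is not None or default_port is not None

print(has_host_port_authority("hdfs://master-node:9000"))  # True
print(has_host_port_authority("file:///"))                 # False
```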

Other settings:
$ cat mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master-node:9002</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master-node:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master-node:19888</value>
</property>
</configuration>

$ cat yarn-site.xml
<configuration>

<!-- Site specific YARN configuration properties -->
<property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
</property>
<property>
   <name>yarn.resourcemanager.address</name>
   <value>master-node:18040</value>
</property>
<property>
   <name>yarn.resourcemanager.scheduler.address</name>
   <value>master-node:18030</value>
</property>
<property>
   <name>yarn.resourcemanager.webapp.address</name>
   <value>master-node:18088</value>
</property>
<property>
   <name>yarn.resourcemanager.resource-tracker.address</name>
   <value>master-node:18025</value>
</property>
<property>
   <name>yarn.resourcemanager.admin.address</name>
   <value>master-node:18141</value>
</property>
<property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
</property>
<property>
   <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
   <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>

Cheers,
Dan


2014-12-15 9:17 GMT-06:00 Dan Dong <do...@gmail.com>:
>
> Thank you all, but still the same after change file:/ to file://, and
> HADOOP_CONF_DIR points to the correct position already:
> $ echo $HADOOP_CONF_DIR
> /home/dong/import/hadoop-2.6.0/etc/hadoop
>
>
> 2014-12-15 8:57 GMT-06:00 johny casanova <pc...@outlook.com>:
>
>> Don't you have to use file:// instead of just one /?
>>
>>  ------------------------------
>> From: brahmareddy.battula@huawei.com
>> To: user@hadoop.apache.org
>> Subject: RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
>> system"
>> Date: Sat, 13 Dec 2014 05:48:18 +0000
>>
>>
>> Hi Dong,
>>
>> HADOOP_CONF_DIR might be referring to default..you can export
>> HADOOP_CONF_DIR where following configuration files are present..
>>
>>  Thanks & Regards
>> Brahma Reddy Battula
>>
>>
>>  ------------------------------
>> *From:* Dan Dong [dongdan39@gmail.com]
>> *Sent:* Saturday, December 13, 2014 3:43 AM
>> *To:* user@hadoop.apache.org
>> *Subject:* Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
>> system"
>>
>>     Hi,
>>   I installed Hadoop2.6.0 on my cluster with 2 nodes, I got the following
>> error when I run:
>> $hadoop dfsadmin -report
>> FileSystem file:/// is not a distributed file system
>>
>> What this mean? I have set it in core-site.xml already:
>> <property>
>>   <name>fs.defaultFS</name>
>>   <value>hdfs://master-node:9000</value>
>> </property>
>>
>> and in hdfs-site.xml:
>> <property>
>>   <name>dfs.namenode.name.dir</name>
>>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
>>   <final>true</final>
>> </property>
>> <property>
>>   <name>dfs.dataname.data.dir</name>
>>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>
>>   <final>true</final>
>> </property>
>>
>> The java process are running on master as:
>> 10479 SecondaryNameNode
>> 10281 NameNode
>> 10628 ResourceManager
>>
>> and on slave:
>> 22870 DataNode
>> 22991 NodeManager
>>
>> Any hints? Thanks!
>>
>> Cheers,
>> Dan
>>
>>
>>

Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Posted by Dan Dong <do...@gmail.com>.
Found in the log file:
2014-12-12 15:51:10,434 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode:
java.lang.IllegalArgumentException: Does not contain a valid host:port
authority: file:///
        at
org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

But I have set it in core-site.xml already:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master-node:9000</value>
</property>

Other settings:
$ cat mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master-node:9002</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master-node:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master-node:19888</value>
</property>
</configuration>

$ cat yarn-site.xml
<configuration>

<!-- Site specific YARN configuration properties -->
<property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
</property>
<property>
   <name>yarn.resourcemanager.address</name>
   <value>master-node:18040</value>
</property>
<property>
   <name>yarn.resourcemanager.scheduler.address</name>
   <value>master-node:18030</value>
</property>
<property>
   <name>yarn.resourcemanager.webapp.address</name>
   <value>master-node:18088</value>
</property>
<property>
   <name>yarn.resourcemanager.resource-tracker.address</name>
   <value>master-node:18025</value>
</property>
<property>
   <name>yarn.resourcemanager.admin.address</name>
   <value>master-node:18141</value>
</property>
<property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
</property>
<property>
   <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
   <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>

Cheers,
Dan


2014-12-15 9:17 GMT-06:00 Dan Dong <do...@gmail.com>:
>
> Thank you all, but still the same after change file:/ to file://, and
> HADOOP_CONF_DIR points to the correct position already:
> $ echo $HADOOP_CONF_DIR
> /home/dong/import/hadoop-2.6.0/etc/hadoop
>
>
> 2014-12-15 8:57 GMT-06:00 johny casanova <pc...@outlook.com>:
>
>> Don't you have to use file:// instead of just one /?
>>
>>  ------------------------------
>> From: brahmareddy.battula@huawei.com
>> To: user@hadoop.apache.org
>> Subject: RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
>> system"
>> Date: Sat, 13 Dec 2014 05:48:18 +0000
>>
>>
>> Hi Dong,
>>
>> HADOOP_CONF_DIR might be referring to default..you can export
>> HADOOP_CONF_DIR where following configuration files are present..
>>
>>  Thanks & Regards
>> Brahma Reddy Battula
>>
>>
>>  ------------------------------
>> *From:* Dan Dong [dongdan39@gmail.com]
>> *Sent:* Saturday, December 13, 2014 3:43 AM
>> *To:* user@hadoop.apache.org
>> *Subject:* Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
>> system"
>>
>>     Hi,
>>   I installed Hadoop2.6.0 on my cluster with 2 nodes, I got the following
>> error when I run:
>> $hadoop dfsadmin -report
>> FileSystem file:/// is not a distributed file system
>>
>> What this mean? I have set it in core-site.xml already:
>> <property>
>>   <name>fs.defaultFS</name>
>>   <value>hdfs://master-node:9000</value>
>> </property>
>>
>> and in hdfs-site.xml:
>> <property>
>>   <name>dfs.namenode.name.dir</name>
>>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
>>   <final>true</final>
>> </property>
>> <property>
>>   <name>dfs.dataname.data.dir</name>
>>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>
>>   <final>true</final>
>> </property>
>>
>> The java process are running on master as:
>> 10479 SecondaryNameNode
>> 10281 NameNode
>> 10628 ResourceManager
>>
>> and on slave:
>> 22870 DataNode
>> 22991 NodeManager
>>
>> Any hints? Thanks!
>>
>> Cheers,
>> Dan
>>
>>
>>

Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Posted by Dan Dong <do...@gmail.com>.
Found in the log file:
2014-12-12 15:51:10,434 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode:
java.lang.IllegalArgumentException: Does not contain a valid host:port
authority: file:///
        at
org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

But I have set it in core-site.xml already:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master-node:9000</value>
</property>

Other settings:
$ cat mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master-node:9002</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master-node:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master-node:19888</value>
</property>
</configuration>

$ cat yarn-site.xml
<configuration>

<!-- Site specific YARN configuration properties -->
<property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
</property>
<property>
   <name>yarn.resourcemanager.address</name>
   <value>master-node:18040</value>
</property>
<property>
   <name>yarn.resourcemanager.scheduler.address</name>
   <value>master-node:18030</value>
</property>
<property>
   <name>yarn.resourcemanager.webapp.address</name>
   <value>master-node:18088</value>
</property>
<property>
   <name>yarn.resourcemanager.resource-tracker.address</name>
   <value>master-node:18025</value>
</property>
<property>
   <name>yarn.resourcemanager.admin.address</name>
   <value>master-node:18141</value>
</property>
<property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
</property>
<property>
   <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
   <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>

Cheers,
Dan


2014-12-15 9:17 GMT-06:00 Dan Dong <do...@gmail.com>:
>
> Thank you all, but still the same after change file:/ to file://, and
> HADOOP_CONF_DIR points to the correct position already:
> $ echo $HADOOP_CONF_DIR
> /home/dong/import/hadoop-2.6.0/etc/hadoop
>
>
> 2014-12-15 8:57 GMT-06:00 johny casanova <pc...@outlook.com>:
>
>> Don't you have to use file:// instead of just one /?
>>
>>  ------------------------------
>> From: brahmareddy.battula@huawei.com
>> To: user@hadoop.apache.org
>> Subject: RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
>> system"
>> Date: Sat, 13 Dec 2014 05:48:18 +0000
>>
>>
>> Hi Dong,
>>
>> HADOOP_CONF_DIR might be referring to default..you can export
>> HADOOP_CONF_DIR where following configuration files are present..
>>
>>  Thanks & Regards
>> Brahma Reddy Battula
>>
>>
>>  ------------------------------
>> *From:* Dan Dong [dongdan39@gmail.com]
>> *Sent:* Saturday, December 13, 2014 3:43 AM
>> *To:* user@hadoop.apache.org
>> *Subject:* Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
>> system"
>>
>>     Hi,
>>   I installed Hadoop2.6.0 on my cluster with 2 nodes, I got the following
>> error when I run:
>> $hadoop dfsadmin -report
>> FileSystem file:/// is not a distributed file system
>>
>> What this mean? I have set it in core-site.xml already:
>> <property>
>>   <name>fs.defaultFS</name>
>>   <value>hdfs://master-node:9000</value>
>> </property>
>>
>> and in hdfs-site.xml:
>> <property>
>>   <name>dfs.namenode.name.dir</name>
>>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
>>   <final>true</final>
>> </property>
>> <property>
>>   <name>dfs.dataname.data.dir</name>
>>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>
>>   <final>true</final>
>> </property>
>>
>> The java process are running on master as:
>> 10479 SecondaryNameNode
>> 10281 NameNode
>> 10628 ResourceManager
>>
>> and on slave:
>> 22870 DataNode
>> 22991 NodeManager
>>
>> Any hints? Thanks!
>>
>> Cheers,
>> Dan
>>
>>
>>

Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Posted by Dan Dong <do...@gmail.com>.
Found in the log file:
2014-12-12 15:51:10,434 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode:
java.lang.IllegalArgumentException: Does not contain a valid host:port
authority: file:///
        at
org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

But I have set it in core-site.xml already:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master-node:9000</value>
</property>

Other settings:
$ cat mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master-node:9002</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master-node:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master-node:19888</value>
</property>
</configuration>

$ cat yarn-site.xml
<configuration>

<!-- Site specific YARN configuration properties -->
<property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
</property>
<property>
   <name>yarn.resourcemanager.address</name>
   <value>master-node:18040</value>
</property>
<property>
   <name>yarn.resourcemanager.scheduler.address</name>
   <value>master-node:18030</value>
</property>
<property>
   <name>yarn.resourcemanager.webapp.address</name>
   <value>master-node:18088</value>
</property>
<property>
   <name>yarn.resourcemanager.resource-tracker.address</name>
   <value>master-node:18025</value>
</property>
<property>
   <name>yarn.resourcemanager.admin.address</name>
   <value>master-node:18141</value>
</property>
<property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
</property>
<property>
   <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
   <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>

Cheers,
Dan


2014-12-15 9:17 GMT-06:00 Dan Dong <do...@gmail.com>:
>
> Thank you all, but still the same after change file:/ to file://, and
> HADOOP_CONF_DIR points to the correct position already:
> $ echo $HADOOP_CONF_DIR
> /home/dong/import/hadoop-2.6.0/etc/hadoop
>
>
> 2014-12-15 8:57 GMT-06:00 johny casanova <pc...@outlook.com>:
>
>> Don't you have to use file:// instead of just one /?
>>
>>  ------------------------------
>> From: brahmareddy.battula@huawei.com
>> To: user@hadoop.apache.org
>> Subject: RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
>> system"
>> Date: Sat, 13 Dec 2014 05:48:18 +0000
>>
>>
>> Hi Dong,
>>
>> HADOOP_CONF_DIR might be referring to default..you can export
>> HADOOP_CONF_DIR where following configuration files are present..
>>
>>  Thanks & Regards
>> Brahma Reddy Battula
>>
>>
>>  ------------------------------
>> *From:* Dan Dong [dongdan39@gmail.com]
>> *Sent:* Saturday, December 13, 2014 3:43 AM
>> *To:* user@hadoop.apache.org
>> *Subject:* Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
>> system"
>>
>>     Hi,
>>   I installed Hadoop2.6.0 on my cluster with 2 nodes, I got the following
>> error when I run:
>> $hadoop dfsadmin -report
>> FileSystem file:/// is not a distributed file system
>>
>> What this mean? I have set it in core-site.xml already:
>> <property>
>>   <name>fs.defaultFS</name>
>>   <value>hdfs://master-node:9000</value>
>> </property>
>>
>> and in hdfs-site.xml:
>> <property>
>>   <name>dfs.namenode.name.dir</name>
>>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
>>   <final>true</final>
>> </property>
>> <property>
>>   <name>dfs.dataname.data.dir</name>
>>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>
>>   <final>true</final>
>> </property>
>>
>> The Java processes running on the master are:
>> 10479 SecondaryNameNode
>> 10281 NameNode
>> 10628 ResourceManager
>>
>> and on the slave:
>> 22870 DataNode
>> 22991 NodeManager
>>
>> Any hints? Thanks!
>>
>> Cheers,
>> Dan
>>
>>
>>

Re: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Posted by Dan Dong <do...@gmail.com>.
Thank you all, but it is still the same after changing file:/ to file://, and
HADOOP_CONF_DIR already points to the correct location:
$ echo $HADOOP_CONF_DIR
/home/dong/import/hadoop-2.6.0/etc/hadoop
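The error means the client fell back to the local filesystem, i.e. it never picked up the edited core-site.xml. A minimal sketch of the checks (assumes a Hadoop 2.x client on the PATH; the path is the one reported above and may differ on your machine):

```shell
# Point the client at the directory holding core-site.xml and hdfs-site.xml
# (path taken from this thread; adjust to your install).
export HADOOP_CONF_DIR="$HOME/import/hadoop-2.6.0/etc/hadoop"
echo "$HADOOP_CONF_DIR"

# With the daemons up, this prints the effective default filesystem and
# should show hdfs://master-node:9000 rather than file:///
#   hdfs getconf -confKey fs.defaultFS
```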


2014-12-15 8:57 GMT-06:00 johny casanova <pc...@outlook.com>:
>
> Don't you have to use file:// instead of just one /?
>
>  ------------------------------
> From: brahmareddy.battula@huawei.com
> To: user@hadoop.apache.org
> Subject: RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
> system"
> Date: Sat, 13 Dec 2014 05:48:18 +0000
>
>
> Hi Dong,
>
> HADOOP_CONF_DIR might be referring to the default. You can export
> HADOOP_CONF_DIR to point to the directory where the following
> configuration files are present.
>
>  Thanks & Regards
> Brahma Reddy Battula
>
>
>  ------------------------------
> *From:* Dan Dong [dongdan39@gmail.com]
> *Sent:* Saturday, December 13, 2014 3:43 AM
> *To:* user@hadoop.apache.org
> *Subject:* Hadoop 2.6.0: "FileSystem file:/// is not a distributed file
> system"
>
>     Hi,
>   I installed Hadoop 2.6.0 on my 2-node cluster, and I got the following
> error when I run:
> $hadoop dfsadmin -report
> FileSystem file:/// is not a distributed file system
>
> What does this mean? I have already set this in core-site.xml:
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://master-node:9000</value>
> </property>
>
> and in hdfs-site.xml:
> <property>
>   <name>dfs.namenode.name.dir</name>
>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
>   <final>true</final>
> </property>
> <property>
>   <name>dfs.dataname.data.dir</name>
>   <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>
>   <final>true</final>
> </property>
>
> The Java processes running on the master are:
> 10479 SecondaryNameNode
> 10281 NameNode
> 10628 ResourceManager
>
> and on the slave:
> 22870 DataNode
> 22991 NodeManager
>
> Any hints? Thanks!
>
> Cheers,
> Dan
>
>
>


RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Posted by johny casanova <pc...@outlook.com>.
Don't you have to use file:// instead of just one /?
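For what it's worth, a hedged illustration (assuming Hadoop 2.x path parsing): file:/path and file:///path resolve to the same local directory, so the slash count is unlikely to be the cause here. The double-slash form file://path, however, would treat the first path component as a URI authority.

```xml
<!-- Both values below name the same local directory in Hadoop 2.x;
     a plain absolute path with no scheme is also accepted. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
  <!-- equivalent: file:///home/dong/hadoop-2.6.0-dist/dfs/name -->
</property>
```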
 



From: brahmareddy.battula@huawei.com
To: user@hadoop.apache.org
Subject: RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"
Date: Sat, 13 Dec 2014 05:48:18 +0000




Hi Dong,

HADOOP_CONF_DIR might be referring to the default. You can export HADOOP_CONF_DIR to point to the directory where the following configuration files are present.




Thanks & Regards

Brahma Reddy Battula







From: Dan Dong [dongdan39@gmail.com]
Sent: Saturday, December 13, 2014 3:43 AM
To: user@hadoop.apache.org
Subject: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Hi, 
  I installed Hadoop 2.6.0 on my 2-node cluster, and I got the following error when I run:
$hadoop dfsadmin -report
FileSystem file:/// is not a distributed file system

What does this mean? I have already set this in core-site.xml:
<property>  
  <name>fs.defaultFS</name>  
  <value>hdfs://master-node:9000</value>  
</property> 

and in hdfs-site.xml:
<property>   
  <name>dfs.namenode.name.dir</name>   
  <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>   
  <final>true</final>  
</property>   
<property>   
  <name>dfs.dataname.data.dir</name>   
  <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>   
  <final>true</final>  
</property>  

The Java processes running on the master are:
10479 SecondaryNameNode
10281 NameNode
10628 ResourceManager

and on the slave:
22870 DataNode
22991 NodeManager

Any hints? Thanks!

Cheers,
Dan






RE: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Posted by Brahma Reddy Battula <br...@huawei.com>.
Hi Dong,

HADOOP_CONF_DIR might be referring to the default. You can export HADOOP_CONF_DIR to point to the directory where the following configuration files are present.
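A minimal sketch of that export (the install path is an assumption taken from elsewhere in this thread; substitute your own configuration directory):

```shell
# Make hadoop/hdfs commands read the cluster configuration instead of
# the built-in defaults (whose fs.defaultFS points at file:///).
export HADOOP_CONF_DIR="$HOME/import/hadoop-2.6.0/etc/hadoop"

# Persist the setting for future shells.
echo 'export HADOOP_CONF_DIR="$HOME/import/hadoop-2.6.0/etc/hadoop"' >> "$HOME/.bashrc"
```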


Thanks & Regards

Brahma Reddy Battula


________________________________
From: Dan Dong [dongdan39@gmail.com]
Sent: Saturday, December 13, 2014 3:43 AM
To: user@hadoop.apache.org
Subject: Hadoop 2.6.0: "FileSystem file:/// is not a distributed file system"

Hi,
  I installed Hadoop 2.6.0 on my 2-node cluster, and I got the following error when I run:
$hadoop dfsadmin -report
FileSystem file:/// is not a distributed file system

What does this mean? I have already set this in core-site.xml:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master-node:9000</value>
</property>

and in hdfs-site.xml:
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/dong/hadoop-2.6.0-dist/dfs/name</value>
  <final>true</final>
</property>
<property>
  <name>dfs.dataname.data.dir</name>
  <value>file:/home/dong/hadoop-2.6.0-dist/dfs/data</value>
  <final>true</final>
</property>

The Java processes running on the master are:
10479 SecondaryNameNode
10281 NameNode
10628 ResourceManager

and on the slave:
22870 DataNode
22991 NodeManager

Any hints? Thanks!

Cheers,
Dan


