Posted to common-user@hadoop.apache.org by Silvan Kaiser <si...@quobyte.com> on 2015/04/16 12:53:47 UTC

Question on configuring Hadoop 2.6.0 with a different filesystem

Hello!
I'm rather new to Hadoop and currently testing the integration of a new
file system as a replacement for HDFS, similar to integrations like
GlusterFS, GPFS, Ceph, etc. I do have an implementation of the FileSystem
class, but I'm hitting a basic issue trying to test it. This seems to be
rooted in a misconfiguration of my setup:

Upon NameNode startup the fs.defaultFS setting is rejected because the
scheme does not match 'hdfs', which is true, as I'm using our plugin's own
scheme. Log output:

~/tmp/hadoop-2.6.0>sbin/start-dfs.sh
Incorrect configuration: namenode address dfs.namenode.servicerpc-address
or dfs.namenode.rpc-address is not configured.
Starting namenodes on []
localhost: starting namenode, logging to
/home/kaisers/tmp/hadoop-2.6.0/logs/hadoop-kaisers-namenode-kaisers.out
localhost: starting datanode, logging to
/home/kaisers/tmp/hadoop-2.6.0/logs/hadoop-kaisers-datanode-kaisers.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to
/home/kaisers/tmp/hadoop-2.6.0/logs/hadoop-kaisers-secondarynamenode-kaisers.out
0.0.0.0: Exception in thread "main" java.lang.IllegalArgumentException:
Invalid URI for NameNode address (check fs.defaultFS):
quobyte://prod.corp.quobyte.com:7861/users/kaisers/hadoop-test/ is not of
scheme 'hdfs'.
0.0.0.0:        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:429)
0.0.0.0:        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:413)
0.0.0.0:        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:406)
0.0.0.0:        at
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:229)
0.0.0.0:        at
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:192)
0.0.0.0:        at
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:671)

Now the first error message states that NameNode address settings are
missing, but I could find no example where these are set for a different
file system. All the examples only set fs.defaultFS, which seems not to be
sufficient.
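
From the log above it looks like the start scripts derive the NameNode
address from fs.defaultFS, which only works for an hdfs:// URI. If the HDFS
daemons are meant to keep running alongside the other file system, I assume
the address could instead be set explicitly in hdfs-site.xml via one of the
properties named in that first message, e.g. (host:port is only a
placeholder):

    <!-- hdfs-site.xml: explicit NameNode RPC address; host:port is a placeholder -->
    <property>
        <name>dfs.namenode.rpc-address</name>
        <value>localhost:9000</value>
    </property>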

The setup is pseudo-distributed, as described in the Hadoop documentation;
core-site.xml contains these properties:
    <property>
        <name>fs.default.name</name>
        <!-- <value>hdfs://localhost:9000</value> -->
        <value>quobyte://prod.corp.quobyte.com:7861/users/kaisers/hadoop-test/</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <!-- <value>hdfs://localhost:9000</value> -->
        <value>quobyte://prod.corp.quobyte.com:7861/users/kaisers/hadoop-test/</value>
    </property>
    <property>
        <name>fs.quobyte.impl</name>
        <value>com.quobyte.hadoop.QuobyteFileSystem</value>
    </property>
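
As a side note, a minimal way to check that the plugin class loads from this
configuration, independent of the HDFS daemons, might look roughly like the
sketch below (assuming the plugin JAR and core-site.xml are on the
classpath; the class name QuobyteFsCheck is just a throwaway):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class QuobyteFsCheck {
    public static void main(String[] args) throws Exception {
        // Loads core-site.xml from the classpath, including fs.quobyte.impl.
        Configuration conf = new Configuration();
        // Resolve the FileSystem for the quobyte:// scheme directly, without
        // going through any HDFS daemon.
        FileSystem fs = FileSystem.get(
            URI.create("quobyte://prod.corp.quobyte.com:7861/"), conf);
        System.out.println("Loaded implementation: " + fs.getClass().getName());
        // Exercise one basic metadata call against the test directory.
        for (FileStatus status : fs.listStatus(new Path("/users/kaisers/hadoop-test/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}

If that prints com.quobyte.hadoop.QuobyteFileSystem and lists the directory,
the fs.quobyte.impl mapping itself should be fine.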

Any comments or links to relevant documentation would be great.

Thanks for reading & best regards
Silvan Kaiser


-- 
Quobyte GmbH
Boyenstr. 41 - 10115 Berlin-Mitte - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender

Re: Question on configuring Hadoop 2.6.0 with a different filesystem

Posted by sandeep vura <sa...@gmail.com>.
Hi Silvan,

Please put the configuration below in core-site.xml and start the cluster.

    <property>
        <name>fs.default.name</name>
        <value>quobyte://prod.corp.quobyte.com:7861/users/kaisers/hadoop-test/</value>
    </property>
    <property>
        <name>fs.quobyte.impl</name>
        <value>com.quobyte.hadoop.QuobyteFileSystem</value>
    </property>
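
After the restart you can verify that the path resolves with something like:

  hadoop fs -ls quobyte://prod.corp.quobyte.com:7861/users/kaisers/hadoop-test/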

Regards,
Sandeep.v
