Posted to user@hbase.apache.org by mpiller <ma...@themidnightcoders.com> on 2009/08/19 15:19:50 UTC
local mode with S3 configuration
Hi,
I tried searching the forum for an answer and could not find one.
I am running Hadoop in local mode - a very basic single-server setup. I
have configured it to use s3n; that is, the properties fs.default.name,
fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey are set in
hadoop-site.xml. What is not clear to me is how my namenode should be
configured. My configuration file includes the following:
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:47111</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
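For completeness, the S3-related part of my hadoop-site.xml looks roughly like this (XXXXXX again stands for my bucket name, and the key values are placeholders, not my real credentials):

```xml
<property>
  <name>fs.default.name</name>
  <value>s3n://XXXXXX</value>
</property>

<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>MY_AWS_ACCESS_KEY_ID</value>
</property>

<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>MY_AWS_SECRET_ACCESS_KEY</value>
</property>
```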
and I see an exception in the namenode log file (where XXXXXX is the name of
my S3 bucket):
2009-08-19 07:44:56,593 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode:
java.net.UnknownHostException: Invalid hostname for server: XXXXXX
at org.apache.hadoop.ipc.Server.bind(Server.java:179)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:234)
at org.apache.hadoop.ipc.Server.<init>(Server.java:960)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:465)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:427)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:153)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:208)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:194)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:859)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:868)
My questions are:
- Do I need a namenode if I use S3 for input/output?
- If I do, what should it be set to?
Regards,
Mark
--
View this message in context: http://www.nabble.com/local-mode-with-S3-configuration-tp25043948p25043948.html
Sent from the HBase User mailing list archive at Nabble.com.
Re: local mode with S3 configuration
Posted by mpiller <ma...@themidnightcoders.com>.
Hi J-D,
Thanks for the answer, I missed that note in the doc.
I'm sorry for posting in the wrong place, I thought *this* was the hadoop
forum.
Regards,
Mark
Re: local mode with S3 configuration
Posted by Jean-Daniel Cryans <jd...@apache.org>.
Mark,
This question belongs to the Hadoop mailing list, not the HBase one.
But to answer your question, I see this in the doc
http://wiki.apache.org/hadoop/AmazonS3 :
To run in distributed mode you only need to run the MapReduce daemons
(JobTracker and TaskTrackers) - HDFS NameNode and DataNodes are
unnecessary.
J-D
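(In practice - and this is a sketch assuming a standard Hadoop layout, not something I have tested against your setup - that means pointing fs.default.name at the s3n bucket as you already do, and then starting only the MapReduce daemons rather than the full stack:

# Start only JobTracker and TaskTrackers; no start-dfs.sh, no NameNode
bin/start-mapred.sh

# Jobs then read and write S3 directly, along these lines:
bin/hadoop jar hadoop-*-examples.jar wordcount \
  s3n://XXXXXX/input s3n://XXXXXX/output

The UnknownHostException in your log is consistent with this: with fs.default.name set to an s3n:// URI, a NameNode you start anyway will try to treat the bucket name as a hostname to bind to, which fails.)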