Posted to general@hadoop.apache.org by Tejas Lagvankar <te...@umbc.edu> on 2009/10/13 16:17:14 UTC
0.20.1 Cluster Setup Problem
Hi,
We are trying to set up a cluster (starting with 2 machines) using the
new 0.20.1 version.
On the master machine, just after the server starts, the name node
dies off with the following exception:
2009-10-13 01:22:24,740 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
Incomplete HDFS URI, no host: hdfs://master_hadoop
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize
(DistributedFileSystem.java:78)
at org.apache.hadoop.fs.FileSystem.createFileSystem
(FileSystem.java:1373)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:
66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:
1385)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:191)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier
(NameNode.java:208)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize
(NameNode.java:204)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>
(NameNode.java:279)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode
(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main
(NameNode.java:965)
Can anyone help? Also, can anyone send across example configuration
files for 0.20.1, if they are different from what we are using?
The detailed log file is attached.
Re: 0.20.1 Cluster Setup Problem
Posted by Kevin Sweeney <ke...@yieldex.com>.
Hi Tejas,
I just upgraded to 0.20.1 as well, and your config all looks the same as mine
except that in core-site.xml I have:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
Maybe you need to add the port on yours. I haven't seen that error before,
but it seems to be suggesting it can't resolve the host. I'd say
double-check your names and that they resolve.
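One quick way to double-check is a small resolution script; this is a hypothetical sketch (the `resolves` helper is not part of Hadoop), and you would substitute the names from your conf/masters and conf/slaves for `localhost`:

```python
import socket

def resolves(host):
    # True when the OS resolver can map the name to an address
    # (via /etc/hosts or DNS), which is what Hadoop relies on.
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# Check every name used in the cluster config, e.g.:
for name in ["localhost"]:
    print(name, "resolves" if resolves(name) else "DOES NOT resolve")
```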
Hope that helps,
Kevin
On Tue, Oct 13, 2009 at 2:17 PM, Tejas Lagvankar <te...@umbc.edu> wrote:
> Hi,
>
>
> We are trying to set up a cluster (starting with 2 machines) using the new
> 0.20.1 version.
>
> On the master machine, just after the server starts, the name node dies off
> with the following exception:
>
> 2009-10-13 01:22:24,740 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
> Incomplete HDFS URI, no host: hdfs://master_hadoop
> at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
> at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1373)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1385)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:191)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
> at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>
> Can anyone help ? Also can anyone send across example configuration files
> for 0.20.1 if they are different than we are using ?
>
> The detail log file is attached along with.
>
>
>
>
> The configuration files are as follows:
>
> MASTER CONFIG
> ------ conf/masters -------
> master_hadoop
>
> ------ conf/slaves -------
> master_hadoop
> slave_hadoop
>
> ------ core-site.xml -------
> <configuration>
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://master_hadoop</value>
> </property>
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/opt/hadoop-0.20.1/tmp</value>
> </property>
>
> ------ hdfs-site.xml -------
> <property>
> <name>dfs.replication</name>
> <value>2</value>
> </property>
>
>
> ------ mapred-site.xml -------
> <property>
> <name>mapred.job.tracker</name>
> <value>tejas_hadoop:9001</value>
> </property>
>
>
>
>
>
> SLAVE CONFIG
> ------ core-site.xml -------
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/opt/hadoop-0.20.1/tmp/</value>
> </property>
>
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://master_hadoop</value>
> </property>
>
>
> ------ hdfs-site.xml -------
> <property>
> <name>dfs.replication</name>
> <value>2</value>
> </property>
>
> ------ mapred-site.xml -------
> <property>
> <name>mapred.job.tracker</name>
> <value>tejas_hadoop:9001</value>
> </property>
>
>
>
> Regards,
>
> Tejas Lagvankar
> meettejas@umbc.edu
> www.umbc.edu/~tej2
>
>
>
>
>
Re: 0.20.1 Cluster Setup Problem
Posted by jun hu <jh...@gmail.com>.
I think you should edit the core-site.xml
(on both the master and slave machines):
> ------ core-site.xml -------
> <configuration>
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://master_hadoop:54310</value>
> </property>
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/opt/hadoop-0.20.1/tmp</value>
> </property>
On Tue, Oct 13, 2009 at 10:17 PM, Tejas Lagvankar <te...@umbc.edu> wrote:
> Hi,
>
> We are trying to set up a cluster (starting with 2 machines) using the new
> 0.20.1 version.
>
> On the master machine, just after the server starts, the name node dies off
> with the following exception:
>
> 2009-10-13 01:22:24,740 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
> Incomplete HDFS URI, no host: hdfs://master_hadoop
> at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
> at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1373)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1385)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:191)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
> at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>
> Can anyone help ? Also can anyone send across example configuration files
> for 0.20.1 if they are different than we are using ?
>
> The detail log file is attached along with.
>
>
>
>
> The configuration files are as follows:
>
> MASTER CONFIG
> ------ conf/masters -------
> master_hadoop
>
> ------ conf/slaves -------
> master_hadoop
> slave_hadoop
>
> ------ core-site.xml -------
> <configuration>
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://master_hadoop</value>
> </property>
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/opt/hadoop-0.20.1/tmp</value>
> </property>
>
> ------ hdfs-site.xml -------
> <property>
> <name>dfs.replication</name>
> <value>2</value>
> </property>
>
>
> ------ mapred-site.xml -------
> <property>
> <name>mapred.job.tracker</name>
> <value>tejas_hadoop:9001</value>
> </property>
>
>
>
>
>
> SLAVE CONFIG
> ------ core-site.xml -------
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/opt/hadoop-0.20.1/tmp/</value>
> </property>
>
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://master_hadoop</value>
> </property>
>
>
> ------ hdfs-site.xml -------
> <property>
> <name>dfs.replication</name>
> <value>2</value>
> </property>
>
> ------ mapred-site.xml -------
> <property>
> <name>mapred.job.tracker</name>
> <value>tejas_hadoop:9001</value>
> </property>
>
>
>
> Regards,
>
> Tejas Lagvankar
> meettejas@umbc.edu
> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>
>
>
>
>
--
Best Regards!
胡俊
Re: 0.20.1 Cluster Setup Problem
Posted by Tejas Lagvankar <te...@umbc.edu>.
Thanks Todd,
I never thought of that!
Regards,
Tejas
On Oct 13, 2009, at 1:50 PM, Todd Lipcon wrote:
> Your issue was probably that slave_hadoop and master_hadoop are not
> valid
> host names:
>
> RFCs <http://en.wikipedia.org/wiki/Request_for_Comments> mandate that a
> hostname's labels may contain only the ASCII
> <http://en.wikipedia.org/wiki/ASCII> letters 'a' through 'z'
> (case-insensitive), the digits '0' through '9', and the hyphen. Hostname
> labels cannot begin or end with a hyphen. No other symbols, punctuation
> characters, or blank spaces are permitted.
>
> from http://en.wikipedia.org/wiki/Hostname
>
> -Todd
>
> On Tue, Oct 13, 2009 at 10:01 AM, Tejas Lagvankar <te...@umbc.edu>
> wrote:
>
>> Hey Kevin,
>>
>> You were right...
>> I changed all my aliases to IP addresses. It worked !
>>
>> Thank you all again :)
>>
>> Regards,
>> Tejas
>>
>>
>> On Oct 13, 2009, at 12:41 PM, Tejas Lagvankar wrote:
>>
>> By name resolution, I assume that you mean the name mentioned in
>>> /etc/hosts. Yes, in the logs, the IP address appears in the
>>> beginning.
>>> Correct me if I'm wrong
>>> I will also try with using just the IP's instead of the aliases.
>>>
>>> On Oct 13, 2009, at 12:37 PM, Kevin Sweeney wrote:
>>>
>>> did you verify the name resolution?
>>>>
>>>> On Tue, Oct 13, 2009 at 4:34 PM, Tejas Lagvankar <te...@umbc.edu>
>>>> wrote:
>>>>
>>>> I get the same error even if I specify the port number. I have
>>>> tried with
>>>> port numbers 54310 as well as 9000.
>>>>
>>>>
>>>> Regards,
>>>> Tejas
>>>>
>>>>
>>>> On Oct 13, 2009, at 12:12 PM, Chandan Tamrakar wrote:
>>>>
>>>> I think you need to specify the port as well in the following property:
>>>>
>>>> <property>
>>>> <name>fs.default.name</name>
>>>> <value>hdfs://master_hadoop</value>
>>>> </property>
>>>>
>>>>
>>>> On Tue, Oct 13, 2009 at 7:17 AM, Tejas Lagvankar <te...@umbc.edu>
>>>> wrote:
>>>>
>>>> Hi,
>>>>
>>>>
>>>> We are trying to set up a cluster (starting with 2 machines)
>>>> using the
>>>> new
>>>> 0.20.1 version.
>>>>
>>>> On the master machine, just after the server starts, the name
>>>> node dies
>>>> off
>>>> with the following exception:
>>>>
>>>> 2009-10-13 01:22:24,740 ERROR
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode:
>>>> java.io.IOException:
>>>> Incomplete HDFS URI, no host: hdfs://master_hadoop
>>>> at
>>>>
>>>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize
>>>> (DistributedFileSystem.java:78)
>>>> at
>>>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:
>>>> 1373)
>>>> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>>> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:
>>>> 1385)
>>>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:191)
>>>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>>>> at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
>>>> at
>>>>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier
>>>> (NameNode.java:208)
>>>> at
>>>>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize
>>>> (NameNode.java:204)
>>>> at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>
>>>> (NameNode.java:279)
>>>> at
>>>>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode
>>>> (NameNode.java:956)
>>>> at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main
>>>> (NameNode.java:965)
>>>>
>>>> Can anyone help ? Also can anyone send across example
>>>> configuration
>>>> files
>>>> for 0.20.1 if they are different than we are using ?
>>>>
>>>> The detail log file is attached along with.
>>>>
>>>>
>>>>
>>>>
>>>> The configuration files are as follows:
>>>>
>>>> MASTER CONFIG
>>>> ------ conf/masters -------
>>>> master_hadoop
>>>>
>>>> ------ conf/slaves -------
>>>> master_hadoop
>>>> slave_hadoop
>>>>
>>>> ------ core-site.xml -------
>>>> <configuration>
>>>>
>>>> <property>
>>>> <name>fs.default.name</name>
>>>> <value>hdfs://master_hadoop</value>
>>>> </property>
>>>>
>>>> <property>
>>>> <name>hadoop.tmp.dir</name>
>>>> <value>/opt/hadoop-0.20.1/tmp</value>
>>>> </property>
>>>>
>>>> ------ hdfs-site.xml -------
>>>> <property>
>>>> <name>dfs.replication</name>
>>>> <value>2</value>
>>>> </property>
>>>>
>>>>
>>>> ------ mapred-site.xml -------
>>>> <property>
>>>> <name>mapred.job.tracker</name>
>>>> <value>tejas_hadoop:9001</value>
>>>> </property>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> SLAVE CONFIG
>>>> ------ core-site.xml -------
>>>> <property>
>>>> <name>hadoop.tmp.dir</name>
>>>> <value>/opt/hadoop-0.20.1/tmp/</value>
>>>> </property>
>>>>
>>>>
>>>> <property>
>>>> <name>fs.default.name</name>
>>>> <value>hdfs://master_hadoop</value>
>>>> </property>
>>>>
>>>>
>>>> ------ hdfs-site.xml -------
>>>> <property>
>>>> <name>dfs.replication</name>
>>>> <value>2</value>
>>>> </property>
>>>>
>>>> ------ mapred-site.xml -------
>>>> <property>
>>>> <name>mapred.job.tracker</name>
>>>> <value>tejas_hadoop:9001</value>
>>>> </property>
>>>>
>>>>
>>>>
>>>> Regards,
>>>>
>>>> Tejas Lagvankar
>>>> meettejas@umbc.edu
>>>> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2> <
>>>> http://www.umbc.edu/%7Etej2>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Chandan Tamrakar
>>>>
>>>> Tejas Lagvankar
>>>> meettejas@umbc.edu
>>>> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>> Tejas Lagvankar
>>> meettejas@umbc.edu
>>> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>>>
>>>
>>>
>>>
>> Tejas Lagvankar
>> meettejas@umbc.edu
>> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>>
>>
>>
>>
Tejas Lagvankar
meettejas@umbc.edu
www.umbc.edu/~tej2
Re: 0.20.1 Cluster Setup Problem
Posted by Todd Lipcon <to...@cloudera.com>.
Your issue was probably that slave_hadoop and master_hadoop are not valid
host names:
RFCs <http://en.wikipedia.org/wiki/Request_for_Comments> mandate that a
hostname's labels may contain only the ASCII
<http://en.wikipedia.org/wiki/ASCII> letters 'a' through 'z'
(case-insensitive), the digits '0' through '9', and the hyphen. Hostname
labels cannot begin or end with a hyphen. No other symbols, punctuation
characters, or blank spaces are permitted.
from http://en.wikipedia.org/wiki/Hostname
-Todd
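That rule can be checked mechanically. Here is a minimal sketch (the `valid_hostname` helper and the example names are illustrative, not part of Hadoop) showing why `master_hadoop` is rejected while a hyphenated name would pass:

```python
import re

# One hostname label: letters/digits, optional middle hyphens,
# never starting or ending with a hyphen (the rule quoted above).
LABEL = re.compile(r"^[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?$")

def valid_hostname(name):
    # A hostname is valid when every dot-separated label is valid.
    return all(LABEL.match(label) for label in name.split("."))

print(valid_hostname("master-hadoop"))  # hyphens are fine
print(valid_hostname("master_hadoop"))  # underscore is not allowed
```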
On Tue, Oct 13, 2009 at 10:01 AM, Tejas Lagvankar <te...@umbc.edu> wrote:
> Hey Kevin,
>
> You were right...
> I changed all my aliases to IP addresses. It worked !
>
> Thank you all again :)
>
> Regards,
> Tejas
>
>
> On Oct 13, 2009, at 12:41 PM, Tejas Lagvankar wrote:
>
> By name resolution, I assume that you mean the name mentioned in
>> /etc/hosts. Yes, in the logs, the IP address appears in the beginning.
>> Correct me if I'm wrong
>> I will also try with using just the IP's instead of the aliases.
>>
>> On Oct 13, 2009, at 12:37 PM, Kevin Sweeney wrote:
>>
>> did you verify the name resolution?
>>>
>>> On Tue, Oct 13, 2009 at 4:34 PM, Tejas Lagvankar <te...@umbc.edu> wrote:
>>>
>>> I get the same error even if I specify the port number. I have tried with
>>> port numbers 54310 as well as 9000.
>>>
>>>
>>> Regards,
>>> Tejas
>>>
>>>
>>> On Oct 13, 2009, at 12:12 PM, Chandan Tamrakar wrote:
>>>
>>> I think you need to specify the port as well in the following property:
>>>
>>> <property>
>>> <name>fs.default.name</name>
>>> <value>hdfs://master_hadoop</value>
>>> </property>
>>>
>>>
>>> On Tue, Oct 13, 2009 at 7:17 AM, Tejas Lagvankar <te...@umbc.edu> wrote:
>>>
>>> Hi,
>>>
>>>
>>> We are trying to set up a cluster (starting with 2 machines) using the
>>> new
>>> 0.20.1 version.
>>>
>>> On the master machine, just after the server starts, the name node dies
>>> off
>>> with the following exception:
>>>
>>> 2009-10-13 01:22:24,740 ERROR
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>> Incomplete HDFS URI, no host: hdfs://master_hadoop
>>> at
>>>
>>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
>>> at
>>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1373)
>>> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1385)
>>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:191)
>>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>>> at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
>>> at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
>>> at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>>> at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>>>
>>> Can anyone help ? Also can anyone send across example configuration
>>> files
>>> for 0.20.1 if they are different than we are using ?
>>>
>>> The detail log file is attached along with.
>>>
>>>
>>>
>>>
>>> The configuration files are as follows:
>>>
>>> MASTER CONFIG
>>> ------ conf/masters -------
>>> master_hadoop
>>>
>>> ------ conf/slaves -------
>>> master_hadoop
>>> slave_hadoop
>>>
>>> ------ core-site.xml -------
>>> <configuration>
>>>
>>> <property>
>>> <name>fs.default.name</name>
>>> <value>hdfs://master_hadoop</value>
>>> </property>
>>>
>>> <property>
>>> <name>hadoop.tmp.dir</name>
>>> <value>/opt/hadoop-0.20.1/tmp</value>
>>> </property>
>>>
>>> ------ hdfs-site.xml -------
>>> <property>
>>> <name>dfs.replication</name>
>>> <value>2</value>
>>> </property>
>>>
>>>
>>> ------ mapred-site.xml -------
>>> <property>
>>> <name>mapred.job.tracker</name>
>>> <value>tejas_hadoop:9001</value>
>>> </property>
>>>
>>>
>>>
>>>
>>>
>>> SLAVE CONFIG
>>> ------ core-site.xml -------
>>> <property>
>>> <name>hadoop.tmp.dir</name>
>>> <value>/opt/hadoop-0.20.1/tmp/</value>
>>> </property>
>>>
>>>
>>> <property>
>>> <name>fs.default.name</name>
>>> <value>hdfs://master_hadoop</value>
>>> </property>
>>>
>>>
>>> ------ hdfs-site.xml -------
>>> <property>
>>> <name>dfs.replication</name>
>>> <value>2</value>
>>> </property>
>>>
>>> ------ mapred-site.xml -------
>>> <property>
>>> <name>mapred.job.tracker</name>
>>> <value>tejas_hadoop:9001</value>
>>> </property>
>>>
>>>
>>>
>>> Regards,
>>>
>>> Tejas Lagvankar
>>> meettejas@umbc.edu
>>> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2> <
>>> http://www.umbc.edu/%7Etej2>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Chandan Tamrakar
>>>
>>> Tejas Lagvankar
>>> meettejas@umbc.edu
>>> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>> Tejas Lagvankar
>> meettejas@umbc.edu
>> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>>
>>
>>
>>
> Tejas Lagvankar
> meettejas@umbc.edu
> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>
>
>
>
Re: 0.20.1 Cluster Setup Problem
Posted by Tejas Lagvankar <te...@umbc.edu>.
Hey Kevin,
You were right...
I changed all my aliases to IP addresses. It worked!
Thank you all again :)
Regards,
Tejas
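For anyone who would rather keep names than raw IPs: mapping RFC-valid aliases (hyphens instead of underscores) in /etc/hosts on every node also works. A hypothetical sketch, with made-up addresses:

```
# /etc/hosts on every node (addresses are examples only)
192.168.1.10   master-hadoop
192.168.1.11   slave-hadoop
```

The config files would then reference e.g. hdfs://master-hadoop:9000 instead of an underscored name.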
On Oct 13, 2009, at 12:41 PM, Tejas Lagvankar wrote:
> By name resolution, I assume that you mean the name mentioned in
> /etc/hosts. Yes, in the logs, the IP address appears in the beginning.
> Correct me if I'm wrong.
> I will also try with using just the IPs instead of the aliases.
>
> On Oct 13, 2009, at 12:37 PM, Kevin Sweeney wrote:
>
>> did you verify the name resolution?
>>
>> On Tue, Oct 13, 2009 at 4:34 PM, Tejas Lagvankar <te...@umbc.edu>
>> wrote:
>>
>> I get the same error even if I specify the port number. I have
>> tried with port numbers 54310 as well as 9000.
>>
>>
>> Regards,
>> Tejas
>>
>>
>> On Oct 13, 2009, at 12:12 PM, Chandan Tamrakar wrote:
>>
>> I think you need to specify the port as well in the following property:
>>
>> <property>
>> <name>fs.default.name</name>
>> <value>hdfs://master_hadoop</value>
>> </property>
>>
>>
>> On Tue, Oct 13, 2009 at 7:17 AM, Tejas Lagvankar <te...@umbc.edu>
>> wrote:
>>
>> Hi,
>>
>>
>> We are trying to set up a cluster (starting with 2 machines) using
>> the new
>> 0.20.1 version.
>>
>> On the master machine, just after the server starts, the name node
>> dies off
>> with the following exception:
>>
>> 2009-10-13 01:22:24,740 ERROR
>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>> Incomplete HDFS URI, no host: hdfs://master_hadoop
>> at
>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize
>> (DistributedFileSystem.java:78)
>> at
>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:
>> 1373)
>> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:
>> 1385)
>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:191)
>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>> at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier
>> (NameNode.java:208)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize
>> (NameNode.java:204)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>
>> (NameNode.java:279)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode
>> (NameNode.java:956)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:
>> 965)
>>
>> Can anyone help ? Also can anyone send across example
>> configuration files
>> for 0.20.1 if they are different than we are using ?
>>
>> The detail log file is attached along with.
>>
>>
>>
>>
>> The configuration files are as follows:
>>
>> MASTER CONFIG
>> ------ conf/masters -------
>> master_hadoop
>>
>> ------ conf/slaves -------
>> master_hadoop
>> slave_hadoop
>>
>> ------ core-site.xml -------
>> <configuration>
>>
>> <property>
>> <name>fs.default.name</name>
>> <value>hdfs://master_hadoop</value>
>> </property>
>>
>> <property>
>> <name>hadoop.tmp.dir</name>
>> <value>/opt/hadoop-0.20.1/tmp</value>
>> </property>
>>
>> ------ hdfs-site.xml -------
>> <property>
>> <name>dfs.replication</name>
>> <value>2</value>
>> </property>
>>
>>
>> ------ mapred-site.xml -------
>> <property>
>> <name>mapred.job.tracker</name>
>> <value>tejas_hadoop:9001</value>
>> </property>
>>
>>
>>
>>
>>
>> SLAVE CONFIG
>> ------ core-site.xml -------
>> <property>
>> <name>hadoop.tmp.dir</name>
>> <value>/opt/hadoop-0.20.1/tmp/</value>
>> </property>
>>
>>
>> <property>
>> <name>fs.default.name</name>
>> <value>hdfs://master_hadoop</value>
>> </property>
>>
>>
>> ------ hdfs-site.xml -------
>> <property>
>> <name>dfs.replication</name>
>> <value>2</value>
>> </property>
>>
>> ------ mapred-site.xml -------
>> <property>
>> <name>mapred.job.tracker</name>
>> <value>tejas_hadoop:9001</value>
>> </property>
>>
>>
>>
>> Regards,
>>
>> Tejas Lagvankar
>> meettejas@umbc.edu
>> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>>
>>
>>
>>
>>
>>
>>
>> --
>> Chandan Tamrakar
>>
>> Tejas Lagvankar
>> meettejas@umbc.edu
>> www.umbc.edu/~tej2
>>
>>
>>
>>
>>
>>
>>
>
> Tejas Lagvankar
> meettejas@umbc.edu
> www.umbc.edu/~tej2
>
>
>
Tejas Lagvankar
meettejas@umbc.edu
www.umbc.edu/~tej2
Re: 0.20.1 Cluster Setup Problem
Posted by Tejas Lagvankar <te...@umbc.edu>.
Hey Kevin,
You were right...
I changed all my aliases to IP addresses, and it worked!
Thank you all again :)
Regards,
Tejas
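For anyone who finds this thread later: the fix above is consistent with how Java parses URIs. Underscores are not legal hostname characters (RFC 952/1123), so for `hdfs://master_hadoop` the authority never parses as a server-based authority and `getHost()` returns null, which is exactly what the "Incomplete HDFS URI, no host" message reports. A small sketch (the class name is just illustrative):

```java
import java.net.URI;

public class UnderscoreHostDemo {
    public static void main(String[] args) {
        // Underscores are invalid in hostnames, so server-based URI parsing
        // fails and getHost() comes back null -- matching the NameNode's
        // "Incomplete HDFS URI, no host" error for hdfs://master_hadoop.
        URI underscore = URI.create("hdfs://master_hadoop:9000");
        URI hyphen = URI.create("hdfs://master-hadoop:9000");
        System.out.println("underscore host: " + underscore.getHost()); // null
        System.out.println("hyphen host: " + hyphen.getHost());         // master-hadoop
    }
}
```

So either plain IPs (as above) or alias names without underscores (e.g. `master-hadoop`) should avoid the parse failure.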
On Oct 13, 2009, at 12:41 PM, Tejas Lagvankar wrote:
> By name resolution, I assume that you mean the name mentioned in /
> etc/hosts. Yes, in the logs, the IP address appears in the beginning.
> Correct me if I'm wrong
> I will also try with using just the IP's instead of the aliases.
>
> On Oct 13, 2009, at 12:37 PM, Kevin Sweeney wrote:
>
>> did you verify the name resolution?
>>
>> On Tue, Oct 13, 2009 at 4:34 PM, Tejas Lagvankar <te...@umbc.edu>
>> wrote:
>>
>> I get the same error even if i specify the port number. I have
>> tried with port numbers 54310 as well as 9000.
>>
>>
>> Regards,
>> Tejas
>>
>>
>> On Oct 13, 2009, at 12:12 PM, Chandan Tamrakar wrote:
>>
>> I think you need to specify the port as well for following port
>>
>> <property>
>> <name>fs.default.name</name>
>> <value>hdfs://master_hadoop</value>
>> </property>
>>
>>
>> On Tue, Oct 13, 2009 at 7:17 AM, Tejas Lagvankar <te...@umbc.edu>
>> wrote:
>>
>> Hi,
>>
>>
>> We are trying to set up a cluster (starting with 2 machines) using
>> the new
>> 0.20.1 version.
>>
>> On the master machine, just after the server starts, the name node
>> dies off
>> with the following exception:
>>
>> 2009-10-13 01:22:24,740 ERROR
>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>> Incomplete HDFS URI, no host: hdfs://master_hadoop
>> at
>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize
>> (DistributedFileSystem.java:78)
>> at
>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:
>> 1373)
>> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:
>> 1385)
>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:191)
>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>> at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier
>> (NameNode.java:208)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize
>> (NameNode.java:204)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>
>> (NameNode.java:279)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode
>> (NameNode.java:956)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:
>> 965)
>>
>> Can anyone help ? Also can anyone send across example
>> configuration files
>> for 0.20.1 if they are different than we are using ?
>>
>> The detail log file is attached along with.
>>
>>
>>
>>
>> The configuration files are as follows:
>>
>> MASTER CONFIG
>> ------ conf/masters -------
>> master_hadoop
>>
>> ------ conf/slaves -------
>> master_hadoop
>> slave_hadoop
>>
>> ------ core-site.xml -------
>> <configuration>
>>
>> <property>
>> <name>fs.default.name</name>
>> <value>hdfs://master_hadoop</value>
>> </property>
>>
>> <property>
>> <name>hadoop.tmp.dir</name>
>> <value>/opt/hadoop-0.20.1/tmp</value>
>> </property>
>>
>> ------ hdfs-site.xml -------
>> <property>
>> <name>dfs.replication</name>
>> <value>2</value>
>> </property>
>>
>>
>> ------ mapred-site.xml -------
>> <property>
>> <name>mapred.job.tracker</name>
>> <value>tejas_hadoop:9001</value>
>> </property>
>>
>>
>>
>>
>>
>> SLAVE CONFIG
>> ------ core-site.xml -------
>> <property>
>> <name>hadoop.tmp.dir</name>
>> <value>/opt/hadoop-0.20.1/tmp/</value>
>> </property>
>>
>>
>> <property>
>> <name>fs.default.name</name>
>> <value>hdfs://master_hadoop</value>
>> </property>
>>
>>
>> ------ hdfs-site.xml -------
>> <property>
>> <name>dfs.replication</name>
>> <value>2</value>
>> </property>
>>
>> ------ mapred-site.xml -------
>> <property>
>> <name>mapred.job.tracker</name>
>> <value>tejas_hadoop:9001</value>
>> </property>
>>
>>
>>
>> Regards,
>>
>> Tejas Lagvankar
>> meettejas@umbc.edu
>> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>>
>>
>>
>>
>>
>>
>>
>> --
>> Chandan Tamrakar
>>
>> Tejas Lagvankar
>> meettejas@umbc.edu
>> www.umbc.edu/~tej2
>>
>>
>>
>>
>>
>>
>>
>
> Tejas Lagvankar
> meettejas@umbc.edu
> www.umbc.edu/~tej2
>
>
>
Tejas Lagvankar
meettejas@umbc.edu
www.umbc.edu/~tej2
Re: 0.20.1 Cluster Setup Problem
Posted by Tejas Lagvankar <te...@umbc.edu>.
By name resolution, I assume you mean the names listed in /etc/hosts.
Yes, in the logs, the IP address appears at the beginning; correct me
if I'm wrong.
I will also try using just the IPs instead of the aliases.
On Oct 13, 2009, at 12:37 PM, Kevin Sweeney wrote:
> did you verify the name resolution?
>
> On Tue, Oct 13, 2009 at 4:34 PM, Tejas Lagvankar <te...@umbc.edu>
> wrote:
>
> I get the same error even if i specify the port number. I have tried
> with port numbers 54310 as well as 9000.
>
>
> Regards,
> Tejas
>
>
> On Oct 13, 2009, at 12:12 PM, Chandan Tamrakar wrote:
>
> I think you need to specify the port as well for following port
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://master_hadoop</value>
> </property>
>
>
> On Tue, Oct 13, 2009 at 7:17 AM, Tejas Lagvankar <te...@umbc.edu>
> wrote:
>
> Hi,
>
>
> We are trying to set up a cluster (starting with 2 machines) using
> the new
> 0.20.1 version.
>
> On the master machine, just after the server starts, the name node
> dies off
> with the following exception:
>
> 2009-10-13 01:22:24,740 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
> Incomplete HDFS URI, no host: hdfs://master_hadoop
> at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize
> (DistributedFileSystem.java:78)
> at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1373)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:
> 1385)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:191)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
> at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier
> (NameNode.java:208)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize
> (NameNode.java:204)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:
> 279)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode
> (NameNode.java:956)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:
> 965)
>
> Can anyone help ? Also can anyone send across example configuration
> files
> for 0.20.1 if they are different than we are using ?
>
> The detail log file is attached along with.
>
>
>
>
> The configuration files are as follows:
>
> MASTER CONFIG
> ------ conf/masters -------
> master_hadoop
>
> ------ conf/slaves -------
> master_hadoop
> slave_hadoop
>
> ------ core-site.xml -------
> <configuration>
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://master_hadoop</value>
> </property>
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/opt/hadoop-0.20.1/tmp</value>
> </property>
>
> ------ hdfs-site.xml -------
> <property>
> <name>dfs.replication</name>
> <value>2</value>
> </property>
>
>
> ------ mapred-site.xml -------
> <property>
> <name>mapred.job.tracker</name>
> <value>tejas_hadoop:9001</value>
> </property>
>
>
>
>
>
> SLAVE CONFIG
> ------ core-site.xml -------
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/opt/hadoop-0.20.1/tmp/</value>
> </property>
>
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://master_hadoop</value>
> </property>
>
>
> ------ hdfs-site.xml -------
> <property>
> <name>dfs.replication</name>
> <value>2</value>
> </property>
>
> ------ mapred-site.xml -------
> <property>
> <name>mapred.job.tracker</name>
> <value>tejas_hadoop:9001</value>
> </property>
>
>
>
> Regards,
>
> Tejas Lagvankar
> meettejas@umbc.edu
> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>
>
>
>
>
>
>
> --
> Chandan Tamrakar
>
> Tejas Lagvankar
> meettejas@umbc.edu
> www.umbc.edu/~tej2
>
>
>
>
>
>
>
Tejas Lagvankar
meettejas@umbc.edu
www.umbc.edu/~tej2
Re: 0.20.1 Cluster Setup Problem
Posted by Kevin Sweeney <ke...@yieldex.com>.
did you verify the name resolution?
On Tue, Oct 13, 2009 at 4:34 PM, Tejas Lagvankar <te...@umbc.edu> wrote:
>
> I get the same error even if i specify the port number. I have tried with
> port numbers 54310 as well as 9000.
>
>
> Regards,
> Tejas
>
>
> On Oct 13, 2009, at 12:12 PM, Chandan Tamrakar wrote:
>
> I think you need to specify the port as well for following port
>>
>> <property>
>> <name>fs.default.name</name>
>> <value>hdfs://master_hadoop</value>
>> </property>
>>
>>
>> On Tue, Oct 13, 2009 at 7:17 AM, Tejas Lagvankar <te...@umbc.edu> wrote:
>>
>> Hi,
>>>
>>>
>>> We are trying to set up a cluster (starting with 2 machines) using the
>>> new
>>> 0.20.1 version.
>>>
>>> On the master machine, just after the server starts, the name node dies
>>> off
>>> with the following exception:
>>>
>>> 2009-10-13 01:22:24,740 ERROR
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>> Incomplete HDFS URI, no host: hdfs://master_hadoop
>>> at
>>>
>>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
>>> at
>>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1373)
>>> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>>> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1385)
>>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:191)
>>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>>> at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
>>> at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
>>> at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>>> at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>>> at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>>>
>>> Can anyone help ? Also can anyone send across example configuration
>>> files
>>> for 0.20.1 if they are different than we are using ?
>>>
>>> The detail log file is attached along with.
>>>
>>>
>>>
>>>
>>> The configuration files are as follows:
>>>
>>> MASTER CONFIG
>>> ------ conf/masters -------
>>> master_hadoop
>>>
>>> ------ conf/slaves -------
>>> master_hadoop
>>> slave_hadoop
>>>
>>> ------ core-site.xml -------
>>> <configuration>
>>>
>>> <property>
>>> <name>fs.default.name</name>
>>> <value>hdfs://master_hadoop</value>
>>> </property>
>>>
>>> <property>
>>> <name>hadoop.tmp.dir</name>
>>> <value>/opt/hadoop-0.20.1/tmp</value>
>>> </property>
>>>
>>> ------ hdfs-site.xml -------
>>> <property>
>>> <name>dfs.replication</name>
>>> <value>2</value>
>>> </property>
>>>
>>>
>>> ------ mapred-site.xml -------
>>> <property>
>>> <name>mapred.job.tracker</name>
>>> <value>tejas_hadoop:9001</value>
>>> </property>
>>>
>>>
>>>
>>>
>>>
>>> SLAVE CONFIG
>>> ------ core-site.xml -------
>>> <property>
>>> <name>hadoop.tmp.dir</name>
>>> <value>/opt/hadoop-0.20.1/tmp/</value>
>>> </property>
>>>
>>>
>>> <property>
>>> <name>fs.default.name</name>
>>> <value>hdfs://master_hadoop</value>
>>> </property>
>>>
>>>
>>> ------ hdfs-site.xml -------
>>> <property>
>>> <name>dfs.replication</name>
>>> <value>2</value>
>>> </property>
>>>
>>> ------ mapred-site.xml -------
>>> <property>
>>> <name>mapred.job.tracker</name>
>>> <value>tejas_hadoop:9001</value>
>>> </property>
>>>
>>>
>>>
>>> Regards,
>>>
>>> Tejas Lagvankar
>>> meettejas@umbc.edu
>>> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>> --
>> Chandan Tamrakar
>>
>
> Tejas Lagvankar
> meettejas@umbc.edu
> www.umbc.edu/~tej2
>
>
>
>
Re: 0.20.1 Cluster Setup Problem
Posted by Tejas Lagvankar <te...@umbc.edu>.
I get the same error even if I specify the port number. I have tried
port numbers 54310 as well as 9000.
Regards,
Tejas
On Oct 13, 2009, at 12:12 PM, Chandan Tamrakar wrote:
> I think you need to specify the port as well for following port
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://master_hadoop</value>
> </property>
>
>
> On Tue, Oct 13, 2009 at 7:17 AM, Tejas Lagvankar <te...@umbc.edu>
> wrote:
>
>> Hi,
>>
>>
>> We are trying to set up a cluster (starting with 2 machines) using
>> the new
>> 0.20.1 version.
>>
>> On the master machine, just after the server starts, the name node
>> dies off
>> with the following exception:
>>
>> 2009-10-13 01:22:24,740 ERROR
>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>> Incomplete HDFS URI, no host: hdfs://master_hadoop
>> at
>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize
>> (DistributedFileSystem.java:78)
>> at
>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:
>> 1373)
>> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:
>> 66)
>> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:
>> 1385)
>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:191)
>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>> at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier
>> (NameNode.java:208)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize
>> (NameNode.java:204)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>
>> (NameNode.java:279)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode
>> (NameNode.java:956)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:
>> 965)
>>
>> Can anyone help ? Also can anyone send across example
>> configuration files
>> for 0.20.1 if they are different than we are using ?
>>
>> The detail log file is attached along with.
>>
>>
>>
>>
>> The configuration files are as follows:
>>
>> MASTER CONFIG
>> ------ conf/masters -------
>> master_hadoop
>>
>> ------ conf/slaves -------
>> master_hadoop
>> slave_hadoop
>>
>> ------ core-site.xml -------
>> <configuration>
>>
>> <property>
>> <name>fs.default.name</name>
>> <value>hdfs://master_hadoop</value>
>> </property>
>>
>> <property>
>> <name>hadoop.tmp.dir</name>
>> <value>/opt/hadoop-0.20.1/tmp</value>
>> </property>
>>
>> ------ hdfs-site.xml -------
>> <property>
>> <name>dfs.replication</name>
>> <value>2</value>
>> </property>
>>
>>
>> ------ mapred-site.xml -------
>> <property>
>> <name>mapred.job.tracker</name>
>> <value>tejas_hadoop:9001</value>
>> </property>
>>
>>
>>
>>
>>
>> SLAVE CONFIG
>> ------ core-site.xml -------
>> <property>
>> <name>hadoop.tmp.dir</name>
>> <value>/opt/hadoop-0.20.1/tmp/</value>
>> </property>
>>
>>
>> <property>
>> <name>fs.default.name</name>
>> <value>hdfs://master_hadoop</value>
>> </property>
>>
>>
>> ------ hdfs-site.xml -------
>> <property>
>> <name>dfs.replication</name>
>> <value>2</value>
>> </property>
>>
>> ------ mapred-site.xml -------
>> <property>
>> <name>mapred.job.tracker</name>
>> <value>tejas_hadoop:9001</value>
>> </property>
>>
>>
>>
>> Regards,
>>
>> Tejas Lagvankar
>> meettejas@umbc.edu
>> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>>
>>
>>
>>
>>
>
>
> --
> Chandan Tamrakar
Tejas Lagvankar
meettejas@umbc.edu
www.umbc.edu/~tej2
Re: 0.20.1 Cluster Setup Problem
Posted by Chandan Tamrakar <ch...@nepasoft.com>.
I think you need to specify the port as well in the following property:
<property>
<name>fs.default.name</name>
<value>hdfs://master_hadoop</value>
</property>
On Tue, Oct 13, 2009 at 7:17 AM, Tejas Lagvankar <te...@umbc.edu> wrote:
> Hi,
>
>
> We are trying to set up a cluster (starting with 2 machines) using the new
> 0.20.1 version.
>
> On the master machine, just after the server starts, the name node dies off
> with the following exception:
>
> 2009-10-13 01:22:24,740 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
> Incomplete HDFS URI, no host: hdfs://master_hadoop
> at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
> at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1373)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1385)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:191)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
> at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>
> Can anyone help ? Also can anyone send across example configuration files
> for 0.20.1 if they are different than we are using ?
>
> The detail log file is attached along with.
>
>
>
>
> The configuration files are as follows:
>
> MASTER CONFIG
> ------ conf/masters -------
> master_hadoop
>
> ------ conf/slaves -------
> master_hadoop
> slave_hadoop
>
> ------ core-site.xml -------
> <configuration>
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://master_hadoop</value>
> </property>
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/opt/hadoop-0.20.1/tmp</value>
> </property>
>
> ------ hdfs-site.xml -------
> <property>
> <name>dfs.replication</name>
> <value>2</value>
> </property>
>
>
> ------ mapred-site.xml -------
> <property>
> <name>mapred.job.tracker</name>
> <value>tejas_hadoop:9001</value>
> </property>
>
>
>
>
>
> SLAVE CONFIG
> ------ core-site.xml -------
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/opt/hadoop-0.20.1/tmp/</value>
> </property>
>
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://master_hadoop</value>
> </property>
>
>
> ------ hdfs-site.xml -------
> <property>
> <name>dfs.replication</name>
> <value>2</value>
> </property>
>
> ------ mapred-site.xml -------
> <property>
> <name>mapred.job.tracker</name>
> <value>tejas_hadoop:9001</value>
> </property>
>
>
>
> Regards,
>
> Tejas Lagvankar
> meettejas@umbc.edu
> www.umbc.edu/~tej2 <http://www.umbc.edu/%7Etej2>
>
>
>
>
>
--
Chandan Tamrakar
Re: 0.20.1 Cluster Setup Problem
Posted by Kevin Sweeney <ke...@yieldex.com>.
Hi Tejas,
I just upgraded to 0.20.1 as well, and your config all looks the same as mine
except that in core-site.xml I have:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
Maybe you need to add the port on yours. I haven't seen that error before,
but it seems to suggest that the host can't be resolved. I'd say
double-check your hostnames and make sure they resolve.
Hope that helps,
Kevin
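For what it's worth, the "no host" wording may come from the URI parser itself rather than from DNS: java.net.URI rejects underscores in hostnames, so getHost() returns null for hdfs://master_hadoop even when the name resolves. A standalone sketch to check this (not Hadoop code; "master-hadoop" is just an illustrative hyphenated name):

```java
import java.net.URI;

public class UriHostCheck {
    public static void main(String[] args) {
        // Underscores are not legal in URI hostnames, so getHost() returns null
        // even though the authority component is present.
        URI bad = URI.create("hdfs://master_hadoop");
        System.out.println(bad.getHost()); // null

        // A hyphenated hostname with an explicit port parses as expected.
        URI good = URI.create("hdfs://master-hadoop:9000");
        System.out.println(good.getHost() + ":" + good.getPort()); // master-hadoop:9000
    }
}
```

If getHost() comes back null for your fs.default.name value, renaming the machines to hyphenated names and adding an explicit port would be worth trying.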
Re: 0.20.1 Cluster Setup Problem
Posted by Chandan Tamrakar <ch...@nepasoft.com>.
I think you need to specify the port as well in the following property:
<property>
<name>fs.default.name</name>
<value>hdfs://master_hadoop</value>
</property>
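For example, the value might look like this with an explicit port (a sketch only: "master-hadoop" is a placeholder for the master's actual hostname and 9000 is just a commonly used choice; note that hostnames containing underscores are not valid in URIs, which may be why the parser reports no host):

```xml
<property>
<name>fs.default.name</name>
<value>hdfs://master-hadoop:9000</value>
</property>
```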
--
Chandan Tamrakar