Posted to common-user@hadoop.apache.org by Mark Kerzner <ma...@gmail.com> on 2009/11/24 22:01:36 UTC

Hadoop on EC2

Hi,

I am starting a cluster with Apache Hadoop distributions, such as 0.18 and also
0.19. This all works fine; then I log in and see that the Hadoop daemons are
already running. However, when I try

# which hadoop
/usr/local/hadoop-0.19.0/bin/hadoop
# jps
1355 Jps
1167 NameNode
1213 JobTracker
# hadoop fs -ls hdfs://localhost/
09/11/24 15:33:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s).

I do stop-all.sh and then start-all.sh, and it does not help. What am I
doing wrong?
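
(A symptom like this usually means the client is being told to talk to localhost:8020 while the NameNode is actually bound to a different address. A quick check of what is listening on port 8020 should show whether the daemon is on the instance's internal address rather than 127.0.0.1; the netstat flags assume a Linux image:)

# netstat -tlnp | grep 8020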

Thank you,
Mark

Re: Hadoop on EC2

Posted by Mike Kendall <mk...@justin.tv>.
I thought that start-all and stop-all weren't supposed to be used on
distributed clusters... that they were just sugar for testing/learning.

Try start-dfs.sh and then start-mapred.sh.

(And if you stop them, it's stop-mapred.sh and then stop-dfs.sh.)
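
For reference, those scripts sit next to the hadoop binary shown in the which
output above, so the whole restart would look roughly like this on the master
(the path just mirrors that install location):

# /usr/local/hadoop-0.19.0/bin/stop-mapred.sh
# /usr/local/hadoop-0.19.0/bin/stop-dfs.sh
# /usr/local/hadoop-0.19.0/bin/start-dfs.sh
# /usr/local/hadoop-0.19.0/bin/start-mapred.sh

Like start-all.sh, though, they find the slave daemons through conf/slaves, so
they may not reach slaves that were brought up some other way.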

Then again, it's EC2, so the EC2 setup might do something weird or special that
I'm not aware of.

-mike


Re: Hadoop on EC2

Posted by Stephen Watt <sw...@us.ibm.com>.
Hi Mark

Are you starting the clusters from the contrib/ec2 scripts? These scripts
have a special way of bringing up the cluster: they pass in the hostnames of
the slaves as they are assigned by EC2, so I think stop-all and start-all will
not work, since both assume the slaves are defined in the slaves file. It's
been a while since I looked at this, so excuse my lack of specifics. I believe
there is a script in the /root directory of each EC2 image that these values
are passed into, and it does the work of starting the tasktracker/datanode
processes on each of the slaves.
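
(For anyone tracing this later: the contrib/ec2 tooling Steve is describing is
normally driven by the hadoop-ec2 wrapper under src/contrib/ec2/bin in the
0.18/0.19 trees. A typical run from a workstation looks something like this;
the cluster name and slave count are placeholders:)

# hadoop-ec2 launch-cluster my-cluster 2
# hadoop-ec2 login my-cluster

launch-cluster boots the master first and then hands the master's hostname to
each slave instance as it starts, which is why the slaves never show up in
conf/slaves and why the slaves-file-driven stop/start scripts miss them.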

Kind regards
Steve Watt




Re: Hadoop on EC2

Posted by Mark Kerzner <ma...@gmail.com>.
Well, maybe I found what I was doing wrong:

I was always using hdfs://localhost, and it works just as well when I use /
instead.
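
(That fits the EC2 setup: the cluster's default filesystem presumably points at
the master's EC2-internal hostname rather than localhost, so a bare path
resolves against the right NameNode while hdfs://localhost/ does not. Something
like this should confirm it; the conf path just mirrors the install location
shown earlier:)

# grep -A 1 fs.default.name /usr/local/hadoop-0.19.0/conf/hadoop-site.xml
# hadoop fs -ls /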

Mark


Re: Hadoop on EC2

Posted by Mark Kerzner <ma...@gmail.com>.
It did! Thank you.

hadoop fs -ls hdfs://
Found 1 items
drwxr-xr-x   - root supergroup          0 2009-11-24 23:04 /mnt

On Tue, Nov 24, 2009 at 11:37 PM, Rekha Joshi <re...@yahoo-inc.com> wrote:

> If you use hadoop fs -ls hdfs:// instead, that will do what you intend. Thanks!
>

Re: Hadoop on EC2

Posted by Rekha Joshi <re...@yahoo-inc.com>.
If you use hadoop fs -ls hdfs:// instead, that will do what you intend. Thanks!
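
(Concretely, a scheme-only URI and a bare path both defer to the configured
default filesystem, so these two should behave the same on a properly set up
client, while hdfs://localhost/ pins the NameNode host to localhost:)

# hadoop fs -ls hdfs://
# hadoop fs -ls /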
