Posted to common-user@hadoop.apache.org by abhishek sharma <ab...@gmail.com> on 2010/04/14 08:07:08 UTC

stop scripts not working properly

Hi all,

I am using the Cloudera Hadoop distribution version 0.20.2+228.

I have a small 9 node cluster and when I try to stop the Hadoop DFS
and Mapred using
the stop-mapred.sh and stop-dfs.sh scripts, they do not shut down some of
the TaskTrackers and DataNodes. I get a message saying there is no tasktracker
or datanode to stop, but when I log into the machines, I can still see the
TaskTracker and DataNode processes running (e.g., using jps).
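The symptom described above is what the stop scripts print when a daemon's pid file is missing: hadoop-daemon.sh only tries to stop a process whose pid file it can find. A minimal illustration of that failure mode (the filenames and messages here are illustrative, not Hadoop's actual code):

```shell
# Sketch of why "no tasktracker to stop" can appear while the JVM is
# still running: the stop logic keys off the pid file, not the process.
pidfile=$(mktemp)            # stand-in for a pid file written at start time
echo 12345 > "$pidfile"
rm "$pidfile"                # stand-in for tmpwatch-style /tmp cleanup
if [ -f "$pidfile" ]; then
  echo "stopping daemon"
else
  echo "no tasktracker to stop"   # the message Abhishek sees
fi
```

The daemon itself keeps running; only the bookkeeping file is gone, so the script concludes there is nothing to stop.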

I did not notice anything unusual in the log files. I am not sure what
the problem might be, but when I use Hadoop version 0.20.0, the scripts
work fine.

Any idea what might be happening?

Thanks,
Abhishek

Re: stop scripts not working properly

Posted by abhishek sharma <ab...@usc.edu>.
Hi Todd,

I am using the tarball.

Let me try configuring the pid files to be stored somewhere else.

Thanks for the tip,
Abhishek

On Tue, Apr 13, 2010 at 11:10 PM, Todd Lipcon <to...@cloudera.com> wrote:
> Hi Abhishek,
>
> Are you using the tarball or the RPMs/debs? The issue is most likely that
> your pid files are ending up in /tmp and thus getting cleaned out
> periodically.
>
> -Todd
>
> On Tue, Apr 13, 2010 at 11:07 PM, abhishek sharma <ab...@gmail.com> wrote:
>
>> Hi all,
>>
>> I am using the Cloudera Hadoop distribution version 0.20.2+228.
>>
>> I have a small 9 node cluster and when I try to stop the Hadoop DFS
>> and Mapred using
>> the stop-mapred.sh and stop-dfs.sh scripts, they do not shut down some of
>> the TaskTrackers and DataNodes. I get a message saying there is no tasktracker
>> or datanode to stop, but when I log into the machines, I can still see the
>> TaskTracker and DataNode processes running (e.g., using jps).
>>
>> I did not notice anything unusual in the log files. I am not sure what
>> the problem might be, but when I use Hadoop version 0.20.0, the scripts
>> work fine.
>>
>> Any idea what might be happening?
>>
>> Thanks,
>> Abhishek
>>
>
>
>
> --
> Todd Lipcon
> Software Engineer, Cloudera
>

Re: stop scripts not working properly

Posted by Todd Lipcon <to...@cloudera.com>.
Hi Abhishek,

Are you using the tarball or the RPMs/debs? The issue is most likely that
your pid files are ending up in /tmp and thus getting cleaned out
periodically.
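
For a tarball install, Todd's suggestion amounts to pointing HADOOP_PID_DIR somewhere outside /tmp in conf/hadoop-env.sh. A sketch (the directory path below is an illustrative assumption; use any location that is writable by the hadoop user and not subject to periodic cleanup):

```shell
# conf/hadoop-env.sh
# Move daemon pid files out of /tmp so tmpwatch/cron cleanup cannot
# delete them between start-*.sh and stop-*.sh runs.
# /var/hadoop/pids is an example path, not a Hadoop default.
export HADOOP_PID_DIR=/var/hadoop/pids
```

The daemons have to be restarted after this change so that new pid files are written to the new directory; pid files for already-running daemons stay wherever they were (or are already gone).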

-Todd

On Tue, Apr 13, 2010 at 11:07 PM, abhishek sharma <ab...@gmail.com> wrote:

> Hi all,
>
> I am using the Cloudera Hadoop distribution version 0.20.2+228.
>
> I have a small 9 node cluster and when I try to stop the Hadoop DFS
> and Mapred using
> the stop-mapred.sh and stop-dfs.sh scripts, they do not shut down some of
> the TaskTrackers and DataNodes. I get a message saying there is no tasktracker
> or datanode to stop, but when I log into the machines, I can still see the
> TaskTracker and DataNode processes running (e.g., using jps).
>
> I did not notice anything unusual in the log files. I am not sure what
> the problem might be, but when I use Hadoop version 0.20.0, the scripts
> work fine.
>
> Any idea what might be happening?
>
> Thanks,
> Abhishek
>



-- 
Todd Lipcon
Software Engineer, Cloudera