Posted to mapreduce-user@hadoop.apache.org by Baran Çakıcı <ba...@gmail.com> on 2011/10/06 18:28:34 UTC

Re: Lost Task Tracker because of no heartbeat

Hi everyone,

I want to share my solution to this problem. After many problematic weeks I
found that the problem was caused by CPU overload. Assigning fewer tasks to
each TaskTracker solved it.
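
For reference, the per-node task cap in that era of Hadoop was set in
mapred-site.xml. A sketch, assuming Hadoop 0.20/1.x; the values below are
illustrative and should be tuned to the number of cores on each node:

```xml
<!-- mapred-site.xml: cap concurrent tasks per TaskTracker so the node's
     CPU is not oversubscribed. Values are illustrative, not prescriptive. -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>2</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>1</value>
</property>
```

With fewer task slots per node, the TaskTracker has enough CPU headroom to
send its heartbeats to the JobTracker on time.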

Before I unsubscribe from the mailing list, I wanted to share it.

Regards

Baran

2011/3/30 baran cakici <ba...@gmail.com>

> Hi,
>
> I noticed today that all DataNodes were still alive when I lost the
> TaskTracker.
>
> For example: I lost slave1 as a TaskTracker, but slave1 is still alive as
> a DataNode.
>
> In addition, I tried to increase my Java heap size, because my application
> keeps so many objects alive simultaneously. But that did not help
> either...
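
As an aside, the child task heap in that era of Hadoop was controlled by
mapred.child.java.opts in mapred-site.xml; a sketch, with an illustrative
512 MB value:

```xml
<!-- mapred-site.xml: heap for each child task JVM (Hadoop 0.20/1.x).
     Raising this only helps if the node has free RAM; otherwise it can
     make an overloaded node worse. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
```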
>
> That is new information for me. Maybe someone has an idea?
> Regards
>
> Baran
> 2011/3/25 baran cakici <ba...@gmail.com>
>
>> I am still waiting for suggestions...
>>
>> thanks again...
>>
>> Baran
>>
>> 2011/3/16 baran cakici <ba...@gmail.com>
>>
>>> OK... :)
>>> Any other suggestions?
>>>
>>> 2011/3/16 Harsh J <qw...@gmail.com>
>>>
>>>> Hello,
>>>>
>>>> On Thu, Mar 17, 2011 at 1:39 AM, baran cakici <ba...@gmail.com>
>>>> wrote:
>>>> > @Harsh
>>>> >
>>>> > I start the daemons with start-dfs.sh and then start-mapred-dfs.sh.
>>>> > Do you mean this exception (org.apache.hadoop.ipc.RemoteException)
>>>> > is normal?
>>>>
>>>> Yes. It is additionally logged as INFO. This isn't a problem since NN
>>>> needs to be up before JT can use it.
>>>>
>>>> --
>>>> Harsh J
>>>> http://harshj.com
>>>>
>>>
>>>
>>
>