Posted to user@hbase.apache.org by Jean-Marc Spaggiari <je...@spaggiari.org> on 2012/11/08 02:44:13 UTC

MapReduce: Read the logs

Hi,

When I run a MapReduce job, the userlogs are stored on each Datanode.

Like: ~/hadoop_drive/hadoop/mapred/local/userlogs/job_201211011005_0018

Under this directory I have one subdirectory per task attempt, like:
drwx------ 2 hadoop hadoop 4096 nov  5 12:46
attempt_201211011005_0018_m_000010_0
drwx------ 2 hadoop hadoop 4096 nov  5 12:46
attempt_201211011005_0018_m_000016_0
drwx------ 2 hadoop hadoop 4096 nov  5 12:46
attempt_201211011005_0018_m_000017_0
drwx------ 2 hadoop hadoop 4096 nov  5 12:46
attempt_201211011005_0018_m_000021_0

Now, I want to take a look at all the logs. Is there an easy way to
retrieve them? Or do I need to log in each single node and look at
each directory of each task?

I would like to put all the logs into a single file.

Thanks,

JM
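
A possible way to get all the task logs into one file without logging
in to every node, sketched below as an untested starting point: each
TaskTracker serves its attempts' logs over HTTP through its /tasklog
servlet, so a small client built on the old mapred API (JobClient and
TaskCompletionEvent) can walk the job's task completion events and
append every attempt's log to a single local file. The class name, the
output file name and the exact servlet parameters (attemptid, filter,
plaintext) are assumptions and can vary between Hadoop versions.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.URL;

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.TaskCompletionEvent;

// Hypothetical helper: fetches every task attempt's syslog for one job
// and appends them all to a single local file. Assumes the job is
// still known to the JobTracker.
public class CollectTaskLogs {
  public static void main(String[] args) throws Exception {
    // args[0] = job id, e.g. job_201211011005_0018
    JobClient client = new JobClient(new JobConf());
    RunningJob job = client.getJob(JobID.forName(args[0]));

    PrintWriter out = new PrintWriter("all-task-logs.txt");
    int from = 0;
    while (true) {
      TaskCompletionEvent[] events = job.getTaskCompletionEvents(from);
      if (events.length == 0) {
        break;
      }
      for (TaskCompletionEvent event : events) {
        // Each TaskTracker serves its task logs over HTTP (port 50060
        // by default); "filter=syslog" picks the file log4j writes to.
        String url = event.getTaskTrackerHttp()
            + "/tasklog?plaintext=true&filter=syslog&attemptid="
            + event.getTaskAttemptId();
        out.println("==== " + event.getTaskAttemptId() + " ====");
        BufferedReader in = new BufferedReader(
            new InputStreamReader(new URL(url).openStream()));
        String line;
        while ((line = in.readLine()) != null) {
          out.println(line);
        }
        in.close();
      }
      from += events.length;
    }
    out.close();
  }
}

Run it with the job id as the only argument (for example
job_201211011005_0018); change "filter" to stdout or stderr if the
output you are after is written there instead of through log4j.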

Re: MapReduce: Read the logs

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
Do you mean under http://servername:50030/jobhistoryhome.jsp ?

From there I can see the local logs, or the job results/summary, but I
can't see the logs from the other nodes. If I want to see the logs, I
need to click on the job ID, then on the successful map tasks, then on
all the tasks one by one, then on "view all logs". Phew, it's long.

In my MapReduce job I'm doing this:

Logger.getLogger(TimestampBasedRowMover.class).info("Moving " +
    Bytes.toString(Bytes.tail(values.getRow(), values.getRow().length - 8)));

That's what I'm trying to see in my logs :(
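
For reference, a hypothetical reconstruction of the mapper side,
assuming TimestampBasedRowMover is an HBase TableMapper (the
getRow()/Bytes.tail() calls suggest that); the real class obviously
does more than log. The relevant point is that Logger.info() output
from a task ends up in the "syslog" file under the attempt's userlogs
directory on whichever node ran the attempt, not on the machine that
submitted the job.

import java.io.IOException;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.log4j.Logger;

// Hypothetical sketch for illustration only.
public class TimestampBasedRowMover
    extends TableMapper<ImmutableBytesWritable, Result> {

  private static final Logger LOG =
      Logger.getLogger(TimestampBasedRowMover.class);

  @Override
  protected void map(ImmutableBytesWritable key, Result values,
      Context context) throws IOException, InterruptedException {
    // Log the row key minus its leading 8 bytes (assumed to be a
    // timestamp prefix). This line is written to
    // userlogs/<job>/<attempt>/syslog on the node running the task.
    LOG.info("Moving " + Bytes.toString(
        Bytes.tail(values.getRow(), values.getRow().length - 8)));
    // ... the actual row-moving logic would go here ...
  }
}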



2012/11/7, Binglin Chang <de...@gmail.com>:
> Isn't watching the logs in the JobTracker & job history UI enough for you?
>
>
> On Thu, Nov 8, 2012 at 9:44 AM, Jean-Marc Spaggiari
> <jean-marc@spaggiari.org> wrote:
>
>> Hi,
>>
>> When I run a MapReduce job, the userlogs are stored on each Datanode.
>>
>> Like: ~/hadoop_drive/hadoop/mapred/local/userlogs/job_201211011005_0018
>>
>> Under this directory I have one subdirectory per task attempt, like:
>> drwx------ 2 hadoop hadoop 4096 nov  5 12:46
>> attempt_201211011005_0018_m_000010_0
>> drwx------ 2 hadoop hadoop 4096 nov  5 12:46
>> attempt_201211011005_0018_m_000016_0
>> drwx------ 2 hadoop hadoop 4096 nov  5 12:46
>> attempt_201211011005_0018_m_000017_0
>> drwx------ 2 hadoop hadoop 4096 nov  5 12:46
>> attempt_201211011005_0018_m_000021_0
>>
>> Now, I want to take a look at all the logs. Is there an easy way to
>> retrieve them? Or do I need to log in each single node and look at
>> each directory of each task?
>>
>> I would like to put all the logs into a single file.
>>
>> Thanks,
>>
>> JM
>>
>

Re: MapReduce: Read the logs

Posted by Binglin Chang <de...@gmail.com>.
Isn't watching the logs in the JobTracker & job history UI enough for you?


On Thu, Nov 8, 2012 at 9:44 AM, Jean-Marc Spaggiari
<jean-marc@spaggiari.org> wrote:

> Hi,
>
> When I run a MapReduce job, the userlogs are stored on each Datanode.
>
> Like: ~/hadoop_drive/hadoop/mapred/local/userlogs/job_201211011005_0018
>
> Under this directory I have one subdirectory per task attempt, like:
> drwx------ 2 hadoop hadoop 4096 nov  5 12:46
> attempt_201211011005_0018_m_000010_0
> drwx------ 2 hadoop hadoop 4096 nov  5 12:46
> attempt_201211011005_0018_m_000016_0
> drwx------ 2 hadoop hadoop 4096 nov  5 12:46
> attempt_201211011005_0018_m_000017_0
> drwx------ 2 hadoop hadoop 4096 nov  5 12:46
> attempt_201211011005_0018_m_000021_0
>
> Now, I want to take a look at all the logs. Is there an easy way to
> retrieve them? Or do I need to log in each single node and look at
> each directory of each task?
>
> I would like to put all the logs into a single file.
>
> Thanks,
>
> JM
>