Posted to common-user@hadoop.apache.org by Amandeep Khurana <am...@gmail.com> on 2009/03/13 19:08:36 UTC

Changing logging level

I am using the DistributedFileSystem class to read data from HDFS (with some
of the HDFS source code modified by me). When I read a file, I get all the
DEBUG-level log messages on stdout in the client that I wrote. How can I
change the level to INFO? I haven't set the debug level anywhere. On a
different Hadoop instance (with no code modifications from 0.19.0), it runs
at the INFO level by default.

Amandeep

Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz

Re: Changing logging level

Posted by Amandeep Khurana <am...@gmail.com>.
Thanks.

So, the logging that I wanted to tweak was at the client end, where I am
using the DistributedFileSystem class instead of the shell to read data.
Changing the logging level there can't be done through these methods.

I got it to work by rebuilding the jars after tweaking the default log4j
properties.
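
For reference, this is roughly what I changed. A minimal sketch of the
relevant lines, assuming the stock conf/log4j.properties layout in 0.19
(property names may differ in other versions):

```properties
# Default level and appender for everything, including the
# DistributedFileSystem client; INFO here silences the DEBUG spam.
hadoop.root.logger=INFO,console
log4j.rootLogger=${hadoop.root.logger}

# Console appender that the client writes to on stdout.
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
```

This file has to end up on the client's classpath ahead of any copy baked
into the jars, which is why rebuilding the jars worked for me.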

Amandeep




Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz


On Fri, Mar 13, 2009 at 6:20 PM, Richa Khandelwal <ri...@gmail.com> wrote:

> Two ways:
> In hadoop-site.xml, add:
>
> <property>
>  <name>mapred.task.profile</name>
>  <value>true</value>
>  <description>Set profiling option to true.
>  </description>
> </property>
>
> <property>
>  <name>mapred.task.profile.maps</name>
>  <value>1</value>
>  <description>Profiling level of maps.
>  </description>
> </property>
>
> <property>
>  <name>mapred.task.profile.reduces</name>
>  <value>1</value>
>  <description>Profiling level of reducers.
>  </description>
> </property>
>
>
> Or in your code add
>
> JobConf conf = new JobConf();
> conf.setProfileEnabled(true);
> conf.setProfileTaskRange(true, "0-2");
>
> Cheers,
> Richa
>
>

Re: Changing logging level

Posted by Richa Khandelwal <ri...@gmail.com>.
Two ways:
In hadoop-site.xml, add:

<property>
  <name>mapred.task.profile</name>
  <value>true</value>
  <description>Set profiling option to true.
  </description>
</property>

<property>
  <name>mapred.task.profile.maps</name>
  <value>1</value>
  <description>Profiling level of maps.
  </description>
</property>

<property>
  <name>mapred.task.profile.reduces</name>
  <value>1</value>
  <description>Profiling level of reducers.
  </description>
</property>


Or in your code add

JobConf conf = new JobConf();
conf.setProfileEnabled(true);
conf.setProfileTaskRange(true, "0-2");
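
Note that those properties control task profiling rather than the log
level. For the level itself, the bin/hadoop launcher script reads the
HADOOP_ROOT_LOGGER environment variable and forwards it to log4j, so a
sketch of that route (assuming the stock 0.19 launcher scripts) would be:

```shell
# Override the root logger for anything launched via bin/hadoop; the
# script passes this value through as the hadoop.root.logger property.
export HADOOP_ROOT_LOGGER="INFO,console"
echo "$HADOOP_ROOT_LOGGER"   # prints INFO,console
```

This only affects processes started through the scripts, not a standalone
client you launch yourself.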

Cheers,
Richa


On Fri, Mar 13, 2009 at 11:08 AM, Amandeep Khurana <am...@gmail.com> wrote:

> I am using the DistributedFileSystem class to read data from HDFS (with
> some of the HDFS source code modified by me). When I read a file, I get
> all the DEBUG-level log messages on stdout in the client that I wrote.
> How can I change the level to INFO? I haven't set the debug level
> anywhere. On a different Hadoop instance (with no code modifications from
> 0.19.0), it runs at the INFO level by default.
>
> Amandeep
>
> Amandeep Khurana
> Computer Science Graduate Student
> University of California, Santa Cruz
>



-- 
Richa Khandelwal


University Of California,
Santa Cruz.
Ph:425-241-7763