Posted to common-issues@hadoop.apache.org by "Eli Collins (JIRA)" <ji...@apache.org> on 2011/08/11 20:54:27 UTC

[jira] [Resolved] (HADOOP-953) huge log files

     [ https://issues.apache.org/jira/browse/HADOOP-953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins resolved HADOOP-953.
--------------------------------

    Resolution: Fixed

> huge log files
> --------------
>
>                 Key: HADOOP-953
>                 URL: https://issues.apache.org/jira/browse/HADOOP-953
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 0.10.1
>         Environment: N/A
>            Reporter: Andrew McNabb
>
> On our system, it's not uncommon to get 20 MB of logs with each MapReduce job.  It would be very helpful if it were possible to configure Hadoop daemons to write logs only when major things happen, but the only conf options I could find are for increasing the amount of output.  The disk is really a bottleneck for us, and I believe that short jobs would run much more quickly with less disk usage.  We also believe that the high disk usage might be triggering a kernel bug on some of our machines, causing them to crash.  If the 20 MB of logs went down to 20 KB, we would probably still have all of the information we needed.
> Thanks!
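
For reference: daemon log verbosity in this era of Hadoop is controlled by log4j, so the volume can be cut without code changes. A minimal sketch, assuming the stock conf/log4j.properties shipped with releases of that vintage (the DRFA appender name and the hadoop.root.logger property come from that file and may differ by version):

    # conf/log4j.properties
    # Raise the root threshold so only warnings and errors are written.
    # DRFA is the daily rolling file appender defined elsewhere in the stock file.
    hadoop.root.logger=WARN,DRFA
    log4j.rootLogger=${hadoop.root.logger}

    # Optionally quiet an especially chatty class on its own:
    log4j.logger.org.apache.hadoop.mapred.TaskTracker=WARN

The same override can usually be applied per daemon at start-up without editing the file, e.g. HADOOP_ROOT_LOGGER=WARN,DRFA bin/hadoop-daemon.sh start tasktracker (assuming the start scripts of that release honor the HADOOP_ROOT_LOGGER environment variable).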

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira