Posted to dev@hbase.apache.org by haosdent <ha...@gmail.com> on 2014/03/18 13:29:02 UTC

Is there a better way to handle too much log

Sometimes a call to Log.xxx cannot return if the disk partition holding the log
path is full, and HBase hangs because of this. So I wonder if there is a better
way to handle too much logging. For example, through a configuration item in
hbase-site.xml, we could delete old logs periodically, or delete old logs when
the disk does not have enough space.

I think it is unacceptable for HBase to hang when disk space runs out. Looking
forward to your ideas. Thanks in advance.

-- 
Best Regards,
Haosdent Huang

Re: Is there a better way to handle too much log

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
Hey, there are some workarounds like what Ted described, but I think it's
still an issue if we block all operations because we are not able to
write to the logs.



Re: Is there a better way to handle too much log

Posted by haosdent <ha...@gmail.com>.
Cool, I didn't know about limiting the max file size and number of log files
before. Thank you very much.





-- 
Best Regards,
Haosdent Huang

Re: Is there a better way to handle too much log

Posted by Ted Yu <yu...@gmail.com>.
Here is a related post:
http://stackoverflow.com/questions/13864899/log4j-dailyrollingfileappender-are-rolled-files-deleted-after-some-amount-of



Re: Is there a better way to handle too much log

Posted by Enis Söztutar <en...@gmail.com>.
DRFA already deletes old logs; you do not necessarily need a cron job.

You can use RollingFileAppender to limit the max file size and the number of
log files to keep around.

Check out conf/log4j.properties.
Enis
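
For reference, a minimal sketch of the RollingFileAppender setup described
above, as it might look in conf/log4j.properties. The appender name, the
size/count limits, and the pattern are illustrative assumptions; check the
defaults your distribution ships:

```properties
# Cap each log file at 256 MB and keep at most 20 rolled files,
# bounding total log usage to roughly 5 GB for this process.
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=20
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
```

When MaxBackupIndex is reached, log4j deletes the oldest rolled file on the
next rollover, which is what bounds disk usage without any external job.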



Re: Is there a better way to handle too much log

Posted by haosdent <ha...@gmail.com>.
Yep, I use the INFO level. Let me think about this some more. If I find a
better way, I will open an issue and record it. Thanks for your great help. @tedyu





-- 
Best Regards,
Haosdent Huang

Re: Is there a better way to handle too much log

Posted by Ted Yu <yu...@gmail.com>.
If the log grows so fast that disk space is about to be exhausted, verbosity
should be lowered.

Do you have DEBUG logging turned on?

Cheers


Re: Is there a better way to handle too much log

Posted by haosdent <ha...@gmail.com>.
Thanks for your reply. DailyRollingFileAppender and a cron job would work in
the normal scenario. But sometimes the log grows too fast, or disk space may
be used up by other applications. Is there a way to make the Log more "smart"
and choose a policy according to the current disk space?
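
As far as I know log4j 1.2 has no such built-in disk-space-aware policy, but
as a rough illustration of the idea, an external script could prune the oldest
rolled log only when the partition crosses a usage threshold. The directory,
the file pattern, and the threshold below are all assumptions:

```shell
#!/bin/sh
# Disk-space-aware pruning (illustration only, not an HBase feature):
# act only when the filesystem holding the logs is above a usage
# threshold, then drop the single oldest rolled log file.
prune_if_full() {
    dir="$1"
    threshold="$2"   # usage percentage, e.g. 90
    # usage percentage of the filesystem containing $dir (strip the '%')
    used=$(df -P "$dir" | awk 'NR==2 { gsub(/%/, ""); print $5 }')
    if [ "$used" -ge "$threshold" ]; then
        # oldest rolled log by modification time; never touches the active log
        oldest=$(ls -1t "$dir"/*.log.* 2>/dev/null | tail -n 1)
        [ -n "$oldest" ] && rm -f -- "$oldest"
    fi
}
```

Run from cron, the natural extension would be a loop that keeps pruning until
usage drops back below the threshold.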





-- 
Best Regards,
Haosdent Huang

Re: Is there a better way to handle too much log

Posted by Ted Yu <yu...@gmail.com>.
Can you utilize http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/DailyRollingFileAppender.html ?

And have a cron job clean up old logs?

Cheers
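
A minimal sketch of that cron-based cleanup. The log directory, the file
pattern, and the 7-day retention are assumptions to adjust per deployment:

```shell
#!/bin/sh
# Delete rolled log files older than a given number of days.
# Intended to be invoked daily from cron, e.g.:
#   0 3 * * * /usr/local/bin/clean_hbase_logs.sh
clean_old_logs() {
    dir="$1"
    days="$2"
    # skip quietly if the directory does not exist
    [ -d "$dir" ] || return 0
    # match only rolled files (e.g. hbase-master.log.2014-03-18),
    # never the active log itself
    find "$dir" -type f -name '*.log.*' -mtime "+$days" -delete
}

clean_old_logs "${LOG_DIR:-/var/log/hbase}" "${RETENTION_DAYS:-7}"
```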
