Posted to common-user@hadoop.apache.org by 煜 韦 <yu...@hotmail.com> on 2015/04/02 05:45:26 UTC

Question about log files

Hi there,
If log files are deleted without restarting the service, it seems the logs are lost from that point on, for example on the namenode or datanode.
Why can't log files be re-created when they are deleted, by mistake or on purpose, while the cluster is running?

Thanks,
Jared

RE: Question about log files

Posted by 煜 韦 <yu...@hotmail.com>.
Would it be possible to write logs with something like "w+" semantics? Then if a log file is deleted by mistake or on purpose, it could be re-created the next time something has to be written.
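For illustration only, here is a rough sketch of that idea in plain Java (not anything log4j or Hadoop actually does; the class name is made up). Re-opening the file by path in append mode before each write means a deleted file simply gets re-created on the next message, at the cost of an open/close per write:

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

// Hypothetical helper: re-open the log file by path for every message,
// so an unlinked file is re-created instead of being written to blindly.
public class ReopeningLogger {
    private final String path;

    public ReopeningLogger(String path) {
        this.path = path;
    }

    public synchronized void log(String message) throws IOException {
        // Append mode creates the file if the directory entry is gone;
        // an already-open handle would keep writing to the unlinked inode.
        try (PrintWriter out = new PrintWriter(new FileWriter(path, true))) {
            out.println(message);
        }
    }
}

Whether the per-write open/close overhead would be acceptable for a busy namenode is another question.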

Thanks,
Jared

From: jrottinghuis@gmail.com
Subject: Re: Question about log files
Date: Mon, 6 Apr 2015 10:39:52 -0700
To: user@hadoop.apache.org

This depends on your OS. When you "delete" a file on Linux, you merely unlink the entry from the directory. The file does not actually get deleted until the last reference (open handle) goes away. Note that this could lead to an interesting way to fill up a disk. You should be able to see the files a process holds open using the lsof command. The process itself does not know that a dentry has been removed, so there is nothing that log4j or the Hadoop code can do about it. Assuming you have some rolling file appender configured, log4j should start logging to a new file at some point, or you have to bounce your daemon process.
Cheers,
Joep

Sent from my iPhone
On Apr 6, 2015, at 6:19 AM, Fabio C. <an...@gmail.com> wrote:

I noticed that too. I think Hadoop keeps the file open all the time, and when you delete it, it is simply no longer able to write to it and doesn't try to recreate it. Not sure if it's a log4j problem or a Hadoop one...
yanghaogn, what is the *correct* way to delete the Hadoop logs? I didn't find anything better than deleting the file and restarting the service...

On Mon, Apr 6, 2015 at 9:27 AM, 杨浩 <ya...@gmail.com> wrote:
I think the log information has been lost.
Hadoop is not designed to cope with these files being deleted incorrectly.
2015-04-02 11:45 GMT+08:00 煜 韦 <yu...@hotmail.com>:



Hi there,
If log files are deleted without restarting the service, it seems the logs are lost from that point on, for example on the namenode or datanode.
Why can't log files be re-created when they are deleted, by mistake or on purpose, while the cluster is running?

Thanks,
Jared


Re: Question about log files

Posted by Joep Rottinghuis <jr...@gmail.com>.
This depends on your OS.
When you "delete" a file on Linux, you merely unlink the entry from the directory.
The file does not actually get deleted until the last reference (open handle) goes away. Note that this could lead to an interesting way to fill up a disk.
You should be able to see the files a process holds open using the lsof command.
The process itself does not know that a dentry has been removed, so there is nothing that log4j or the Hadoop code can do about it.
Assuming you have some rolling file appender configured, log4j should start logging to a new file at some point, or you have to bounce your daemon process.
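
To make that concrete, here is a small self-contained Java sketch (the file name is arbitrary) showing the Linux behaviour: the writer keeps appending after the file has been deleted, because its handle still points at the unlinked inode, and lsof would list the file as "(deleted)" until the handle is closed.

import java.io.FileWriter;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class UnlinkDemo {
    public static void main(String[] args) throws Exception {
        Path log = Paths.get("demo.log");
        try (PrintWriter out = new PrintWriter(new FileWriter(log.toFile(), true))) {
            out.println("first line");
            out.flush();

            Files.delete(log); // same effect as "rm demo.log": unlinks the directory entry
            System.out.println("visible via path? " + Files.exists(log)); // false on Linux

            out.println("still written, but only to the unlinked inode");
            out.flush(); // succeeds; the data can no longer be reached through the path
        }
        // Once the last handle is closed, the kernel frees the blocks and the data is gone.
    }
}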

Cheers,

Joep

Sent from my iPhone

> On Apr 6, 2015, at 6:19 AM, Fabio C. <an...@gmail.com> wrote:
> 
> I noticed that too. I think Hadoop keeps the file open all the time, and when you delete it, it is simply no longer able to write to it and doesn't try to recreate it. Not sure if it's a log4j problem or a Hadoop one...
> yanghaogn, what is the *correct* way to delete the Hadoop logs? I didn't find anything better than deleting the file and restarting the service...
> 
>> On Mon, Apr 6, 2015 at 9:27 AM, 杨浩 <ya...@gmail.com> wrote:
>> I think the log information has been lost.
>> 
>> Hadoop is not designed to cope with these files being deleted incorrectly.
>> 
>> 2015-04-02 11:45 GMT+08:00 煜 韦 <yu...@hotmail.com>:
>>> Hi there,
>>> If log files are deleted without restarting the service, it seems the logs are lost from that point on, for example on the namenode or datanode.
>>> Why can't log files be re-created when they are deleted, by mistake or on purpose, while the cluster is running?
>>> 
>>> Thanks,
>>> Jared
> 


Re: Question about log files

Posted by "Fabio C." <an...@gmail.com>.
I noticed that too. I think Hadoop keeps the file open all the time, and
when you delete it, it is simply no longer able to write to it and doesn't
try to recreate it. Not sure if it's a log4j problem or a Hadoop one...
yanghaogn, what is the *correct* way to delete the Hadoop logs? I didn't
find anything better than deleting the file and restarting the service...
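
For reference, the usual way to reclaim the space without bouncing the daemon is to truncate the log file in place rather than unlink it, since the path then still points at the inode the daemon holds open. A minimal sketch, assuming the appender opened the file in append mode (which I believe is the log4j FileAppender default; without append mode the writer's old offset would leave a sparse hole):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class TruncateInPlace {
    public static void main(String[] args) throws IOException {
        // Same idea as "truncate -s 0 <logfile>" or ": > <logfile>" from a shell;
        // the log file path is passed in as an argument, purely for illustration.
        try (FileChannel ch = FileChannel.open(Paths.get(args[0]), StandardOpenOption.WRITE)) {
            ch.truncate(0); // size drops to 0, but the inode and the daemon's handle survive
        }
    }
}

A rolling file appender with a size limit avoids having to do any of this by hand.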

On Mon, Apr 6, 2015 at 9:27 AM, 杨浩 <ya...@gmail.com> wrote:

> I think the log information has been lost.
>
> Hadoop is not designed to cope with these files being deleted incorrectly.
>
> 2015-04-02 11:45 GMT+08:00 煜 韦 <yu...@hotmail.com>:
>
>> Hi there,
>> If log files are deleted without restarting the service, it seems the logs
>> are lost from that point on, for example on the namenode or datanode.
>> Why can't log files be re-created when they are deleted, by mistake or on
>> purpose, while the cluster is running?
>>
>> Thanks,
>> Jared
>>
>
>


Re: Question about log files

Posted by 杨浩 <ya...@gmail.com>.
I think the log information has been lost.

Hadoop is not designed to cope with these files being deleted incorrectly.

2015-04-02 11:45 GMT+08:00 煜 韦 <yu...@hotmail.com>:

> Hi there,
> If log files are deleted without restarting the service, it seems the logs
> are lost from that point on, for example on the namenode or datanode.
> Why can't log files be re-created when they are deleted, by mistake or on
> purpose, while the cluster is running?
>
> Thanks,
> Jared
>
