Posted to common-issues@hadoop.apache.org by "Allen Wittenauer (JIRA)" <ji...@apache.org> on 2015/05/02 18:47:06 UTC
[jira] [Updated] (HADOOP-7556) TaskLogAppender does not check if closed before flushing
[ https://issues.apache.org/jira/browse/HADOOP-7556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Allen Wittenauer updated HADOOP-7556:
-------------------------------------
Resolution: Fixed
Status: Resolved (was: Patch Available)
This appears to be fixed in trunk. Closing.
> TaskLogAppender does not check if closed before flushing
> --------------------------------------------------------
>
> Key: HADOOP-7556
> URL: https://issues.apache.org/jira/browse/HADOOP-7556
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 0.20.203.0
> Reporter: amorton
> Priority: Minor
> Attachments: HADDOP-7556.diff
>
>
> For background see http://groups.google.com/group/brisk-users/browse_thread/thread/3a18f4679673bea8
> http://mail-archives.apache.org/mod_mbox/hadoop-common-user/201108.mbox/%3C4E370C97-1541-4FDA-8456-1067DDDC4D77@thelastpickle.com%3E
> Cassandra is using a log4j PropertyConfigurator which closes all existing appenders. After a task has completed, TaskLogAppender.flush() is called, and it tries to flush without checking whether the writer has been closed. I'll upload a patch that checks whether the writer is closed and, if so, logs a message and skips the flush.
> The real problem is the log4j config collision; we're looking at using separate log4j LoggerRepositories to address that.
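The guard described in the issue can be sketched as follows. This is an illustrative, self-contained example of the pattern (an appender whose flush() becomes a no-op once the writer is closed), not the actual HADOOP-7556 patch; the class and field names are hypothetical.

```java
import java.io.IOException;
import java.io.Writer;

// Hypothetical sketch of the closed-check guard described in HADOOP-7556.
// An external reconfiguration (e.g. a log4j PropertyConfigurator) may close
// appenders out from under us, so flush() must not assume the writer is open.
public class GuardedAppender {
    private final Writer writer;
    private boolean closed = false;

    public GuardedAppender(Writer writer) {
        this.writer = writer;
    }

    public void append(String msg) throws IOException {
        if (!closed) {
            writer.write(msg);
        }
    }

    // Before the fix, flush() called writer.flush() unconditionally and
    // failed when the writer had already been closed elsewhere.
    public void flush() {
        if (closed) {
            // Writer already closed (e.g. by a config reload): skip quietly.
            // A real appender would log a warning here.
            return;
        }
        try {
            writer.flush();
        } catch (IOException e) {
            // A real appender would report this via its error handler.
        }
    }

    public void close() throws IOException {
        if (!closed) {
            closed = true;
            writer.close();
        }
    }
}
```

With this guard, a flush() arriving after the appender has been closed returns without touching the underlying writer, instead of failing.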
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)