Posted to common-dev@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2009/05/28 17:59:45 UTC
[jira] Commented: (HADOOP-5933) Make it harder to accidentally close a shared DFSClient
[ https://issues.apache.org/jira/browse/HADOOP-5933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12714027#action_12714027 ]
Steve Loughran commented on HADOOP-5933:
----------------------------------------
Some options:
# if the log is at debug level, create an exception in close() and save it; when a later checkOpen() call fails, use the saved exception as the nested cause of the exception raised there.
# some complicated reference-count mechanism, with its own leakage problems.
# add the ability to reopen clients that were in the cache and got purged.
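The second option could look something like the sketch below: a shared client whose close() only takes effect when the last holder releases it. The class and method names here are illustrative, not Hadoop's actual API, and the "leakage problem" is visible in the design: any caller that forgets to call close() keeps the count above zero forever.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a reference-counted shared client.
class RefCountedClient {
    // The creator holds the first reference.
    private final AtomicInteger refs = new AtomicInteger(1);
    private volatile boolean running = true;

    // Called by the cache when handing out the shared instance.
    RefCountedClient retain() {
        refs.incrementAndGet();
        return this;
    }

    // Only the final release really shuts the client down; a holder
    // that never calls close() leaks the client instead.
    void close() {
        if (refs.decrementAndGet() == 0) {
            running = false; // real shutdown work would go here
        }
    }

    boolean isRunning() {
        return running;
    }
}
```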
I've done the first one of these to track down problems, and while I now know where I shouldn't be calling close(), there's a risk that my code will now leak filesystem clients.
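For reference, the first option can be sketched roughly as follows. This is not DFSClient's actual implementation; the field names are made up, and the trace is only recorded when debug logging is on, to avoid filling in a stack trace on every close().

```java
import java.io.IOException;

// Hypothetical sketch: remember where close() was called from, and chain
// that location into the exception a later checkOpen() throws.
class ClientSketch {
    private volatile boolean clientRunning = true;
    private volatile Exception closedTrace; // only set in debug mode
    private final boolean debug;

    ClientSketch(boolean debug) {
        this.debug = debug;
    }

    public void close() {
        clientRunning = false;
        if (debug) {
            // Creating the exception captures the close() call's stack trace.
            closedTrace = new Exception("DFSClient closed here");
        }
    }

    void checkOpen() throws IOException {
        if (!clientRunning) {
            IOException ioe = new IOException("Filesystem closed");
            if (closedTrace != null) {
                ioe.initCause(closedTrace); // points at the offending close()
            }
            throw ioe;
        }
    }
}
```

With this in place, the "Filesystem closed" stack trace carries a "Caused by" section identifying the thread and call site that closed the shared client.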
> Make it harder to accidentally close a shared DFSClient
> -------------------------------------------------------
>
> Key: HADOOP-5933
> URL: https://issues.apache.org/jira/browse/HADOOP-5933
> Project: Hadoop Core
> Issue Type: Improvement
> Components: fs
> Affects Versions: 0.21.0
> Reporter: Steve Loughran
> Priority: Minor
>
> Every so often I get stack traces telling me that DFSClient is closed, usually from {{org.apache.hadoop.hdfs.DFSClient.checkOpen()}}. The root cause is usually that one thread has closed a shared filesystem client while another thread still holds a reference to it. If the other thread then asks for a new client it will get one (and the cache is repopulated), but if it already has one, I get to see a stack trace.
> It's effectively a race condition between clients in different threads.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.