Posted to common-issues@hadoop.apache.org by "Colm Dougan (Jira)" <ji...@apache.org> on 2023/02/02 23:08:00 UTC

[jira] [Comment Edited] (HADOOP-18615) CryptoOutputStream::close leak when encrypted zones + quota exceptions

    [ https://issues.apache.org/jira/browse/HADOOP-18615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17683586#comment-17683586 ] 

Colm Dougan edited comment on HADOOP-18615 at 2/2/23 11:07 PM:
---------------------------------------------------------------

PS - 

* I was unsure how priorities are assigned, so I speculatively selected "Critical"; assuming my findings are correct, it may warrant a higher priority.
* I suspect the problem also exists in the 3.4.x stream, but I haven't validated that.



> CryptoOutputStream::close leak when encrypted zones + quota exceptions
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-18615
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18615
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common
>    Affects Versions: 3.3.1, 3.3.2, 3.3.3, 3.3.4
>            Reporter: Colm Dougan
>            Priority: Critical
>         Attachments: hadoop_cryto_stream_close_try_finally.diff
>
>
> I would like to report a resource leak (of DFSOutputStream objects) when using the (Java) hadoop-hdfs-client.
> Specifically (at least in my case), it occurs when there is a combination of:
>  * encrypted zones
>  * quota space exceptions (DSQuotaExceededException)
> As you know, when encrypted zones are in play, calling fs.create(path) in the hadoop-hdfs-client returns an HdfsDataOutputStream object, which wraps a CryptoOutputStream, which in turn wraps a DFSOutputStream.
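> For illustration, this layering can be observed via FSDataOutputStream#getWrappedStream(). A minimal sketch (hypothetical: /ez/file stands in for any path inside an encrypted zone):
> {code:java}
> import java.io.OutputStream;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class WrappingDemo {
>   public static void main(String[] args) throws Exception {
>     FileSystem fs = FileSystem.get(new Configuration());
>     // In an encrypted zone, create() returns an HdfsDataOutputStream ...
>     FSDataOutputStream out = fs.create(new Path("/ez/file"));
>     // ... whose wrapped stream is a CryptoOutputStream, which in turn
>     // wraps the DFSOutputStream tracked by DFSClient.
>     OutputStream inner = out.getWrappedStream();
>     System.out.println(inner.getClass().getName());
>     out.close();
>   }
> }
> {code}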
> Even though my code correctly calls stream.close() on the above, I can see from debugging that the underlying DFSOutputStream objects are being leaked.
> Specifically, I see the DFSOutputStream objects accumulating in the filesBeingWritten map in DFSClient (i.e. the objects remain in the map even though I've called close() on the stream), as the reproduction sketch below illustrates.
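> A minimal reproduction sketch, under hypothetical assumptions: /ez is an encrypted zone with a small space quota already applied, and the quota is exhausted so that flush() inside close() throws (depending on buffering, the exception can also surface from write() itself):
> {code:java}
> import java.io.IOException;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class LeakRepro {
>   public static void main(String[] args) throws Exception {
>     FileSystem fs = FileSystem.get(new Configuration());
>     for (int i = 0; i < 100; i++) {
>       FSDataOutputStream out = fs.create(new Path("/ez/file-" + i));
>       try {
>         out.write(new byte[1024]);
>         // close() invokes CryptoOutputStream#flush(); with the quota
>         // exhausted this throws DSQuotaExceededException ...
>         out.close();
>       } catch (IOException e) {
>         // ... and since super.close() was skipped, the DFSOutputStream
>         // stays in DFSClient#filesBeingWritten: one leaked stream per loop.
>       }
>     }
>   }
> }
> {code}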
> I suspect this is due to a bug in CryptoOutputStream::close:
> {code:java}
>   @Override
>   public synchronized void close() throws IOException {
>     if (closed) {
>       return;
>     }
>     try {
>       flush();
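>       // NOTE: if flush() throws (e.g. DSQuotaExceededException when a space
>       // quota is exceeded), the calls below are skipped and only the finally
>       // block runs, so super.close() never closes the wrapped DFSOutputStream.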
>       if (closeOutputStream) {
>         super.close();
>         codec.close();
>       }
>       freeBuffers();
>     } finally {
>       closed = true;
>     }
>   }{code}
> ... whereby if flush() throws (observed in my case when a DSQuotaExceededException is raised because the space quota was exceeded), then super.close() on the underlying DFSOutputStream is skipped.
> In my case I had a space quota set on a directory which is also in an encrypted zone, so each attempt to create and write to a file failed and leaked a DFSOutputStream as above.
> I have attached a speculative patch ([^hadoop_cryto_stream_close_try_finally.diff]) which simply wraps the flush() in a try ... finally. The patch resolves the problem in my testing; a sketch of the shape of that change follows.
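> To be clear, the attached diff is the authoritative change; purely as an illustrative sketch, a close() of that shape might look like:
> {code:java}
>   @Override
>   public synchronized void close() throws IOException {
>     if (closed) {
>       return;
>     }
>     try {
>       try {
>         flush();
>       } finally {
>         // Runs even when flush() throws, so the wrapped DFSOutputStream is
>         // always closed and deregistered from DFSClient#filesBeingWritten.
>         if (closeOutputStream) {
>           super.close();
>           codec.close();
>         }
>         freeBuffers();
>       }
>     } finally {
>       closed = true;
>     }
>   }
> {code}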
> Thanks.


