Posted to hdfs-issues@hadoop.apache.org by "Colm Dougan (Jira)" <ji...@apache.org> on 2023/02/04 00:50:00 UTC

[jira] [Comment Edited] (HDFS-16906) CryptoOutputStream::close leak when encrypted zones + quota exceptions

    [ https://issues.apache.org/jira/browse/HDFS-16906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17684064#comment-17684064 ] 

Colm Dougan edited comment on HDFS-16906 at 2/4/23 12:49 AM:
-------------------------------------------------------------

[~stevel@apache.org] 

Thanks.

I have submitted a PR as requested.

Regarding the priority, I will defer to the HDFS project developers to state their own perspective, but in my case the application which embeds the hadoop-hdfs-client is frequently OOM-ing due to this leak despite what I believe is entirely correct usage on my side, so my feeling is that the priority should reflect that.

Thanks.



> CryptoOutputStream::close leak when encrypted zones + quota exceptions
> ----------------------------------------------------------------------
>
>                 Key: HDFS-16906
>                 URL: https://issues.apache.org/jira/browse/HDFS-16906
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: dfsclient
>    Affects Versions: 3.3.1, 3.3.2, 3.3.3, 3.3.4
>            Reporter: Colm Dougan
>            Assignee: Colm Dougan
>            Priority: Critical
>              Labels: pull-request-available
>         Attachments: hadoop_cryto_stream_close_try_finally.diff
>
>
> I would like to report a resource leak (DFSOutputStream objects) when using the (Java) hadoop-hdfs-client, specifically (at least in my case) when there is a combination of:
>  * encrypted zones
>  * space quota exceptions (DSQuotaExceededException)
> As you know, when encrypted zones are in play, calling fs.create(path) in the hadoop-hdfs-client returns an HdfsDataOutputStream which wraps a CryptoOutputStream, which in turn wraps a DFSOutputStream.
> Even though my code is correctly calling stream.close() on the above, I can see from debugging that the underlying DFSOutputStream objects are being leaked.
> Specifically, I see the DFSOutputStream objects being leaked in the filesBeingWritten map in DFSClient (i.e. the DFSOutputStream objects remain in the map even though I've called close() on the stream object).
> I suspect this is due to a bug in CryptoOutputStream::close:
> {code:java}
>   @Override
>   public synchronized void close() throws IOException {
>     if (closed) {
>       return;
>     }
>     try {
>       flush();
>       if (closeOutputStream) {
>         super.close();
>         codec.close();
>       }
>       freeBuffers();
>     } finally {
>       closed = true;
>     }
>   }
> {code}
> ... whereby if flush() throws (observed in my case as a DSQuotaExceededException when the quota is exceeded), the super.close() on the underlying DFSOutputStream is skipped.
> In my case I had a space quota set up on a directory which is also in an encrypted zone, and so each attempt to create and write to a file failed and leaked as above (see the sketch below).
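> For context, a minimal reproduction sketch, assuming a cluster where /ez is an encryption zone with a small space quota already configured; the path, write size and loop count here are hypothetical:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;
>
> public class QuotaLeakRepro {
>   public static void main(String[] args) throws Exception {
>     // Assumes fs.defaultFS points at the cluster, and that /ez is an
>     // encryption zone (hdfs crypto -createZone) with a small space
>     // quota (hdfs dfsadmin -setSpaceQuota) already in place.
>     Configuration conf = new Configuration();
>     try (FileSystem fs = FileSystem.get(conf)) {
>       for (int i = 0; i < 1000; i++) {
>         Path path = new Path("/ez/file-" + i);
>         try (FSDataOutputStream out = fs.create(path)) {
>           out.write(new byte[1024 * 1024]); // enough to exceed the quota
>         } catch (DSQuotaExceededException e) {
>           // close() has run via try-with-resources, yet the wrapped
>           // DFSOutputStream remains in DFSClient's filesBeingWritten
>           // map, so client memory grows with every iteration.
>         }
>       }
>     }
>   }
> }
> {code}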
> I have attached a speculative patch ([^hadoop_cryto_stream_close_try_finally.diff]) which simply wraps the flush() in a try/finally. The patch resolves the problem in my testing.
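> To illustrate the shape of that change, a sketch of the rewritten close() (illustrative only; the authoritative version is the attached diff and the PR):
> {code:java}
>   @Override
>   public synchronized void close() throws IOException {
>     if (closed) {
>       return;
>     }
>     try {
>       try {
>         flush(); // may throw, e.g. DSQuotaExceededException
>       } finally {
>         // Run the cleanup even when flush() throws, so the wrapped
>         // DFSOutputStream is always closed and removed from
>         // DFSClient's filesBeingWritten map.
>         if (closeOutputStream) {
>           super.close();
>           codec.close();
>         }
>         freeBuffers();
>       }
>     } finally {
>       closed = true;
>     }
>   }
> {code}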
> Thanks.


