Posted to hdfs-issues@hadoop.apache.org by "Steve Loughran (Jira)" <ji...@apache.org> on 2023/02/03 10:36:00 UTC

[jira] [Moved] (HDFS-16906) CryptoOutputStream::close leak when encrypted zones + quota exceptions

     [ https://issues.apache.org/jira/browse/HDFS-16906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran moved HADOOP-18615 to HDFS-16906:
------------------------------------------------

          Component/s: dfsclient
                           (was: common)
                  Key: HDFS-16906  (was: HADOOP-18615)
    Affects Version/s: 3.3.4
                       3.3.3
                       3.3.2
                       3.3.1
                           (was: 3.3.1)
                           (was: 3.3.2)
                           (was: 3.3.3)
                           (was: 3.3.4)
              Project: Hadoop HDFS  (was: Hadoop Common)

> CryptoOutputStream::close leak when encrypted zones + quota exceptions
> ----------------------------------------------------------------------
>
>                 Key: HDFS-16906
>                 URL: https://issues.apache.org/jira/browse/HDFS-16906
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: dfsclient
>    Affects Versions: 3.3.4, 3.3.3, 3.3.2, 3.3.1
>            Reporter: Colm Dougan
>            Priority: Critical
>         Attachments: hadoop_cryto_stream_close_try_finally.diff
>
>
> I would like to report a resource leak (of DFSOutputStream objects) when using the (java) hadoop-hdfs-client,
> specifically (at least in my case) when there is a combination of:
>  * encrypted zones
>  * quota space exceptions (DSQuotaExceededException)
> As you know, when encrypted zones are in play, calling fs.create(path) in the hadoop-hdfs-client returns an HdfsDataOutputStream, which wraps a CryptoOutputStream, which in turn wraps a DFSOutputStream.
> Even though my code correctly calls stream.close() on the above, I can see from debugging that the underlying DFSOutputStream objects are being leaked.
> Specifically, the DFSOutputStream objects remain in the filesBeingWritten map in DFSClient even after close() has been called on the stream object.
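> For context, a minimal reproduction sketch. The path /ez/leak-test, the 64 MB write size, and the pre-configured encrypted zone + space quota are all assumptions for illustration:
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> 
> public class CryptoCloseLeakRepro {
>   public static void main(String[] args) throws Exception {
>     FileSystem fs = FileSystem.get(new Configuration());
>     // Assumes /ez is an encrypted zone with a small space quota already set.
>     FSDataOutputStream out = fs.create(new Path("/ez/leak-test"));
>     try {
>       // Write more than the quota allows; the violation typically surfaces
>       // as a DSQuotaExceededException during flush()/close().
>       out.write(new byte[64 * 1024 * 1024]);
>       out.close(); // flush() throws, so CryptoOutputStream skips super.close()
>     } catch (IOException e) {
>       // At this point the DFSOutputStream is still registered in
>       // DFSClient#filesBeingWritten (visible in a debugger or heap dump),
>       // even though close() was called on the outer stream.
>     }
>   }
> }
> {code}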
> I suspect this is due to a bug in CryptoOutputStream::close:
> {code:java}
>   @Override
>   public synchronized void close() throws IOException {
>     if (closed) {
>       return;
>     }
>     try {
>       flush();
>       if (closeOutputStream) {
>         super.close();
>         codec.close();
>       }
>       freeBuffers();
>     } finally {
>       closed = true;
>     }
>   }
> {code}
> ... whereby if flush() throws (observed in my case as a DSQuotaExceededException when the quota was exceeded), the super.close() on the underlying DFSOutputStream is skipped.
> In my case a space quota was set on a directory that is also in an encrypted zone, so each attempt to create and write a file failed and leaked a stream as above.
> I have attached a speculative patch ([^hadoop_cryto_stream_close_try_finally.diff]) which simply wraps the flush() in a try/finally. The patch resolves the problem in my testing.
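> For reference, this is roughly the shape of the fix described above (a sketch only; the attached diff is authoritative):
> {code:java}
>   @Override
>   public synchronized void close() throws IOException {
>     if (closed) {
>       return;
>     }
>     try {
>       try {
>         flush();
>       } finally {
>         // Run the cleanup even when flush() throws (e.g. on a
>         // DSQuotaExceededException), so the wrapped DFSOutputStream
>         // is closed and removed from filesBeingWritten.
>         if (closeOutputStream) {
>           super.close();
>           codec.close();
>         }
>         freeBuffers();
>       }
>     } finally {
>       closed = true;
>     }
>   }
> {code}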
> Thanks.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org