Posted to hdfs-dev@hadoop.apache.org by "Ahmed Hussein (Jira)" <ji...@apache.org> on 2020/11/10 18:07:00 UTC

[jira] [Created] (HDFS-15679) DFSOutputStream close should be a no-op when called multiple times

Ahmed Hussein created HDFS-15679:
------------------------------------

             Summary: DFSOutputStream close should be a no-op when called multiple times
                 Key: HDFS-15679
                 URL: https://issues.apache.org/jira/browse/HDFS-15679
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Ahmed Hussein
            Assignee: Xiao Chen


While looking into the incorrect implementation in HDFS-15678, I found that once I implemented the correct logic, the JUnit test failed.
It turns out that there is an inconsistency in {{DFSOutputStream.closeImpl()}} introduced by HDFS-13164.

The change in [that line|https://github.com/apache/hadoop/commit/51088d323359587dca7831f74c9d065c2fccc60d#diff-3a80b95578dc5079cebf0441e1dab63d5844c02fa2d04071c165ec4f7029f918R860] makes {{close()}} throw the exception every time it is called, which contradicts HDFS-5335.

I also believe the implementation is incorrect and needs to be reviewed. For example,
the implementation first checks {{isClosed()}}, then inside the {{finally}} block (which is nested inside the {{isClosed()}} block) there is a second check for {{!closed}}.

*A DFSOutputStream can never be opened again after being closed.*

{code:java}
if (isClosed()) {
  LOG.debug("Closing an already closed stream. [Stream:{}, streamer:{}]",
      closed, getStreamer().streamerClosed());
  try {
    getStreamer().getLastException().check(true);
  } catch (IOException ioe) {
    cleanupAndRethrowIOException(ioe);
  } finally {
    if (!closed) {
      // If stream is not closed but streamer closed, clean up the stream.
      // Most importantly, end the file lease.
      closeThreads(true);
    }
  }
}
{code}
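To make the intended behavior concrete, here is a minimal, self-contained sketch of an idempotent {{close()}} per the {{java.io.Closeable}} contract and the HDFS-5335 semantics. The class and field names are hypothetical (this is not the actual DFSOutputStream code): a pending error is thrown by the first {{close()}} only, and every subsequent {{close()}} is a no-op.

```java
import java.io.Closeable;
import java.io.IOException;

// Hypothetical sketch, not Hadoop code: a stream whose close() throws a
// pending error at most once and is a no-op on every later call.
public class IdempotentStream implements Closeable {
  private boolean closed = false;       // stream state; never reopened
  private IOException lastException;    // pending streamer error, if any

  public IdempotentStream(IOException pendingError) {
    this.lastException = pendingError;
  }

  @Override
  public synchronized void close() throws IOException {
    if (closed) {
      return;  // already closed: no-op, even if an exception was set later
    }
    closed = true;  // mark closed before rethrowing so a retry stays silent
    if (lastException != null) {
      IOException e = lastException;
      lastException = null;
      throw e;  // surface the pending error exactly once
    }
  }

  public static void main(String[] args) {
    IdempotentStream s = new IdempotentStream(new IOException("dummy"));
    try {
      s.close();
    } catch (IOException e) {
      System.out.println("first close threw: " + e.getMessage());
    }
    try {
      s.close();  // second close is a no-op
      System.out.println("second close: no exception");
    } catch (IOException e) {
      System.out.println("second close threw: " + e.getMessage());
    }
  }
}
```

The key ordering choice is setting {{closed = true}} before rethrowing, so that a caller who retries {{close()}} after a failure does not see the same exception again.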

[~xiaochen] and [~yzhangal] can you please take another look at that patch?

That change breaks the semantics of {{close()}}. For convenience, here is test code that fails because of the change in HDFS-13164.


{code:java}
  @Test
  public void testCloseTwice() throws IOException {
    DistributedFileSystem fs = cluster.getFileSystem();
    FSDataOutputStream os = fs.create(new Path("/test"));
    DFSOutputStream dos = (DFSOutputStream) Whitebox.getInternalState(os,
        "wrappedStream");
    DataStreamer streamer = (DataStreamer) Whitebox
        .getInternalState(dos, "streamer");
    @SuppressWarnings("unchecked")
    LastExceptionInStreamer ex = (LastExceptionInStreamer) Whitebox
        .getInternalState(streamer, "lastException");
    Throwable thrown = (Throwable) Whitebox.getInternalState(ex, "thrown");
    Assert.assertNull(thrown);
    // force stream to break. output stream needs to encounter a real
    // error to properly mark it closed with an exception
    cluster.shutdown(true, false);
    try {
      dos.close();
      Assert.fail("should have thrown");
    } catch (IOException e) {
      Assert.assertEquals(e.toString(), EOFException.class, e.getClass());
    }
    thrown = (Throwable) Whitebox.getInternalState(ex, "thrown");
    Assert.assertNull(thrown);
    dos.close();
    // even if the exception is set again, close should not throw it
    ex.set(new IOException("dummy"));
    dos.close();
  }
{code}
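For comparison, the JDK's own streams already honor the Closeable contract that closing an already-closed stream has no effect. A minimal, self-contained illustration (not from the Hadoop codebase):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Demonstrates the Closeable contract with a standard JDK stream:
// calling close() a second time has no effect and throws nothing.
public class DoubleCloseDemo {
  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    out.write('x');
    out.close();
    out.close();  // second close: no effect, no exception
    System.out.println(out.size());  // the buffered byte is still there
  }
}
```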
--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org