Posted to hdfs-dev@hadoop.apache.org by "Henry Robinson (Created) (JIRA)" <ji...@apache.org> on 2012/03/09 00:09:57 UTC

[jira] [Created] (HDFS-3067) Null pointer in DFSInputStream.readBuffer if read is repeated on singly-replicated corrupted block

Null pointer in DFSInputStream.readBuffer if read is repeated on singly-replicated corrupted block
--------------------------------------------------------------------------------------------------

                 Key: HDFS-3067
                 URL: https://issues.apache.org/jira/browse/HDFS-3067
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Henry Robinson
            Assignee: Henry Robinson


With a corrupted, singly-replicated block, issuing two reads against it in succession (e.g. when the client catches the first ChecksumException and retries) results in a NullPointerException on the second read.

Here's the body of a test that reproduces the problem:

{code}

    // cluster and conf come from the enclosing test (see the assumed setup sketch after this block).
    final short REPL_FACTOR = 1;
    final long FILE_LENGTH = 512L;
    cluster.waitActive();
    FileSystem fs = cluster.getFileSystem();

    Path path = new Path("/corrupted");

    DFSTestUtil.createFile(fs, path, FILE_LENGTH, REPL_FACTOR, 12345L);
    DFSTestUtil.waitReplication(fs, path, REPL_FACTOR);

    // Corrupt the only replica of the file's first block.
    ExtendedBlock block = DFSTestUtil.getFirstBlock(fs, path);
    int blockFilesCorrupted = cluster.corruptBlockOnDataNodes(block);
    assertEquals("All replicas not corrupted", REPL_FACTOR, blockFilesCorrupted);

    InetSocketAddress nnAddr =
        new InetSocketAddress("localhost", cluster.getNameNodePort());
    DFSClient client = new DFSClient(nnAddr, conf);
    DFSInputStream dis = client.open(path.toString());
    byte[] arr = new byte[(int)FILE_LENGTH];

    // First read: fails with the expected ChecksumException.
    boolean sawException = false;
    try {
      dis.read(arr, 0, (int)FILE_LENGTH);
    } catch (ChecksumException ex) {
      sawException = true;
    }

    assertTrue(sawException);
    sawException = false;

    // Second read on the same stream: should fail the same way, but throws an NPE instead.
    try {
      dis.read(arr, 0, (int)FILE_LENGTH); // <-- NPE thrown here
    } catch (ChecksumException ex) {
      sawException = true;
    }
{code}
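
(The snippet above is just the test body. For reference, a minimal enclosing setup could look like the sketch below; the HdfsConfiguration and builder options are my assumptions, not part of the report.)

{code}
// Assumed surrounding setup, not from the report: a one-datanode MiniDFSCluster
// supplies the 'cluster' and 'conf' used in the test body above.
// Needs org.apache.hadoop.conf.Configuration, org.apache.hadoop.hdfs.HdfsConfiguration
// and org.apache.hadoop.hdfs.MiniDFSCluster.
Configuration conf = new HdfsConfiguration();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(1)
    .build();
try {
  // ... test body from above ...
} finally {
  cluster.shutdown();
}
{code}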

The resulting stack trace:

{code}
java.lang.NullPointerException
	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:492)
	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:545)
        [snip test stack]
{code}

The problem is that currentNode is null. It is left null after the first read fails, and it is never refreshed because the condition in read() that guards blockSeekTo() only fires when the current position has moved outside the current block's range.
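
To make the failure mode concrete, here is a self-contained toy model of the pattern; the names mirror DFSInputStream but the bodies, in particular the point at which currentNode is discarded, are assumptions for illustration rather than the real HDFS code:

{code}
import java.io.IOException;

// Toy model of the stale-guard pattern described above; not HDFS code.
public class StaleSeekGuardDemo {

  long pos = 0;                // current file offset; never advances on a failed read
  long blockEnd = -1;          // last offset of the block we are connected to
  String currentNode = null;   // datanode chosen by blockSeekTo()

  String blockSeekTo(long target) {
    blockEnd = 511;            // single 512-byte block
    return "datanode-1";
  }

  int blockReaderRead() throws IOException {
    // The only replica is corrupt, so every attempt fails its checksum check.
    throw new IOException("simulated ChecksumException");
  }

  int readBuffer() throws IOException {
    try {
      return blockReaderRead();
    } catch (IOException ce) {
      // Reporting the bad replica dereferences currentNode; on the second
      // read() this is where the NullPointerException surfaces.
      System.out.println("checksum error on " + currentNode.toString());
      currentNode = null;      // assumption: the failed datanode is discarded here
      throw ce;
    }
  }

  int read() throws IOException {
    // The guard only re-seeks when pos has moved past blockEnd. After the
    // first failed read pos is unchanged, so blockSeekTo() is skipped and
    // currentNode stays null.
    if (pos > blockEnd) {
      currentNode = blockSeekTo(pos);
    }
    return readBuffer();
  }

  public static void main(String[] args) throws Exception {
    StaleSeekGuardDemo dis = new StaleSeekGuardDemo();
    try {
      dis.read();              // first read: checksum failure, currentNode nulled
    } catch (IOException expected) { }
    dis.read();                // second read: NullPointerException in readBuffer()
  }
}
{code}

Run as-is, the first read() fails with the simulated checksum error and the second dies with a NullPointerException inside readBuffer(), mirroring the stack above. Widening the guard to pos > blockEnd || currentNode == null (a guess at the shape of the fix, not necessarily the patch that will be committed) makes the second read re-select a datanode instead.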


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira