Posted to hdfs-dev@hadoop.apache.org by "Liang Xie (JIRA)" <ji...@apache.org> on 2014/05/26 08:38:02 UTC

[jira] [Created] (HDFS-6448) change BlockReaderLocalLegacy timeout detail

Liang Xie created HDFS-6448:
-------------------------------

             Summary: change BlockReaderLocalLegacy timeout detail
                 Key: HDFS-6448
                 URL: https://issues.apache.org/jira/browse/HDFS-6448
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: hdfs-client
    Affects Versions: 2.4.0, 3.0.0
            Reporter: Liang Xie
            Assignee: Liang Xie


Our HBase cluster is deployed on Hadoop 2.0. In one incident we hit HDFS-5016 on the HDFS side, but we also found on the HBase side that the DFS client was hung in getBlockReader. After reading the code, we found there is a timeout setting in the current codebase, but the default hdfsTimeout value is "-1" (from Client.java:getTimeout(conf)), which means no timeout...

The hung stack trace looks like this:
at $Proxy21.getBlockLocalPathInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolTranslatorPB.java:215)
at org.apache.hadoop.hdfs.BlockReaderLocal.getBlockPathInfo(BlockReaderLocal.java:267)
at org.apache.hadoop.hdfs.BlockReaderLocal.newBlockReader(BlockReaderLocal.java:180)
at org.apache.hadoop.hdfs.DFSClient.getLocalBlockReader(DFSClient.java:812)

One feasible fix is replacing it with socketTimeout. See the attached patch.
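The fallback logic can be sketched as follows. This is a minimal standalone illustration of the idea, not the actual patch; the class and method names here are hypothetical, and only the semantics described above (Client.getTimeout(conf) returning -1 when no RPC timeout is configured) are taken from the report.

```java
// Hypothetical sketch of the proposed fallback: when hdfsTimeout is
// disabled (-1), use socketTimeout for the getBlockLocalPathInfo RPC
// instead of waiting forever.
public class TimeoutFallback {

    // hdfsTimeout mirrors Client.getTimeout(conf), which returns -1
    // when no timeout is configured; socketTimeout is the ordinary
    // dfs.client.socket-timeout value.
    static int effectiveTimeout(int hdfsTimeout, int socketTimeout) {
        // A non-positive hdfsTimeout means "no timeout", which is what
        // left the client hung in the stack trace above.
        return hdfsTimeout > 0 ? hdfsTimeout : socketTimeout;
    }

    public static void main(String[] args) {
        System.out.println(effectiveTimeout(-1, 60000));
        System.out.println(effectiveTimeout(30000, 60000));
    }
}
```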



--
This message was sent by Atlassian JIRA
(v6.2#6252)