Posted to hdfs-dev@hadoop.apache.org by "Daryn Sharp (JIRA)" <ji...@apache.org> on 2012/04/24 23:54:06 UTC
[jira] [Created] (HDFS-3318) Hftp hangs on transfers >2GB
Daryn Sharp created HDFS-3318:
---------------------------------
Summary: Hftp hangs on transfers >2GB
Key: HDFS-3318
URL: https://issues.apache.org/jira/browse/HDFS-3318
Project: Hadoop HDFS
Issue Type: Bug
Components: hdfs client
Affects Versions: 0.24.0, 0.23.3, 2.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
Hftp transfers >2GB hang after the transfer is complete. The problem appears to be caused by Java internally using an int for the content length; when the length overflows at 2GB, the bounds of reads on the input stream are no longer checked. The client continues reading after all data is received and blocks until the server times out the connection -- _many_ minutes later. In conjunction with hftp read timeouts, all transfers >2GB fail with a read timeout.
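A minimal sketch of the overflow described above, assuming the content length ends up in an int (as the pre-Java-7 HttpURLConnection.getContentLength() API returns); the class name and literal value are illustrative, not from the issue:

```java
public class ContentLengthOverflow {
    public static void main(String[] args) {
        // A transfer just over 2GB.
        long contentLength = 2L * 1024 * 1024 * 1024 + 1;

        // Storing the header value in an int truncates it to a negative number.
        int truncated = (int) contentLength;
        System.out.println(truncated); // prints -2147483647

        // With a negative "remaining" length, a bounds check such as
        // `bytesRead < truncated` is never true, so the stream is not
        // treated as fully consumed; the client keeps reading until the
        // server times out the idle connection.
    }
}
```

On Java 7+, getContentLengthLong() returns the length as a long and avoids this truncation.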
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira