Posted to hdfs-dev@hadoop.apache.org by "Adam Pocock (JIRA)" <ji...@apache.org> on 2013/08/23 17:06:51 UTC
[jira] [Created] (HDFS-5127) TestDataNodeVolumeFailureReporting and TestDataNodeVolumeFailureToleration timeout on ZFS
Adam Pocock created HDFS-5127:
---------------------------------
Summary: TestDataNodeVolumeFailureReporting and TestDataNodeVolumeFailureToleration timeout on ZFS
Key: HDFS-5127
URL: https://issues.apache.org/jira/browse/HDFS-5127
Project: Hadoop HDFS
Issue Type: Bug
Components: test
Affects Versions: 2.0.3-alpha, 2.0.0-alpha
Environment: Solaris 11.1, x86-64
Reporter: Adam Pocock
Priority: Minor
TestDataNodeVolumeFailureReporting and TestDataNodeVolumeFailureToleration time out on ZFS due to an inaccurate method of measuring node capacity. Filesystems like ZFS, which store additional metadata, do not report exactly consistent capacities after filesystem changes, so the logic that controls these two tests (in DFSTestUtil.waitForDatanodeStatus) fails because (expectedCapacity != currTotalCapacity), and the tests time out. This occurs even when the ZFS filesystem and pool live on a ramdisk. The tests pass correctly when using a UFS filesystem on a ramdisk, so the problem is ZFS-specific rather than Solaris-specific.
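
The failing check behaves roughly like the strict-equality polling loop sketched below. This is only an illustrative sketch, not the actual DFSTestUtil code: the class name and the helper getReportedCapacity() are hypothetical, and the tolerance-based comparison at the end is just one possible way to absorb the small capacity drift that ZFS metadata introduces.

    // Illustrative sketch only; getReportedCapacity() and the 1% tolerance are
    // assumptions, not the real DFSTestUtil/DatanodeManager APIs or values.
    public class CapacityWaitSketch {

        // Strict-equality polling, analogous to the check described above.
        // On ZFS the reported capacity drifts slightly as metadata changes,
        // so this loop never sees an exact match and the test times out.
        static boolean waitForExactCapacity(long expected, long timeoutMs)
                throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (System.currentTimeMillis() < deadline) {
                long current = getReportedCapacity();   // hypothetical helper
                if (current == expected) {
                    return true;                        // only succeeds on an exact match
                }
                Thread.sleep(1000);
            }
            return false;                               // timed out
        }

        // A tolerance-based comparison would accept small metadata-induced drift,
        // e.g. capacities within 1% of the expected value.
        static boolean capacityCloseEnough(long expected, long current) {
            long tolerance = expected / 100;            // 1% slack, chosen arbitrarily here
            return Math.abs(expected - current) <= tolerance;
        }

        // Stand-in for querying the cluster's reported total capacity.
        static long getReportedCapacity() {
            return 0L; // placeholder
        }
    }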
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira