Posted to mapreduce-issues@hadoop.apache.org by "Zuoming Zhang (JIRA)" <ji...@apache.org> on 2018/06/13 20:56:00 UTC
[jira] [Resolved] (MAPREDUCE-7111) TestNameNodeMetrics fails on Windows
[ https://issues.apache.org/jira/browse/MAPREDUCE-7111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Zuoming Zhang resolved MAPREDUCE-7111.
--------------------------------------
Resolution: Invalid
Target Version/s: 2.9.1, 3.1.0 (was: 3.1.0, 2.9.1)
> TestNameNodeMetrics fails on Windows
> ------------------------------------
>
> Key: MAPREDUCE-7111
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-7111
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: test
> Affects Versions: 3.1.0, 2.9.1
> Reporter: Zuoming Zhang
> Assignee: Zuoming Zhang
> Priority: Minor
> Labels: Windows
> Fix For: 2.9.1, 3.1.0
>
>
> _TestNameNodeMetrics_ fails on Windows
>
> Problem:
> This is because _testVolumeFailures_ calls _DataNodeTestUtils.injectDataDirFailure_ on a volume folder. What _injectDataDirFailure_ actually does is rename the folder from _volume_name_ to _volume_name_._origin_ and create a new regular file named _volume_name_ in its place. The volume folder contains two things: 1. a directory named "_current_", and 2. a file named "_in_use.lock_". Windows behaves differently from Linux when renaming the parent folder of a locked file: Windows prevents the rename, while Linux allows it.
> Fix:
> So in order to inject a data failure into the volume without renaming the locked volume folder itself, rename a folder inside it that does not hold a lock. Since the only folder inside the volume is "_current_", we only need to inject the data failure into _volume_name/current_.
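The proposed fix can be sketched as follows. This is a minimal, hypothetical illustration (the class and method names below are assumptions, not the actual Hadoop test-utility code): instead of renaming the volume root, which Windows refuses while _in_use.lock_ is held, it renames the lock-free _current_ subdirectory and drops a regular file in its place so subsequent directory access fails.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DirFailureSketch {

    // Hypothetical sketch of the revised injection: rename only the
    // "current" subdirectory (which holds no lock), then create a regular
    // file with the same name so the volume's data directory is unusable.
    public static void injectDataDirFailure(File volume) throws IOException {
        File current = new File(volume, "current");
        File renamed = new File(volume, "current.origin");
        if (!current.renameTo(renamed)) {
            throw new IOException("Failed to rename " + current);
        }
        // A regular file where a directory is expected makes reads/writes fail.
        Files.createFile(current.toPath());
    }

    public static void main(String[] args) throws IOException {
        // Build a throwaway layout mirroring a volume folder:
        // volume/current (dir) and volume/in_use.lock (file).
        Path vol = Files.createTempDirectory("volume");
        Files.createDirectory(vol.resolve("current"));
        Files.createFile(vol.resolve("in_use.lock"));

        injectDataDirFailure(vol.toFile());

        System.out.println(Files.isRegularFile(vol.resolve("current")));
        System.out.println(Files.isDirectory(vol.resolve("current.origin")));
    }
}
```

Because the lock file stays in an un-renamed parent, this sequence succeeds on both Windows and Linux, which is the point of the fix.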
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: mapreduce-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-help@hadoop.apache.org