Posted to common-commits@hadoop.apache.org by vi...@apache.org on 2017/12/02 02:38:14 UTC
[01/50] [abbrv] hadoop git commit: HADOOP-13493. Compatibility Docs
should clarify the policy for what takes precedence when a conflict is found
(templedf via rkanter) [Forced Update!]
Repository: hadoop
Updated Branches:
refs/heads/HDFS-9806 64be4098a -> 36957f0d2 (forced update)
HADOOP-13493. Compatibility Docs should clarify the policy for what takes precedence when a conflict is found (templedf via rkanter)
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/75a3ab88
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/75a3ab88
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/75a3ab88
Branch: refs/heads/HDFS-9806
Commit: 75a3ab88f5f4ea6abf0a56cb8058e17b5a5fe403
Parents: 0e560f3
Author: Robert Kanter <rk...@apache.org>
Authored: Thu Nov 30 07:39:15 2017 -0800
Committer: Robert Kanter <rk...@apache.org>
Committed: Thu Nov 30 07:39:15 2017 -0800
----------------------------------------------------------------------
.../src/site/markdown/Compatibility.md | 29 +++++++++++++++-----
1 file changed, 22 insertions(+), 7 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/75a3ab88/hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md b/hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md
index 461ff17..54be412 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md
@@ -117,13 +117,7 @@ Compatibility types
Developers SHOULD annotate all Hadoop interfaces and classes with the
@InterfaceAudience and @InterfaceStability annotations to describe the
-intended audience and stability. Annotations may be at the package, class, or
-member variable or method level. Member variable and method annotations SHALL
-override class annotations, and class annotations SHALL override package
-annotations. A package, class, or member variable or method that is not
-annotated SHALL be interpreted as implicitly
-[Private](./InterfaceClassification.html#Private) and
-[Unstable](./InterfaceClassification.html#Unstable).
+intended audience and stability.
* @InterfaceAudience captures the intended audience. Possible values are
[Public](./InterfaceClassification.html#Public) (for end users and external
@@ -134,6 +128,27 @@ etc.), and [Private](./InterfaceClassification.html#Private)
* @InterfaceStability describes what types of interface changes are permitted. Possible values are [Stable](./InterfaceClassification.html#Stable), [Evolving](./InterfaceClassification.html#Evolving), and [Unstable](./InterfaceClassification.html#Unstable).
* @Deprecated notes that the package, class, or member variable or method could potentially be removed in the future and should not be used.
+Annotations MAY be applied at the package, class, or method level. If a method
+has no privacy or stability annotation, it SHALL inherit its intended audience
+or stability level from the class to which it belongs. If a class has no
+privacy or stability annotation, it SHALL inherit its intended audience or
+stability level from the package to which it belongs. If a package has no
+privacy or stability annotation, it SHALL be assumed to be
+[Private](./InterfaceClassification.html#Private) and
+[Unstable](./InterfaceClassification.html#Unstable),
+respectively.
+
+In the event that an element's audience or stability annotation conflicts with
+the corresponding annotation of its parent (whether explicit or inherited), the
+element's audience or stability (respectively) SHALL be determined by the
+more restrictive annotation. For example, if a
+[Private](./InterfaceClassification.html#Private) method is contained
+in a [Public](./InterfaceClassification.html#Public) class, then the method
+SHALL be treated as [Private](./InterfaceClassification.html#Private). If a
+[Public](./InterfaceClassification.html#Public) method is contained in a
+[Private](./InterfaceClassification.html#Private) class, the method SHALL be
+treated as [Private](./InterfaceClassification.html#Private).
+
#### Use Cases
* [Public](./InterfaceClassification.html#Public)-[Stable](./InterfaceClassification.html#Stable) API compatibility is required to ensure end-user programs and downstream projects continue to work without modification.
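The "more restrictive annotation wins" rule in the patch above can be sketched as a small resolution function. This is an illustrative standalone class with hypothetical enum types, not the actual Hadoop annotation machinery:

```java
public class AnnotationPrecedence {
    // Hypothetical enum standing in for the @InterfaceAudience values,
    // ordered from least to most restrictive.
    enum Audience { PUBLIC, LIMITED_PRIVATE, PRIVATE }

    // Resolution rule from the patch: an unannotated element inherits from
    // its parent; with no annotation anywhere the default is PRIVATE; on a
    // conflict the more restrictive annotation wins.
    static Audience effective(Audience parent, Audience child) {
        if (child == null) {
            return parent == null ? Audience.PRIVATE : parent;
        }
        if (parent == null) {
            return child;
        }
        return parent.ordinal() >= child.ordinal() ? parent : child;
    }

    public static void main(String[] args) {
        // A Private method in a Public class is treated as Private.
        check(effective(Audience.PUBLIC, Audience.PRIVATE) == Audience.PRIVATE);
        // A Public method in a Private class is also treated as Private.
        check(effective(Audience.PRIVATE, Audience.PUBLIC) == Audience.PRIVATE);
        // An unannotated method inherits its class's audience.
        check(effective(Audience.PUBLIC, null) == Audience.PUBLIC);
        System.out.println("ok");
    }

    private static void check(boolean cond) {
        if (!cond) throw new AssertionError();
    }
}
```

The same ordering argument applies to the stability values (Stable, Evolving, Unstable), with the least stable annotation being the most restrictive.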
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[14/50] [abbrv] hadoop git commit: Revert "HDFS-11576. Block recovery
will fail indefinitely if recovery time > heartbeat interval. Contributed by
Lukas Majercak"
Posted by vi...@apache.org.
Revert "HDFS-11576. Block recovery will fail indefinitely if recovery time > heartbeat interval. Contributed by Lukas Majercak"
This reverts commit 5304698dc8c5667c33e6ed9c4a827ef57172a723.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/53bbef38
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/53bbef38
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/53bbef38
Branch: refs/heads/HDFS-9806
Commit: 53bbef3802194b7a0a3ce5cd3c91def9e88856e3
Parents: 7225ec0
Author: Chris Douglas <cd...@apache.org>
Authored: Fri Dec 1 11:19:01 2017 -0800
Committer: Chris Douglas <cd...@apache.org>
Committed: Fri Dec 1 11:19:38 2017 -0800
----------------------------------------------------------------------
.../apache/hadoop/test/GenericTestUtils.java | 10 +-
.../server/blockmanagement/BlockManager.java | 40 ------
.../blockmanagement/PendingRecoveryBlocks.java | 143 -------------------
.../hdfs/server/namenode/FSNamesystem.java | 40 +++---
.../TestPendingRecoveryBlocks.java | 87 -----------
.../hdfs/server/datanode/TestBlockRecovery.java | 108 --------------
.../namenode/ha/TestPipelinesFailover.java | 5 +-
7 files changed, 20 insertions(+), 413 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/53bbef38/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
index cdde48c..0db6c73 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
@@ -641,16 +641,10 @@ public abstract class GenericTestUtils {
* conditions.
*/
public static class SleepAnswer implements Answer<Object> {
- private final int minSleepTime;
private final int maxSleepTime;
private static Random r = new Random();
-
+
public SleepAnswer(int maxSleepTime) {
- this(0, maxSleepTime);
- }
-
- public SleepAnswer(int minSleepTime, int maxSleepTime) {
- this.minSleepTime = minSleepTime;
this.maxSleepTime = maxSleepTime;
}
@@ -658,7 +652,7 @@ public abstract class GenericTestUtils {
public Object answer(InvocationOnMock invocation) throws Throwable {
boolean interrupted = false;
try {
- Thread.sleep(r.nextInt(maxSleepTime) + minSleepTime);
+ Thread.sleep(r.nextInt(maxSleepTime));
} catch (InterruptedException ie) {
interrupted = true;
}
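For context, the two-argument SleepAnswer constructor removed by this revert produced a delay in the half-open range [minSleepTime, minSleepTime + maxSleepTime) milliseconds. A standalone sketch of that arithmetic (illustrative only, not Hadoop code):

```java
import java.util.Random;

public class SleepRange {
    // Reproduces the delay computation from the reverted two-argument
    // SleepAnswer: nextInt(max) yields [0, max), so the total delay
    // falls in [min, min + max).
    static int delayMillis(Random r, int min, int max) {
        return r.nextInt(max) + min;
    }

    public static void main(String[] args) {
        Random r = new Random(42);
        // SleepAnswer(3000, 6000), as used in the reverted test, delays
        // each mocked call by a value in [3000, 9000).
        for (int i = 0; i < 10000; i++) {
            int d = delayMillis(r, 3000, 6000);
            if (d < 3000 || d >= 9000) {
                throw new AssertionError("delay out of range: " + d);
            }
        }
        System.out.println("ok");
    }
}
```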
http://git-wip-us.apache.org/repos/asf/hadoop/blob/53bbef38/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 1cdb159..4986027 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -164,8 +164,6 @@ public class BlockManager implements BlockStatsMXBean {
private static final String QUEUE_REASON_FUTURE_GENSTAMP =
"generation stamp is in the future";
- private static final long BLOCK_RECOVERY_TIMEOUT_MULTIPLIER = 30;
-
private final Namesystem namesystem;
private final BlockManagerSafeMode bmSafeMode;
@@ -355,9 +353,6 @@ public class BlockManager implements BlockStatsMXBean {
@VisibleForTesting
final PendingReconstructionBlocks pendingReconstruction;
- /** Stores information about block recovery attempts. */
- private final PendingRecoveryBlocks pendingRecoveryBlocks;
-
/** The maximum number of replicas allowed for a block */
public final short maxReplication;
/**
@@ -554,12 +549,6 @@ public class BlockManager implements BlockStatsMXBean {
}
this.minReplicationToBeInMaintenance = (short)minMaintenanceR;
- long heartbeatIntervalSecs = conf.getTimeDuration(
- DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY,
- DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_DEFAULT, TimeUnit.SECONDS);
- long blockRecoveryTimeout = getBlockRecoveryTimeout(heartbeatIntervalSecs);
- pendingRecoveryBlocks = new PendingRecoveryBlocks(blockRecoveryTimeout);
-
this.blockReportLeaseManager = new BlockReportLeaseManager(conf);
bmSafeMode = new BlockManagerSafeMode(this, namesystem, haEnabled, conf);
@@ -4747,25 +4736,6 @@ public class BlockManager implements BlockStatsMXBean {
}
}
- /**
- * Notification of a successful block recovery.
- * @param block for which the recovery succeeded
- */
- public void successfulBlockRecovery(BlockInfo block) {
- pendingRecoveryBlocks.remove(block);
- }
-
- /**
- * Checks whether a recovery attempt has been made for the given block.
- * If so, checks whether that attempt has timed out.
- * @param b block for which recovery is being attempted
- * @return true if no recovery attempt has been made or
- * the previous attempt timed out
- */
- public boolean addBlockRecoveryAttempt(BlockInfo b) {
- return pendingRecoveryBlocks.add(b);
- }
-
@VisibleForTesting
public void flushBlockOps() throws IOException {
runBlockOp(new Callable<Void>(){
@@ -4893,14 +4863,4 @@ public class BlockManager implements BlockStatsMXBean {
}
return i;
}
-
- private static long getBlockRecoveryTimeout(long heartbeatIntervalSecs) {
- return TimeUnit.SECONDS.toMillis(heartbeatIntervalSecs *
- BLOCK_RECOVERY_TIMEOUT_MULTIPLIER);
- }
-
- @VisibleForTesting
- public void setBlockRecoveryTimeout(long blockRecoveryTimeout) {
- pendingRecoveryBlocks.setRecoveryTimeoutInterval(blockRecoveryTimeout);
- }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/53bbef38/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingRecoveryBlocks.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingRecoveryBlocks.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingRecoveryBlocks.java
deleted file mode 100644
index 3f5f27c..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingRecoveryBlocks.java
+++ /dev/null
@@ -1,143 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hdfs.server.blockmanagement;
-
-import com.google.common.annotations.VisibleForTesting;
-import org.apache.hadoop.hdfs.util.LightWeightHashSet;
-import org.apache.hadoop.util.Time;
-import org.slf4j.Logger;
-
-import java.util.concurrent.TimeUnit;
-
-/**
- * PendingRecoveryBlocks tracks recovery attempts for each block and their
- * timeouts to ensure we do not have multiple recoveries at the same time
- * and retry only after the timeout for a recovery has expired.
- */
-class PendingRecoveryBlocks {
- private static final Logger LOG = BlockManager.LOG;
-
- /** List of recovery attempts per block and the time they expire. */
- private final LightWeightHashSet<BlockRecoveryAttempt> recoveryTimeouts =
- new LightWeightHashSet<>();
-
- /** The timeout for issuing a block recovery again.
- * (it should be larger than the time to recover a block)
- */
- private long recoveryTimeoutInterval;
-
- PendingRecoveryBlocks(long timeout) {
- this.recoveryTimeoutInterval = timeout;
- }
-
- /**
- * Remove recovery attempt for the given block.
- * @param block whose recovery attempt to remove.
- */
- synchronized void remove(BlockInfo block) {
- recoveryTimeouts.remove(new BlockRecoveryAttempt(block));
- }
-
- /**
- * Checks whether a recovery attempt has been made for the given block.
- * If so, checks whether that attempt has timed out.
- * @param block block for which recovery is being attempted
- * @return true if no recovery attempt has been made or
- * the previous attempt timed out
- */
- synchronized boolean add(BlockInfo block) {
- boolean added = false;
- long curTime = getTime();
- BlockRecoveryAttempt recoveryAttempt =
- recoveryTimeouts.getElement(new BlockRecoveryAttempt(block));
-
- if (recoveryAttempt == null) {
- BlockRecoveryAttempt newAttempt = new BlockRecoveryAttempt(
- block, curTime + recoveryTimeoutInterval);
- added = recoveryTimeouts.add(newAttempt);
- } else if (recoveryAttempt.hasTimedOut(curTime)) {
- // Previous attempt timed out, reset the timeout
- recoveryAttempt.setTimeout(curTime + recoveryTimeoutInterval);
- added = true;
- } else {
- long timeoutIn = TimeUnit.MILLISECONDS.toSeconds(
- recoveryAttempt.timeoutAt - curTime);
- LOG.info("Block recovery attempt for " + block + " rejected, as the " +
- "previous attempt times out in " + timeoutIn + " seconds.");
- }
- return added;
- }
-
- /**
- * Check whether the given block is under recovery.
- * @param b block for which to check
- * @return true if the given block is being recovered
- */
- synchronized boolean isUnderRecovery(BlockInfo b) {
- BlockRecoveryAttempt recoveryAttempt =
- recoveryTimeouts.getElement(new BlockRecoveryAttempt(b));
- return recoveryAttempt != null;
- }
-
- long getTime() {
- return Time.monotonicNow();
- }
-
- @VisibleForTesting
- synchronized void setRecoveryTimeoutInterval(long recoveryTimeoutInterval) {
- this.recoveryTimeoutInterval = recoveryTimeoutInterval;
- }
-
- /**
- * Tracks timeout for block recovery attempt of a given block.
- */
- private static class BlockRecoveryAttempt {
- private final BlockInfo blockInfo;
- private long timeoutAt;
-
- private BlockRecoveryAttempt(BlockInfo blockInfo) {
- this(blockInfo, 0);
- }
-
- BlockRecoveryAttempt(BlockInfo blockInfo, long timeoutAt) {
- this.blockInfo = blockInfo;
- this.timeoutAt = timeoutAt;
- }
-
- boolean hasTimedOut(long currentTime) {
- return currentTime > timeoutAt;
- }
-
- void setTimeout(long newTimeoutAt) {
- this.timeoutAt = newTimeoutAt;
- }
-
- @Override
- public int hashCode() {
- return blockInfo.hashCode();
- }
-
- @Override
- public boolean equals(Object obj) {
- if (obj instanceof BlockRecoveryAttempt) {
- return this.blockInfo.equals(((BlockRecoveryAttempt) obj).blockInfo);
- }
- return false;
- }
- }
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/53bbef38/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 6a890e2..d3d9cdc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -3318,30 +3318,25 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
+ "Removed empty last block and closed file " + src);
return true;
}
- // Start recovery of the last block for this file
- // Only do so if there is no ongoing recovery for this block,
- // or the previous recovery for this block timed out.
- if (blockManager.addBlockRecoveryAttempt(lastBlock)) {
- long blockRecoveryId = nextGenerationStamp(
- blockManager.isLegacyBlock(lastBlock));
- if(copyOnTruncate) {
- lastBlock.setGenerationStamp(blockRecoveryId);
- } else if(truncateRecovery) {
- recoveryBlock.setGenerationStamp(blockRecoveryId);
- }
- uc.initializeBlockRecovery(lastBlock, blockRecoveryId, true);
-
- // Cannot close file right now, since the last block requires recovery.
- // This may potentially cause infinite loop in lease recovery
- // if there are no valid replicas on data-nodes.
- NameNode.stateChangeLog.warn(
- "DIR* NameSystem.internalReleaseLease: " +
- "File " + src + " has not been closed." +
- " Lease recovery is in progress. " +
- "RecoveryId = " + blockRecoveryId + " for block " + lastBlock);
- }
+ // start recovery of the last block for this file
+ long blockRecoveryId = nextGenerationStamp(
+ blockManager.isLegacyBlock(lastBlock));
lease = reassignLease(lease, src, recoveryLeaseHolder, pendingFile);
+ if(copyOnTruncate) {
+ lastBlock.setGenerationStamp(blockRecoveryId);
+ } else if(truncateRecovery) {
+ recoveryBlock.setGenerationStamp(blockRecoveryId);
+ }
+ uc.initializeBlockRecovery(lastBlock, blockRecoveryId, true);
leaseManager.renewLease(lease);
+ // Cannot close file right now, since the last block requires recovery.
+ // This may potentially cause infinite loop in lease recovery
+ // if there are no valid replicas on data-nodes.
+ NameNode.stateChangeLog.warn(
+ "DIR* NameSystem.internalReleaseLease: " +
+ "File " + src + " has not been closed." +
+ " Lease recovery is in progress. " +
+ "RecoveryId = " + blockRecoveryId + " for block " + lastBlock);
break;
}
return false;
@@ -3609,7 +3604,6 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
// If this commit does not want to close the file, persist blocks
FSDirWriteFileOp.persistBlocks(dir, src, iFile, false);
}
- blockManager.successfulBlockRecovery(storedBlock);
} finally {
writeUnlock("commitBlockSynchronization");
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/53bbef38/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingRecoveryBlocks.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingRecoveryBlocks.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingRecoveryBlocks.java
deleted file mode 100644
index baad89f..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingRecoveryBlocks.java
+++ /dev/null
@@ -1,87 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hdfs.server.blockmanagement;
-
-import org.apache.hadoop.hdfs.protocol.Block;
-import org.junit.Before;
-import org.junit.Test;
-import org.mockito.Mockito;
-
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-
-/**
- * This class contains unit tests for PendingRecoveryBlocks.java functionality.
- */
-public class TestPendingRecoveryBlocks {
-
- private PendingRecoveryBlocks pendingRecoveryBlocks;
- private final long recoveryTimeout = 1000L;
-
- private final BlockInfo blk1 = getBlock(1);
- private final BlockInfo blk2 = getBlock(2);
- private final BlockInfo blk3 = getBlock(3);
-
- @Before
- public void setUp() {
- pendingRecoveryBlocks =
- Mockito.spy(new PendingRecoveryBlocks(recoveryTimeout));
- }
-
- BlockInfo getBlock(long blockId) {
- return new BlockInfoContiguous(new Block(blockId), (short) 0);
- }
-
- @Test
- public void testAddDifferentBlocks() {
- assertTrue(pendingRecoveryBlocks.add(blk1));
- assertTrue(pendingRecoveryBlocks.isUnderRecovery(blk1));
- assertTrue(pendingRecoveryBlocks.add(blk2));
- assertTrue(pendingRecoveryBlocks.isUnderRecovery(blk2));
- assertTrue(pendingRecoveryBlocks.add(blk3));
- assertTrue(pendingRecoveryBlocks.isUnderRecovery(blk3));
- }
-
- @Test
- public void testAddAndRemoveBlocks() {
- // Add blocks
- assertTrue(pendingRecoveryBlocks.add(blk1));
- assertTrue(pendingRecoveryBlocks.add(blk2));
-
- // Remove blk1
- pendingRecoveryBlocks.remove(blk1);
-
- // Adding back blk1 should succeed
- assertTrue(pendingRecoveryBlocks.add(blk1));
- }
-
- @Test
- public void testAddBlockWithPreviousRecoveryTimedOut() {
- // Add blk
- Mockito.doReturn(0L).when(pendingRecoveryBlocks).getTime();
- assertTrue(pendingRecoveryBlocks.add(blk1));
-
- // Should fail, has not timed out yet
- Mockito.doReturn(recoveryTimeout / 2).when(pendingRecoveryBlocks).getTime();
- assertFalse(pendingRecoveryBlocks.add(blk1));
-
- // Should succeed after timing out
- Mockito.doReturn(recoveryTimeout * 2).when(pendingRecoveryBlocks).getTime();
- assertTrue(pendingRecoveryBlocks.add(blk1));
- }
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/53bbef38/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
index 208447d..311d5a6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
@@ -18,10 +18,7 @@
package org.apache.hadoop.hdfs.server.datanode;
-import org.apache.hadoop.hdfs.AppendTestUtil;
-import org.apache.hadoop.hdfs.server.namenode.NameNode;
import org.apache.hadoop.hdfs.server.protocol.SlowDiskReports;
-
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
import static org.mockito.Matchers.any;
@@ -46,7 +43,6 @@ import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
-import java.util.Random;
import java.util.concurrent.Semaphore;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
@@ -98,7 +94,6 @@ import org.apache.hadoop.hdfs.server.protocol.ReplicaRecoveryInfo;
import org.apache.hadoop.hdfs.server.protocol.StorageReport;
import org.apache.hadoop.hdfs.server.protocol.VolumeFailureSummary;
import org.apache.hadoop.test.GenericTestUtils;
-import org.apache.hadoop.test.GenericTestUtils.SleepAnswer;
import org.apache.hadoop.util.DataChecksum;
import org.apache.hadoop.util.Time;
import org.apache.log4j.Level;
@@ -1040,107 +1035,4 @@ public class TestBlockRecovery {
Assert.fail("Thread failure: " + failureReason);
}
}
-
- /**
- * Test for block recovery taking longer than the heartbeat interval.
- */
- @Test(timeout = 300000L)
- public void testRecoverySlowerThanHeartbeat() throws Exception {
- tearDown(); // Stop the Mocked DN started in startup()
-
- SleepAnswer delayer = new SleepAnswer(3000, 6000);
- testRecoveryWithDatanodeDelayed(delayer);
- }
-
- /**
- * Test for block recovery timeout. All recovery attempts will be delayed
- * and the first attempt will be lost to trigger recovery timeout and retry.
- */
- @Test(timeout = 300000L)
- public void testRecoveryTimeout() throws Exception {
- tearDown(); // Stop the Mocked DN started in startup()
- final Random r = new Random();
-
- // Make sure first commitBlockSynchronization call from the DN gets lost
- // for the recovery timeout to expire and new recovery attempt
- // to be started.
- SleepAnswer delayer = new SleepAnswer(3000) {
- private final AtomicBoolean callRealMethod = new AtomicBoolean();
-
- @Override
- public Object answer(InvocationOnMock invocation) throws Throwable {
- boolean interrupted = false;
- try {
- Thread.sleep(r.nextInt(3000) + 6000);
- } catch (InterruptedException ie) {
- interrupted = true;
- }
- try {
- if (callRealMethod.get()) {
- return invocation.callRealMethod();
- }
- callRealMethod.set(true);
- return null;
- } finally {
- if (interrupted) {
- Thread.currentThread().interrupt();
- }
- }
- }
- };
- testRecoveryWithDatanodeDelayed(delayer);
- }
-
- private void testRecoveryWithDatanodeDelayed(
- GenericTestUtils.SleepAnswer recoveryDelayer) throws Exception {
- Configuration configuration = new HdfsConfiguration();
- configuration.setLong(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);
- MiniDFSCluster cluster = null;
-
- try {
- cluster = new MiniDFSCluster.Builder(configuration)
- .numDataNodes(2).build();
- cluster.waitActive();
- final FSNamesystem ns = cluster.getNamesystem();
- final NameNode nn = cluster.getNameNode();
- final DistributedFileSystem dfs = cluster.getFileSystem();
- ns.getBlockManager().setBlockRecoveryTimeout(
- TimeUnit.SECONDS.toMillis(10));
-
- // Create a file and never close the output stream to trigger recovery
- FSDataOutputStream out = dfs.create(new Path("/testSlowRecovery"),
- (short) 2);
- out.write(AppendTestUtil.randomBytes(0, 4096));
- out.hsync();
-
- List<DataNode> dataNodes = cluster.getDataNodes();
- for (DataNode datanode : dataNodes) {
- DatanodeProtocolClientSideTranslatorPB nnSpy =
- InternalDataNodeTestUtils.spyOnBposToNN(datanode, nn);
-
- Mockito.doAnswer(recoveryDelayer).when(nnSpy).
- commitBlockSynchronization(
- Mockito.any(ExtendedBlock.class), Mockito.anyInt(),
- Mockito.anyLong(), Mockito.anyBoolean(),
- Mockito.anyBoolean(), Mockito.anyObject(),
- Mockito.anyObject());
- }
-
- // Make sure hard lease expires to trigger replica recovery
- cluster.setLeasePeriod(100L, 100L);
-
- // Wait for recovery to succeed
- GenericTestUtils.waitFor(new Supplier<Boolean>() {
- @Override
- public Boolean get() {
- return ns.getCompleteBlocksTotal() > 0;
- }
- }, 300, 300000);
-
- } finally {
- if (cluster != null) {
- cluster.shutdown();
- }
- }
- }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/53bbef38/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
index a565578..dc7f47a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
@@ -25,7 +25,6 @@ import static org.junit.Assert.fail;
import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import java.util.Random;
-import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.commons.logging.Log;
@@ -279,14 +278,12 @@ public class TestPipelinesFailover {
// Disable permissions so that another user can recover the lease.
conf.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, false);
conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, BLOCK_SIZE);
-
+
FSDataOutputStream stm = null;
final MiniDFSCluster cluster = newMiniCluster(conf, 3);
try {
cluster.waitActive();
cluster.transitionToActive(0);
- cluster.getNamesystem().getBlockManager().setBlockRecoveryTimeout(
- TimeUnit.SECONDS.toMillis(1));
Thread.sleep(500);
LOG.info("Starting with NN 0 active");
[07/50] [abbrv] hadoop git commit: YARN-6124. Make
SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin
-refreshQueues. (Zian Chen via wangda)
Posted by vi...@apache.org.
YARN-6124. Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin -refreshQueues. (Zian Chen via wangda)
Change-Id: Id93656f3af7dcd78cafa94e33663c78d410d43c2
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a63d19d3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a63d19d3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a63d19d3
Branch: refs/heads/HDFS-9806
Commit: a63d19d36520fa55bf523483f14329756f6eadd3
Parents: 0780fdb
Author: Wangda Tan <wa...@apache.org>
Authored: Thu Nov 30 15:56:53 2017 -0800
Committer: Wangda Tan <wa...@apache.org>
Committed: Thu Nov 30 15:57:22 2017 -0800
----------------------------------------------------------------------
.../server/resourcemanager/AdminService.java | 21 ++-
.../server/resourcemanager/ResourceManager.java | 31 +---
.../monitor/SchedulingMonitor.java | 3 +-
.../monitor/SchedulingMonitorManager.java | 184 +++++++++++++++++++
.../scheduler/AbstractYarnScheduler.java | 25 ++-
.../scheduler/capacity/CapacityScheduler.java | 6 +
.../scheduler/fair/FairScheduler.java | 6 +
.../scheduler/fifo/FifoScheduler.java | 6 +
.../server/resourcemanager/RMHATestBase.java | 30 ++-
.../monitor/TestSchedulingMonitor.java | 41 +++++
...estProportionalCapacityPreemptionPolicy.java | 22 ++-
.../TestCapacitySchedulerLazyPreemption.java | 36 +++-
...TestCapacitySchedulerSurgicalPreemption.java | 40 +++-
13 files changed, 391 insertions(+), 60 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a63d19d3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java
index 6c0a854..accf901 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java
@@ -400,14 +400,31 @@ public class AdminService extends CompositeService implements
}
}
+ protected Configuration loadNewConfiguration()
+ throws IOException, YarnException {
+ // Retrieve yarn-site.xml in order to refresh scheduling monitor properties.
+ Configuration conf = getConfiguration(new Configuration(false),
+ YarnConfiguration.YARN_SITE_CONFIGURATION_FILE);
+ // We call Configuration#size() because getConfiguration invokes
+ // Configuration#addResource, which invokes Configuration#reloadConfiguration.
+ // That triggers the reload lazily: properties are only reloaded when they
+ // are next needed, not immediately after getConfiguration returns. Calling
+ // Configuration#size() here forces Configuration#getProps to run, which
+ // reloads all the properties eagerly.
+ conf.size();
+ return conf;
+ }
+
@Private
public void refreshQueues() throws IOException, YarnException {
- rm.getRMContext().getScheduler().reinitialize(getConfig(),
+ Configuration conf = loadNewConfiguration();
+ rm.getRMContext().getScheduler().reinitialize(conf,
this.rm.getRMContext());
// refresh the reservation system
ReservationSystem rSystem = rm.getRMContext().getReservationSystem();
if (rSystem != null) {
- rSystem.reinitialize(getConfig(), rm.getRMContext());
+ rSystem.reinitialize(conf, rm.getRMContext());
}
}
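The lazy-reload behavior that the comment in loadNewConfiguration describes can be illustrated with a small stand-alone sketch. LazyConfig below is a hypothetical stand-in for Hadoop's Configuration, written only to show why touching the property map (as Configuration#size() does via getProps) forces the pending reload; it is not the Hadoop API.

```java
import java.util.HashMap;
import java.util.Map;

/** Toy stand-in for a lazily reloaded configuration (hypothetical, not Hadoop's Configuration). */
class LazyConfig {
    private Map<String, String> props;          // null until first read
    private final Map<String, String> backing;  // simulates yarn-site.xml on disk
    int loadCount = 0;                          // how many times properties were actually loaded

    LazyConfig(Map<String, String> backing) { this.backing = backing; }

    /** Marks the config dirty; analogous to what reloadConfiguration does. */
    void reloadConfiguration() { props = null; }

    private Map<String, String> getProps() {
        if (props == null) {        // load only on demand
            props = new HashMap<>(backing);
            loadCount++;
        }
        return props;
    }

    String get(String key) { return getProps().get(key); }

    /** Like Configuration#size(): touches getProps(), forcing the pending reload now. */
    int size() { return getProps().size(); }
}

public class LazyReloadDemo {
    public static void main(String[] args) {
        Map<String, String> disk = new HashMap<>();
        disk.put("yarn.resourcemanager.scheduler.monitor.enable", "true");

        LazyConfig conf = new LazyConfig(disk);
        conf.reloadConfiguration();          // adding a resource only marks the config dirty...
        System.out.println(conf.loadCount);  // nothing loaded yet (lazy)

        conf.size();                         // ...so size() is called to force the reload now
        System.out.println(conf.loadCount);  // properties materialized eagerly
        System.out.println(conf.get("yarn.resourcemanager.scheduler.monitor.enable"));
    }
}
```

Without the forcing call, the refreshed properties would only materialize on the next read, which may happen after the scheduler has already been reinitialized.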
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a63d19d3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
index 6f8a0a4..a0317f6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
@@ -18,6 +18,7 @@
package org.apache.hadoop.yarn.server.resourcemanager;
+import com.google.common.annotations.VisibleForTesting;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.curator.framework.AuthInfo;
@@ -67,8 +68,6 @@ import org.apache.hadoop.yarn.server.resourcemanager.metrics.NoOpSystemMetricPub
import org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher;
import org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV1Publisher;
import org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher;
-import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingEditPolicy;
-import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor;
import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMDelegatedNodeLabelsUpdater;
import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
import org.apache.hadoop.yarn.server.resourcemanager.recovery.NullRMStateStore;
@@ -113,8 +112,6 @@ import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
import org.apache.zookeeper.server.auth.DigestAuthenticationProvider;
import org.eclipse.jetty.webapp.WebAppContext;
-import com.google.common.annotations.VisibleForTesting;
-
import java.io.IOException;
import java.io.InputStream;
import java.io.PrintStream;
@@ -711,8 +708,6 @@ public class ResourceManager extends CompositeService implements Recoverable {
}
}
- createSchedulerMonitors();
-
masterService = createApplicationMasterService();
addService(masterService) ;
rmContext.setApplicationMasterService(masterService);
@@ -811,30 +806,6 @@ public class ResourceManager extends CompositeService implements Recoverable {
}
}
-
- protected void createSchedulerMonitors() {
- if (conf.getBoolean(YarnConfiguration.RM_SCHEDULER_ENABLE_MONITORS,
- YarnConfiguration.DEFAULT_RM_SCHEDULER_ENABLE_MONITORS)) {
- LOG.info("Loading policy monitors");
- List<SchedulingEditPolicy> policies = conf.getInstances(
- YarnConfiguration.RM_SCHEDULER_MONITOR_POLICIES,
- SchedulingEditPolicy.class);
- if (policies.size() > 0) {
- for (SchedulingEditPolicy policy : policies) {
- LOG.info("LOADING SchedulingEditPolicy:" + policy.getPolicyName());
- // periodically check whether we need to take action to guarantee
- // constraints
- SchedulingMonitor mon = new SchedulingMonitor(rmContext, policy);
- addService(mon);
- }
- } else {
- LOG.warn("Policy monitors configured (" +
- YarnConfiguration.RM_SCHEDULER_ENABLE_MONITORS +
- ") but none specified (" +
- YarnConfiguration.RM_SCHEDULER_MONITOR_POLICIES + ")");
- }
- }
- }
}
@Private
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a63d19d3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/SchedulingMonitor.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/SchedulingMonitor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/SchedulingMonitor.java
index 2a741ed..09edb98 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/SchedulingMonitor.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/SchedulingMonitor.java
@@ -27,7 +27,6 @@ import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.service.AbstractService;
-import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
import com.google.common.annotations.VisibleForTesting;
@@ -58,6 +57,7 @@ public class SchedulingMonitor extends AbstractService {
}
public void serviceInit(Configuration conf) throws Exception {
+ LOG.info("Initializing SchedulingMonitor=" + getName());
scheduleEditPolicy.init(conf, rmContext, rmContext.getScheduler());
this.monitorInterval = scheduleEditPolicy.getMonitoringInterval();
super.serviceInit(conf);
@@ -65,6 +65,7 @@ public class SchedulingMonitor extends AbstractService {
@Override
public void serviceStart() throws Exception {
+ LOG.info("Starting SchedulingMonitor=" + getName());
assert !stopped : "starting when already stopped";
ses = Executors.newSingleThreadScheduledExecutor(new ThreadFactory() {
public Thread newThread(Runnable r) {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a63d19d3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/SchedulingMonitorManager.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/SchedulingMonitorManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/SchedulingMonitorManager.java
new file mode 100644
index 0000000..0cc700d
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/SchedulingMonitorManager.java
@@ -0,0 +1,184 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.monitor;
+
+import com.google.common.collect.Sets;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
+import org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * Manages scheduling monitors.
+ */
+public class SchedulingMonitorManager {
+ private static final Log LOG = LogFactory.getLog(
+ SchedulingMonitorManager.class);
+
+ private Map<String, SchedulingMonitor> runningSchedulingMonitors =
+ new HashMap<>();
+ private RMContext rmContext;
+
+ private void updateSchedulingMonitors(Configuration conf,
+ boolean startImmediately) throws YarnException {
+ boolean monitorsEnabled = conf.getBoolean(
+ YarnConfiguration.RM_SCHEDULER_ENABLE_MONITORS,
+ YarnConfiguration.DEFAULT_RM_SCHEDULER_ENABLE_MONITORS);
+
+ if (!monitorsEnabled) {
+ if (!runningSchedulingMonitors.isEmpty()) {
+ // If monitors were disabled while some monitors are still running, we
+ // should stop them.
+ LOG.info("Scheduling Monitor disabled, stopping all services");
+ stopAndRemoveAll();
+ }
+
+ return;
+ }
+
+ // When monitors are enabled, load the configured policies
+ String[] configuredPolicies = conf.getStrings(
+ YarnConfiguration.RM_SCHEDULER_MONITOR_POLICIES);
+ if (configuredPolicies == null || configuredPolicies.length == 0) {
+ return;
+ }
+
+ Set<String> configurePoliciesSet = new HashSet<>();
+ for (String s : configuredPolicies) {
+ configurePoliciesSet.add(s);
+ }
+
+ // Add new monitor when needed
+ for (String s : configurePoliciesSet) {
+ if (!runningSchedulingMonitors.containsKey(s)) {
+ Class<?> policyClass;
+ try {
+ policyClass = Class.forName(s);
+ } catch (ClassNotFoundException e) {
+ String message = "Failed to find class of specified policy=" + s;
+ LOG.warn(message);
+ throw new YarnException(message);
+ }
+
+ if (SchedulingEditPolicy.class.isAssignableFrom(policyClass)) {
+ SchedulingEditPolicy policyInstance =
+ (SchedulingEditPolicy) ReflectionUtils.newInstance(policyClass,
+ null);
+ SchedulingMonitor mon = new SchedulingMonitor(rmContext,
+ policyInstance);
+ mon.init(conf);
+ if (startImmediately) {
+ mon.start();
+ }
+ runningSchedulingMonitors.put(s, mon);
+ } else {
+ String message =
+ "Specified policy=" + s + " is not a SchedulingEditPolicy class.";
+ LOG.warn(message);
+ throw new YarnException(message);
+ }
+ }
+ }
+
+ // Stop monitors whose policies are no longer configured.
+ Set<String> disabledPolicies = Sets.difference(
+ runningSchedulingMonitors.keySet(), configurePoliciesSet);
+ for (String disabledPolicy : disabledPolicies) {
+ LOG.info("SchedulingEditPolicy=" + disabledPolicy
+ + " removed, stopping it now ...");
+ silentlyStopSchedulingMonitor(disabledPolicy);
+ runningSchedulingMonitors.remove(disabledPolicy);
+ }
+ }
+
+ public synchronized void initialize(RMContext rmContext,
+ Configuration configuration) throws YarnException {
+ this.rmContext = rmContext;
+ stopAndRemoveAll();
+
+ updateSchedulingMonitors(configuration, false);
+ }
+
+ public synchronized void reinitialize(RMContext rmContext,
+ Configuration configuration) throws YarnException {
+ this.rmContext = rmContext;
+
+ updateSchedulingMonitors(configuration, true);
+ }
+
+ public synchronized void startAll() {
+ for (SchedulingMonitor schedulingMonitor : runningSchedulingMonitors
+ .values()) {
+ schedulingMonitor.start();
+ }
+ }
+
+ private void silentlyStopSchedulingMonitor(String name) {
+ SchedulingMonitor mon = runningSchedulingMonitors.get(name);
+ try {
+ mon.stop();
+ LOG.info("Successfully stopped monitor=" + mon.getName());
+ } catch (Exception e) {
+ LOG.warn("Exception while stopping monitor=" + mon.getName(), e);
+ }
+ }
+
+ private void stopAndRemoveAll() {
+ if (!runningSchedulingMonitors.isEmpty()) {
+ for (String schedulingMonitorName : runningSchedulingMonitors
+ .keySet()) {
+ silentlyStopSchedulingMonitor(schedulingMonitorName);
+ }
+ runningSchedulingMonitors.clear();
+ }
+ }
+
+ public boolean isRSMEmpty() {
+ return runningSchedulingMonitors.isEmpty();
+ }
+
+ public boolean isSameConfiguredPolicies(Set<String> configurePoliciesSet) {
+ return configurePoliciesSet.equals(runningSchedulingMonitors.keySet());
+ }
+
+ public SchedulingMonitor getAvailableSchedulingMonitor() {
+ if (isRSMEmpty()) {
+ return null;
+ }
+ for (SchedulingMonitor smon : runningSchedulingMonitors.values()) {
+ if (smon.getSchedulingEditPolicy()
+ instanceof ProportionalCapacityPreemptionPolicy) {
+ return smon;
+ }
+ }
+ return null;
+ }
+
+ public synchronized void stop() throws YarnException {
+ stopAndRemoveAll();
+ }
+}
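The reconcile step in updateSchedulingMonitors above — start monitors for newly configured policies, stop monitors whose policies were removed — can be sketched with plain java.util sets (the real code uses Guava's Sets.difference). MonitorReconciler below is a hypothetical simplification that tracks only policy names, not actual SchedulingMonitor services.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Minimal sketch of the running-vs-configured reconciliation in SchedulingMonitorManager. */
public class MonitorReconciler {
    // running monitors keyed by policy name, mirroring runningSchedulingMonitors;
    // the value stands in for the monitor's "started" flag
    private final Map<String, Boolean> running = new HashMap<>();

    /** Bring the running set in line with the configured set. */
    public void reconcile(Set<String> configured, boolean startImmediately) {
        // Add monitors for newly configured policies (init, optionally start).
        for (String policy : configured) {
            if (!running.containsKey(policy)) {
                running.put(policy, startImmediately);
            }
        }
        // Stop monitors for removed policies (Sets.difference in the real code).
        Set<String> removed = new HashSet<>(running.keySet());
        removed.removeAll(configured);
        for (String policy : removed) {
            running.remove(policy); // silentlyStopSchedulingMonitor + remove
        }
    }

    public Set<String> runningPolicies() { return new HashSet<>(running.keySet()); }

    public static void main(String[] args) {
        MonitorReconciler m = new MonitorReconciler();
        m.reconcile(new HashSet<>(Set.of("PolicyA", "PolicyB")), false);
        System.out.println(m.runningPolicies()); // A and B running
        m.reconcile(new HashSet<>(Set.of("PolicyB", "PolicyC")), true);
        System.out.println(m.runningPolicies()); // A stopped, C started
    }
}
```

This is why repeated refreshQueues calls are idempotent: monitors already running for a still-configured policy are left untouched.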
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a63d19d3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
index e818dab..4749c3d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
@@ -68,6 +68,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
import org.apache.hadoop.yarn.server.resourcemanager.RMCriticalThreadUncaughtExceptionHandler;
import org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils;
import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
+import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitorManager;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppEvent;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppEventType;
@@ -168,6 +169,8 @@ public abstract class AbstractYarnScheduler
// the NM in the next heartbeat.
private boolean autoUpdateContainers = false;
+ protected SchedulingMonitorManager schedulingMonitorManager;
+
/**
* Construct the service.
*
@@ -207,8 +210,8 @@ public abstract class AbstractYarnScheduler
new RMCriticalThreadUncaughtExceptionHandler(rmContext));
updateThread.setDaemon(true);
}
-
super.serviceInit(conf);
+
}
@Override
@@ -216,6 +219,7 @@ public abstract class AbstractYarnScheduler
if (updateThread != null) {
updateThread.start();
}
+ schedulingMonitorManager.startAll();
super.serviceStart();
}
@@ -225,6 +229,9 @@ public abstract class AbstractYarnScheduler
updateThread.interrupt();
updateThread.join(THREAD_JOIN_TIMEOUT_MS);
}
+ if (schedulingMonitorManager != null) {
+ schedulingMonitorManager.stop();
+ }
super.serviceStop();
}
@@ -233,6 +240,11 @@ public abstract class AbstractYarnScheduler
return nodeTracker;
}
+ @VisibleForTesting
+ public SchedulingMonitorManager getSchedulingMonitorManager() {
+ return schedulingMonitorManager;
+ }
+
/*
* YARN-3136 removed synchronized lock for this method for performance
* purposes
@@ -1415,4 +1427,15 @@ public abstract class AbstractYarnScheduler
updateThreadMonitor.notify();
}
}
+
+ @Override
+ public void reinitialize(Configuration conf, RMContext rmContext)
+ throws IOException {
+ try {
+ LOG.info("Reinitializing SchedulingMonitorManager ...");
+ schedulingMonitorManager.reinitialize(rmContext, conf);
+ } catch (YarnException e) {
+ throw new IOException(e);
+ }
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a63d19d3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index 218adf3..de93a6a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -62,6 +62,7 @@ import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
import org.apache.hadoop.yarn.proto.YarnServiceProtos.SchedulerResourceTypes;
import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
+import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitorManager;
import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
import org.apache.hadoop.yarn.server.resourcemanager.placement.ApplicationPlacementContext;
import org.apache.hadoop.yarn.server.resourcemanager.placement.PlacementFactory;
@@ -390,6 +391,9 @@ public class CapacityScheduler extends
Configuration configuration = new Configuration(conf);
super.serviceInit(conf);
initScheduler(configuration);
+ // Initialize SchedulingMonitorManager
+ schedulingMonitorManager = new SchedulingMonitorManager();
+ schedulingMonitorManager.initialize(rmContext, conf);
}
@Override
@@ -444,6 +448,8 @@ public class CapacityScheduler extends
// Setup how many containers we can allocate for each round
offswitchPerHeartbeatLimit = this.conf.getOffSwitchPerHeartbeatLimit();
+
+ super.reinitialize(newConf, rmContext);
} finally {
writeLock.unlock();
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a63d19d3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
index 625009d..ebc7222 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
@@ -52,6 +52,7 @@ import org.apache.hadoop.yarn.security.YarnAuthorizationProvider;
import org.apache.hadoop.yarn.server.api.protocolrecords.NMContainerStatus;
import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
import org.apache.hadoop.yarn.server.resourcemanager.RMCriticalThreadUncaughtExceptionHandler;
+import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitorManager;
import org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.RMState;
import org.apache.hadoop.yarn.server.resourcemanager.reservation.ReservationConstants;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
@@ -1352,6 +1353,10 @@ public class FairScheduler extends
public void serviceInit(Configuration conf) throws Exception {
initScheduler(conf);
super.serviceInit(conf);
+
+ // Initialize SchedulingMonitorManager
+ schedulingMonitorManager = new SchedulingMonitorManager();
+ schedulingMonitorManager.initialize(rmContext, conf);
}
@Override
@@ -1389,6 +1394,7 @@ public class FairScheduler extends
throws IOException {
try {
allocsLoader.reloadAllocations();
+ super.reinitialize(conf, rmContext);
} catch (Exception e) {
LOG.error("Failed to reload allocations file", e);
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a63d19d3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
index 3288912..826575d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
@@ -46,6 +46,7 @@ import org.apache.hadoop.yarn.factories.RecordFactory;
import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;
import org.apache.hadoop.yarn.server.api.protocolrecords.NMContainerStatus;
import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
+import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitorManager;
import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
import org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.RMState;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppEvent;
@@ -255,6 +256,10 @@ public class FifoScheduler extends
public void serviceInit(Configuration conf) throws Exception {
initScheduler(conf);
super.serviceInit(conf);
+
+ // Initialize SchedulingMonitorManager
+ schedulingMonitorManager = new SchedulingMonitorManager();
+ schedulingMonitorManager.initialize(rmContext, conf);
}
@Override
@@ -312,6 +317,7 @@ public class FifoScheduler extends
reinitialize(Configuration conf, RMContext rmContext) throws IOException
{
setConf(conf);
+ super.reinitialize(conf, rmContext);
}
@Override
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a63d19d3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/RMHATestBase.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/RMHATestBase.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/RMHATestBase.java
index 4ac4fc3..439a449 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/RMHATestBase.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/RMHATestBase.java
@@ -105,9 +105,35 @@ public abstract class RMHATestBase extends ClientBaseWithFixes{
return am;
}
+ private MockRM initMockRMWithOldConf(Configuration confForRM1) {
+ return new MockRM(confForRM1, null, false, false) {
+ @Override
+ protected AdminService createAdminService() {
+ return new AdminService(this) {
+ @Override
+ protected void startServer() {
+ // override to not start rpc handler
+ }
+
+ @Override
+ protected void stopServer() {
+ // don't do anything
+ }
+
+ @Override
+ protected Configuration loadNewConfiguration()
+ throws IOException, YarnException {
+ return confForRM1;
+ }
+ };
+ }
+ };
+ }
+
protected void startRMs() throws IOException {
- rm1 = new MockRM(confForRM1, null, false, false);
- rm2 = new MockRM(confForRM2, null, false, false);
+ rm1 = initMockRMWithOldConf(confForRM1);
+ rm2 = initMockRMWithOldConf(confForRM2);
+
startRMs(rm1, confForRM1, rm2, confForRM2);
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a63d19d3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/TestSchedulingMonitor.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/TestSchedulingMonitor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/TestSchedulingMonitor.java
index c38236d..84126c7 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/TestSchedulingMonitor.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/TestSchedulingMonitor.java
@@ -23,8 +23,15 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
import org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration;
import org.junit.Test;
+import java.util.HashSet;
+import java.util.Set;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.timeout;
import static org.mockito.Mockito.verify;
@@ -51,4 +58,38 @@ public class TestSchedulingMonitor {
monitor.close();
rm.close();
}
+
+ @Test(timeout = 10000)
+ public void testRMUpdateSchedulingEditPolicy() throws Exception {
+ CapacitySchedulerConfiguration conf = new CapacitySchedulerConfiguration();
+ conf.setClass(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class,
+ ResourceScheduler.class);
+ conf.setBoolean(YarnConfiguration.RM_SCHEDULER_ENABLE_MONITORS, true);
+ MockRM rm = new MockRM(conf);
+ rm.start();
+ CapacityScheduler cs = (CapacityScheduler) rm.getResourceScheduler();
+ SchedulingMonitorManager smm = cs.getSchedulingMonitorManager();
+
+ // runningSchedulingMonitors should not be empty after the RM scheduler
+ // monitors are initialized
+ cs.reinitialize(conf, rm.getRMContext());
+ assertFalse(smm.isRSMEmpty());
+
+ // make sure runningSchedulingMonitors contains all the policies
+ // configured in YarnConfiguration
+ String[] configuredPolicies = conf.getStrings(
+ YarnConfiguration.RM_SCHEDULER_MONITOR_POLICIES);
+ Set<String> configurePoliciesSet = new HashSet<>();
+ for (String s : configuredPolicies) {
+ configurePoliciesSet.add(s);
+ }
+ assertTrue(smm.isSameConfiguredPolicies(configurePoliciesSet));
+
+ // disable RM scheduler monitor
+ conf.setBoolean(
+ YarnConfiguration.RM_SCHEDULER_ENABLE_MONITORS,
+ YarnConfiguration.DEFAULT_RM_SCHEDULER_ENABLE_MONITORS);
+ cs.reinitialize(conf, rm.getRMContext());
+ assertTrue(smm.isRSMEmpty());
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a63d19d3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
index 694be09..f0ca466 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicy.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.yarn.event.EventHandler;
import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor;
+import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitorManager;
import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;
@@ -48,7 +49,6 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.preempti
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.ContainerPreemptEvent;
-import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEvent;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEventType;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.policy.OrderingPolicy;
import org.apache.hadoop.yarn.util.Clock;
@@ -792,21 +792,23 @@ public class TestProportionalCapacityPreemptionPolicy {
@SuppressWarnings("resource")
MockRM rm = new MockRM(conf);
rm.init(conf);
-
+
// ProportionalCapacityPreemptionPolicy should be initialized after
// CapacityScheduler initialized. We will
// 1) find SchedulingMonitor from RMActiveService's service list,
// 2) check if ResourceCalculator in policy is null or not.
// If it's not null, we can come to a conclusion that policy initialized
// after scheduler got initialized
- for (Service service : rm.getRMActiveService().getServices()) {
- if (service instanceof SchedulingMonitor) {
- ProportionalCapacityPreemptionPolicy policy =
- (ProportionalCapacityPreemptionPolicy) ((SchedulingMonitor) service)
- .getSchedulingEditPolicy();
- assertNotNull(policy.getResourceCalculator());
- return;
- }
+ // Get SchedulingMonitor from SchedulingMonitorManager instead
+ CapacityScheduler cs = (CapacityScheduler) rm.getResourceScheduler();
+ SchedulingMonitorManager smm = cs.getSchedulingMonitorManager();
+ Service service = smm.getAvailableSchedulingMonitor();
+ if (service instanceof SchedulingMonitor) {
+ ProportionalCapacityPreemptionPolicy policy =
+ (ProportionalCapacityPreemptionPolicy) ((SchedulingMonitor) service)
+ .getSchedulingEditPolicy();
+ assertNotNull(policy.getResourceCalculator());
+ return;
}
fail("Failed to find SchedulingMonitor service, please check what happened");
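The hunk above replaces a scan over the RM's active-service list with a direct lookup through the scheduler's SchedulingMonitorManager. The access pattern being swapped can be sketched in isolation as follows — all class names here are simplified stand-ins for the real YARN types, not the actual API:

```java
import java.util.Arrays;
import java.util.List;

// Simplified stand-ins for SchedulingMonitor / SchedulingMonitorManager;
// the real YARN classes carry far more state than this.
public class MonitorLookupDemo {
    interface Service { }

    static class SchedulingMonitor implements Service {
        private final String policyName;
        SchedulingMonitor(String policyName) { this.policyName = policyName; }
        String getSchedulingEditPolicy() { return policyName; }
    }

    static class OtherService implements Service { }

    // New style: the manager owns its monitors and hands one back directly.
    static class SchedulingMonitorManager {
        private final SchedulingMonitor monitor;
        SchedulingMonitorManager(SchedulingMonitor monitor) { this.monitor = monitor; }
        SchedulingMonitor getAvailableSchedulingMonitor() { return monitor; }
    }

    // Old style: walk a heterogeneous service list and instanceof-check each entry.
    static SchedulingMonitor findByScan(List<Service> services) {
        for (Service s : services) {
            if (s instanceof SchedulingMonitor) {
                return (SchedulingMonitor) s;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        SchedulingMonitor monitor =
            new SchedulingMonitor("ProportionalCapacityPreemptionPolicy");
        List<Service> services = Arrays.asList(new OtherService(), monitor);
        SchedulingMonitorManager smm = new SchedulingMonitorManager(monitor);

        // Both approaches locate the same monitor; the manager skips the scan.
        assert findByScan(services) == smm.getAvailableSchedulingMonitor();
        System.out.println(smm.getAvailableSchedulingMonitor().getSchedulingEditPolicy());
    }
}
```

The direct accessor is also what the test files below switch to, which is why every `getSchedulingEditPolicy(rm1)` call site in the remaining hunks is rewritten the same way.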
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a63d19d3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerLazyPreemption.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerLazyPreemption.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerLazyPreemption.java
index 4e4e3c2..a4c7d61 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerLazyPreemption.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerLazyPreemption.java
@@ -26,6 +26,8 @@ import org.apache.hadoop.yarn.server.resourcemanager.MockAM;
import org.apache.hadoop.yarn.server.resourcemanager.MockNM;
import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingEditPolicy;
+import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor;
+import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitorManager;
import org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy;
import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
@@ -126,7 +128,11 @@ public class TestCapacitySchedulerLazyPreemption
Resources.createResource(1 * GB), 1)), null);
// Get edit policy and do one update
- SchedulingEditPolicy editPolicy = getSchedulingEditPolicy(rm1);
+ SchedulingMonitorManager smm = ((CapacityScheduler) rm1.
+ getResourceScheduler()).getSchedulingMonitorManager();
+ SchedulingMonitor smon = smm.getAvailableSchedulingMonitor();
+ ProportionalCapacityPreemptionPolicy editPolicy =
+ (ProportionalCapacityPreemptionPolicy) smon.getSchedulingEditPolicy();
// Call edit schedule twice, and check if one container from app1 marked
// to be "killable"
@@ -209,7 +215,11 @@ public class TestCapacitySchedulerLazyPreemption
Resources.createResource(1 * GB), 1)), null);
// Get edit policy and do one update
- SchedulingEditPolicy editPolicy = getSchedulingEditPolicy(rm1);
+ SchedulingMonitorManager smm = ((CapacityScheduler) rm1.
+ getResourceScheduler()).getSchedulingMonitorManager();
+ SchedulingMonitor smon = smm.getAvailableSchedulingMonitor();
+ ProportionalCapacityPreemptionPolicy editPolicy =
+ (ProportionalCapacityPreemptionPolicy) smon.getSchedulingEditPolicy();
// Call edit schedule twice, and check if one container from app1 marked
// to be "killable"
@@ -301,7 +311,11 @@ public class TestCapacitySchedulerLazyPreemption
Resources.createResource(1 * GB), 1, false)), null);
// Get edit policy and do one update
- SchedulingEditPolicy editPolicy = getSchedulingEditPolicy(rm1);
+ SchedulingMonitorManager smm = ((CapacityScheduler) rm1.
+ getResourceScheduler()).getSchedulingMonitorManager();
+ SchedulingMonitor smon = smm.getAvailableSchedulingMonitor();
+ ProportionalCapacityPreemptionPolicy editPolicy =
+ (ProportionalCapacityPreemptionPolicy) smon.getSchedulingEditPolicy();
// Call edit schedule twice, and check if one container from app1 marked
// to be "killable"
@@ -387,8 +401,11 @@ public class TestCapacitySchedulerLazyPreemption
am2.allocate("*", 1 * GB, 1, new ArrayList<ContainerId>());
// Get edit policy and do one update
+ SchedulingMonitorManager smm = ((CapacityScheduler) rm1.
+ getResourceScheduler()).getSchedulingMonitorManager();
+ SchedulingMonitor smon = smm.getAvailableSchedulingMonitor();
ProportionalCapacityPreemptionPolicy editPolicy =
- (ProportionalCapacityPreemptionPolicy) getSchedulingEditPolicy(rm1);
+ (ProportionalCapacityPreemptionPolicy) smon.getSchedulingEditPolicy();
// Call edit schedule twice, and check if one container from app1 marked
// to be "killable"
@@ -487,8 +504,11 @@ public class TestCapacitySchedulerLazyPreemption
am2.allocate("*", 3 * GB, 1, new ArrayList<ContainerId>());
// Get edit policy and do one update
+ SchedulingMonitorManager smm = ((CapacityScheduler) rm1.
+ getResourceScheduler()).getSchedulingMonitorManager();
+ SchedulingMonitor smon = smm.getAvailableSchedulingMonitor();
ProportionalCapacityPreemptionPolicy editPolicy =
- (ProportionalCapacityPreemptionPolicy) getSchedulingEditPolicy(rm1);
+ (ProportionalCapacityPreemptionPolicy) smon.getSchedulingEditPolicy();
// Call edit schedule twice, and check if 3 container from app1 marked
// to be "killable"
@@ -582,7 +602,11 @@ public class TestCapacitySchedulerLazyPreemption
Resources.createResource(1 * GB), 1)), null);
// Get edit policy and do one update
- SchedulingEditPolicy editPolicy = getSchedulingEditPolicy(rm1);
+ SchedulingMonitorManager smm = ((CapacityScheduler) rm1.
+ getResourceScheduler()).getSchedulingMonitorManager();
+ SchedulingMonitor smon = smm.getAvailableSchedulingMonitor();
+ ProportionalCapacityPreemptionPolicy editPolicy =
+ (ProportionalCapacityPreemptionPolicy) smon.getSchedulingEditPolicy();
// Call edit schedule twice, and check if no container from app1 marked
// to be "killable"
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a63d19d3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerSurgicalPreemption.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerSurgicalPreemption.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerSurgicalPreemption.java
index 9146373..8a7e03f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerSurgicalPreemption.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacitySchedulerSurgicalPreemption.java
@@ -26,7 +26,8 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.resourcemanager.MockAM;
import org.apache.hadoop.yarn.server.resourcemanager.MockNM;
import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
-import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingEditPolicy;
+import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor;
+import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitorManager;
import org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
@@ -138,7 +139,11 @@ public class TestCapacitySchedulerSurgicalPreemption
Assert.assertNotNull(cs.getNode(nm1.getNodeId()).getReservedContainer());
// Get edit policy and do one update
- SchedulingEditPolicy editPolicy = getSchedulingEditPolicy(rm1);
+ SchedulingMonitorManager smm = ((CapacityScheduler) rm1.
+ getResourceScheduler()).getSchedulingMonitorManager();
+ SchedulingMonitor smon = smm.getAvailableSchedulingMonitor();
+ ProportionalCapacityPreemptionPolicy editPolicy =
+ (ProportionalCapacityPreemptionPolicy) smon.getSchedulingEditPolicy();
// Call edit schedule twice, and check if 4 containers from app1 at n1 killed
editPolicy.editSchedule();
@@ -217,8 +222,11 @@ public class TestCapacitySchedulerSurgicalPreemption
ApplicationAttemptId.newInstance(app2.getApplicationId(), 1));
// Call editSchedule: containers are selected to be preemption candidate
+ SchedulingMonitorManager smm = ((CapacityScheduler) rm1.
+ getResourceScheduler()).getSchedulingMonitorManager();
+ SchedulingMonitor smon = smm.getAvailableSchedulingMonitor();
ProportionalCapacityPreemptionPolicy editPolicy =
- (ProportionalCapacityPreemptionPolicy) getSchedulingEditPolicy(rm1);
+ (ProportionalCapacityPreemptionPolicy) smon.getSchedulingEditPolicy();
editPolicy.editSchedule();
Assert.assertEquals(3, editPolicy.getToPreemptContainers().size());
@@ -323,8 +331,11 @@ public class TestCapacitySchedulerSurgicalPreemption
}
// Call editSchedule immediately: containers are not selected
+ SchedulingMonitorManager smm = ((CapacityScheduler) rm1.
+ getResourceScheduler()).getSchedulingMonitorManager();
+ SchedulingMonitor smon = smm.getAvailableSchedulingMonitor();
ProportionalCapacityPreemptionPolicy editPolicy =
- (ProportionalCapacityPreemptionPolicy) getSchedulingEditPolicy(rm1);
+ (ProportionalCapacityPreemptionPolicy) smon.getSchedulingEditPolicy();
editPolicy.editSchedule();
Assert.assertEquals(0, editPolicy.getToPreemptContainers().size());
@@ -434,8 +445,11 @@ public class TestCapacitySchedulerSurgicalPreemption
cs.getNode(rmNode3.getNodeID()).getReservedContainer());
// Call editSchedule immediately: nothing happens
+ SchedulingMonitorManager smm = ((CapacityScheduler) rm1.
+ getResourceScheduler()).getSchedulingMonitorManager();
+ SchedulingMonitor smon = smm.getAvailableSchedulingMonitor();
ProportionalCapacityPreemptionPolicy editPolicy =
- (ProportionalCapacityPreemptionPolicy) getSchedulingEditPolicy(rm1);
+ (ProportionalCapacityPreemptionPolicy) smon.getSchedulingEditPolicy();
editPolicy.editSchedule();
Assert.assertNotNull(
cs.getNode(rmNode3.getNodeID()).getReservedContainer());
@@ -562,8 +576,11 @@ public class TestCapacitySchedulerSurgicalPreemption
// 6 (selected) + 1 (allocated) which makes target capacity to 70%
Thread.sleep(1000);
+ SchedulingMonitorManager smm = ((CapacityScheduler) rm1.
+ getResourceScheduler()).getSchedulingMonitorManager();
+ SchedulingMonitor smon = smm.getAvailableSchedulingMonitor();
ProportionalCapacityPreemptionPolicy editPolicy =
- (ProportionalCapacityPreemptionPolicy) getSchedulingEditPolicy(rm1);
+ (ProportionalCapacityPreemptionPolicy) smon.getSchedulingEditPolicy();
editPolicy.editSchedule();
checkNumberOfPreemptionCandidateFromApp(editPolicy, 6,
am1.getApplicationAttemptId());
@@ -715,8 +732,11 @@ public class TestCapacitySchedulerSurgicalPreemption
Thread.sleep(1000);
/* 1st container preempted is on n2 */
+ SchedulingMonitorManager smm = ((CapacityScheduler) rm1.
+ getResourceScheduler()).getSchedulingMonitorManager();
+ SchedulingMonitor smon = smm.getAvailableSchedulingMonitor();
ProportionalCapacityPreemptionPolicy editPolicy =
- (ProportionalCapacityPreemptionPolicy) getSchedulingEditPolicy(rm1);
+ (ProportionalCapacityPreemptionPolicy) smon.getSchedulingEditPolicy();
editPolicy.editSchedule();
// We should have one to-preempt container, on node[2]
@@ -887,7 +907,11 @@ public class TestCapacitySchedulerSurgicalPreemption
waitNumberOfReservedContainersFromApp(schedulerApp2, 1);
// Call editSchedule twice and allocation once, container should get allocated
- SchedulingEditPolicy editPolicy = getSchedulingEditPolicy(rm1);
+ SchedulingMonitorManager smm = ((CapacityScheduler) rm1.
+ getResourceScheduler()).getSchedulingMonitorManager();
+ SchedulingMonitor smon = smm.getAvailableSchedulingMonitor();
+ ProportionalCapacityPreemptionPolicy editPolicy =
+ (ProportionalCapacityPreemptionPolicy) smon.getSchedulingEditPolicy();
editPolicy.editSchedule();
editPolicy.editSchedule();
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[33/50] [abbrv] hadoop git commit: HDFS-12093. [READ] Share remoteFS between ProvidedReplica instances.
Posted by vi...@apache.org.
HDFS-12093. [READ] Share remoteFS between ProvidedReplica instances.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/30f2de1d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/30f2de1d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/30f2de1d
Branch: refs/heads/HDFS-9806
Commit: 30f2de1dd6f2c59b69e867bdc1134d6607b5cc28
Parents: 6fdb52d
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Mon Aug 7 14:31:15 2017 -0700
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:58 2017 -0800
----------------------------------------------------------------------
.../datanode/FinalizedProvidedReplica.java | 6 +++--
.../hdfs/server/datanode/ProvidedReplica.java | 25 +++++++++++---------
.../hdfs/server/datanode/ReplicaBuilder.java | 11 +++++++--
.../fsdataset/impl/ProvidedVolumeImpl.java | 17 +++++++++----
.../datanode/TestProvidedReplicaImpl.java | 2 +-
5 files changed, 40 insertions(+), 21 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/30f2de1d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
index 722d573..e23d6be 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hdfs.server.datanode;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
import org.apache.hadoop.hdfs.server.protocol.ReplicaRecoveryInfo;
@@ -31,8 +32,9 @@ public class FinalizedProvidedReplica extends ProvidedReplica {
public FinalizedProvidedReplica(long blockId, URI fileURI,
long fileOffset, long blockLen, long genStamp,
- FsVolumeSpi volume, Configuration conf) {
- super(blockId, fileURI, fileOffset, blockLen, genStamp, volume, conf);
+ FsVolumeSpi volume, Configuration conf, FileSystem remoteFS) {
+ super(blockId, fileURI, fileOffset, blockLen, genStamp, volume, conf,
+ remoteFS);
}
@Override
http://git-wip-us.apache.org/repos/asf/hadoop/blob/30f2de1d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
index 946ab5a..2b3bd13 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
@@ -65,16 +65,23 @@ public abstract class ProvidedReplica extends ReplicaInfo {
* @param volume the volume this block belongs to
*/
public ProvidedReplica(long blockId, URI fileURI, long fileOffset,
- long blockLen, long genStamp, FsVolumeSpi volume, Configuration conf) {
+ long blockLen, long genStamp, FsVolumeSpi volume, Configuration conf,
+ FileSystem remoteFS) {
super(volume, blockId, blockLen, genStamp);
this.fileURI = fileURI;
this.fileOffset = fileOffset;
this.conf = conf;
- try {
- this.remoteFS = FileSystem.get(fileURI, this.conf);
- } catch (IOException e) {
- LOG.warn("Failed to obtain filesystem for " + fileURI);
- this.remoteFS = null;
+ if (remoteFS != null) {
+ this.remoteFS = remoteFS;
+ } else {
+ LOG.warn(
+ "Creating a reference to the remote FS for provided block " + this);
+ try {
+ this.remoteFS = FileSystem.get(fileURI, this.conf);
+ } catch (IOException e) {
+ LOG.warn("Failed to obtain filesystem for " + fileURI);
+ this.remoteFS = null;
+ }
}
}
@@ -83,11 +90,7 @@ public abstract class ProvidedReplica extends ReplicaInfo {
this.fileURI = r.fileURI;
this.fileOffset = r.fileOffset;
this.conf = r.conf;
- try {
- this.remoteFS = FileSystem.newInstance(fileURI, this.conf);
- } catch (IOException e) {
- this.remoteFS = null;
- }
+ this.remoteFS = r.remoteFS;
}
@Override
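The change above lets every ProvidedReplica reuse one remote FileSystem handle supplied by its volume, constructing a private one only when none is passed in. The "reuse the shared instance if given, otherwise fall back to building one" shape can be sketched on its own — `RemoteFs` below is a hypothetical stand-in for `org.apache.hadoop.fs.FileSystem`, not the real class:

```java
// Minimal sketch of the shared-handle fallback in the patch above;
// RemoteFs stands in for the (expensive to construct) remote FileSystem.
public class SharedHandleDemo {
    static class RemoteFs {
        static int created = 0;      // counts expensive constructions
        RemoteFs() { created++; }
    }

    static class Replica {
        final RemoteFs remoteFs;

        // If a shared handle is provided, reuse it; otherwise take the
        // slow path and build a private one (what the patch avoids).
        Replica(RemoteFs shared) {
            this.remoteFs = (shared != null) ? shared : new RemoteFs();
        }
    }

    public static void main(String[] args) {
        RemoteFs shared = new RemoteFs();
        Replica a = new Replica(shared);
        Replica b = new Replica(shared);
        Replica c = new Replica(null);   // fallback: constructs its own

        assert a.remoteFs == b.remoteFs; // both reuse the shared handle
        assert c.remoteFs != shared;     // fallback built a separate one
        System.out.println("constructions: " + RemoteFs.created);
    }
}
```

Note the copy constructor in the hunk also changes from `FileSystem.newInstance` to plain field assignment, so a copied replica shares the same handle rather than opening another connection.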
http://git-wip-us.apache.org/repos/asf/hadoop/blob/30f2de1d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
index 639467f..c5cb6a5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hdfs.server.datanode;
import java.io.File;
import java.net.URI;
+import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.StorageType;
@@ -50,6 +51,7 @@ public class ReplicaBuilder {
private long offset;
private Configuration conf;
private FileRegion fileRegion;
+ private FileSystem remoteFS;
public ReplicaBuilder(ReplicaState state) {
volume = null;
@@ -138,6 +140,11 @@ public class ReplicaBuilder {
return this;
}
+ public ReplicaBuilder setRemoteFS(FileSystem remoteFS) {
+ this.remoteFS = remoteFS;
+ return this;
+ }
+
public LocalReplicaInPipeline buildLocalReplicaInPipeline()
throws IllegalArgumentException {
LocalReplicaInPipeline info = null;
@@ -275,14 +282,14 @@ public class ReplicaBuilder {
}
if (fileRegion == null) {
info = new FinalizedProvidedReplica(blockId, uri, offset,
- length, genStamp, volume, conf);
+ length, genStamp, volume, conf, remoteFS);
} else {
info = new FinalizedProvidedReplica(fileRegion.getBlock().getBlockId(),
fileRegion.getPath().toUri(),
fileRegion.getOffset(),
fileRegion.getBlock().getNumBytes(),
fileRegion.getBlock().getGenerationStamp(),
- volume, conf);
+ volume, conf, remoteFS);
}
return info;
}
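ReplicaBuilder threads the shared handle through with a new fluent `setRemoteFS` setter that existing call sites can simply ignore. A generic sketch of such an optional fluent field — hypothetical names, not the HDFS builder itself:

```java
// Sketch of a builder with an optional fluent field, in the style of
// the setRemoteFS addition above; names are illustrative only.
public class BuilderDemo {
    static class Replica {
        final long blockId;
        final Object remoteFs;   // may be null; build() passes it through
        Replica(long blockId, Object remoteFs) {
            this.blockId = blockId;
            this.remoteFs = remoteFs;
        }
    }

    static class ReplicaBuilder {
        private long blockId;
        private Object remoteFs; // optional: stays null unless set

        ReplicaBuilder setBlockId(long id) { this.blockId = id; return this; }
        ReplicaBuilder setRemoteFS(Object fs) { this.remoteFs = fs; return this; }
        Replica build() { return new Replica(blockId, remoteFs); }
    }

    public static void main(String[] args) {
        Object shared = new Object();
        Replica r = new ReplicaBuilder().setBlockId(7).setRemoteFS(shared).build();
        assert r.blockId == 7 && r.remoteFs == shared;

        // Callers that never set the optional field still build cleanly.
        assert new ReplicaBuilder().setBlockId(1).build().remoteFs == null;
        System.out.println("built block " + r.blockId);
    }
}
```

Keeping the field optional is what lets the patch stay backward compatible: only ProvidedVolumeImpl, which owns the shared handle, needs to call the new setter.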
http://git-wip-us.apache.org/repos/asf/hadoop/blob/30f2de1d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
index 5cd28c7..d1a7015 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
@@ -28,6 +28,7 @@ import java.util.Map.Entry;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.protocol.Block;
@@ -96,7 +97,8 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
}
public void getVolumeMap(ReplicaMap volumeMap,
- RamDiskReplicaTracker ramDiskReplicaMap) throws IOException {
+ RamDiskReplicaTracker ramDiskReplicaMap, FileSystem remoteFS)
+ throws IOException {
Iterator<FileRegion> iter = provider.iterator();
while (iter.hasNext()) {
FileRegion region = iter.next();
@@ -112,9 +114,10 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
.setGenerationStamp(region.getBlock().getGenerationStamp())
.setFsVolume(providedVolume)
.setConf(conf)
+ .setRemoteFS(remoteFS)
.build();
- // check if the replica already exists
- ReplicaInfo oldReplica = volumeMap.get(bpid, newReplica.getBlockId());
+ ReplicaInfo oldReplica =
+ volumeMap.get(bpid, newReplica.getBlockId());
if (oldReplica == null) {
volumeMap.add(bpid, newReplica);
bpVolumeMap.add(bpid, newReplica);
@@ -163,6 +166,8 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
new ConcurrentHashMap<String, ProvidedBlockPoolSlice>();
private ProvidedVolumeDF df;
+ // the remote FileSystem that this ProvidedVolume points to.
+ private FileSystem remoteFS;
ProvidedVolumeImpl(FsDatasetImpl dataset, String storageID,
StorageDirectory sd, FileIoProvider fileIoProvider,
@@ -176,6 +181,7 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
conf.getClass(DFSConfigKeys.DFS_PROVIDER_DF_CLASS,
DefaultProvidedVolumeDF.class, ProvidedVolumeDF.class);
df = ReflectionUtils.newInstance(dfClass, conf);
+ remoteFS = FileSystem.get(baseURI, conf);
}
@Override
@@ -397,7 +403,7 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
throws IOException {
LOG.info("Creating volumemap for provided volume " + this);
for(ProvidedBlockPoolSlice s : bpSlices.values()) {
- s.getVolumeMap(volumeMap, ramDiskReplicaMap);
+ s.getVolumeMap(volumeMap, ramDiskReplicaMap, remoteFS);
}
}
@@ -414,7 +420,8 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
void getVolumeMap(String bpid, ReplicaMap volumeMap,
final RamDiskReplicaTracker ramDiskReplicaMap)
throws IOException {
- getProvidedBlockPoolSlice(bpid).getVolumeMap(volumeMap, ramDiskReplicaMap);
+ getProvidedBlockPoolSlice(bpid).getVolumeMap(volumeMap, ramDiskReplicaMap,
+ remoteFS);
}
@VisibleForTesting
http://git-wip-us.apache.org/repos/asf/hadoop/blob/30f2de1d/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestProvidedReplicaImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestProvidedReplicaImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestProvidedReplicaImpl.java
index 8258c21..967e94d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestProvidedReplicaImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestProvidedReplicaImpl.java
@@ -87,7 +87,7 @@ public class TestProvidedReplicaImpl {
FILE_LEN >= (i+1)*BLK_LEN ? BLK_LEN : FILE_LEN - i*BLK_LEN;
replicas.add(
new FinalizedProvidedReplica(i, providedFile.toURI(), i*BLK_LEN,
- currentReplicaLength, 0, null, conf));
+ currentReplicaLength, 0, null, conf, null));
}
}
[04/50] [abbrv] hadoop git commit: HDFS-12594. snapshotDiff fails if the report exceeds the RPC response limit. Contributed by Shashikant Banerjee
Posted by vi...@apache.org.
HDFS-12594. snapshotDiff fails if the report exceeds the RPC response limit. Contributed by Shashikant Banerjee
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b1c7654e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b1c7654e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b1c7654e
Branch: refs/heads/HDFS-9806
Commit: b1c7654ee40b372ed777525a42981c7cf55b5c72
Parents: 5cfaee2
Author: Tsz-Wo Nicholas Sze <sz...@hortonworks.com>
Authored: Thu Nov 30 12:18:29 2017 -0800
Committer: Tsz-Wo Nicholas Sze <sz...@hortonworks.com>
Committed: Thu Nov 30 12:18:29 2017 -0800
----------------------------------------------------------------------
.../dev-support/findbugsExcludeFile.xml | 2 +
.../java/org/apache/hadoop/hdfs/DFSClient.java | 14 +-
.../org/apache/hadoop/hdfs/DFSUtilClient.java | 57 +++-
.../hadoop/hdfs/DistributedFileSystem.java | 38 ++-
.../impl/SnapshotDiffReportGenerator.java | 262 +++++++++++++++++++
.../hadoop/hdfs/protocol/ClientProtocol.java | 29 ++
.../protocol/SnapshotDiffReportListing.java | 160 +++++++++++
.../ClientNamenodeProtocolTranslatorPB.java | 24 ++
.../hadoop/hdfs/protocolPB/PBHelperClient.java | 127 +++++++++
.../src/main/proto/ClientNamenodeProtocol.proto | 12 +
.../src/main/proto/hdfs.proto | 26 ++
.../org/apache/hadoop/hdfs/DFSConfigKeys.java | 5 +
.../java/org/apache/hadoop/hdfs/DFSUtil.java | 42 +--
...tNamenodeProtocolServerSideTranslatorPB.java | 22 ++
.../federation/router/RouterRpcServer.java | 9 +
.../hdfs/server/namenode/FSDirSnapshotOp.java | 24 ++
.../hdfs/server/namenode/FSNamesystem.java | 77 +++++-
.../hdfs/server/namenode/NameNodeRpcServer.java | 13 +
.../snapshot/DirectorySnapshottableFeature.java | 136 +++++++++-
.../snapshot/SnapshotDiffListingInfo.java | 207 +++++++++++++++
.../namenode/snapshot/SnapshotManager.java | 28 ++
.../src/main/resources/hdfs-default.xml | 11 +
.../snapshot/TestSnapshotDiffReport.java | 116 ++++++++
23 files changed, 1384 insertions(+), 57 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml b/hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
index 22ef722..8e2bc94 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/dev-support/findbugsExcludeFile.xml
@@ -19,6 +19,8 @@
<Class name="org.apache.hadoop.hdfs.DFSPacket"/>
<Class name="org.apache.hadoop.hdfs.protocol.LocatedStripedBlock"/>
<Class name="org.apache.hadoop.hdfs.util.StripedBlockUtil$ChunkByteArray"/>
+ <Class name="org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing$DiffReportListingEntry"/>
+ <Class name="org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing"/>
</Or>
<Bug pattern="EI_EXPOSE_REP,EI_EXPOSE_REP2" />
</Match>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 25e0f6c..3df36d6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -139,10 +139,10 @@ import org.apache.hadoop.hdfs.protocol.QuotaByStorageTypeExceededException;
import org.apache.hadoop.hdfs.protocol.ReencryptionStatusIterator;
import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo;
import org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException;
-import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
import org.apache.hadoop.hdfs.protocol.UnresolvedPathException;
import org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;
import org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil;
import org.apache.hadoop.hdfs.protocol.datatransfer.IOStreamPair;
import org.apache.hadoop.hdfs.protocol.datatransfer.ReplaceDatanodeOnFailure;
@@ -2140,14 +2140,16 @@ public class DFSClient implements java.io.Closeable, RemotePeerFactory,
/**
* Get the difference between two snapshots, or between a snapshot and the
* current tree of a directory.
- * @see ClientProtocol#getSnapshotDiffReport(String, String, String)
+ * @see ClientProtocol#getSnapshotDiffReportListing
*/
- public SnapshotDiffReport getSnapshotDiffReport(String snapshotDir,
- String fromSnapshot, String toSnapshot) throws IOException {
+ public SnapshotDiffReportListing getSnapshotDiffReportListing(
+ String snapshotDir, String fromSnapshot, String toSnapshot,
+ byte[] startPath, int index) throws IOException {
checkOpen();
try (TraceScope ignored = tracer.newScope("getSnapshotDiffReport")) {
- return namenode.getSnapshotDiffReport(snapshotDir,
- fromSnapshot, toSnapshot);
+ return namenode
+ .getSnapshotDiffReportListing(snapshotDir, fromSnapshot, toSnapshot,
+ startPath, index);
} catch (RemoteException re) {
throw re.unwrapRemoteException();
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
index 2a8bf0d..f6b28e0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
@@ -89,6 +89,7 @@ import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
+import java.util.Arrays;
import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_DATA_TRANSFER_CLIENT_TCPNODELAY_DEFAULT;
import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_DATA_TRANSFER_CLIENT_TCPNODELAY_KEY;
@@ -124,6 +125,56 @@ public class DFSUtilClient {
return bytes2String(bytes, 0, bytes.length);
}
+ /**
+ * Splits a byte array into an array of byte arrays on the path
+ * separator byte.
+ */
+ public static byte[][] bytes2byteArray(byte[] bytes) {
+ return bytes2byteArray(bytes, bytes.length, (byte)Path.SEPARATOR_CHAR);
+ }
+ /**
+ * Splits the first len bytes of bytes into an array of byte arrays
+ * on the given separator byte.
+ * @param bytes the byte array to split
+ * @param len the number of bytes to split
+ * @param separator the delimiting byte
+ */
+ public static byte[][] bytes2byteArray(byte[] bytes, int len,
+ byte separator) {
+ Preconditions.checkPositionIndex(len, bytes.length);
+ if (len == 0) {
+ return new byte[][]{null};
+ }
+ // Count the splits. Omit multiple separators and the last one by
+ // peeking at prior byte.
+ int splits = 0;
+ for (int i = 1; i < len; i++) {
+ if (bytes[i-1] == separator && bytes[i] != separator) {
+ splits++;
+ }
+ }
+ if (splits == 0 && bytes[0] == separator) {
+ return new byte[][]{null};
+ }
+ splits++;
+ byte[][] result = new byte[splits][];
+ int nextIndex = 0;
+ // Build the splits.
+ for (int i = 0; i < splits; i++) {
+ int startIndex = nextIndex;
+ // find next separator in the bytes.
+ while (nextIndex < len && bytes[nextIndex] != separator) {
+ nextIndex++;
+ }
+ result[i] = (nextIndex > 0)
+ ? Arrays.copyOfRange(bytes, startIndex, nextIndex)
+ : DFSUtilClient.EMPTY_BYTES; // reuse empty bytes for root.
+ do { // skip over separators.
+ nextIndex++;
+ } while (nextIndex < len && bytes[nextIndex] == separator);
+ }
+ return result;
+ }
/** Return used as percentage of capacity */
public static float getPercentUsed(long used, long capacity) {
return capacity <= 0 ? 100 : (used * 100.0f)/capacity;
@@ -277,11 +328,9 @@ public class DFSUtilClient {
* Given a list of path components returns a byte array
*/
public static byte[] byteArray2bytes(byte[][] pathComponents) {
- if (pathComponents.length == 0) {
+ if (pathComponents.length == 0 || (pathComponents.length == 1
+ && (pathComponents[0] == null || pathComponents[0].length == 0))) {
return EMPTY_BYTES;
- } else if (pathComponents.length == 1
- && (pathComponents[0] == null || pathComponents[0].length == 0)) {
- return new byte[]{(byte) Path.SEPARATOR_CHAR};
}
int length = 0;
for (int i = 0; i < pathComponents.length; i++) {
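For reference, the split helper added above can be exercised in isolation. The sketch below is a simplified, standalone re-implementation of the same splitting rules (a leading separator yields an empty root component, separator runs collapse); the class name PathBytes and its methods are hypothetical illustrations, not part of the Hadoop API.

```java
import java.util.Arrays;

public class PathBytes {
  private static final byte[] EMPTY = new byte[0];

  /**
   * Split a byte path on a separator, mirroring the rules of
   * DFSUtilClient.bytes2byteArray: a leading separator becomes an empty
   * root component and runs of separators are collapsed.
   */
  public static byte[][] split(byte[] bytes, byte sep) {
    int len = bytes.length;
    if (len == 0) {
      return new byte[][]{null};
    }
    // Count the splits, peeking at the prior byte to skip separator runs.
    int splits = 0;
    for (int i = 1; i < len; i++) {
      if (bytes[i - 1] == sep && bytes[i] != sep) {
        splits++;
      }
    }
    if (splits == 0 && bytes[0] == sep) {
      return new byte[][]{null}; // the path is nothing but separators
    }
    splits++;
    byte[][] result = new byte[splits][];
    int next = 0;
    for (int i = 0; i < splits; i++) {
      int start = next;
      while (next < len && bytes[next] != sep) {
        next++;
      }
      // An empty first component stands for the root.
      result[i] = (next > 0) ? Arrays.copyOfRange(bytes, start, next) : EMPTY;
      do { // skip over separator runs
        next++;
      } while (next < len && bytes[next] == sep);
    }
    return result;
  }

  public static void main(String[] args) {
    for (byte[] part : split("/a/b//c".getBytes(), (byte) '/')) {
      System.out.println("[" + new String(part) + "]"); // [], [a], [b], [c]
    }
  }
}
```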
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
index 9db12e1..c010c8a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.hdfs;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Preconditions;
+import org.apache.commons.collections.list.TreeList;
import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
@@ -90,12 +91,16 @@ import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
import org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus;
import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing.DiffReportListingEntry;
+import org.apache.hadoop.hdfs.client.impl.SnapshotDiffReportGenerator;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.net.NetUtils;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.util.ChunkedArrayList;
import org.apache.hadoop.util.Progressable;
import javax.annotation.Nonnull;
@@ -1971,19 +1976,46 @@ public class DistributedFileSystem extends FileSystem {
}.resolve(this, absF);
}
+ private SnapshotDiffReport getSnapshotDiffReportInternal(
+ final String snapshotDir, final String fromSnapshot,
+ final String toSnapshot) throws IOException {
+ byte[] startPath = DFSUtilClient.EMPTY_BYTES;
+ int index = -1;
+ SnapshotDiffReportGenerator snapshotDiffReport;
+ List<DiffReportListingEntry> modifiedList = new TreeList();
+ List<DiffReportListingEntry> createdList = new ChunkedArrayList<>();
+ List<DiffReportListingEntry> deletedList = new ChunkedArrayList<>();
+ SnapshotDiffReportListing report;
+ do {
+ report = dfs.getSnapshotDiffReportListing(snapshotDir, fromSnapshot,
+ toSnapshot, startPath, index);
+ startPath = report.getLastPath();
+ index = report.getLastIndex();
+ modifiedList.addAll(report.getModifyList());
+ createdList.addAll(report.getCreateList());
+ deletedList.addAll(report.getDeleteList());
+ } while (!(Arrays.equals(startPath, DFSUtilClient.EMPTY_BYTES)
+ && index == -1));
+ snapshotDiffReport =
+ new SnapshotDiffReportGenerator(snapshotDir, fromSnapshot, toSnapshot,
+ report.getIsFromEarlier(), modifiedList, createdList, deletedList);
+ return snapshotDiffReport.generateReport();
+ }
+
/**
* Get the difference between two snapshots, or between a snapshot and the
* current tree of a directory.
*
- * @see DFSClient#getSnapshotDiffReport(String, String, String)
+ * @see DFSClient#getSnapshotDiffReportListing
*/
public SnapshotDiffReport getSnapshotDiffReport(final Path snapshotDir,
final String fromSnapshot, final String toSnapshot) throws IOException {
Path absF = fixRelativePart(snapshotDir);
return new FileSystemLinkResolver<SnapshotDiffReport>() {
@Override
- public SnapshotDiffReport doCall(final Path p) throws IOException {
- return dfs.getSnapshotDiffReport(getPathName(p), fromSnapshot,
+ public SnapshotDiffReport doCall(final Path p)
+ throws IOException {
+ return getSnapshotDiffReportInternal(getPathName(p), fromSnapshot,
toSnapshot);
}
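The do/while in getSnapshotDiffReportInternal above is a plain cursor-pagination loop: keep issuing calls, feeding the returned (lastPath, lastIndex) cursor back in, until the server hands back the (empty, -1) sentinel. Below is a minimal standalone sketch of that pattern; PagedSource and Page are hypothetical stand-ins for the real RPC and report types.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CursorLoop {
  static final byte[] EMPTY = new byte[0];

  /** One RPC-sized page of entries plus the cursor to resume from. */
  static class Page {
    final List<String> entries;
    final byte[] lastPath; // resume path; EMPTY once exhausted
    final int lastIndex;   // resume index; -1 once exhausted
    Page(List<String> entries, byte[] lastPath, int lastIndex) {
      this.entries = entries;
      this.lastPath = lastPath;
      this.lastIndex = lastIndex;
    }
  }

  interface PagedSource {
    Page fetch(byte[] startPath, int index);
  }

  /**
   * Accumulate every page. The start cursor (EMPTY, -1) doubles as the
   * termination sentinel, so a do/while guarantees at least one call.
   */
  static List<String> fetchAll(PagedSource src) {
    List<String> all = new ArrayList<>();
    byte[] startPath = EMPTY;
    int index = -1;
    Page page;
    do {
      page = src.fetch(startPath, index);
      all.addAll(page.entries);
      startPath = page.lastPath;
      index = page.lastIndex;
    } while (!(Arrays.equals(startPath, EMPTY) && index == -1));
    return all;
  }

  public static void main(String[] args) {
    // A fake source that serves two pages, then the sentinel cursor.
    PagedSource src = (startPath, index) -> startPath.length == 0 && index == -1
        ? new Page(Arrays.asList("a", "b"), "b".getBytes(), 0)
        : new Page(Arrays.asList("c"), EMPTY, -1);
    System.out.println(fetchAll(src)); // [a, b, c]
  }
}
```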
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/SnapshotDiffReportGenerator.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/SnapshotDiffReportGenerator.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/SnapshotDiffReportGenerator.java
new file mode 100644
index 0000000..4dbe988
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/impl/SnapshotDiffReportGenerator.java
@@ -0,0 +1,262 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.client.impl;
+
+import java.util.*;
+
+import com.google.common.primitives.SignedBytes;
+
+import org.apache.hadoop.util.ChunkedArrayList;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing.DiffReportListingEntry;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport.DiffReportEntry;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport.DiffType;
+/**
+ * This class represents to end users the difference between two snapshots of
+ * the same directory, or the difference between a snapshot of the directory and
+ * its current state. Instead of capturing all the details of the diff, this
+ * class only lists where the changes happened and their types.
+ */
+public class SnapshotDiffReportGenerator {
+ /**
+ * Compare two inodes based on their full names.
+ */
+ public static final Comparator<DiffReportListingEntry> INODE_COMPARATOR =
+ new Comparator<DiffReportListingEntry>() {
+ @Override
+ public int compare(DiffReportListingEntry left,
+ DiffReportListingEntry right) {
+ final Comparator<byte[]> cmp =
+ SignedBytes.lexicographicalComparator();
+ //source path can never be null
+ final byte[][] l = left.getSourcePath();
+ final byte[][] r = right.getSourcePath();
+ if (l.length == 1 && l[0] == null) {
+ return -1;
+ } else if (r.length == 1 && r[0] == null) {
+ return 1;
+ } else {
+ for (int i = 0; i < l.length && i < r.length; i++) {
+ final int diff = cmp.compare(l[i], r[i]);
+ if (diff != 0) {
+ return diff;
+ }
+ }
+ return l.length == r.length ? 0 : l.length > r.length ? 1 : -1;
+ }
+ }
+ };
+
+ static class RenameEntry {
+ private byte[][] sourcePath;
+ private byte[][] targetPath;
+
+ void setSource(byte[][] srcPath) {
+ this.sourcePath = srcPath;
+ }
+
+ void setTarget(byte[][] target) {
+ this.targetPath = target;
+ }
+
+ boolean isRename() {
+ return sourcePath != null && targetPath != null;
+ }
+
+ byte[][] getSourcePath() {
+ return sourcePath;
+ }
+
+ byte[][] getTargetPath() {
+ return targetPath;
+ }
+ }
+
+ /*
+ * A class representing the diff in a directory between two given snapshots,
+ * kept in two lists: createdList and deletedList.
+ */
+ static class ChildrenDiff {
+ private final List<DiffReportListingEntry> createdList;
+ private final List<DiffReportListingEntry> deletedList;
+
+ ChildrenDiff(List<DiffReportListingEntry> createdList,
+ List<DiffReportListingEntry> deletedList) {
+ this.createdList = createdList != null ? createdList :
+ Collections.emptyList();
+ this.deletedList = deletedList != null ? deletedList :
+ Collections.emptyList();
+ }
+
+ public List<DiffReportListingEntry> getCreatedList() {
+ return createdList;
+ }
+
+ public List<DiffReportListingEntry> getDeletedList() {
+ return deletedList;
+ }
+ }
+
+ /**
+ * snapshot root full path.
+ */
+ private final String snapshotRoot;
+
+ /**
+ * start point of the diff.
+ */
+ private final String fromSnapshot;
+
+ /**
+ * end point of the diff.
+ */
+ private final String toSnapshot;
+
+ /**
+ * Flag to indicate the diff is calculated from older to newer snapshot
+ * or not.
+ */
+ private final boolean isFromEarlier;
+
+ /**
+ * A map capturing the detailed difference about file creation/deletion.
+ * Each key indicates a directory inode whose children have been changed
+ * between the two snapshots, while its associated value is a
+ * {@link ChildrenDiff} storing the changes (creation/deletion) happened to
+ * the children (files).
+ */
+ private final Map<Long, ChildrenDiff> dirDiffMap =
+ new HashMap<>();
+
+ private final Map<Long, RenameEntry> renameMap =
+ new HashMap<>();
+
+ private List<DiffReportListingEntry> mlist = null;
+ private List<DiffReportListingEntry> clist = null;
+ private List<DiffReportListingEntry> dlist = null;
+
+ public SnapshotDiffReportGenerator(String snapshotRoot, String fromSnapshot,
+ String toSnapshot, boolean isFromEarlier,
+ List<DiffReportListingEntry> mlist, List<DiffReportListingEntry> clist,
+ List<DiffReportListingEntry> dlist) {
+ this.snapshotRoot = snapshotRoot;
+ this.fromSnapshot = fromSnapshot;
+ this.toSnapshot = toSnapshot;
+ this.isFromEarlier = isFromEarlier;
+ this.mlist =
+ mlist != null ? mlist : Collections.emptyList();
+ this.clist =
+ clist != null ? clist : Collections.emptyList();
+ this.dlist =
+ dlist != null ? dlist : Collections.emptyList();
+ }
+
+ private RenameEntry getEntry(long inodeId) {
+ RenameEntry entry = renameMap.get(inodeId);
+ if (entry == null) {
+ entry = new RenameEntry();
+ renameMap.put(inodeId, entry);
+ }
+ return entry;
+ }
+
+ public void generateReportList() {
+ mlist.sort(INODE_COMPARATOR);
+ for (DiffReportListingEntry created : clist) {
+ ChildrenDiff entry = dirDiffMap.get(created.getDirId());
+ if (entry == null) {
+ List<DiffReportListingEntry> createdList = new ChunkedArrayList<>();
+ createdList.add(created);
+ ChildrenDiff list = new ChildrenDiff(createdList, null);
+ dirDiffMap.put(created.getDirId(), list);
+ } else {
+ dirDiffMap.get(created.getDirId()).getCreatedList().add(created);
+ }
+ if (created.isReference()) {
+ RenameEntry renameEntry = getEntry(created.getFileId());
+ if (renameEntry.getTargetPath() != null) {
+ renameEntry.setTarget(created.getSourcePath());
+ }
+ }
+ }
+ for (DiffReportListingEntry deleted : dlist) {
+ ChildrenDiff entry = dirDiffMap.get(deleted.getDirId());
+ if (entry == null || (entry.getDeletedList().isEmpty())) {
+ ChildrenDiff list;
+ List<DiffReportListingEntry> deletedList = new ChunkedArrayList<>();
+ deletedList.add(deleted);
+ if (entry == null) {
+ list = new ChildrenDiff(null, deletedList);
+ } else {
+ list = new ChildrenDiff(entry.getCreatedList(), deletedList);
+ }
+ dirDiffMap.put(deleted.getDirId(), list);
+ } else {
+ entry.getDeletedList().add(deleted);
+ }
+ if (deleted.isReference()) {
+ RenameEntry renameEntry = getEntry(deleted.getFileId());
+ renameEntry.setTarget(deleted.getTargetPath());
+ renameEntry.setSource(deleted.getSourcePath());
+ }
+ }
+ }
+
+ public SnapshotDiffReport generateReport() {
+ List<DiffReportEntry> diffReportList = new ChunkedArrayList<>();
+ generateReportList();
+ for (DiffReportListingEntry modified : mlist) {
+ diffReportList.add(
+ new DiffReportEntry(DiffType.MODIFY, modified.getSourcePath(), null));
+ if (modified.isReference()
+ && dirDiffMap.get(modified.getDirId()) != null) {
+ List<DiffReportEntry> subList = generateReport(modified);
+ diffReportList.addAll(subList);
+ }
+ }
+ return new SnapshotDiffReport(snapshotRoot, fromSnapshot, toSnapshot,
+ diffReportList);
+ }
+
+ private List<DiffReportEntry> generateReport(
+ DiffReportListingEntry modified) {
+ List<DiffReportEntry> diffReportList = new ChunkedArrayList<>();
+ ChildrenDiff list = dirDiffMap.get(modified.getDirId());
+ for (DiffReportListingEntry created : list.getCreatedList()) {
+ RenameEntry entry = renameMap.get(created.getFileId());
+ if (entry == null || !entry.isRename()) {
+ diffReportList.add(new DiffReportEntry(
+ isFromEarlier ? DiffType.CREATE : DiffType.DELETE,
+ created.getSourcePath()));
+ }
+ }
+ for (DiffReportListingEntry deleted : list.getDeletedList()) {
+ RenameEntry entry = renameMap.get(deleted.getFileId());
+ if (entry != null && entry.isRename()) {
+ diffReportList.add(new DiffReportEntry(DiffType.RENAME,
+ isFromEarlier ? entry.getSourcePath() : entry.getTargetPath(),
+ isFromEarlier ? entry.getTargetPath() : entry.getSourcePath()));
+ } else {
+ diffReportList.add(new DiffReportEntry(
+ isFromEarlier ? DiffType.DELETE : DiffType.CREATE,
+ deleted.getSourcePath()));
+ }
+ }
+ return diffReportList;
+ }
+}
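A note on the rename handling in generateReport above: an inode that shows up as a reference in both the deleted list (its old path) and the created list (its new path) is folded into a single RENAME entry instead of a DELETE plus a CREATE. The toy sketch below illustrates that pairing by file id; Entry, Diff and RenameFold are simplified hypothetical types, not the real protocol classes.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RenameFold {
  enum DiffKind { CREATE, DELETE, RENAME }

  static class Entry {
    final long fileId; final String path;
    Entry(long fileId, String path) { this.fileId = fileId; this.path = path; }
  }

  static class Diff {
    final DiffKind kind; final String from; final String to;
    Diff(DiffKind kind, String from, String to) {
      this.kind = kind; this.from = from; this.to = to;
    }
    @Override public String toString() {
      return kind + " " + from + (to != null ? " -> " + to : "");
    }
  }

  /** Pair deletions with creations sharing a file id into RENAMEs. */
  static List<Diff> fold(List<Entry> created, List<Entry> deleted) {
    Map<Long, String> targets = new HashMap<>();
    for (Entry c : created) { // creations supply the rename targets
      targets.put(c.fileId, c.path);
    }
    List<Diff> out = new ArrayList<>();
    for (Entry d : deleted) {
      String to = targets.remove(d.fileId);
      out.add(to != null
          ? new Diff(DiffKind.RENAME, d.path, to) // seen on both sides
          : new Diff(DiffKind.DELETE, d.path, null));
    }
    for (Entry c : created) { // whatever remains is a plain creation
      if (targets.containsKey(c.fileId)) {
        out.add(new Diff(DiffKind.CREATE, c.path, null));
      }
    }
    return out;
  }

  public static void main(String[] args) {
    List<Diff> diffs = fold(
        List.of(new Entry(1, "/dir/new"), new Entry(2, "/dir/added")),
        List.of(new Entry(1, "/dir/old")));
    System.out.println(diffs); // [RENAME /dir/old -> /dir/new, CREATE /dir/added]
  }
}
```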
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
index f61ec75..eb2e11c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
@@ -1289,6 +1289,35 @@ public interface ClientProtocol {
String fromSnapshot, String toSnapshot) throws IOException;
/**
+ * Get the difference between two snapshots, or between a snapshot and the
+ * current tree of a directory.
+ *
+ * @param snapshotRoot
+ * full path of the directory where snapshots are taken
+ * @param fromSnapshot
+ * snapshot name of the from point. Null indicates the current
+ * tree
+ * @param toSnapshot
+ * snapshot name of the to point. Null indicates the current
+ * tree.
+ * @param startPath
+ * path relative to the snapshottable root directory from where the
+ * snapshotdiff computation needs to start across multiple rpc calls
+ * @param index
+ * index in the created or deleted list of the directory at which
+ * the snapshotdiff computation stopped during the last rpc call
+ * because the number of entries exceeded the snapshot diff entry
+ * limit. -1 indicates that the snapshotdiff computation needs to
+ * start right from the startPath provided.
+ * @return The difference report represented as a {@link SnapshotDiffReport}.
+ * @throws IOException on error
+ */
+ @Idempotent
+ SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotRoot,
+ String fromSnapshot, String toSnapshot, byte[] startPath, int index)
+ throws IOException;
+
+ /**
* Add a CacheDirective to the CacheManager.
*
* @param directive A CacheDirectiveInfo to be added
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReportListing.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReportListing.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReportListing.java
new file mode 100644
index 0000000..a0e35f6
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReportListing.java
@@ -0,0 +1,160 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.protocol;
+
+import java.util.Collections;
+import java.util.List;
+
+import com.google.common.base.Preconditions;
+
+import org.apache.hadoop.hdfs.DFSUtilClient;
+
+/**
+ * This class represents the difference between two snapshots of
+ * the same directory, or the difference between a snapshot of the directory and
+ * its current state. It collects the diff entries in three lists (created,
+ * deleted and modified) whose combined size per rpc call to the namenode is
+ * capped by dfs.snapshotdiff-report.limit.
+ */
+public class SnapshotDiffReportListing {
+ /**
+ * Representing the full path and diff type of a file/directory where changes
+ * have happened.
+ */
+ public static class DiffReportListingEntry {
+ /**
+ * Inode ids of the changed file/directory and its parent directory.
+ */
+ private final long fileId;
+ private final long dirId;
+ private final boolean isReference;
+ /**
+ * The relative path (with respect to the snapshot root) of 1) the
+ * file/directory where changes have happened, 2) the source file/dir of a
+ * rename op, or 3) the target file/dir of a rename op.
+ */
+ private final byte[][] sourcePath;
+ private final byte[][] targetPath;
+
+ public DiffReportListingEntry(long dirId, long fileId, byte[][] sourcePath,
+ boolean isReference, byte[][] targetPath) {
+ Preconditions.checkNotNull(sourcePath);
+ this.dirId = dirId;
+ this.fileId = fileId;
+ this.sourcePath = sourcePath;
+ this.isReference = isReference;
+ this.targetPath = targetPath;
+ }
+
+ public DiffReportListingEntry(long dirId, long fileId, byte[] sourcePath,
+ boolean isReference, byte[] targetpath) {
+ Preconditions.checkNotNull(sourcePath);
+ this.dirId = dirId;
+ this.fileId = fileId;
+ this.sourcePath = DFSUtilClient.bytes2byteArray(sourcePath);
+ this.isReference = isReference;
+ this.targetPath =
+ targetpath == null ? null : DFSUtilClient.bytes2byteArray(targetpath);
+ }
+
+ public long getDirId() {
+ return dirId;
+ }
+
+ public long getFileId() {
+ return fileId;
+ }
+
+ public byte[][] getSourcePath() {
+ return sourcePath;
+ }
+
+ public byte[][] getTargetPath() {
+ return targetPath;
+ }
+
+ public boolean isReference() {
+ return isReference;
+ }
+ }
+
+ /** Stores the starting path to process across RPCs for snapshot diff. */
+ private final byte[] lastPath;
+
+ private final int lastIndex;
+
+ private final boolean isFromEarlier;
+
+ /** list of diff. */
+ private final List<DiffReportListingEntry> modifyList;
+
+ private final List<DiffReportListingEntry> createList;
+
+ private final List<DiffReportListingEntry> deleteList;
+
+ public SnapshotDiffReportListing() {
+ this.modifyList = Collections.emptyList();
+ this.createList = Collections.emptyList();
+ this.deleteList = Collections.emptyList();
+ this.lastPath = DFSUtilClient.string2Bytes("");
+ this.lastIndex = -1;
+ this.isFromEarlier = false;
+ }
+
+ public SnapshotDiffReportListing(byte[] startPath,
+ List<DiffReportListingEntry> modifiedEntryList,
+ List<DiffReportListingEntry> createdEntryList,
+ List<DiffReportListingEntry> deletedEntryList, int index,
+ boolean isFromEarlier) {
+ this.modifyList = modifiedEntryList;
+ this.createList = createdEntryList;
+ this.deleteList = deletedEntryList;
+ this.lastPath =
+ startPath != null ? startPath : DFSUtilClient.string2Bytes("");
+ this.lastIndex = index;
+ this.isFromEarlier = isFromEarlier;
+ }
+
+ public List<DiffReportListingEntry> getModifyList() {
+ return modifyList;
+ }
+
+ public List<DiffReportListingEntry> getCreateList() {
+ return createList;
+ }
+
+ public List<DiffReportListingEntry> getDeleteList() {
+ return deleteList;
+ }
+
+ /**
+ * @return {@link #lastPath}
+ */
+ public byte[] getLastPath() {
+ return lastPath;
+ }
+
+ public int getLastIndex() {
+ return lastIndex;
+ }
+
+ public boolean getIsFromEarlier() {
+ return isFromEarlier;
+ }
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
index aef7c1e..38dc44b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
@@ -79,6 +79,7 @@ import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
import org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus;
import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
import org.apache.hadoop.hdfs.protocol.proto.AclProtos.GetAclStatusRequestProto;
import org.apache.hadoop.hdfs.protocol.proto.AclProtos.GetAclStatusResponseProto;
@@ -133,6 +134,8 @@ import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetQuo
import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetServerDefaultsRequestProto;
import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetSnapshotDiffReportRequestProto;
import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetSnapshotDiffReportResponseProto;
+import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetSnapshotDiffReportListingRequestProto;
+import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetSnapshotDiffReportListingResponseProto;
import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetSnapshottableDirListingRequestProto;
import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetSnapshottableDirListingResponseProto;
import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetStoragePoliciesRequestProto;
@@ -1206,6 +1209,27 @@ public class ClientNamenodeProtocolTranslatorPB implements
}
@Override
+ public SnapshotDiffReportListing getSnapshotDiffReportListing(
+ String snapshotRoot, String fromSnapshot, String toSnapshot,
+ byte[] startPath, int index) throws IOException {
+ GetSnapshotDiffReportListingRequestProto req =
+ GetSnapshotDiffReportListingRequestProto.newBuilder()
+ .setSnapshotRoot(snapshotRoot).setFromSnapshot(fromSnapshot)
+ .setToSnapshot(toSnapshot).setCursor(
+ HdfsProtos.SnapshotDiffReportCursorProto.newBuilder()
+ .setStartPath(PBHelperClient.getByteString(startPath))
+ .setIndex(index).build()).build();
+ try {
+ GetSnapshotDiffReportListingResponseProto result =
+ rpcProxy.getSnapshotDiffReportListing(null, req);
+
+ return PBHelperClient.convert(result.getDiffReport());
+ } catch (ServiceException e) {
+ throw ProtobufHelper.getRemoteException(e);
+ }
+ }
+
+ @Override
public long addCacheDirective(CacheDirectiveInfo directive,
EnumSet<CacheFlag> flags) throws IOException {
try {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
index d3b7f6d..fbc6dbf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
@@ -99,6 +99,8 @@ import org.apache.hadoop.hdfs.protocol.ReplicatedBlockStats;
import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo;
import org.apache.hadoop.hdfs.protocol.RollingUpgradeStatus;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing.DiffReportListingEntry;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport.DiffReportEntry;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport.DiffType;
@@ -169,6 +171,8 @@ import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.LocatedBlocksProto;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.QuotaUsageProto;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.ReencryptionInfoProto;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.RollingUpgradeStatusProto;
+import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.SnapshotDiffReportListingEntryProto;
+import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.SnapshotDiffReportListingProto;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.SnapshotDiffReportEntryProto;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.SnapshotDiffReportProto;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.SnapshottableDirectoryListingProto;
@@ -1489,6 +1493,61 @@ public class PBHelperClient {
.toByteArray() : null);
}
+ public static SnapshotDiffReportListing convert(
+ SnapshotDiffReportListingProto reportProto) {
+ if (reportProto == null) {
+ return null;
+ }
+ List<SnapshotDiffReportListingEntryProto> modifyList =
+ reportProto.getModifiedEntriesList();
+ List<DiffReportListingEntry> modifiedEntries = new ChunkedArrayList<>();
+ for (SnapshotDiffReportListingEntryProto entryProto : modifyList) {
+ DiffReportListingEntry entry = convert(entryProto);
+ if (entry != null) {
+ modifiedEntries.add(entry);
+ }
+ }
+ List<SnapshotDiffReportListingEntryProto> createList =
+ reportProto.getCreatedEntriesList();
+ List<DiffReportListingEntry> createdEntries = new ChunkedArrayList<>();
+ for (SnapshotDiffReportListingEntryProto entryProto : createList) {
+ DiffReportListingEntry entry = convert(entryProto);
+ if (entry != null) {
+ createdEntries.add(entry);
+ }
+ }
+ List<SnapshotDiffReportListingEntryProto> deletedList =
+ reportProto.getDeletedEntriesList();
+ List<DiffReportListingEntry> deletedEntries = new ChunkedArrayList<>();
+ for (SnapshotDiffReportListingEntryProto entryProto : deletedList) {
+ DiffReportListingEntry entry = convert(entryProto);
+ if (entry != null) {
+ deletedEntries.add(entry);
+ }
+ }
+ byte[] startPath = reportProto.getCursor().getStartPath().toByteArray();
+ boolean isFromEarlier = reportProto.getIsFromEarlier();
+
+ int index = reportProto.getCursor().getIndex();
+ return new SnapshotDiffReportListing(startPath, modifiedEntries,
+ createdEntries, deletedEntries, index, isFromEarlier);
+ }
+
+ public static DiffReportListingEntry convert(
+ SnapshotDiffReportListingEntryProto entry) {
+ if (entry == null) {
+ return null;
+ }
+ long dirId = entry.getDirId();
+ long fileId = entry.getFileId();
+ boolean isReference = entry.getIsReference();
+ byte[] sourceName = entry.getFullpath().toByteArray();
+ byte[] targetName =
+ entry.hasTargetPath() ? entry.getTargetPath().toByteArray() : null;
+ return new DiffReportListingEntry(dirId, fileId, sourceName, isReference,
+ targetName);
+ }
+
public static SnapshottableDirectoryStatus[] convert(
SnapshottableDirectoryListingProto sdlp) {
if (sdlp == null)
@@ -2508,6 +2567,74 @@ public class PBHelperClient {
return builder.build();
}
+ public static SnapshotDiffReportListingEntryProto convert(
+ DiffReportListingEntry entry) {
+ if (entry == null) {
+ return null;
+ }
+ ByteString sourcePath = getByteString(
+ entry.getSourcePath() == null ? DFSUtilClient.EMPTY_BYTES :
+ DFSUtilClient.byteArray2bytes(entry.getSourcePath()));
+ long dirId = entry.getDirId();
+ long fileId = entry.getFileId();
+ boolean isReference = entry.isReference();
+ ByteString targetPath = getByteString(
+ entry.getTargetPath() == null ? DFSUtilClient.EMPTY_BYTES :
+ DFSUtilClient.byteArray2bytes(entry.getTargetPath()));
+ SnapshotDiffReportListingEntryProto.Builder builder =
+ SnapshotDiffReportListingEntryProto.newBuilder().setFullpath(sourcePath)
+ .setDirId(dirId).setFileId(fileId).setIsReference(isReference)
+ .setTargetPath(targetPath);
+ return builder.build();
+ }
+
+ public static SnapshotDiffReportListingProto convert(
+ SnapshotDiffReportListing report) {
+ if (report == null) {
+ return null;
+ }
+ ByteString startPath = getByteString(
+ report.getLastPath() == null ? DFSUtilClient.EMPTY_BYTES :
+ report.getLastPath());
+ List<DiffReportListingEntry> modifiedEntries = report.getModifyList();
+ List<DiffReportListingEntry> createdEntries = report.getCreateList();
+ List<DiffReportListingEntry> deletedEntries = report.getDeleteList();
+ List<SnapshotDiffReportListingEntryProto> modifiedEntryProtos =
+ new ChunkedArrayList<>();
+ for (DiffReportListingEntry entry : modifiedEntries) {
+ SnapshotDiffReportListingEntryProto entryProto = convert(entry);
+ if (entryProto != null) {
+ modifiedEntryProtos.add(entryProto);
+ }
+ }
+ List<SnapshotDiffReportListingEntryProto> createdEntryProtos =
+ new ChunkedArrayList<>();
+ for (DiffReportListingEntry entry : createdEntries) {
+ SnapshotDiffReportListingEntryProto entryProto = convert(entry);
+ if (entryProto != null) {
+ createdEntryProtos.add(entryProto);
+ }
+ }
+ List<SnapshotDiffReportListingEntryProto> deletedEntryProtos =
+ new ChunkedArrayList<>();
+ for (DiffReportListingEntry entry : deletedEntries) {
+ SnapshotDiffReportListingEntryProto entryProto = convert(entry);
+ if (entryProto != null) {
+ deletedEntryProtos.add(entryProto);
+ }
+ }
+
+ return SnapshotDiffReportListingProto.newBuilder()
+ .addAllModifiedEntries(modifiedEntryProtos)
+ .addAllCreatedEntries(createdEntryProtos)
+ .addAllDeletedEntries(deletedEntryProtos)
+ .setIsFromEarlier(report.getIsFromEarlier())
+ .setCursor(HdfsProtos.SnapshotDiffReportCursorProto.newBuilder()
+ .setStartPath(startPath)
+ .setIndex(report.getLastIndex()).build())
+ .build();
+ }
+
public static SnapshotDiffReportProto convert(SnapshotDiffReport report) {
if (report == null) {
return null;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
index 6db6ad0..eb6da25 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
@@ -297,6 +297,16 @@ message GetSnapshotDiffReportResponseProto {
required SnapshotDiffReportProto diffReport = 1;
}
+message GetSnapshotDiffReportListingRequestProto {
+ required string snapshotRoot = 1;
+ required string fromSnapshot = 2;
+ required string toSnapshot = 3;
+ optional SnapshotDiffReportCursorProto cursor = 4;
+}
+
+message GetSnapshotDiffReportListingResponseProto {
+ required SnapshotDiffReportListingProto diffReport = 1;
+}
message RenewLeaseRequestProto {
required string clientName = 1;
}
@@ -913,6 +923,8 @@ service ClientNamenodeProtocol {
returns(DeleteSnapshotResponseProto);
rpc getSnapshotDiffReport(GetSnapshotDiffReportRequestProto)
returns(GetSnapshotDiffReportResponseProto);
+ rpc getSnapshotDiffReportListing(GetSnapshotDiffReportListingRequestProto)
+ returns(GetSnapshotDiffReportListingResponseProto);
rpc isFileClosed(IsFileClosedRequestProto)
returns(IsFileClosedResponseProto);
rpc modifyAclEntries(ModifyAclEntriesRequestProto)
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
index 953bf19..a423a4b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
@@ -529,6 +529,32 @@ message SnapshotDiffReportProto {
}
/**
+ * Snapshot diff report listing entry
+ */
+message SnapshotDiffReportListingEntryProto {
+ required bytes fullpath = 1;
+ required uint64 dirId = 2;
+ required bool isReference = 3;
+ optional bytes targetPath = 4;
+ optional uint64 fileId = 5;
+}
+
+message SnapshotDiffReportCursorProto {
+ required bytes startPath = 1;
+ required int32 index = 2 [default = -1];
+}
+/**
+ * Snapshot diff report listing
+ */
+message SnapshotDiffReportListingProto {
+ // entries of the diff, grouped into modified/created/deleted lists
+ repeated SnapshotDiffReportListingEntryProto modifiedEntries = 1;
+ repeated SnapshotDiffReportListingEntryProto createdEntries = 2;
+ repeated SnapshotDiffReportListingEntryProto deletedEntries = 3;
+ required bool isFromEarlier = 4;
+ optional SnapshotDiffReportCursorProto cursor = 5;
+}
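The cursor message above is what makes the new listing resumable across RPC calls: the server returns at most a configured number of entries per call plus a cursor, and the client feeds the cursor back until the listing is exhausted. A toy, self-contained model of that paging contract (class and method names here are illustrative, not the HDFS client API; the real cursor also carries a `startPath`, which this index-only sketch omits):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy model of the paged diff contract: a "server" slices a precomputed
// entry list into pages of at most 'limit' entries and hands back an
// index cursor; the "client" loops until the cursor reports -1 (done).
public class PagedDiff {
    public static final class Page {
        public final List<String> entries;
        public final int nextIndex;          // -1 when the listing is complete
        Page(List<String> entries, int nextIndex) {
            this.entries = entries;
            this.nextIndex = nextIndex;
        }
    }

    // Serve one page starting at 'index'; -1 means "start from the beginning",
    // mirroring the proto's [default = -1].
    public static Page serve(List<String> all, int index, int limit) {
        int start = (index < 0) ? 0 : index;
        int end = Math.min(start + limit, all.size());
        int next = (end < all.size()) ? end : -1;
        return new Page(new ArrayList<>(all.subList(start, end)), next);
    }

    // Client loop: keep feeding the returned cursor back until it is -1.
    public static List<String> fetchAll(List<String> all, int limit) {
        List<String> out = new ArrayList<>();
        int cursor = -1;
        do {
            Page p = serve(all, cursor, limit);
            out.addAll(p.entries);
            cursor = p.nextIndex;
        } while (cursor != -1);
        return out;
    }
}
```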
+/**
* Block information
*
* Please be wary of adding additional fields here, since INodeFiles
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 37071b6..97b8b1a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -381,6 +381,11 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
DFS_NAMENODE_SNAPSHOT_DIFF_ALLOW_SNAP_ROOT_DESCENDANT_DEFAULT =
true;
+ public static final String
+ DFS_NAMENODE_SNAPSHOT_DIFF_LISTING_LIMIT =
+ "dfs.namenode.snapshotdiff.listing.limit";
+ public static final int
+ DFS_NAMENODE_SNAPSHOT_DIFF_LISTING_LIMIT_DEFAULT = 1000;
// Whether to enable datanode's stale state detection and usage for reads
public static final String DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_KEY = "dfs.namenode.avoid.read.stale.datanode";
public static final boolean DFS_NAMENODE_AVOID_STALE_DATANODE_FOR_READ_DEFAULT = false;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
index 2f9781a..3f6c3d7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
@@ -349,7 +349,8 @@ public class DFSUtil {
public static byte[][] getPathComponents(String path) {
// avoid intermediate split to String[]
final byte[] bytes = string2Bytes(path);
- return bytes2byteArray(bytes, bytes.length, (byte)Path.SEPARATOR_CHAR);
+ return DFSUtilClient
+ .bytes2byteArray(bytes, bytes.length, (byte) Path.SEPARATOR_CHAR);
}
/**
@@ -369,42 +370,9 @@ public class DFSUtil {
* @param len the number of bytes to split
* @param separator the delimiting byte
*/
- public static byte[][] bytes2byteArray(byte[] bytes,
- int len,
- byte separator) {
- Preconditions.checkPositionIndex(len, bytes.length);
- if (len == 0) {
- return new byte[][]{null};
- }
- // Count the splits. Omit multiple separators and the last one by
- // peeking at prior byte.
- int splits = 0;
- for (int i = 1; i < len; i++) {
- if (bytes[i-1] == separator && bytes[i] != separator) {
- splits++;
- }
- }
- if (splits == 0 && bytes[0] == separator) {
- return new byte[][]{null};
- }
- splits++;
- byte[][] result = new byte[splits][];
- int nextIndex = 0;
- // Build the splits.
- for (int i = 0; i < splits; i++) {
- int startIndex = nextIndex;
- // find next separator in the bytes.
- while (nextIndex < len && bytes[nextIndex] != separator) {
- nextIndex++;
- }
- result[i] = (nextIndex > 0)
- ? Arrays.copyOfRange(bytes, startIndex, nextIndex)
- : DFSUtilClient.EMPTY_BYTES; // reuse empty bytes for root.
- do { // skip over separators.
- nextIndex++;
- } while (nextIndex < len && bytes[nextIndex] == separator);
- }
- return result;
+ public static byte[][] bytes2byteArray(byte[] bytes, int len,
+ byte separator) {
+ return DFSUtilClient.bytes2byteArray(bytes, len, separator);
}
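For reference, the splitting logic that this change centralizes in DFSUtilClient can be sketched as a standalone method (the class name and test strings below are illustrative, not Hadoop code; the behavior mirrors the removed body above):

```java
import java.util.Arrays;

// Standalone sketch of bytes2byteArray: split a byte path on 'separator',
// collapsing runs of separators and ignoring a trailing one; a leading
// separator yields an empty first component (the root).
public class PathSplit {
    static final byte[] EMPTY_BYTES = {};

    static byte[][] bytes2byteArray(byte[] bytes, int len, byte separator) {
        if (len == 0) {
            return new byte[][]{null};
        }
        // Count the splits, omitting multiple separators and the last one
        // by peeking at the prior byte.
        int splits = 0;
        for (int i = 1; i < len; i++) {
            if (bytes[i - 1] == separator && bytes[i] != separator) {
                splits++;
            }
        }
        if (splits == 0 && bytes[0] == separator) {
            return new byte[][]{null};  // path is all separators: root only
        }
        splits++;
        byte[][] result = new byte[splits][];
        int nextIndex = 0;
        for (int i = 0; i < splits; i++) {
            int startIndex = nextIndex;
            // Find the next separator.
            while (nextIndex < len && bytes[nextIndex] != separator) {
                nextIndex++;
            }
            result[i] = (nextIndex > 0)
                ? Arrays.copyOfRange(bytes, startIndex, nextIndex)
                : EMPTY_BYTES;  // reuse the empty array for the root
            do {  // skip over separators
                nextIndex++;
            } while (nextIndex < len && bytes[nextIndex] == separator);
        }
        return result;
    }
}
```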
/**
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
index f5bbae1..2ae41e4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
@@ -55,6 +55,7 @@ import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
import org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus;
import org.apache.hadoop.hdfs.protocol.proto.AclProtos.GetAclStatusRequestProto;
@@ -143,6 +144,8 @@ import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetSer
import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetServerDefaultsResponseProto;
import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetSnapshotDiffReportRequestProto;
import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetSnapshotDiffReportResponseProto;
+import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetSnapshotDiffReportListingRequestProto;
+import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetSnapshotDiffReportListingResponseProto;
import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetSnapshottableDirListingRequestProto;
import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetSnapshottableDirListingResponseProto;
import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetStoragePoliciesRequestProto;
@@ -1246,6 +1249,25 @@ public class ClientNamenodeProtocolServerSideTranslatorPB implements
}
@Override
+ public GetSnapshotDiffReportListingResponseProto getSnapshotDiffReportListing(
+ RpcController controller,
+ GetSnapshotDiffReportListingRequestProto request)
+ throws ServiceException {
+ try {
+ SnapshotDiffReportListing report = server
+ .getSnapshotDiffReportListing(request.getSnapshotRoot(),
+ request.getFromSnapshot(), request.getToSnapshot(),
+ request.getCursor().getStartPath().toByteArray(),
+ request.getCursor().getIndex());
+ return GetSnapshotDiffReportListingResponseProto.newBuilder()
+ .setDiffReport(PBHelperClient.convert(report)).build();
+ } catch (IOException e) {
+ throw new ServiceException(e);
+ }
+ }
+
+ @Override
public IsFileClosedResponseProto isFileClosed(
RpcController controller, IsFileClosedRequestProto request)
throws ServiceException {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
index 3bb5ca4..b5acf12 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
@@ -92,6 +92,7 @@ import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
import org.apache.hadoop.hdfs.protocol.ReplicatedBlockStats;
import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
import org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus;
import org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.ClientNamenodeProtocol;
@@ -1509,6 +1510,14 @@ public class RouterRpcServer extends AbstractService implements ClientProtocol {
}
@Override // ClientProtocol
+ public SnapshotDiffReportListing getSnapshotDiffReportListing(
+ String snapshotRoot, String earlierSnapshotName, String laterSnapshotName,
+ byte[] startPath, int index) throws IOException {
+ checkOperation(OperationCategory.READ, false);
+ return null;
+ }
+
+ @Override // ClientProtocol
public long addCacheDirective(CacheDirectiveInfo path,
EnumSet<CacheFlag> flags) throws IOException {
checkOperation(OperationCategory.WRITE, false);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
index 9dd75bc..1842707 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
@@ -24,6 +24,7 @@ import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdfs.protocol.FSLimitException;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;
import org.apache.hadoop.hdfs.protocol.SnapshotException;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
import org.apache.hadoop.hdfs.server.namenode.FSDirectory.DirOp;
@@ -164,6 +165,29 @@ class FSDirSnapshotOp {
return diffs;
}
+ static SnapshotDiffReportListing getSnapshotDiffReportListing(FSDirectory fsd,
+ SnapshotManager snapshotManager, String path, String fromSnapshot,
+ String toSnapshot, byte[] startPath, int index,
+ int snapshotDiffReportLimit) throws IOException {
+ SnapshotDiffReportListing diffs;
+ final FSPermissionChecker pc = fsd.getPermissionChecker();
+ fsd.readLock();
+ try {
+ INodesInPath iip = fsd.resolvePath(pc, path, DirOp.READ);
+ if (fsd.isPermissionEnabled()) {
+ checkSubtreeReadPermission(fsd, pc, path, fromSnapshot);
+ checkSubtreeReadPermission(fsd, pc, path, toSnapshot);
+ }
+ diffs = snapshotManager
+ .diff(iip, path, fromSnapshot, toSnapshot, startPath, index,
+ snapshotDiffReportLimit);
+ } finally {
+ fsd.readUnlock();
+ }
+ return diffs;
+ }
/** Get a collection of full snapshot paths given file and snapshot dir.
* @param lsf a list of snapshottable features
* @param file full path of the file
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index d594f2a..d3d9cdc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -87,6 +87,8 @@ import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROU
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROUP_KEY;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_REPLICATION_DEFAULT;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_REPLICATION_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_DIFF_LISTING_LIMIT;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_DIFF_LISTING_LIMIT_DEFAULT;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;
import static org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.*;
@@ -95,6 +97,8 @@ import org.apache.hadoop.hdfs.protocol.ReplicatedBlockStats;
import org.apache.hadoop.hdfs.protocol.ECBlockGroupStats;
import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
import org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
import org.apache.hadoop.hdfs.server.namenode.metrics.ReplicatedBlocksMBean;
import org.apache.hadoop.hdfs.server.protocol.SlowDiskReports;
import static org.apache.hadoop.util.Time.now;
@@ -211,7 +215,6 @@ import org.apache.hadoop.hdfs.protocol.RollingUpgradeException;
import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo;
import org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException;
import org.apache.hadoop.hdfs.protocol.SnapshotException;
-import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
import org.apache.hadoop.hdfs.protocol.datatransfer.ReplaceDatanodeOnFailure;
import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
@@ -426,6 +429,7 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
private final UserGroupInformation fsOwner;
private final String supergroup;
private final boolean standbyShouldCheckpoint;
+ private final int snapshotDiffReportLimit;
/** Interval between each check of lease to release. */
private final long leaseRecheckIntervalMs;
@@ -761,6 +765,10 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
DFS_PERMISSIONS_SUPERUSERGROUP_DEFAULT);
this.isPermissionEnabled = conf.getBoolean(DFS_PERMISSIONS_ENABLED_KEY,
DFS_PERMISSIONS_ENABLED_DEFAULT);
+ this.snapshotDiffReportLimit =
+ conf.getInt(DFS_NAMENODE_SNAPSHOT_DIFF_LISTING_LIMIT,
+ DFS_NAMENODE_SNAPSHOT_DIFF_LISTING_LIMIT_DEFAULT);
+
LOG.info("fsOwner = " + fsOwner);
LOG.info("supergroup = " + supergroup);
LOG.info("isPermissionEnabled = " + isPermissionEnabled);
@@ -6364,16 +6372,16 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
/**
* Get the difference between two snapshots (or between a snapshot and the
* current status) of a snapshottable directory.
- *
+ *
* @param path The full path of the snapshottable directory.
* @param fromSnapshot Name of the snapshot to calculate the diff from. Null
* or empty string indicates the current tree.
* @param toSnapshot Name of the snapshot to calculate the diff to. Null or
* empty string indicates the current tree.
- * @return A report about the difference between {@code fromSnapshot} and
- * {@code toSnapshot}. Modified/deleted/created/renamed files and
- * directories belonging to the snapshottable directories are listed
- * and labeled as M/-/+/R respectively.
+ * @return A report about the difference between {@code fromSnapshot} and
+ * {@code toSnapshot}. Modified/deleted/created/renamed files and
+ * directories belonging to the snapshottable directories are listed
+ * and labeled as M/-/+/R respectively.
* @throws IOException
*/
SnapshotDiffReport getSnapshotDiffReport(String path,
@@ -6403,6 +6411,63 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
toSnapshotRoot, null);
return diffs;
}
+
+ /**
+ * Get the difference between two snapshots (or between a snapshot and the
+ * current status) of a snapshottable directory.
+ *
+ * @param path The full path of the snapshottable directory.
+ * @param fromSnapshot Name of the snapshot to calculate the diff from. Null
+ * or empty string indicates the current tree.
+ * @param toSnapshot Name of the snapshot to calculate the diff to. Null or
+ * empty string indicates the current tree.
+ * @param startPath
+ * path, relative to the snapshottable root directory, from which
+ * the snapshot diff computation resumes across multiple RPC calls
+ * @param index
+ * index in the created or deleted list of the directory at which
+ * the snapshot diff computation stopped during the last RPC call
+ * because the number of entries exceeded the report limit. -1
+ * indicates that the computation needs to start right from the
+ * startPath provided.
+ * @return A partial report about the difference between {@code fromSnapshot}
+ * and {@code toSnapshot}. Modified/deleted/created/renamed files and
+ * directories belonging to the snapshottable directories are listed
+ * and labeled as M/-/+/R respectively.
+ * @throws IOException
+ */
+ SnapshotDiffReportListing getSnapshotDiffReportListing(String path,
+ String fromSnapshot, String toSnapshot, byte[] startPath, int index)
+ throws IOException {
+ final String operationName = "computeSnapshotDiff";
+ SnapshotDiffReportListing diffs = null;
+ checkOperation(OperationCategory.READ);
+ boolean success = false;
+ String fromSnapshotRoot =
+ (fromSnapshot == null || fromSnapshot.isEmpty()) ? path :
+ Snapshot.getSnapshotPath(path, fromSnapshot);
+ String toSnapshotRoot =
+ (toSnapshot == null || toSnapshot.isEmpty()) ? path :
+ Snapshot.getSnapshotPath(path, toSnapshot);
+ readLock();
+ try {
+ checkOperation(OperationCategory.READ);
+ diffs = FSDirSnapshotOp
+ .getSnapshotDiffReportListing(dir, snapshotManager, path,
+ fromSnapshot, toSnapshot, startPath, index,
+ snapshotDiffReportLimit);
+ success = true;
+ } catch (AccessControlException ace) {
+ logAuditEvent(success, operationName, fromSnapshotRoot, toSnapshotRoot,
+ null);
+ throw ace;
+ } finally {
+ readUnlock(operationName);
+ }
+ logAuditEvent(success, operationName, fromSnapshotRoot, toSnapshotRoot,
+ null);
+ return diffs;
+ }
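The method above follows the namesystem's standard audited-read pattern: take the read lock, compute under it, release in a finally block, and record the audit event with the success flag. A minimal, self-contained skeleton of that pattern (names and the in-memory audit list are illustrative; the real code audits after unlocking and rethrows AccessControlException):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Supplier;

// Skeleton of the read-lock/audit pattern: lock, compute, unlock in a
// finally block, then record "<op>:<success>" as a stand-in for
// logAuditEvent.
public class AuditedRead {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    public final List<String> auditLog = new ArrayList<>();

    public <T> T readOp(String operationName, Supplier<T> body) {
        boolean success = false;
        T result;
        lock.readLock().lock();
        try {
            result = body.get();   // the guarded computation
            success = true;
        } finally {
            lock.readLock().unlock();
            auditLog.add(operationName + ":" + success);
        }
        return result;
    }
}
```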
/**
* Delete a snapshot of a snapshottable directory
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
index 895e873..36d33a6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
@@ -121,6 +121,7 @@ import org.apache.hadoop.hdfs.protocol.ReplicatedBlockStats;
import org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus;
import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
import org.apache.hadoop.hdfs.protocol.UnregisteredNodeException;
import org.apache.hadoop.hdfs.protocol.UnresolvedPathException;
@@ -1863,6 +1864,18 @@ public class NameNodeRpcServer implements NamenodeProtocols {
}
@Override // ClientProtocol
+ public SnapshotDiffReportListing getSnapshotDiffReportListing(
+ String snapshotRoot, String earlierSnapshotName, String laterSnapshotName,
+ byte[] startPath, int index) throws IOException {
+ checkNNStartup();
+ SnapshotDiffReportListing report = namesystem
+ .getSnapshotDiffReportListing(snapshotRoot, earlierSnapshotName,
+ laterSnapshotName, startPath, index);
+ metrics.incrSnapshotDiffReportOps();
+ return report;
+ }
+
+ @Override // ClientProtocol
public long addCacheDirective(
CacheDirectiveInfo path, EnumSet<CacheFlag> flags) throws IOException {
checkNNStartup();
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
index 076b78f..217ad01 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
@@ -24,10 +24,12 @@ import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Set;
+import java.util.Arrays;
import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.hdfs.DFSUtil;
+import org.apache.hadoop.hdfs.DFSUtilClient;
import org.apache.hadoop.hdfs.protocol.QuotaExceededException;
import org.apache.hadoop.hdfs.protocol.SnapshotException;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite;
@@ -285,6 +287,54 @@ public class DirectorySnapshottableFeature extends DirectoryWithSnapshotFeature
}
/**
+ * Compute the difference between two snapshots (or a snapshot and the current
+ * directory) of the directory. The diff calculation can be scoped to either
+ * the snapshot root or any descendant directory under the snapshot root.
+ *
+ * @param snapshotRootDir the snapshot root directory
+ * @param snapshotDiffScopeDir the descendant directory under snapshot root
+ * to scope the diff calculation to.
+ * @param from The name of the start point of the comparison. Null indicating
+ * the current tree.
+ * @param to The name of the end point. Null indicating the current tree.
+ * @param startPath
+ * path, relative to the snapshottable root directory, from which
+ * the snapshot diff computation resumes across multiple RPC calls
+ * @param index
+ * index in the created or deleted list of the directory at which
+ * the snapshot diff computation stopped during the last RPC call
+ * because the number of entries exceeded the report limit. -1
+ * indicates that the computation needs to start right from the
+ * startPath provided.
+ *
+ * @return The difference between the start/end points.
+ * @throws SnapshotException If there is no snapshot matching the starting
+ * point, or if endSnapshotName is not null but cannot be identified
+ * as a previous snapshot.
+ */
+ SnapshotDiffListingInfo computeDiff(final INodeDirectory snapshotRootDir,
+ final INodeDirectory snapshotDiffScopeDir, final String from,
+ final String to, byte[] startPath, int index,
+ int snapshotDiffReportEntriesLimit) throws SnapshotException {
+ Preconditions.checkArgument(
+ snapshotDiffScopeDir.isDescendantOfSnapshotRoot(snapshotRootDir));
+ Snapshot fromSnapshot = getSnapshotByName(snapshotRootDir, from);
+ Snapshot toSnapshot = getSnapshotByName(snapshotRootDir, to);
+ boolean toProcess = Arrays.equals(startPath, DFSUtilClient.EMPTY_BYTES);
+ byte[][] resumePath = DFSUtilClient.bytes2byteArray(startPath);
+ if (from.equals(to)) {
+ return null;
+ }
+ SnapshotDiffListingInfo diffs =
+ new SnapshotDiffListingInfo(snapshotRootDir, snapshotDiffScopeDir,
+ fromSnapshot, toSnapshot, snapshotDiffReportEntriesLimit);
+ diffs.setLastIndex(index);
+ computeDiffRecursively(snapshotDiffScopeDir, snapshotDiffScopeDir,
+ new ArrayList<byte[]>(), diffs, resumePath, 0, toProcess);
+ return diffs;
+ }
+
+ /**
* Find the snapshot matching the given name.
*
* @param snapshotRoot The directory where snapshots were taken.
@@ -368,11 +418,95 @@ public class DirectorySnapshottableFeature extends DirectoryWithSnapshotFeature
}
/**
+ * Recursively compute a partial difference between snapshots under a
+ * given directory/file.
+ * @param snapshotDir The directory where snapshots were taken. Can be a
+ * snapshot root directory or any descendant directory
+ * under snapshot root directory.
+ * @param node The directory/file under which the diff is computed.
+ * @param parentPath Relative path (corresponding to the snapshot root) of
+ * the node's parent.
+ * @param diffReport data structure used to store the diff.
+ * @param resume path from which to resume the snapshot diff computation
+ * within one RPC call
+ * @param level the level of the directory tree rooted at
+ * snapshotRoot.
+ * @param processFlag whether the dir/file at which the snapshot diff
+ * computation has to start has already been processed.
+ */
+ private boolean computeDiffRecursively(final INodeDirectory snapshotDir,
+ INode node, List<byte[]> parentPath, SnapshotDiffListingInfo diffReport,
+ final byte[][] resume, int level, boolean processFlag) {
+ final Snapshot earlier = diffReport.getEarlier();
+ final Snapshot later = diffReport.getLater();
+ byte[][] relativePath = parentPath.toArray(new byte[parentPath.size()][]);
+ if (!processFlag && level == resume.length
+ && Arrays.equals(resume[resume.length - 1], node.getLocalNameBytes())) {
+ processFlag = true;
+ }
+
+ if (node.isDirectory()) {
+ final ChildrenDiff diff = new ChildrenDiff();
+ INodeDirectory dir = node.asDirectory();
+ if (processFlag) {
+ DirectoryWithSnapshotFeature sf = dir.getDirectoryWithSnapshotFeature();
+ if (sf != null) {
+ boolean change =
+ sf.computeDiffBetweenSnapshots(earlier, later, diff, dir);
+ if (change) {
+ if (!diffReport.addDirDiff(dir.getId(), relativePath, diff)) {
+ return false;
+ }
+ }
+ }
+ }
+
+ ReadOnlyList<INode> children = dir.getChildrenList(earlier.getId());
+ boolean iterate = false;
+ for (INode child : children) {
+ final byte[] name = child.getLocalNameBytes();
+ if (!processFlag && !iterate && !Arrays.equals(resume[level], name)) {
+ continue;
+ }
+ iterate = true;
+ level = level + 1;
+ boolean toProcess = diff.searchIndex(ListType.DELETED, name) < 0;
+ if (!toProcess && child instanceof INodeReference.WithName) {
+ byte[][] renameTargetPath = findRenameTargetPath(snapshotDir,
+ (WithName) child, Snapshot.getSnapshotId(later));
+ if (renameTargetPath != null) {
+ toProcess = true;
+ }
+ }
+ if (toProcess) {
+ parentPath.add(name);
+ processFlag = computeDiffRecursively(snapshotDir, child, parentPath,
+ diffReport, resume, level, processFlag);
+ parentPath.remove(parentPath.size() - 1);
+ if (!processFlag) {
+ return false;
+ }
+ }
+ }
+ } else if (node.isFile() && node.asFile().isWithSnapshot() && processFlag) {
+ INodeFile file = node.asFile();
+ boolean change = file.getFileWithSnapshotFeature()
+ .changedBetweenSnapshots(file, earlier, later);
+ if (change) {
+ if (!diffReport.addFileDiff(file, relativePath)) {
+ return false;
+ }
+ }
+ }
+ return true;
+ }
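The resume logic above (skip until the recorded node is reached, then emit until the per-call limit is hit) can be sketched in isolation. This is an illustrative model only — the real code resumes via a `byte[][]` path with one component per tree level, not a single name — but the `processFlag` idea is the same:

```java
import java.util.*;

class ResumableWalk {
    // Preorder traversal that starts emitting only after `resume` is seen
    // (processFlag in the real code) and stops once `limit` entries have been
    // produced, returning the last emitted name as the cursor for the next call.
    static String walk(Map<String, List<String>> tree, String node,
                       String resume, boolean[] process, int limit,
                       List<String> out) {
        if (process[0]) {
            if (out.size() == limit) {
                return out.get(out.size() - 1);   // limit reached: hand back cursor
            }
            out.add(node);
        } else if (node.equals(resume)) {
            process[0] = true;                    // resume point found; emit after it
        }
        for (String child : tree.getOrDefault(node, List.of())) {
            String cursor = walk(tree, child, resume, process, limit, out);
            if (cursor != null) {
                return cursor;                    // limit was hit below; unwind
            }
        }
        return null;                              // finished within the limit
    }
}
```

A first call with `process = {true}` and `resume = null` emits the first `limit` nodes; feeding the returned cursor back with `process = {false}` continues where the previous call stopped.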
+
+ /**
* We just found a deleted WithName node as the source of a rename operation.
* However, we should include it in our snapshot diff report as rename only
* if the rename target is also under the same snapshottable directory.
*/
- private byte[][] findRenameTargetPath(final INodeDirectory snapshotRoot,
+ public byte[][] findRenameTargetPath(final INodeDirectory snapshotRoot,
INodeReference.WithName wn, final int snapshotId) {
INode inode = wn.getReferredINode();
final LinkedList<byte[]> ancestors = Lists.newLinkedList();
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotDiffListingInfo.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotDiffListingInfo.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotDiffListingInfo.java
new file mode 100644
index 0000000..738aa23
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotDiffListingInfo.java
@@ -0,0 +1,207 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode.snapshot;
+
+import java.util.List;
+import java.util.ListIterator;
+
+import org.apache.hadoop.hdfs.DFSUtilClient;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing.DiffReportListingEntry;
+import org.apache.hadoop.hdfs.server.namenode.INode;
+import org.apache.hadoop.hdfs.server.namenode.INodeDirectory;
+import org.apache.hadoop.hdfs.server.namenode.INodeFile;
+import org.apache.hadoop.hdfs.server.namenode.INodeReference;
+import org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.ChildrenDiff;
+import org.apache.hadoop.hdfs.util.Diff.ListType;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.util.ChunkedArrayList;
+
+/**
+ * A class describing the difference between snapshots of a snapshottable
+ * directory where the difference is limited by
+ * dfs.namenode.snapshotdiff.listing.limit.
+ */
+class SnapshotDiffListingInfo {
+ private final int maxEntries;
+
+ /** The root directory of the snapshots. */
+ private final INodeDirectory snapshotRoot;
+ /**
+ * The scope directory under which snapshot diff is calculated.
+ */
+ private final INodeDirectory snapshotDiffScopeDir;
+ /** The starting point of the difference. */
+ private final Snapshot from;
+ /** The end point of the difference. */
+ private final Snapshot to;
+
+ /** The path of the file to start for computing the snapshot diff. */
+ private byte[] lastPath = DFSUtilClient.EMPTY_BYTES;
+
+ private int lastIndex = -1;
+
+ /*
+ * A list containing all the modified entries between the given snapshots
+ * within a single rpc call.
+ */
+ private final List<DiffReportListingEntry> modifiedList =
+ new ChunkedArrayList<>();
+
+ private final List<DiffReportListingEntry> createdList =
+ new ChunkedArrayList<>();
+
+ private final List<DiffReportListingEntry> deletedList =
+ new ChunkedArrayList<>();
+
+ SnapshotDiffListingInfo(INodeDirectory snapshotRootDir,
+ INodeDirectory snapshotDiffScopeDir, Snapshot start, Snapshot end,
+ int snapshotDiffReportLimit) {
+ Preconditions.checkArgument(
+ snapshotRootDir.isSnapshottable() && snapshotDiffScopeDir
+ .isDescendantOfSnapshotRoot(snapshotRootDir));
+ this.snapshotRoot = snapshotRootDir;
+ this.snapshotDiffScopeDir = snapshotDiffScopeDir;
+ this.from = start;
+ this.to = end;
+ this.maxEntries = snapshotDiffReportLimit;
+ }
+
+ boolean addDirDiff(long dirId, byte[][] parent, ChildrenDiff diff) {
+ final Snapshot laterSnapshot = getLater();
+ if (lastIndex == -1) {
+ if (getTotalEntries() < maxEntries) {
+ modifiedList.add(new DiffReportListingEntry(
+ dirId, dirId, parent, true, null));
+ } else {
+ setLastPath(parent);
+ setLastIndex(-1);
+ return false;
+ }
+ }
+
+ if (lastIndex == -1 || lastIndex < diff.getList(ListType.CREATED).size()) {
+ ListIterator<INode> iterator = lastIndex != -1 ?
+ diff.getList(ListType.CREATED).listIterator(lastIndex)
+ : diff.getList(ListType.CREATED).listIterator();
+ while (iterator.hasNext()) {
+ if (getTotalEntries() < maxEntries) {
+ INode created = iterator.next();
+ byte[][] path = newPath(parent, created.getLocalNameBytes());
+ createdList.add(new DiffReportListingEntry(dirId, created.getId(),
+ path, created.isReference(), null));
+ } else {
+ setLastPath(parent);
+ setLastIndex(iterator.nextIndex());
+ return false;
+ }
+ }
+ setLastIndex(-1);
+ }
+
+ if (lastIndex == -1 || lastIndex >= diff.getList(ListType.CREATED).size()) {
+ int size = diff.getList(ListType.DELETED).size();
+ ListIterator<INode> iterator = lastIndex != -1 ?
+ diff.getList(ListType.DELETED).listIterator(lastIndex - size)
+ : diff.getList(ListType.DELETED).listIterator();
+ while (iterator.hasNext()) {
+ if (getTotalEntries() < maxEntries) {
+ final INode d = iterator.next();
+ byte[][] path = newPath(parent, d.getLocalNameBytes());
+ byte[][] target = findRenameTargetPath(d, laterSnapshot);
+ final DiffReportListingEntry e = target != null ?
+ new DiffReportListingEntry(dirId, d.getId(), path, true, target) :
+ new DiffReportListingEntry(dirId, d.getId(), path, false, null);
+ deletedList.add(e);
+ } else {
+ setLastPath(parent);
+ setLastIndex(size + iterator.nextIndex());
+ return false;
+ }
+ }
+ setLastIndex(-1);
+ }
+ return true;
+ }
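`addDirDiff` packs its position into a single `lastIndex` that spans the CREATED list followed by the DELETED list. The cursor arithmetic can be sketched like this (class and method names here are illustrative, not part of the Hadoop API):

```java
// One integer cursor over two logical sections: indices below createdSize
// address the CREATED list; indices at or above it address the DELETED list.
class DiffCursor {
    static int encode(int createdSize, boolean inDeleted, int index) {
        return inDeleted ? createdSize + index : index;
    }

    static boolean isInDeleted(int cursor, int createdSize) {
        return cursor >= createdSize;
    }

    static int deletedIndex(int cursor, int createdSize) {
        return cursor - createdSize;      // offset within the DELETED list
    }
}
```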
+
+ private byte[][] findRenameTargetPath(INode deleted, Snapshot laterSnapshot) {
+ if (deleted instanceof INodeReference.WithName) {
+ return snapshotRoot.getDirectorySnapshottableFeature()
+ .findRenameTargetPath(snapshotDiffScopeDir,
+ (INodeReference.WithName) deleted,
+ Snapshot.getSnapshotId(laterSnapshot));
+ }
+ return null;
+ }
+
+ private static byte[][] newPath(byte[][] parent, byte[] name) {
+ byte[][] fullPath = new byte[parent.length + 1][];
+ System.arraycopy(parent, 0, fullPath, 0, parent.length);
+ fullPath[fullPath.length - 1] = name;
+ return fullPath;
+ }
+
+ Snapshot getEarlier() {
+ return isFromEarlier() ? from : to;
+ }
+
+ Snapshot getLater() {
+ return isFromEarlier() ? to : from;
+ }
+
+
+ public void setLastPath(byte[][] lastPath) {
+ this.lastPath = DFSUtilClient.byteArray2bytes(lastPath);
+ }
+
+ public void setLastIndex(int idx) {
+ this.lastIndex = idx;
+ }
+
+ boolean addFileDiff(INodeFile file, byte[][] relativePath) {
+ if (getTotalEntries() < maxEntries) {
+ modifiedList.add(new DiffReportListingEntry(file.getId(),
+ file.getId(), relativePath, false, null));
+ } else {
+ setLastPath(relativePath);
+ return false;
+ }
+ return true;
+ }
+ /** @return True if {@link #from} is earlier than {@link #to} */
+ boolean isFromEarlier() {
+ return Snapshot.ID_COMPARATOR.compare(from, to) < 0;
+ }
+
+
+ private int getTotalEntries() {
+ return createdList.size() + modifiedList.size() + deletedList.size();
+ }
+
+ /**
+ * Generate a {@link SnapshotDiffReportListing} based on detailed diff
+ * information.
+ *
+ * @return A {@link SnapshotDiffReportListing} describing the difference
+ */
+ public SnapshotDiffReportListing generateReport() {
+ return new SnapshotDiffReportListing(lastPath, modifiedList, createdList,
+ deletedList, lastIndex, isFromEarlier());
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
index 58a218e..87985de 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
@@ -44,6 +44,7 @@ import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdfs.DFSUtilClient;
import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
+import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;
import org.apache.hadoop.hdfs.protocol.SnapshotException;
import org.apache.hadoop.hdfs.protocol.SnapshotInfo;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
@@ -466,6 +467,33 @@ public class SnapshotManager implements SnapshotStatsMXBean {
return diffs != null ? diffs.generateReport() : new SnapshotDiffReport(
snapshotPath, from, to, Collections.<DiffReportEntry> emptyList());
}
+
+ /**
+ * Compute the partial difference between two snapshots of a directory,
+ * or between a snapshot of the directory and its current tree.
+ */
+ public SnapshotDiffReportListing diff(final INodesInPath iip,
+ final String snapshotPath, final String from, final String to,
+ byte[] startPath, int index, int snapshotDiffReportLimit)
+ throws IOException {
+ // Find the source root directory path where the snapshots were taken.
+ // All path checks are included in the valueOf method.
+ INodeDirectory snapshotRootDir;
+ if (this.snapshotDiffAllowSnapRootDescendant) {
+ snapshotRootDir = getSnapshottableAncestorDir(iip);
+ } else {
+ snapshotRootDir = getSnapshottableRoot(iip);
+ }
+ Preconditions.checkNotNull(snapshotRootDir);
+ INodeDirectory snapshotDescendantDir = INodeDirectory.valueOf(
+ iip.getLastINode(), snapshotPath);
+ final SnapshotDiffListingInfo diffs =
+ snapshotRootDir.getDirectorySnapshottableFeature()
+ .computeDiff(snapshotRootDir, snapshotDescendantDir, from, to,
+ startPath, index, snapshotDiffReportLimit);
+ return diffs != null ? diffs.generateReport() :
+ new SnapshotDiffReportListing();
+ }
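The new `diff` overload returns one page of entries per RPC; a caller is expected to loop, feeding the returned cursor back in. A toy model of that client loop (illustrative names; the real cursor is the lastPath/lastIndex pair carried by SnapshotDiffReportListing):

```java
import java.util.*;

class PagedDiff {
    // "Server" side: emit at most `limit` entries starting at `start`;
    // return the next cursor, or -1 to signal the final page.
    static int page(List<String> all, int start, int limit, List<String> out) {
        int end = Math.min(start + limit, all.size());
        out.addAll(all.subList(start, end));
        return end < all.size() ? end : -1;
    }

    // "Client" side: keep calling until the cursor says the diff is exhausted.
    static List<String> fetchAll(List<String> all, int limit) {
        List<String> result = new ArrayList<>();
        int cursor = 0;
        while (cursor != -1) {
            cursor = page(all, cursor, limit, result);
        }
        return result;
    }
}
```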
public void clearSnapshottableDirs() {
snapshottables.clear();
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 79c2d8e..dedf987 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -4333,6 +4333,17 @@
</property>
<property>
+ <name>dfs.namenode.snapshotdiff.listing.limit</name>
+ <value>1000</value>
+ <description>
+ Limit the number of entries generated by getSnapshotDiffReportListing within
+ one rpc call to the namenode. If less than or equal to zero, at most
+ DFS_NAMENODE_SNAPSHOT_DIFF_LISTING_LIMIT_DEFAULT (= 1000) entries will be
+ sent to the client within one rpc call.
+ </description>
+</property>
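For reference, an operator overriding this default would add something like the following to hdfs-site.xml (the value shown is illustrative):

```xml
<property>
  <name>dfs.namenode.snapshotdiff.listing.limit</name>
  <value>500</value>
</property>
```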
+
+<property>
<name>dfs.pipeline.ecn</name>
<value>false</value>
<description>
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[48/50] [abbrv] hadoop git commit: HDFS-12776. [READ] Increasing replication for PROVIDED files should create local replicas
Posted by vi...@apache.org.
HDFS-12776. [READ] Increasing replication for PROVIDED files should create local replicas
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5baee3d5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5baee3d5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5baee3d5
Branch: refs/heads/HDFS-9806
Commit: 5baee3d56c29bcb88b9d8965c95e01ff7c02694b
Parents: 3ed1348
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Thu Nov 9 13:03:41 2017 -0800
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:59 2017 -0800
----------------------------------------------------------------------
.../hdfs/server/blockmanagement/BlockInfo.java | 7 ++--
.../datanode/fsdataset/impl/FsDatasetImpl.java | 25 +++++++++++---
.../TestNameNodeProvidedImplementation.java | 36 +++++++++++---------
3 files changed, 45 insertions(+), 23 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5baee3d5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
index eb09b7b..8f59df6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
@@ -187,20 +187,23 @@ public abstract class BlockInfo extends Block
*/
DatanodeStorageInfo findStorageInfo(DatanodeDescriptor dn) {
int len = getCapacity();
+ DatanodeStorageInfo providedStorageInfo = null;
for(int idx = 0; idx < len; idx++) {
DatanodeStorageInfo cur = getStorageInfo(idx);
if(cur != null) {
if (cur.getStorageType() == StorageType.PROVIDED) {
//if block resides on provided storage, only match the storage ids
if (dn.getStorageInfo(cur.getStorageID()) != null) {
- return cur;
+ // do not return here as we have to check the other
+ // DatanodeStorageInfos for this block which could be local
+ providedStorageInfo = cur;
}
} else if (cur.getDatanodeDescriptor() == dn) {
return cur;
}
}
}
- return null;
+ return providedStorageInfo;
}
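The change above stops `findStorageInfo` from returning early on a PROVIDED match, so a replica on the datanode's own local storage can still win. A minimal sketch of that selection rule (storages modeled as `{type, datanodeId}` pairs for illustration; the real code matches storage IDs, not datanode IDs, for PROVIDED):

```java
import java.util.*;

class ReplicaChoice {
    // Prefer a local non-PROVIDED replica; remember a PROVIDED match only as
    // a fallback instead of returning it immediately.
    static String find(List<String[]> storages, String dn) {
        String provided = null;
        for (String[] s : storages) {
            if ("PROVIDED".equals(s[0])) {
                if (dn.equals(s[1])) {
                    provided = "PROVIDED";   // keep scanning for a local replica
                }
            } else if (dn.equals(s[1])) {
                return s[0];                 // local replica wins outright
            }
        }
        return provided;                     // fallback; null if no match at all
    }
}
```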
/**
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5baee3d5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index 81056db..82394f5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -1510,6 +1510,13 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
}
}
+ private boolean isReplicaProvided(ReplicaInfo replicaInfo) {
+ if (replicaInfo == null) {
+ return false;
+ }
+ return replicaInfo.getVolume().getStorageType() == StorageType.PROVIDED;
+ }
+
@Override // FsDatasetSpi
public ReplicaHandler createTemporary(StorageType storageType,
String storageId, ExtendedBlock b, boolean isTransfer)
@@ -1528,12 +1535,14 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
isInPipeline = currentReplicaInfo.getState() == ReplicaState.TEMPORARY
|| currentReplicaInfo.getState() == ReplicaState.RBW;
/*
- * If the current block is old, reject.
+ * If the current block is not PROVIDED and old, reject.
* else If transfer request, then accept it.
* else if state is not RBW/Temporary, then reject
+ * If current block is PROVIDED, ignore the replica.
*/
- if ((currentReplicaInfo.getGenerationStamp() >= b.getGenerationStamp())
- || (!isTransfer && !isInPipeline)) {
+ if (((currentReplicaInfo.getGenerationStamp() >= b
+ .getGenerationStamp()) || (!isTransfer && !isInPipeline))
+ && !isReplicaProvided(currentReplicaInfo)) {
throw new ReplicaAlreadyExistsException("Block " + b
+ " already exists in state " + currentReplicaInfo.getState()
+ " and thus cannot be created.");
@@ -1553,11 +1562,17 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
+ " after " + writerStopMs + " milliseconds.");
}
+ // if lastFoundReplicaInfo is PROVIDED and FINALIZED,
+ // stopWriter isn't required.
+ if (isReplicaProvided(lastFoundReplicaInfo) &&
+ lastFoundReplicaInfo.getState() == ReplicaState.FINALIZED) {
+ continue;
+ }
// Stop the previous writer
((ReplicaInPipeline)lastFoundReplicaInfo).stopWriter(writerStopTimeoutMs);
} while (true);
-
- if (lastFoundReplicaInfo != null) {
+ if (lastFoundReplicaInfo != null
+ && !isReplicaProvided(lastFoundReplicaInfo)) {
// Old blockfile should be deleted synchronously as it might collide
// with the new block if allocated in same volume.
// Do the deletion outside of lock as its DISK IO.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5baee3d5/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index f0303b5..1f6aebb 100644
--- a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -401,33 +401,37 @@ public class TestNameNodeProvidedImplementation {
public void testSetReplicationForProvidedFiles() throws Exception {
createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
FixedBlockResolver.class);
- startCluster(NNDIRPATH, 2, null,
- new StorageType[][]{
- {StorageType.PROVIDED},
- {StorageType.DISK}},
+ // 10 Datanodes with both DISK and PROVIDED storage
+ startCluster(NNDIRPATH, 10,
+ new StorageType[]{
+ StorageType.PROVIDED, StorageType.DISK},
+ null,
false);
String filename = "/" + filePrefix + (numFiles - 1) + fileSuffix;
Path file = new Path(filename);
FileSystem fs = cluster.getFileSystem();
- //set the replication to 2, and test that the file has
- //the required replication.
- fs.setReplication(file, (short) 2);
+ // set the replication to 4, and test that the file has
+ // the required replication.
+ short newReplication = 4;
+ LOG.info("Setting replication of file {} to {}", filename, newReplication);
+ fs.setReplication(file, newReplication);
DFSTestUtil.waitForReplication((DistributedFileSystem) fs,
- file, (short) 2, 10000);
+ file, newReplication, 10000);
DFSClient client = new DFSClient(new InetSocketAddress("localhost",
cluster.getNameNodePort()), cluster.getConfiguration(0));
- getAndCheckBlockLocations(client, filename, 2);
+ getAndCheckBlockLocations(client, filename, newReplication);
- //set the replication back to 1
- fs.setReplication(file, (short) 1);
+ // set the replication back to 1
+ newReplication = 1;
+ LOG.info("Setting replication of file {} back to {}",
+ filename, newReplication);
+ fs.setReplication(file, newReplication);
DFSTestUtil.waitForReplication((DistributedFileSystem) fs,
- file, (short) 1, 10000);
- //the only replica left should be the PROVIDED datanode
- DatanodeInfo[] infos = getAndCheckBlockLocations(client, filename, 1);
- assertEquals(cluster.getDataNodes().get(0).getDatanodeUuid(),
- infos[0].getDatanodeUuid());
+ file, newReplication, 10000);
+ // the only replica left should be the PROVIDED datanode
+ getAndCheckBlockLocations(client, filename, newReplication);
}
@Test
[25/50] [abbrv] hadoop git commit: HDFS-10675. Datanode support to read from external stores.
Posted by vi...@apache.org.
HDFS-10675. Datanode support to read from external stores.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/970028f0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/970028f0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/970028f0
Branch: refs/heads/HDFS-9806
Commit: 970028f04bafd9b2aac52ee8969c42a8fb6f6b25
Parents: 60f95fb
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Wed Mar 29 14:29:28 2017 -0700
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:57 2017 -0800
----------------------------------------------------------------------
.../java/org/apache/hadoop/fs/StorageType.java | 3 +-
.../org/apache/hadoop/fs/shell/TestCount.java | 3 +-
.../hadoop/hdfs/protocol/HdfsConstants.java | 4 +
.../hadoop/hdfs/protocolPB/PBHelperClient.java | 4 +
.../src/main/proto/hdfs.proto | 1 +
.../org/apache/hadoop/hdfs/DFSConfigKeys.java | 15 +
.../hadoop/hdfs/server/common/BlockAlias.java | 29 +
.../hadoop/hdfs/server/common/BlockFormat.java | 82 +++
.../hadoop/hdfs/server/common/FileRegion.java | 121 +++++
.../hdfs/server/common/FileRegionProvider.java | 37 ++
.../hadoop/hdfs/server/common/Storage.java | 71 ++-
.../hadoop/hdfs/server/common/StorageInfo.java | 6 +
.../server/common/TextFileRegionFormat.java | 442 ++++++++++++++++
.../server/common/TextFileRegionProvider.java | 88 ++++
.../server/datanode/BlockPoolSliceStorage.java | 21 +-
.../hdfs/server/datanode/DataStorage.java | 44 +-
.../hdfs/server/datanode/DirectoryScanner.java | 19 +-
.../datanode/FinalizedProvidedReplica.java | 91 ++++
.../hdfs/server/datanode/ProvidedReplica.java | 248 +++++++++
.../hdfs/server/datanode/ReplicaBuilder.java | 100 +++-
.../hdfs/server/datanode/ReplicaInfo.java | 20 +-
.../hdfs/server/datanode/StorageLocation.java | 26 +-
.../server/datanode/fsdataset/FsDatasetSpi.java | 4 +-
.../server/datanode/fsdataset/FsVolumeSpi.java | 32 +-
.../fsdataset/impl/DefaultProvidedVolumeDF.java | 58 ++
.../datanode/fsdataset/impl/FsDatasetImpl.java | 40 +-
.../datanode/fsdataset/impl/FsDatasetUtil.java | 25 +-
.../datanode/fsdataset/impl/FsVolumeImpl.java | 19 +-
.../fsdataset/impl/FsVolumeImplBuilder.java | 6 +
.../fsdataset/impl/ProvidedVolumeDF.java | 34 ++
.../fsdataset/impl/ProvidedVolumeImpl.java | 526 +++++++++++++++++++
.../apache/hadoop/hdfs/server/mover/Mover.java | 2 +-
.../server/namenode/FSImageCompression.java | 2 +-
.../hadoop/hdfs/server/namenode/NNStorage.java | 10 +-
.../src/main/resources/hdfs-default.xml | 78 +++
.../org/apache/hadoop/hdfs/TestDFSRollback.java | 6 +-
.../hadoop/hdfs/TestDFSStartupVersions.java | 2 +-
.../org/apache/hadoop/hdfs/TestDFSUpgrade.java | 4 +-
.../apache/hadoop/hdfs/UpgradeUtilities.java | 16 +-
.../hdfs/server/common/TestTextBlockFormat.java | 160 ++++++
.../server/datanode/SimulatedFSDataset.java | 6 +-
.../extdataset/ExternalDatasetImpl.java | 5 +-
.../fsdataset/impl/TestFsDatasetImpl.java | 17 +-
.../fsdataset/impl/TestProvidedImpl.java | 426 +++++++++++++++
.../hdfs/server/namenode/TestClusterId.java | 5 +-
45 files changed, 2873 insertions(+), 85 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
index 0948801..2ecd206 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
@@ -37,7 +37,8 @@ public enum StorageType {
RAM_DISK(true),
SSD(false),
DISK(false),
- ARCHIVE(false);
+ ARCHIVE(false),
+ PROVIDED(false);
private final boolean isTransient;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestCount.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestCount.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestCount.java
index a782958..b5adfcf 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestCount.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestCount.java
@@ -285,7 +285,7 @@ public class TestCount {
// <----13---> <-------17------> <----13-----> <------17------->
" SSD_QUOTA REM_SSD_QUOTA DISK_QUOTA REM_DISK_QUOTA " +
// <----13---> <-------17------>
- "ARCHIVE_QUOTA REM_ARCHIVE_QUOTA " +
+ "ARCHIVE_QUOTA REM_ARCHIVE_QUOTA PROVIDED_QUOTA REM_PROVIDED_QUOTA " +
"PATHNAME";
verify(out).println(withStorageTypeHeader);
verifyNoMoreInteractions(out);
@@ -340,6 +340,7 @@ public class TestCount {
" SSD_QUOTA REM_SSD_QUOTA " +
" DISK_QUOTA REM_DISK_QUOTA " +
"ARCHIVE_QUOTA REM_ARCHIVE_QUOTA " +
+ "PROVIDED_QUOTA REM_PROVIDED_QUOTA " +
"PATHNAME";
verify(out).println(withStorageTypeHeader);
verifyNoMoreInteractions(out);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index 8245d1b..e9e6103 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -47,6 +47,10 @@ public final class HdfsConstants {
public static final String WARM_STORAGE_POLICY_NAME = "WARM";
public static final byte COLD_STORAGE_POLICY_ID = 2;
public static final String COLD_STORAGE_POLICY_NAME = "COLD";
+ // branch HDFS-9806 XXX temporary until HDFS-7076
+ public static final byte PROVIDED_STORAGE_POLICY_ID = 1;
+ public static final String PROVIDED_STORAGE_POLICY_NAME = "PROVIDED";
+
public static final int DEFAULT_DATA_SOCKET_SIZE = 0;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
index fbc6dbf..460112e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
@@ -405,6 +405,8 @@ public class PBHelperClient {
return StorageTypeProto.ARCHIVE;
case RAM_DISK:
return StorageTypeProto.RAM_DISK;
+ case PROVIDED:
+ return StorageTypeProto.PROVIDED;
default:
throw new IllegalStateException(
"BUG: StorageType not found, type=" + type);
@@ -421,6 +423,8 @@ public class PBHelperClient {
return StorageType.ARCHIVE;
case RAM_DISK:
return StorageType.RAM_DISK;
+ case PROVIDED:
+ return StorageType.PROVIDED;
default:
throw new IllegalStateException(
"BUG: StorageTypeProto not found, type=" + type);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
index a423a4b..06578ca 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
@@ -205,6 +205,7 @@ enum StorageTypeProto {
SSD = 2;
ARCHIVE = 3;
RAM_DISK = 4;
+ PROVIDED = 5;
}
/**
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 97b8b1a..ca753ce 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -328,6 +328,21 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
"dfs.namenode.edits.asynclogging";
public static final boolean DFS_NAMENODE_EDITS_ASYNC_LOGGING_DEFAULT = true;
+ public static final String DFS_PROVIDER_CLASS = "dfs.provider.class";
+ public static final String DFS_PROVIDER_DF_CLASS = "dfs.provided.df.class";
+ public static final String DFS_PROVIDER_STORAGEUUID = "dfs.provided.storage.id";
+ public static final String DFS_PROVIDER_STORAGEUUID_DEFAULT = "DS-PROVIDED";
+ public static final String DFS_PROVIDER_BLK_FORMAT_CLASS = "dfs.provided.blockformat.class";
+
+ public static final String DFS_PROVIDED_BLOCK_MAP_DELIMITER = "dfs.provided.textprovider.delimiter";
+ public static final String DFS_PROVIDED_BLOCK_MAP_DELIMITER_DEFAULT = ",";
+
+ public static final String DFS_PROVIDED_BLOCK_MAP_READ_PATH = "dfs.provided.textprovider.read.path";
+ public static final String DFS_PROVIDED_BLOCK_MAP_PATH_DEFAULT = "file:///tmp/blocks.csv";
+
+ public static final String DFS_PROVIDED_BLOCK_MAP_CODEC = "dfs.provided.textprovider.read.codec";
+ public static final String DFS_PROVIDED_BLOCK_MAP_WRITE_PATH = "dfs.provided.textprovider.write.path";
+
public static final String DFS_LIST_LIMIT = "dfs.ls.limit";
public static final int DFS_LIST_LIMIT_DEFAULT = 1000;
public static final String DFS_CONTENT_SUMMARY_LIMIT_KEY = "dfs.content-summary.limit";
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockAlias.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockAlias.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockAlias.java
new file mode 100644
index 0000000..b2fac97
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockAlias.java
@@ -0,0 +1,29 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.common;
+
+import org.apache.hadoop.hdfs.protocol.Block;
+
+/**
+ * Interface used to load provided blocks.
+ */
+public interface BlockAlias {
+
+ Block getBlock();
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockFormat.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockFormat.java
new file mode 100644
index 0000000..66e7fdf
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockFormat.java
@@ -0,0 +1,82 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.common;
+
+import java.io.Closeable;
+import java.io.IOException;
+
+import org.apache.hadoop.hdfs.protocol.Block;
+
+/**
+ * An abstract class used to read and write block maps for provided blocks.
+ */
+public abstract class BlockFormat<T extends BlockAlias> {
+
+ /**
+ * An abstract class that is used to read {@link BlockAlias}es
+ * for provided blocks.
+ */
+  public abstract static class Reader<U extends BlockAlias>
+ implements Iterable<U>, Closeable {
+
+ /**
+ * reader options.
+ */
+ public interface Options { }
+
+ public abstract U resolve(Block ident) throws IOException;
+
+ }
+
+ /**
+ * Returns the reader for the provided block map.
+ * @param opts reader options
+ * @return {@link Reader} to the block map.
+   * @throws IOException if the reader cannot be created.
+ */
+ public abstract Reader<T> getReader(Reader.Options opts) throws IOException;
+
+ /**
+ * An abstract class used as a writer for the provided block map.
+ */
+  public abstract static class Writer<U extends BlockAlias>
+ implements Closeable {
+ /**
+ * writer options.
+ */
+ public interface Options { }
+
+ public abstract void store(U token) throws IOException;
+
+ }
+
+ /**
+ * Returns the writer for the provided block map.
+ * @param opts writer options.
+ * @return {@link Writer} to the block map.
+   * @throws IOException if the writer cannot be created.
+ */
+ public abstract Writer<T> getWriter(Writer.Options opts) throws IOException;
+
+ /**
+ * Refresh based on the underlying block map.
+   * @throws IOException if the refresh fails.
+ */
+ public abstract void refresh() throws IOException;
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
new file mode 100644
index 0000000..c568b90
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
@@ -0,0 +1,121 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.common;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+
+/**
+ * This class is used to represent provided blocks that are file regions,
+ * i.e., can be described using (path, offset, length).
+ */
+public class FileRegion implements BlockAlias {
+
+ private final Path path;
+ private final long offset;
+ private final long length;
+ private final long blockId;
+ private final String bpid;
+ private final long genStamp;
+
+ public FileRegion(long blockId, Path path, long offset,
+ long length, String bpid, long genStamp) {
+ this.path = path;
+ this.offset = offset;
+ this.length = length;
+ this.blockId = blockId;
+ this.bpid = bpid;
+ this.genStamp = genStamp;
+ }
+
+ public FileRegion(long blockId, Path path, long offset,
+ long length, String bpid) {
+ this(blockId, path, offset, length, bpid,
+ HdfsConstants.GRANDFATHER_GENERATION_STAMP);
+
+ }
+
+ public FileRegion(long blockId, Path path, long offset,
+ long length, long genStamp) {
+ this(blockId, path, offset, length, null, genStamp);
+
+ }
+
+ public FileRegion(long blockId, Path path, long offset, long length) {
+ this(blockId, path, offset, length, null);
+ }
+
+ @Override
+ public Block getBlock() {
+ return new Block(blockId, length, genStamp);
+ }
+
+ @Override
+ public boolean equals(Object other) {
+ if (!(other instanceof FileRegion)) {
+ return false;
+ }
+ FileRegion o = (FileRegion) other;
+ return blockId == o.blockId
+ && offset == o.offset
+ && length == o.length
+ && genStamp == o.genStamp
+ && path.equals(o.path);
+ }
+
+ @Override
+ public int hashCode() {
+    return (int)(blockId & Integer.MAX_VALUE);
+ }
+
+ public Path getPath() {
+ return path;
+ }
+
+ public long getOffset() {
+ return offset;
+ }
+
+ public long getLength() {
+ return length;
+ }
+
+ public long getGenerationStamp() {
+ return genStamp;
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder();
+ sb.append("{ block=\"").append(getBlock()).append("\"");
+ sb.append(", path=\"").append(getPath()).append("\"");
+ sb.append(", off=\"").append(getOffset()).append("\"");
+ sb.append(", len=\"").append(getBlock().getNumBytes()).append("\"");
+ sb.append(", genStamp=\"").append(getBlock()
+ .getGenerationStamp()).append("\"");
+ sb.append(", bpid=\"").append(bpid).append("\"");
+ sb.append(" }");
+ return sb.toString();
+ }
+
+ public String getBlockPoolId() {
+ return this.bpid;
+ }
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegionProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegionProvider.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegionProvider.java
new file mode 100644
index 0000000..2e94239
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegionProvider.java
@@ -0,0 +1,37 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.common;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.Iterator;
+
+/**
+ * This class is a stub for reading file regions from the block map.
+ */
+public class FileRegionProvider implements Iterable<FileRegion> {
+ @Override
+ public Iterator<FileRegion> iterator() {
+ return Collections.emptyListIterator();
+ }
+
+ public void refresh() throws IOException {
+ return;
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
index 414d3a7..9ad61d7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java
@@ -40,6 +40,7 @@ import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.StartupOption;
import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
@@ -196,7 +197,10 @@ public abstract class Storage extends StorageInfo {
Iterator<StorageDirectory> it =
(dirType == null) ? dirIterator() : dirIterator(dirType);
for ( ;it.hasNext(); ) {
- list.add(new File(it.next().getCurrentDir(), fileName));
+ File currentDir = it.next().getCurrentDir();
+ if (currentDir != null) {
+ list.add(new File(currentDir, fileName));
+ }
}
return list;
}
@@ -328,10 +332,20 @@ public abstract class Storage extends StorageInfo {
*/
public StorageDirectory(String bpid, StorageDirType dirType,
boolean isShared, StorageLocation location) {
- this(new File(location.getBpURI(bpid, STORAGE_DIR_CURRENT)), dirType,
+ this(getBlockPoolCurrentDir(bpid, location), dirType,
isShared, location);
}
+ private static File getBlockPoolCurrentDir(String bpid,
+ StorageLocation location) {
+ if (location == null ||
+ location.getStorageType() == StorageType.PROVIDED) {
+ return null;
+ } else {
+ return new File(location.getBpURI(bpid, STORAGE_DIR_CURRENT));
+ }
+ }
+
private StorageDirectory(File dir, StorageDirType dirType,
boolean isShared, StorageLocation location) {
this.root = dir;
@@ -347,7 +361,8 @@ public abstract class Storage extends StorageInfo {
}
private static File getStorageLocationFile(StorageLocation location) {
- if (location == null) {
+ if (location == null ||
+ location.getStorageType() == StorageType.PROVIDED) {
return null;
}
try {
@@ -406,6 +421,10 @@ public abstract class Storage extends StorageInfo {
*/
public void clearDirectory() throws IOException {
File curDir = this.getCurrentDir();
+ if (curDir == null) {
+ //if the directory is null, there is nothing to do.
+ return;
+ }
if (curDir.exists()) {
File[] files = FileUtil.listFiles(curDir);
LOG.info("Will remove files: " + Arrays.toString(files));
@@ -423,6 +442,9 @@ public abstract class Storage extends StorageInfo {
* @return the directory path
*/
public File getCurrentDir() {
+ if (root == null) {
+ return null;
+ }
return new File(root, STORAGE_DIR_CURRENT);
}
@@ -443,6 +465,9 @@ public abstract class Storage extends StorageInfo {
* @return the version file path
*/
public File getVersionFile() {
+ if (root == null) {
+ return null;
+ }
return new File(new File(root, STORAGE_DIR_CURRENT), STORAGE_FILE_VERSION);
}
@@ -452,6 +477,9 @@ public abstract class Storage extends StorageInfo {
* @return the previous version file path
*/
public File getPreviousVersionFile() {
+ if (root == null) {
+ return null;
+ }
return new File(new File(root, STORAGE_DIR_PREVIOUS), STORAGE_FILE_VERSION);
}
@@ -462,6 +490,9 @@ public abstract class Storage extends StorageInfo {
* @return the directory path
*/
public File getPreviousDir() {
+ if (root == null) {
+ return null;
+ }
return new File(root, STORAGE_DIR_PREVIOUS);
}
@@ -476,6 +507,9 @@ public abstract class Storage extends StorageInfo {
* @return the directory path
*/
public File getPreviousTmp() {
+ if (root == null) {
+ return null;
+ }
return new File(root, STORAGE_TMP_PREVIOUS);
}
@@ -490,6 +524,9 @@ public abstract class Storage extends StorageInfo {
* @return the directory path
*/
public File getRemovedTmp() {
+ if (root == null) {
+ return null;
+ }
return new File(root, STORAGE_TMP_REMOVED);
}
@@ -503,6 +540,9 @@ public abstract class Storage extends StorageInfo {
* @return the directory path
*/
public File getFinalizedTmp() {
+ if (root == null) {
+ return null;
+ }
return new File(root, STORAGE_TMP_FINALIZED);
}
@@ -517,6 +557,9 @@ public abstract class Storage extends StorageInfo {
* @return the directory path
*/
public File getLastCheckpointTmp() {
+ if (root == null) {
+ return null;
+ }
return new File(root, STORAGE_TMP_LAST_CKPT);
}
@@ -530,6 +573,9 @@ public abstract class Storage extends StorageInfo {
* @return the directory path
*/
public File getPreviousCheckpoint() {
+ if (root == null) {
+ return null;
+ }
return new File(root, STORAGE_PREVIOUS_CKPT);
}
@@ -543,7 +589,7 @@ public abstract class Storage extends StorageInfo {
private void checkEmptyCurrent() throws InconsistentFSStateException,
IOException {
File currentDir = getCurrentDir();
- if(!currentDir.exists()) {
+ if(currentDir == null || !currentDir.exists()) {
// if current/ does not exist, it's safe to format it.
return;
}
@@ -589,6 +635,13 @@ public abstract class Storage extends StorageInfo {
public StorageState analyzeStorage(StartupOption startOpt, Storage storage,
boolean checkCurrentIsEmpty)
throws IOException {
+
+ if (location != null &&
+ location.getStorageType() == StorageType.PROVIDED) {
+ //currently we assume that PROVIDED storages are always NORMAL
+ return StorageState.NORMAL;
+ }
+
assert root != null : "root is null";
boolean hadMkdirs = false;
String rootPath = root.getCanonicalPath();
@@ -710,6 +763,10 @@ public abstract class Storage extends StorageInfo {
*/
public void doRecover(StorageState curState) throws IOException {
File curDir = getCurrentDir();
+ if (curDir == null || root == null) {
+ //at this point, we do not support recovery on PROVIDED storages
+ return;
+ }
String rootPath = root.getCanonicalPath();
switch(curState) {
case COMPLETE_UPGRADE: // mv previous.tmp -> previous
@@ -883,7 +940,8 @@ public abstract class Storage extends StorageInfo {
@Override
public String toString() {
- return "Storage Directory " + this.root;
+ return "Storage Directory root= " + this.root +
+ "; location= " + this.location;
}
/**
@@ -1153,6 +1211,9 @@ public abstract class Storage extends StorageInfo {
}
public void writeProperties(File to, StorageDirectory sd) throws IOException {
+ if (to == null) {
+ return;
+ }
Properties props = new Properties();
setPropertiesFromFields(props, sd);
writeProperties(to, props);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/StorageInfo.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/StorageInfo.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/StorageInfo.java
index 50363c9..28871e5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/StorageInfo.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/StorageInfo.java
@@ -152,6 +152,9 @@ public class StorageInfo {
*/
protected void setFieldsFromProperties(
Properties props, StorageDirectory sd) throws IOException {
+ if (props == null) {
+ return;
+ }
setLayoutVersion(props, sd);
setNamespaceID(props, sd);
setcTime(props, sd);
@@ -241,6 +244,9 @@ public class StorageInfo {
}
public static Properties readPropertiesFile(File from) throws IOException {
+ if (from == null) {
+ return null;
+ }
RandomAccessFile file = new RandomAccessFile(from, "rws");
FileInputStream in = null;
Properties props = new Properties();
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionFormat.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionFormat.java
new file mode 100644
index 0000000..eacd08f
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionFormat.java
@@ -0,0 +1,442 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.common;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.BufferedReader;
+import java.io.BufferedWriter;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Collections;
+import java.util.IdentityHashMap;
+import java.util.NoSuchElementException;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.LocalFileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.io.MultipleIOException;
+import org.apache.hadoop.io.compress.CompressionCodec;
+import org.apache.hadoop.io.compress.CompressionCodecFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.annotations.VisibleForTesting;
+
+/**
+ * This class is used for block maps stored as text files,
+ * with a specified delimiter.
+ */
+public class TextFileRegionFormat
+ extends BlockFormat<FileRegion> implements Configurable {
+
+ private Configuration conf;
+ private ReaderOptions readerOpts = TextReader.defaults();
+ private WriterOptions writerOpts = TextWriter.defaults();
+
+ public static final Logger LOG =
+ LoggerFactory.getLogger(TextFileRegionFormat.class);
+ @Override
+ public void setConf(Configuration conf) {
+ readerOpts.setConf(conf);
+ writerOpts.setConf(conf);
+ this.conf = conf;
+ }
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ @Override
+ public Reader<FileRegion> getReader(Reader.Options opts)
+ throws IOException {
+ if (null == opts) {
+ opts = readerOpts;
+ }
+ if (!(opts instanceof ReaderOptions)) {
+ throw new IllegalArgumentException("Invalid options " + opts.getClass());
+ }
+ ReaderOptions o = (ReaderOptions) opts;
+ Configuration readerConf = (null == o.getConf())
+ ? new Configuration()
+ : o.getConf();
+ return createReader(o.file, o.delim, readerConf);
+ }
+
+ @VisibleForTesting
+ TextReader createReader(Path file, String delim, Configuration cfg)
+ throws IOException {
+ FileSystem fs = file.getFileSystem(cfg);
+ if (fs instanceof LocalFileSystem) {
+ fs = ((LocalFileSystem)fs).getRaw();
+ }
+ CompressionCodecFactory factory = new CompressionCodecFactory(cfg);
+ CompressionCodec codec = factory.getCodec(file);
+ return new TextReader(fs, file, codec, delim);
+ }
+
+ @Override
+ public Writer<FileRegion> getWriter(Writer.Options opts) throws IOException {
+ if (null == opts) {
+ opts = writerOpts;
+ }
+ if (!(opts instanceof WriterOptions)) {
+ throw new IllegalArgumentException("Invalid options " + opts.getClass());
+ }
+ WriterOptions o = (WriterOptions) opts;
+ Configuration cfg = (null == o.getConf())
+ ? new Configuration()
+ : o.getConf();
+ if (o.codec != null) {
+ CompressionCodecFactory factory = new CompressionCodecFactory(cfg);
+ CompressionCodec codec = factory.getCodecByName(o.codec);
+ String name = o.file.getName() + codec.getDefaultExtension();
+ o.filename(new Path(o.file.getParent(), name));
+ return createWriter(o.file, codec, o.delim, cfg);
+ }
+    return createWriter(o.file, null, o.delim, cfg);
+ }
+
+ @VisibleForTesting
+ TextWriter createWriter(Path file, CompressionCodec codec, String delim,
+ Configuration cfg) throws IOException {
+ FileSystem fs = file.getFileSystem(cfg);
+ if (fs instanceof LocalFileSystem) {
+ fs = ((LocalFileSystem)fs).getRaw();
+ }
+ OutputStream tmp = fs.create(file);
+ java.io.Writer out = new BufferedWriter(new OutputStreamWriter(
+ (null == codec) ? tmp : codec.createOutputStream(tmp), "UTF-8"));
+ return new TextWriter(out, delim);
+ }
+
+ /**
+ * Class specifying reader options for the {@link TextFileRegionFormat}.
+ */
+ public static class ReaderOptions
+ implements TextReader.Options, Configurable {
+
+ private Configuration conf;
+ private String delim =
+ DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_DELIMITER_DEFAULT;
+ private Path file = new Path(
+ new File(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_PATH_DEFAULT)
+ .toURI().toString());
+
+ @Override
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ String tmpfile = conf.get(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_READ_PATH,
+ DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_PATH_DEFAULT);
+ file = new Path(tmpfile);
+ delim = conf.get(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_DELIMITER,
+ DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_DELIMITER_DEFAULT);
+      LOG.info("TextFileRegionFormat: read path " + tmpfile);
+ }
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ @Override
+ public ReaderOptions filename(Path file) {
+ this.file = file;
+ return this;
+ }
+
+ @Override
+ public ReaderOptions delimiter(String delim) {
+ this.delim = delim;
+ return this;
+ }
+ }
+
+ /**
+ * Class specifying writer options for the {@link TextFileRegionFormat}.
+ */
+ public static class WriterOptions
+ implements TextWriter.Options, Configurable {
+
+ private Configuration conf;
+ private String codec = null;
+ private Path file =
+ new Path(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_PATH_DEFAULT);
+ private String delim =
+ DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_DELIMITER_DEFAULT;
+
+ @Override
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ String tmpfile = conf.get(
+ DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_WRITE_PATH, file.toString());
+ file = new Path(tmpfile);
+ codec = conf.get(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_CODEC);
+ delim = conf.get(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_DELIMITER,
+ DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_DELIMITER_DEFAULT);
+ }
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ @Override
+ public WriterOptions filename(Path file) {
+ this.file = file;
+ return this;
+ }
+
+ public String getCodec() {
+ return codec;
+ }
+
+ public Path getFile() {
+ return file;
+ }
+
+ @Override
+ public WriterOptions codec(String codec) {
+ this.codec = codec;
+ return this;
+ }
+
+ @Override
+ public WriterOptions delimiter(String delim) {
+ this.delim = delim;
+ return this;
+ }
+
+ }
+
+ /**
+ * This class is used as a reader for block maps which
+ * are stored as delimited text files.
+ */
+ public static class TextReader extends Reader<FileRegion> {
+
+ /**
+ * Options for {@link TextReader}.
+ */
+ public interface Options extends Reader.Options {
+ Options filename(Path file);
+ Options delimiter(String delim);
+ }
+
+ static ReaderOptions defaults() {
+ return new ReaderOptions();
+ }
+
+ private final Path file;
+ private final String delim;
+ private final FileSystem fs;
+ private final CompressionCodec codec;
+ private final Map<FRIterator, BufferedReader> iterators;
+
+ protected TextReader(FileSystem fs, Path file, CompressionCodec codec,
+ String delim) {
+ this(fs, file, codec, delim,
+ new IdentityHashMap<FRIterator, BufferedReader>());
+ }
+
+ TextReader(FileSystem fs, Path file, CompressionCodec codec, String delim,
+ Map<FRIterator, BufferedReader> iterators) {
+ this.fs = fs;
+ this.file = file;
+ this.codec = codec;
+ this.delim = delim;
+ this.iterators = Collections.synchronizedMap(iterators);
+ }
+
+ @Override
+ public FileRegion resolve(Block ident) throws IOException {
+ // consider layering index w/ composable format
+ Iterator<FileRegion> i = iterator();
+ try {
+ while (i.hasNext()) {
+ FileRegion f = i.next();
+ if (f.getBlock().equals(ident)) {
+ return f;
+ }
+ }
+ } finally {
+ BufferedReader r = iterators.remove(i);
+ if (r != null) {
+          // r is null if the scan already reached the end of the file
+ r.close();
+ }
+ }
+ return null;
+ }
+
+ class FRIterator implements Iterator<FileRegion> {
+
+ private FileRegion pending;
+
+ @Override
+ public boolean hasNext() {
+ return pending != null;
+ }
+
+ @Override
+ public FileRegion next() {
+ if (null == pending) {
+ throw new NoSuchElementException();
+ }
+ FileRegion ret = pending;
+ try {
+ pending = nextInternal(this);
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ return ret;
+ }
+
+ @Override
+ public void remove() {
+ throw new UnsupportedOperationException();
+ }
+ }
+
+ private FileRegion nextInternal(Iterator<FileRegion> i) throws IOException {
+ BufferedReader r = iterators.get(i);
+ if (null == r) {
+ throw new IllegalStateException();
+ }
+ String line = r.readLine();
+ if (null == line) {
+ iterators.remove(i);
+ return null;
+ }
+ String[] f = line.split(delim);
+ if (f.length != 6) {
+ throw new IOException("Invalid line: " + line);
+ }
+ return new FileRegion(Long.parseLong(f[0]), new Path(f[1]),
+ Long.parseLong(f[2]), Long.parseLong(f[3]), f[5],
+ Long.parseLong(f[4]));
+ }
+
+ public InputStream createStream() throws IOException {
+ InputStream i = fs.open(file);
+ if (codec != null) {
+ i = codec.createInputStream(i);
+ }
+ return i;
+ }
+
+ @Override
+ public Iterator<FileRegion> iterator() {
+ FRIterator i = new FRIterator();
+ try {
+ BufferedReader r =
+ new BufferedReader(new InputStreamReader(createStream(), "UTF-8"));
+ iterators.put(i, r);
+ i.pending = nextInternal(i);
+ } catch (IOException e) {
+ iterators.remove(i);
+ throw new RuntimeException(e);
+ }
+ return i;
+ }
+
+ @Override
+ public void close() throws IOException {
+ ArrayList<IOException> ex = new ArrayList<>();
+ synchronized (iterators) {
+ for (Iterator<BufferedReader> i = iterators.values().iterator();
+ i.hasNext();) {
+ try {
+ BufferedReader r = i.next();
+ r.close();
+ } catch (IOException e) {
+ ex.add(e);
+ } finally {
+ i.remove();
+ }
+ }
+ iterators.clear();
+ }
+ if (!ex.isEmpty()) {
+ throw MultipleIOException.createIOException(ex);
+ }
+ }
+
+ }
+
+ /**
+ * This class is used as a writer for block maps which
+ * are stored as delimited text files.
+ */
+ public static class TextWriter extends Writer<FileRegion> {
+
+ /**
+ * Interface for Writer options.
+ */
+ public interface Options extends Writer.Options {
+ Options codec(String codec);
+ Options filename(Path file);
+ Options delimiter(String delim);
+ }
+
+ public static WriterOptions defaults() {
+ return new WriterOptions();
+ }
+
+ private final String delim;
+ private final java.io.Writer out;
+
+ public TextWriter(java.io.Writer out, String delim) {
+ this.out = out;
+ this.delim = delim;
+ }
+
+ @Override
+ public void store(FileRegion token) throws IOException {
+ out.append(String.valueOf(token.getBlock().getBlockId())).append(delim);
+ out.append(token.getPath().toString()).append(delim);
+ out.append(Long.toString(token.getOffset())).append(delim);
+ out.append(Long.toString(token.getLength())).append(delim);
+ out.append(Long.toString(token.getGenerationStamp())).append(delim);
+ out.append(token.getBlockPoolId()).append("\n");
+ }
+
+ @Override
+ public void close() throws IOException {
+ out.close();
+ }
+
+ }
+
+ @Override
+ public void refresh() throws IOException {
+ //nothing to do;
+ }
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionProvider.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionProvider.java
new file mode 100644
index 0000000..0fa667e
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionProvider.java
@@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.common;
+
+import java.io.IOException;
+import java.util.Iterator;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.util.ReflectionUtils;
+
+/**
+ * This class is used to read file regions from block maps
+ * specified using delimited text.
+ */
+public class TextFileRegionProvider
+ extends FileRegionProvider implements Configurable {
+
+ private Configuration conf;
+ private BlockFormat<FileRegion> fmt;
+
+ @SuppressWarnings("unchecked")
+ @Override
+ public void setConf(Configuration conf) {
+ fmt = ReflectionUtils.newInstance(
+ conf.getClass(DFSConfigKeys.DFS_PROVIDER_BLK_FORMAT_CLASS,
+ TextFileRegionFormat.class,
+ BlockFormat.class),
+ conf);
+ ((Configurable)fmt).setConf(conf); // redundant: ReflectionUtils.newInstance already calls setConf for Configurable instances
+ this.conf = conf;
+ }
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ @Override
+ public Iterator<FileRegion> iterator() {
+ try {
+ final BlockFormat.Reader<FileRegion> r = fmt.getReader(null);
+ return new Iterator<FileRegion>() {
+
+ private final Iterator<FileRegion> inner = r.iterator();
+
+ @Override
+ public boolean hasNext() {
+ return inner.hasNext();
+ }
+
+ @Override
+ public FileRegion next() {
+ return inner.next();
+ }
+
+ @Override
+ public void remove() {
+ throw new UnsupportedOperationException();
+ }
+ };
+ } catch (IOException e) {
+ throw new RuntimeException("Failed to read provided blocks", e);
+ }
+ }
+
+ @Override
+ public void refresh() throws IOException {
+ fmt.refresh();
+ }
+}
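TextFileRegionProvider#iterator() above wraps the reader's iterator so callers can traverse the provided block map but not mutate it. A minimal, self-contained sketch of that read-only wrapper pattern (names here are illustrative, not part of the patch):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

/** Sketch of the read-only iterator wrapper used by
 *  TextFileRegionProvider#iterator(). Illustrative only. */
public class ReadOnlyIteratorDemo {

  /** Wrap an iterator, forwarding traversal but rejecting remove(). */
  static <T> Iterator<T> readOnly(final Iterator<T> inner) {
    return new Iterator<T>() {
      @Override
      public boolean hasNext() {
        return inner.hasNext();
      }

      @Override
      public T next() {
        return inner.next();
      }

      @Override
      public void remove() {
        // The underlying block map must not be mutated by consumers.
        throw new UnsupportedOperationException();
      }
    };
  }

  public static void main(String[] args) {
    List<String> regions = Arrays.asList("blk_1", "blk_2");
    Iterator<String> it = readOnly(regions.iterator());
    if (!it.next().equals("blk_1") || !it.hasNext()) {
      throw new AssertionError("unexpected iteration order");
    }
    try {
      it.remove();
      throw new AssertionError("remove() should be rejected");
    } catch (UnsupportedOperationException expected) {
      // expected: the wrapper is read-only
    }
  }
}
```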
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
index bc41715..012d1f5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java
@@ -36,6 +36,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.HardLink;
+import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.protocol.LayoutVersion;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType;
@@ -360,6 +361,9 @@ public class BlockPoolSliceStorage extends Storage {
private boolean doTransition(StorageDirectory sd, NamespaceInfo nsInfo,
StartupOption startOpt, List<Callable<StorageDirectory>> callables,
Configuration conf) throws IOException {
+ if (sd.getStorageLocation().getStorageType() == StorageType.PROVIDED) {
+ return false; // regular startup for PROVIDED storage directories
+ }
if (startOpt == StartupOption.ROLLBACK && sd.getPreviousDir().exists()) {
Preconditions.checkState(!getTrashRootDir(sd).exists(),
sd.getPreviousDir() + " and " + getTrashRootDir(sd) + " should not " +
@@ -439,6 +443,10 @@ public class BlockPoolSliceStorage extends Storage {
LayoutVersion.Feature.FEDERATION, layoutVersion)) {
return;
}
+ //no upgrades for storage directories that are PROVIDED
+ if (bpSd.getRoot() == null) {
+ return;
+ }
final int oldLV = getLayoutVersion();
LOG.info("Upgrading block pool storage directory " + bpSd.getRoot()
+ ".\n old LV = " + oldLV
@@ -589,8 +597,9 @@ public class BlockPoolSliceStorage extends Storage {
throws IOException {
File prevDir = bpSd.getPreviousDir();
// regular startup if previous dir does not exist
- if (!prevDir.exists())
+ if (prevDir == null || !prevDir.exists()) {
return;
+ }
// read attributes out of the VERSION file of previous directory
BlockPoolSliceStorage prevInfo = new BlockPoolSliceStorage();
prevInfo.readPreviousVersionProperties(bpSd);
@@ -631,6 +640,10 @@ public class BlockPoolSliceStorage extends Storage {
* that holds the snapshot.
*/
void doFinalize(File dnCurDir) throws IOException {
+ LOG.info("doFinalize: " + dnCurDir);
+ if (dnCurDir == null) {
+ return; // nothing to do if the directory is null
+ }
File bpRoot = getBpRoot(blockpoolID, dnCurDir);
StorageDirectory bpSd = new StorageDirectory(bpRoot);
// block pool level previous directory
@@ -841,6 +854,9 @@ public class BlockPoolSliceStorage extends Storage {
public void setRollingUpgradeMarkers(List<StorageDirectory> dnStorageDirs)
throws IOException {
for (StorageDirectory sd : dnStorageDirs) {
+ if (sd.getCurrentDir() == null) {
+ continue;
+ }
File bpRoot = getBpRoot(blockpoolID, sd.getCurrentDir());
File markerFile = new File(bpRoot, ROLLING_UPGRADE_MARKER_FILE);
if (!storagesWithRollingUpgradeMarker.contains(bpRoot.toString())) {
@@ -863,6 +879,9 @@ public class BlockPoolSliceStorage extends Storage {
public void clearRollingUpgradeMarkers(List<StorageDirectory> dnStorageDirs)
throws IOException {
for (StorageDirectory sd : dnStorageDirs) {
+ if (sd.getCurrentDir() == null) {
+ continue;
+ }
File bpRoot = getBpRoot(blockpoolID, sd.getCurrentDir());
File markerFile = new File(bpRoot, ROLLING_UPGRADE_MARKER_FILE);
if (!storagesWithoutRollingUpgradeMarker.contains(bpRoot.toString())) {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
index 6d6e96a..a1bde31 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java
@@ -48,6 +48,7 @@ import org.apache.hadoop.classification.InterfaceStability;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.HardLink;
+import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.DFSUtilClient;
import org.apache.hadoop.hdfs.protocol.Block;
@@ -129,22 +130,31 @@ public class DataStorage extends Storage {
this.datanodeUuid = newDatanodeUuid;
}
- private static boolean createStorageID(StorageDirectory sd, int lv) {
+ private static boolean createStorageID(StorageDirectory sd, int lv,
+ Configuration conf) {
// Clusters previously upgraded from layout versions earlier than
// ADD_DATANODE_AND_STORAGE_UUIDS failed to correctly generate a
// new storage ID. We check for that and fix it now.
final boolean haveValidStorageId = DataNodeLayoutVersion.supports(
LayoutVersion.Feature.ADD_DATANODE_AND_STORAGE_UUIDS, lv)
&& DatanodeStorage.isValidStorageId(sd.getStorageUuid());
- return createStorageID(sd, !haveValidStorageId);
+ return createStorageID(sd, !haveValidStorageId, conf);
}
/** Create an ID for this storage.
* @return true if a new storage ID was generated.
* */
public static boolean createStorageID(
- StorageDirectory sd, boolean regenerateStorageIds) {
+ StorageDirectory sd, boolean regenerateStorageIds, Configuration conf) {
final String oldStorageID = sd.getStorageUuid();
+ if (sd.getStorageLocation() != null &&
+ sd.getStorageLocation().getStorageType() == StorageType.PROVIDED) {
+ // We only support one provided storage per datanode for now.
+ // TODO support multiple provided storage ids per datanode.
+ sd.setStorageUuid(conf.get(DFSConfigKeys.DFS_PROVIDER_STORAGEUUID,
+ DFSConfigKeys.DFS_PROVIDER_STORAGEUUID_DEFAULT));
+ return false;
+ }
if (oldStorageID == null || regenerateStorageIds) {
sd.setStorageUuid(DatanodeStorage.generateUuid());
LOG.info("Generated new storageID " + sd.getStorageUuid() +
@@ -273,7 +283,7 @@ public class DataStorage extends Storage {
LOG.info("Storage directory with location " + location
+ " is not formatted for namespace " + nsInfo.getNamespaceID()
+ ". Formatting...");
- format(sd, nsInfo, datanode.getDatanodeUuid());
+ format(sd, nsInfo, datanode.getDatanodeUuid(), datanode.getConf());
break;
default: // recovery part is common
sd.doRecover(curState);
@@ -547,15 +557,15 @@ public class DataStorage extends Storage {
}
void format(StorageDirectory sd, NamespaceInfo nsInfo,
- String datanodeUuid) throws IOException {
+ String newDatanodeUuid, Configuration conf) throws IOException {
sd.clearDirectory(); // create directory
this.layoutVersion = HdfsServerConstants.DATANODE_LAYOUT_VERSION;
this.clusterID = nsInfo.getClusterID();
this.namespaceID = nsInfo.getNamespaceID();
this.cTime = 0;
- setDatanodeUuid(datanodeUuid);
+ setDatanodeUuid(newDatanodeUuid);
- createStorageID(sd, false);
+ createStorageID(sd, false, conf);
writeProperties(sd);
}
@@ -600,6 +610,9 @@ public class DataStorage extends Storage {
private void setFieldsFromProperties(Properties props, StorageDirectory sd,
boolean overrideLayoutVersion, int toLayoutVersion) throws IOException {
+ if (props == null) {
+ return;
+ }
if (overrideLayoutVersion) {
this.layoutVersion = toLayoutVersion;
} else {
@@ -694,6 +707,10 @@ public class DataStorage extends Storage {
private boolean doTransition(StorageDirectory sd, NamespaceInfo nsInfo,
StartupOption startOpt, List<Callable<StorageDirectory>> callables,
Configuration conf) throws IOException {
+ if (sd.getStorageLocation().getStorageType() == StorageType.PROVIDED) {
+ createStorageID(sd, layoutVersion, conf);
+ return false; // regular start up for PROVIDED storage directories
+ }
if (startOpt == StartupOption.ROLLBACK) {
doRollback(sd, nsInfo); // rollback if applicable
}
@@ -724,7 +741,7 @@ public class DataStorage extends Storage {
// regular start up.
if (this.layoutVersion == HdfsServerConstants.DATANODE_LAYOUT_VERSION) {
- createStorageID(sd, layoutVersion);
+ createStorageID(sd, layoutVersion, conf);
return false; // need to write properties
}
@@ -733,7 +750,7 @@ public class DataStorage extends Storage {
if (federationSupported) {
// If the existing on-disk layout version supports federation,
// simply update the properties.
- upgradeProperties(sd);
+ upgradeProperties(sd, conf);
} else {
doUpgradePreFederation(sd, nsInfo, callables, conf);
}
@@ -829,15 +846,16 @@ public class DataStorage extends Storage {
// 4. Write version file under <SD>/current
clusterID = nsInfo.getClusterID();
- upgradeProperties(sd);
+ upgradeProperties(sd, conf);
// 5. Rename <SD>/previous.tmp to <SD>/previous
rename(tmpDir, prevDir);
LOG.info("Upgrade of " + sd.getRoot()+ " is complete");
}
- void upgradeProperties(StorageDirectory sd) throws IOException {
- createStorageID(sd, layoutVersion);
+ void upgradeProperties(StorageDirectory sd, Configuration conf)
+ throws IOException {
+ createStorageID(sd, layoutVersion, conf);
LOG.info("Updating layout version from " + layoutVersion
+ " to " + HdfsServerConstants.DATANODE_LAYOUT_VERSION
+ " for storage " + sd.getRoot());
@@ -989,7 +1007,7 @@ public class DataStorage extends Storage {
// then finalize it. Else finalize the corresponding BP.
for (StorageDirectory sd : getStorageDirs()) {
File prevDir = sd.getPreviousDir();
- if (prevDir.exists()) {
+ if (prevDir != null && prevDir.exists()) {
// data node level storage finalize
doFinalize(sd);
} else {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
index 966bcb0..3b6d06c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
@@ -44,6 +44,7 @@ import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.AutoCloseableLock;
+import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi;
@@ -105,7 +106,7 @@ public class DirectoryScanner implements Runnable {
* @param b whether to retain diffs
*/
@VisibleForTesting
- void setRetainDiffs(boolean b) {
+ public void setRetainDiffs(boolean b) {
retainDiffs = b;
}
@@ -215,7 +216,8 @@ public class DirectoryScanner implements Runnable {
* @param dataset the dataset to scan
* @param conf the Configuration object
*/
- DirectoryScanner(DataNode datanode, FsDatasetSpi<?> dataset, Configuration conf) {
+ public DirectoryScanner(DataNode datanode, FsDatasetSpi<?> dataset,
+ Configuration conf) {
this.datanode = datanode;
this.dataset = dataset;
int interval = (int) conf.getTimeDuration(
@@ -369,15 +371,14 @@ public class DirectoryScanner implements Runnable {
* Reconcile differences between disk and in-memory blocks
*/
@VisibleForTesting
- void reconcile() throws IOException {
+ public void reconcile() throws IOException {
scan();
for (Entry<String, LinkedList<ScanInfo>> entry : diffs.entrySet()) {
String bpid = entry.getKey();
LinkedList<ScanInfo> diff = entry.getValue();
for (ScanInfo info : diff) {
- dataset.checkAndUpdate(bpid, info.getBlockId(), info.getBlockFile(),
- info.getMetaFile(), info.getVolume());
+ dataset.checkAndUpdate(bpid, info);
}
}
if (!retainDiffs) clear();
@@ -429,11 +430,12 @@ public class DirectoryScanner implements Runnable {
}
// Block file and/or metadata file exists on the disk
// Block exists in memory
- if (info.getBlockFile() == null) {
+ if (info.getVolume().getStorageType() != StorageType.PROVIDED &&
+ info.getBlockFile() == null) {
// Block metadata file exits and block file is missing
addDifference(diffRecord, statsRecord, info);
} else if (info.getGenStamp() != memBlock.getGenerationStamp()
- || info.getBlockFileLength() != memBlock.getNumBytes()) {
+ || info.getBlockLength() != memBlock.getNumBytes()) {
// Block metadata file is missing or has wrong generation stamp,
// or block file length is different than expected
statsRecord.mismatchBlocks++;
@@ -611,6 +613,9 @@ public class DirectoryScanner implements Runnable {
for (String bpid : bpList) {
LinkedList<ScanInfo> report = new LinkedList<>();
+ perfTimer.reset().start();
+ throttleTimer.reset().start();
+
try {
result.put(bpid, volume.compileReport(bpid, report, this));
} catch (InterruptedException ex) {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
new file mode 100644
index 0000000..722d573
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
@@ -0,0 +1,91 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.datanode;
+
+import java.net.URI;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
+import org.apache.hadoop.hdfs.server.protocol.ReplicaRecoveryInfo;
+
+/**
+ * This class is used for provided replicas that are finalized.
+ */
+public class FinalizedProvidedReplica extends ProvidedReplica {
+
+ public FinalizedProvidedReplica(long blockId, URI fileURI,
+ long fileOffset, long blockLen, long genStamp,
+ FsVolumeSpi volume, Configuration conf) {
+ super(blockId, fileURI, fileOffset, blockLen, genStamp, volume, conf);
+ }
+
+ @Override
+ public ReplicaState getState() {
+ return ReplicaState.FINALIZED;
+ }
+
+ @Override
+ public long getBytesOnDisk() {
+ return getNumBytes();
+ }
+
+ @Override
+ public long getVisibleLength() {
+ return getNumBytes(); //all bytes are visible
+ }
+
+ @Override // Object
+ public boolean equals(Object o) {
+ return super.equals(o);
+ }
+
+ @Override // Object
+ public int hashCode() {
+ return super.hashCode();
+ }
+
+ @Override
+ public String toString() {
+ return super.toString();
+ }
+
+ @Override
+ public ReplicaInfo getOriginalReplica() {
+ throw new UnsupportedOperationException("Replica of type " + getState() +
+ " does not support getOriginalReplica");
+ }
+
+ @Override
+ public long getRecoveryID() {
+ throw new UnsupportedOperationException("Replica of type " + getState() +
+ " does not support getRecoveryID");
+ }
+
+ @Override
+ public void setRecoveryID(long recoveryId) {
+ throw new UnsupportedOperationException("Replica of type " + getState() +
+ " does not support setRecoveryID");
+ }
+
+ @Override
+ public ReplicaRecoveryInfo createInfo() {
+ throw new UnsupportedOperationException("Replica of type " + getState() +
+ " does not support createInfo");
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
new file mode 100644
index 0000000..b021ea2
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
@@ -0,0 +1,248 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.datanode;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.net.URI;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocalFileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi.ScanInfo;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetUtil;
+import org.apache.hadoop.hdfs.server.protocol.ReplicaRecoveryInfo;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * This abstract class is used as a base class for provided replicas.
+ */
+public abstract class ProvidedReplica extends ReplicaInfo {
+
+ public static final Logger LOG =
+ LoggerFactory.getLogger(ProvidedReplica.class);
+
+ // Null checksum information for provided replicas.
+ // Shared across all replicas.
+ static final byte[] NULL_CHECKSUM_ARRAY =
+ FsDatasetUtil.createNullChecksumByteArray();
+ private URI fileURI;
+ private long fileOffset;
+ private Configuration conf;
+ private FileSystem remoteFS;
+
+ /**
+ * Constructor.
+ * @param blockId block id
+ * @param fileURI remote URI this block is to be read from
+ * @param fileOffset the offset in the remote URI
+ * @param blockLen the length of the block
+ * @param genStamp the generation stamp of the block
+ * @param volume the volume this block belongs to
+ * @param conf configuration used to obtain the remote FileSystem
+ */
+ public ProvidedReplica(long blockId, URI fileURI, long fileOffset,
+ long blockLen, long genStamp, FsVolumeSpi volume, Configuration conf) {
+ super(volume, blockId, blockLen, genStamp);
+ this.fileURI = fileURI;
+ this.fileOffset = fileOffset;
+ this.conf = conf;
+ try {
+ this.remoteFS = FileSystem.get(fileURI, this.conf);
+ } catch (IOException e) {
+ LOG.warn("Failed to obtain filesystem for " + fileURI);
+ this.remoteFS = null;
+ }
+ }
+
+ public ProvidedReplica(ProvidedReplica r) {
+ super(r);
+ this.fileURI = r.fileURI;
+ this.fileOffset = r.fileOffset;
+ this.conf = r.conf;
+ try {
+ this.remoteFS = FileSystem.newInstance(fileURI, this.conf);
+ } catch (IOException e) {
+ this.remoteFS = null;
+ }
+ }
+
+ @Override
+ public URI getBlockURI() {
+ return this.fileURI;
+ }
+
+ @Override
+ public InputStream getDataInputStream(long seekOffset) throws IOException {
+ if (remoteFS != null) {
+ FSDataInputStream ins = remoteFS.open(new Path(fileURI));
+ ins.seek(fileOffset + seekOffset);
+ return ins;
+ } else {
+ throw new IOException("Remote filesystem for provided replica " + this +
+ " does not exist");
+ }
+ }
+
+ @Override
+ public OutputStream getDataOutputStream(boolean append) throws IOException {
+ throw new UnsupportedOperationException(
+ "OutputDataStream is not implemented for ProvidedReplica");
+ }
+
+ @Override
+ public URI getMetadataURI() {
+ return null;
+ }
+
+ @Override
+ public OutputStream getMetadataOutputStream(boolean append)
+ throws IOException {
+ return null;
+ }
+
+ @Override
+ public boolean blockDataExists() {
+ if(remoteFS != null) {
+ try {
+ return remoteFS.exists(new Path(fileURI));
+ } catch (IOException e) {
+ return false;
+ }
+ } else {
+ return false;
+ }
+ }
+
+ @Override
+ public boolean deleteBlockData() {
+ throw new UnsupportedOperationException(
+ "ProvidedReplica does not support deleting block data");
+ }
+
+ @Override
+ public long getBlockDataLength() {
+ return this.getNumBytes();
+ }
+
+ @Override
+ public LengthInputStream getMetadataInputStream(long offset)
+ throws IOException {
+ return new LengthInputStream(new ByteArrayInputStream(NULL_CHECKSUM_ARRAY),
+ NULL_CHECKSUM_ARRAY.length);
+ }
+
+ @Override
+ public boolean metadataExists() {
+ return NULL_CHECKSUM_ARRAY != null;
+ }
+
+ @Override
+ public boolean deleteMetadata() {
+ throw new UnsupportedOperationException(
+ "ProvidedReplica does not support deleting metadata");
+ }
+
+ @Override
+ public long getMetadataLength() {
+ return NULL_CHECKSUM_ARRAY == null ? 0 : NULL_CHECKSUM_ARRAY.length;
+ }
+
+ @Override
+ public boolean renameMeta(URI destURI) throws IOException {
+ throw new UnsupportedOperationException(
+ "ProvidedReplica does not support renaming metadata");
+ }
+
+ @Override
+ public boolean renameData(URI destURI) throws IOException {
+ throw new UnsupportedOperationException(
+ "ProvidedReplica does not support renaming data");
+ }
+
+ @Override
+ public boolean getPinning(LocalFileSystem localFS) throws IOException {
+ return false;
+ }
+
+ @Override
+ public void setPinning(LocalFileSystem localFS) throws IOException {
+ throw new UnsupportedOperationException(
+ "ProvidedReplica does not support pinning");
+ }
+
+ @Override
+ public void bumpReplicaGS(long newGS) throws IOException {
+ throw new UnsupportedOperationException(
+ "ProvidedReplica does not yet support writes");
+ }
+
+ @Override
+ public boolean breakHardLinksIfNeeded() throws IOException {
+ return false;
+ }
+
+ @Override
+ public ReplicaRecoveryInfo createInfo()
+ throws UnsupportedOperationException {
+ throw new UnsupportedOperationException(
+ "ProvidedReplica does not yet support writes");
+ }
+
+ @Override
+ public int compareWith(ScanInfo info) {
+ //local scanning cannot find any provided blocks.
+ if (info.getFileRegion().equals(
+ new FileRegion(this.getBlockId(), new Path(fileURI),
+ fileOffset, this.getNumBytes(), this.getGenerationStamp()))) {
+ return 0;
+ } else {
+ return (int) (info.getBlockLength() - getNumBytes());
+ }
+ }
+
+ @Override
+ public void truncateBlock(long newLength) throws IOException {
+ throw new UnsupportedOperationException(
+ "ProvidedReplica does not yet support truncate");
+ }
+
+ @Override
+ public void updateWithReplica(StorageLocation replicaLocation) {
+ throw new UnsupportedOperationException(
+ "ProvidedReplica does not yet support update");
+ }
+
+ @Override
+ public void copyMetadata(URI destination) throws IOException {
+ throw new UnsupportedOperationException(
+ "ProvidedReplica does not yet support copy metadata");
+ }
+
+ @Override
+ public void copyBlockdata(URI destination) throws IOException {
+ throw new UnsupportedOperationException(
+ "ProvidedReplica does not yet support copy data");
+ }
+}
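ProvidedReplica#getDataInputStream above opens the remote file and seeks to fileOffset + seekOffset: a provided block is a byte range inside a larger remote file. The offset arithmetic can be sketched with a local file standing in for the remote FileSystem (all names illustrative, not from the patch):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

/** Sketch of the offset arithmetic in ProvidedReplica#getDataInputStream:
 *  a provided block occupies [fileOffset, fileOffset + blockLen) of a
 *  larger file, and a read at seekOffset within the block starts at
 *  fileOffset + seekOffset. A local file stands in for the remote
 *  FileSystem; illustrative only. */
public class BlockRangeReadDemo {

  static byte[] readBlockAt(File f, long fileOffset, long seekOffset,
      int len) throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile(f, "r")) {
      raf.seek(fileOffset + seekOffset); // same arithmetic as the patch
      byte[] buf = new byte[len];
      raf.readFully(buf);
      return buf;
    }
  }

  public static void main(String[] args) throws IOException {
    File f = File.createTempFile("blocks", ".dat");
    f.deleteOnExit();
    try (FileOutputStream out = new FileOutputStream(f)) {
      out.write("0123456789".getBytes(StandardCharsets.US_ASCII));
    }
    // Block starts at byte 4; read 3 bytes starting 2 bytes into it.
    byte[] b = readBlockAt(f, 4, 2, 3);
    if (!new String(b, StandardCharsets.US_ASCII).equals("678")) {
      throw new AssertionError("offset arithmetic is wrong");
    }
  }
}
```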
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
index 280aaa0..639467f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
@@ -18,9 +18,13 @@
package org.apache.hadoop.hdfs.server.datanode;
import java.io.File;
+import java.net.URI;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
/**
@@ -42,11 +46,20 @@ public class ReplicaBuilder {
private ReplicaInfo fromReplica;
+ private URI uri;
+ private long offset;
+ private Configuration conf;
+ private FileRegion fileRegion;
+
public ReplicaBuilder(ReplicaState state) {
volume = null;
writer = null;
block = null;
length = -1;
+ fileRegion = null;
+ conf = null;
+ fromReplica = null;
+ uri = null;
this.state = state;
}
@@ -105,6 +118,26 @@ public class ReplicaBuilder {
return this;
}
+ public ReplicaBuilder setURI(URI uri) {
+ this.uri = uri;
+ return this;
+ }
+
+ public ReplicaBuilder setConf(Configuration conf) {
+ this.conf = conf;
+ return this;
+ }
+
+ public ReplicaBuilder setOffset(long offset) {
+ this.offset = offset;
+ return this;
+ }
+
+ public ReplicaBuilder setFileRegion(FileRegion fileRegion) {
+ this.fileRegion = fileRegion;
+ return this;
+ }
+
public LocalReplicaInPipeline buildLocalReplicaInPipeline()
throws IllegalArgumentException {
LocalReplicaInPipeline info = null;
@@ -176,7 +209,7 @@ public class ReplicaBuilder {
}
}
- private ReplicaInfo buildFinalizedReplica() throws IllegalArgumentException {
+ private LocalReplica buildFinalizedReplica() throws IllegalArgumentException {
if (null != fromReplica &&
fromReplica.getState() == ReplicaState.FINALIZED) {
return new FinalizedReplica((FinalizedReplica)fromReplica);
@@ -193,7 +226,7 @@ public class ReplicaBuilder {
}
}
- private ReplicaInfo buildRWR() throws IllegalArgumentException {
+ private LocalReplica buildRWR() throws IllegalArgumentException {
if (null != fromReplica && fromReplica.getState() == ReplicaState.RWR) {
return new ReplicaWaitingToBeRecovered(
@@ -211,7 +244,7 @@ public class ReplicaBuilder {
}
}
- private ReplicaInfo buildRUR() throws IllegalArgumentException {
+ private LocalReplica buildRUR() throws IllegalArgumentException {
if (null == fromReplica) {
throw new IllegalArgumentException(
"Missing a valid replica to recover from");
@@ -228,8 +261,53 @@ public class ReplicaBuilder {
}
}
- public ReplicaInfo build() throws IllegalArgumentException {
- ReplicaInfo info = null;
+ private ProvidedReplica buildProvidedFinalizedReplica()
+ throws IllegalArgumentException {
+ ProvidedReplica info = null;
+ if (fromReplica != null) {
+ throw new IllegalArgumentException("Finalized PROVIDED replica " +
+ "cannot be constructed from another replica");
+ }
+ if (fileRegion == null && uri == null) {
+ throw new IllegalArgumentException(
+ "Trying to construct a provided replica on " + volume +
+ " without enough information");
+ }
+ if (fileRegion == null) {
+ info = new FinalizedProvidedReplica(blockId, uri, offset,
+ length, genStamp, volume, conf);
+ } else {
+ info = new FinalizedProvidedReplica(fileRegion.getBlock().getBlockId(),
+ fileRegion.getPath().toUri(),
+ fileRegion.getOffset(),
+ fileRegion.getBlock().getNumBytes(),
+ fileRegion.getBlock().getGenerationStamp(),
+ volume, conf);
+ }
+ return info;
+ }
+
+ private ProvidedReplica buildProvidedReplica()
+ throws IllegalArgumentException {
+ ProvidedReplica info = null;
+ switch(this.state) {
+ case FINALIZED:
+ info = buildProvidedFinalizedReplica();
+ break;
+ case RWR:
+ case RUR:
+ case RBW:
+ case TEMPORARY:
+ default:
+ throw new IllegalArgumentException("Unknown replica state " +
+ state + " for PROVIDED replica");
+ }
+ return info;
+ }
+
+ private LocalReplica buildLocalReplica()
+ throws IllegalArgumentException {
+ LocalReplica info = null;
switch(this.state) {
case FINALIZED:
info = buildFinalizedReplica();
@@ -249,4 +327,16 @@ public class ReplicaBuilder {
}
return info;
}
+
+ public ReplicaInfo build() throws IllegalArgumentException {
+
+ ReplicaInfo info = null;
+ if(volume != null && volume.getStorageType() == StorageType.PROVIDED) {
+ info = buildProvidedReplica();
+ } else {
+ info = buildLocalReplica();
+ }
+
+ return info;
+ }
}
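The new build() above dispatches on the volume's storage type: PROVIDED volumes get a ProvidedReplica, everything else a LocalReplica. A minimal sketch of that dispatch shape (illustrative names, not the HDFS classes):

```java
/** Sketch of ReplicaBuilder#build()'s dispatch: one builder, two
 *  product families selected by a storage-type discriminator.
 *  Names are illustrative, not the HDFS classes. */
public class BuilderDispatchDemo {

  enum StorageKind { LOCAL, PROVIDED }

  interface Replica {
    String describe();
  }

  static final class LocalReplica implements Replica {
    public String describe() { return "local"; }
  }

  static final class ProvidedReplica implements Replica {
    public String describe() { return "provided"; }
  }

  static final class ReplicaBuilder {
    private StorageKind kind = StorageKind.LOCAL;

    ReplicaBuilder setKind(StorageKind k) {
      this.kind = k;
      return this;
    }

    /** Mirror of build(): branch once on the storage kind, then let
     *  the per-family builder handle state-specific construction. */
    Replica build() {
      return kind == StorageKind.PROVIDED
          ? new ProvidedReplica()
          : new LocalReplica();
    }
  }

  public static void main(String[] args) {
    Replica r = new ReplicaBuilder().setKind(StorageKind.PROVIDED).build();
    if (!r.describe().equals("provided")) {
      throw new AssertionError("dispatch chose the wrong family");
    }
  }
}
```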
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
index 65e9ba7..3718799 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInfo.java
@@ -50,6 +50,17 @@ abstract public class ReplicaInfo extends Block
new FileIoProvider(null, null);
/**
+ * Constructor.
+ * @param block a block
+ * @param vol volume where replica is located
+ */
+ ReplicaInfo(Block block, FsVolumeSpi vol) {
+ this(vol, block.getBlockId(), block.getNumBytes(),
+ block.getGenerationStamp());
+ }
+
+ /**
* Constructor
* @param vol volume where replica is located
* @param blockId block id
@@ -62,7 +73,14 @@ abstract public class ReplicaInfo extends Block
}
/**
- * Get the volume where this replica is located on disk.
+ * Copy constructor.
+ * @param from where to copy from
+ */
+ ReplicaInfo(ReplicaInfo from) {
+ this(from, from.getVolume());
+ }
+
+ /**
* @return the volume where this replica is located on disk
*/
public FsVolumeSpi getVolume() {
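The two constructors added above follow a common delegation pattern: every overload — the block-unpacking constructor and the copy constructor — funnels into one canonical constructor, so field initialization lives in a single place. A self-contained sketch under assumed, simplified field names (not the HDFS ones):

```java
// Sketch of constructor delegation as used by the new ReplicaInfo constructors.
// Field names and the long[] "block" encoding are illustrative assumptions.
public class ReplicaSketch {
    private final long blockId;
    private final long numBytes;
    private final long genStamp;

    // Canonical constructor: all others delegate here.
    ReplicaSketch(long blockId, long numBytes, long genStamp) {
        this.blockId = blockId;
        this.numBytes = numBytes;
        this.genStamp = genStamp;
    }

    // Convenience constructor unpacking a "block" {id, bytes, genStamp},
    // analogous to ReplicaInfo(Block, FsVolumeSpi).
    ReplicaSketch(long[] block) {
        this(block[0], block[1], block[2]);
    }

    // Copy constructor, analogous to ReplicaInfo(ReplicaInfo).
    ReplicaSketch(ReplicaSketch from) {
        this(from.blockId, from.numBytes, from.genStamp);
    }

    public long getBlockId() { return blockId; }
    public long getNumBytes() { return numBytes; }
    public long getGenStamp() { return genStamp; }
}
```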
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
index b4d5794..fb7acfd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
@@ -98,6 +98,16 @@ public class StorageLocation
public boolean matchesStorageDirectory(StorageDirectory sd,
String bpid) throws IOException {
+ if (sd.getStorageLocation().getStorageType() == StorageType.PROVIDED &&
+ storageType == StorageType.PROVIDED) {
+ return matchesStorageDirectory(sd);
+ }
+ if (sd.getStorageLocation().getStorageType() == StorageType.PROVIDED ||
+ storageType == StorageType.PROVIDED) {
+ //only one of these is PROVIDED; so it cannot be a match!
+ return false;
+ }
+ //both storage directories are local
return this.getBpURI(bpid, Storage.STORAGE_DIR_CURRENT).normalize()
.equals(sd.getRoot().toURI().normalize());
}
@@ -197,6 +207,10 @@ public class StorageLocation
if (conf == null) {
conf = new HdfsConfiguration();
}
+ if (storageType == StorageType.PROVIDED) {
+ //skip creation if the storage type is PROVIDED
+ return;
+ }
LocalFileSystem localFS = FileSystem.getLocal(conf);
FsPermission permission = new FsPermission(conf.get(
@@ -213,10 +227,14 @@ public class StorageLocation
@Override // Checkable
public VolumeCheckResult check(CheckContext context) throws IOException {
- DiskChecker.checkDir(
- context.localFileSystem,
- new Path(baseURI),
- context.expectedPermission);
+ //we assume provided storage locations are always healthy,
+ //and check only for local storages.
+ if (storageType != StorageType.PROVIDED) {
+ DiskChecker.checkDir(
+ context.localFileSystem,
+ new Path(baseURI),
+ context.expectedPermission);
+ }
return VolumeCheckResult.HEALTHY;
}
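The matching rule introduced above has three cases: two PROVIDED locations are compared by location, a PROVIDED location never matches a local one, and two local locations are compared by normalized URI. A hedged truth-table sketch (types and the string-based URI comparison are simplifications of the StorageLocation code):

```java
// Sketch of the PROVIDED-aware matchesStorageDirectory() logic above.
// Plain strings stand in for the normalized URIs used by StorageLocation.
public class StorageMatchSketch {
    public enum StorageType { DISK, PROVIDED }

    public static boolean matches(StorageType a, String uriA,
                                  StorageType b, String uriB) {
        if (a == StorageType.PROVIDED && b == StorageType.PROVIDED) {
            // both PROVIDED: stand-in for matchesStorageDirectory(sd)
            return uriA.equals(uriB);
        }
        if (a == StorageType.PROVIDED || b == StorageType.PROVIDED) {
            // only one of these is PROVIDED; so it cannot be a match
            return false;
        }
        // both storage directories are local: compare normalized URIs
        return uriA.equals(uriB);
    }
}
```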
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
index 7be42e8..f4bf839 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
@@ -51,6 +51,7 @@ import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;
import org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException;
import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
import org.apache.hadoop.hdfs.server.datanode.UnexpectedReplicaStateException;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi.ScanInfo;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory;
import org.apache.hadoop.hdfs.server.datanode.metrics.FSDatasetMBean;
import org.apache.hadoop.hdfs.server.protocol.BlockRecoveryCommand.RecoveringBlock;
@@ -252,8 +253,7 @@ public interface FsDatasetSpi<V extends FsVolumeSpi> extends FSDatasetMBean {
* and, in case that they are not matched, update the record or mark it
* as corrupted.
*/
- void checkAndUpdate(String bpid, long blockId, File diskFile,
- File diskMetaFile, FsVolumeSpi vol) throws IOException;
+ void checkAndUpdate(String bpid, ScanInfo info) throws IOException;
/**
* @param b - the block
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[19/50] [abbrv] hadoop git commit: YARN-7589. TestPBImplRecords fails with NullPointerException. Contributed by Daniel Templeton
Posted by vi...@apache.org.
YARN-7589. TestPBImplRecords fails with NullPointerException. Contributed by Daniel Templeton
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/25df5054
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/25df5054
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/25df5054
Branch: refs/heads/HDFS-9806
Commit: 25df5054216a6a76d09d9c49984f8075ebc6a197
Parents: c83fe44
Author: Jason Lowe <jl...@apache.org>
Authored: Fri Dec 1 15:37:36 2017 -0600
Committer: Jason Lowe <jl...@apache.org>
Committed: Fri Dec 1 15:37:36 2017 -0600
----------------------------------------------------------------------
.../org/apache/hadoop/yarn/api/records/Resource.java | 9 ++++++---
.../hadoop/yarn/util/resource/ResourceUtils.java | 13 +++++++++----
2 files changed, 15 insertions(+), 7 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/25df5054/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
index b32955b..304a963 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
@@ -102,9 +102,12 @@ public abstract class Resource implements Comparable<Resource> {
@Stable
public static Resource newInstance(long memory, int vCores,
Map<String, Long> others) {
- ResourceInformation[] info = ResourceUtils.createResourceTypesArray(others);
-
- return new LightWeightResource(memory, vCores, info);
+ if (others != null) {
+ return new LightWeightResource(memory, vCores,
+ ResourceUtils.createResourceTypesArray(others));
+ } else {
+ return newInstance(memory, vCores);
+ }
}
@InterfaceAudience.Private
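The fix above avoids the NullPointerException by checking the map before building the resource array: a null map falls back to the existing two-argument overload instead of being passed on. The same null-safe dispatch, sketched in isolation (a long[] stands in for ResourceInformation[]; names are illustrative):

```java
// Sketch of the null-safe Resource.newInstance() dispatch from YARN-7589.
// ResourceSketch and its long[] "others" payload are hypothetical stand-ins.
public class ResourceSketch {
    public final long memory;
    public final int vCores;
    public final long[] others;

    private ResourceSketch(long memory, int vCores, long[] others) {
        this.memory = memory;
        this.vCores = vCores;
        this.others = others;
    }

    public static ResourceSketch newInstance(long memory, int vCores) {
        return new ResourceSketch(memory, vCores, new long[0]);
    }

    /** Mirrors the fix: a null map falls back to the two-argument overload. */
    public static ResourceSketch newInstance(long memory, int vCores,
                                             long[] others) {
        if (others != null) {
            return new ResourceSketch(memory, vCores, others);
        }
        return newInstance(memory, vCores);
    }
}
```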
http://git-wip-us.apache.org/repos/asf/hadoop/blob/25df5054/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
index 3c6ca98..76ae061 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceUtils.java
@@ -313,15 +313,13 @@ public class ResourceUtils {
}
public static ResourceInformation[] getResourceTypesArray() {
- initializeResourceTypesIfNeeded(null,
- YarnConfiguration.RESOURCE_TYPES_CONFIGURATION_FILE);
+ initializeResourceTypesIfNeeded();
return resourceTypesArray;
}
public static int getNumberOfKnownResourceTypes() {
if (numKnownResourceTypes < 0) {
- initializeResourceTypesIfNeeded(null,
- YarnConfiguration.RESOURCE_TYPES_CONFIGURATION_FILE);
+ initializeResourceTypesIfNeeded();
}
return numKnownResourceTypes;
}
@@ -332,6 +330,11 @@ public class ResourceUtils {
YarnConfiguration.RESOURCE_TYPES_CONFIGURATION_FILE);
}
+ private static void initializeResourceTypesIfNeeded() {
+ initializeResourceTypesIfNeeded(null,
+ YarnConfiguration.RESOURCE_TYPES_CONFIGURATION_FILE);
+ }
+
private static void initializeResourceTypesIfNeeded(Configuration conf,
String resourceFile) {
if (!initializedResources) {
@@ -641,6 +644,8 @@ public class ResourceUtils {
*/
public static ResourceInformation[] createResourceTypesArray(Map<String,
Long> res) {
+ initializeResourceTypesIfNeeded();
+
ResourceInformation[] info = new ResourceInformation[resourceTypes.size()];
for (Entry<String, Integer> entry : RESOURCE_NAME_TO_INDEX.entrySet()) {
[20/50] [abbrv] hadoop git commit: YARN-7455. quote_and_append_arg can overflow buffer. Contributed by Jim Brennan
Posted by vi...@apache.org.
YARN-7455. quote_and_append_arg can overflow buffer. Contributed by Jim Brennan
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/60f95fb7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/60f95fb7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/60f95fb7
Branch: refs/heads/HDFS-9806
Commit: 60f95fb719f00a718b484c36a823ec5aa8b099f4
Parents: 25df505
Author: Jason Lowe <jl...@apache.org>
Authored: Fri Dec 1 15:47:01 2017 -0600
Committer: Jason Lowe <jl...@apache.org>
Committed: Fri Dec 1 15:47:01 2017 -0600
----------------------------------------------------------------------
.../main/native/container-executor/impl/util.c | 25 +--
.../main/native/container-executor/impl/util.h | 3 +-
.../native/container-executor/test/test_util.cc | 160 +++++++++++++++++--
3 files changed, 161 insertions(+), 27 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/60f95fb7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c
index a9539cf..eea3e10 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.c
@@ -21,6 +21,7 @@
#include <string.h>
#include <ctype.h>
#include <regex.h>
+#include <stdio.h>
char** split_delimiter(char *value, const char *delim) {
char **return_values = NULL;
@@ -176,17 +177,19 @@ char* escape_single_quote(const char *str) {
void quote_and_append_arg(char **str, size_t *size, const char* param, const char *arg) {
char *tmp = escape_single_quote(arg);
- int alloc_block = 1024;
- strcat(*str, param);
- strcat(*str, "'");
- if (strlen(*str) + strlen(tmp) > *size) {
- *size = (strlen(*str) + strlen(tmp) + alloc_block) * sizeof(char);
- *str = (char *) realloc(*str, *size);
- if (*str == NULL) {
- exit(OUT_OF_MEMORY);
- }
+ const char *append_format = "%s'%s' ";
+ size_t append_size = snprintf(NULL, 0, append_format, param, tmp);
+ append_size += 1; // for the terminating NUL
+ size_t len_str = strlen(*str);
+ size_t new_size = len_str + append_size;
+ if (new_size > *size) {
+ *size = new_size + QUOTE_AND_APPEND_ARG_GROWTH;
+ *str = (char *) realloc(*str, *size);
+ if (*str == NULL) {
+ exit(OUT_OF_MEMORY);
+ }
}
- strcat(*str, tmp);
- strcat(*str, "' ");
+ char *cur_ptr = *str + len_str;
+ sprintf(cur_ptr, append_format, param, tmp);
free(tmp);
}
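The fix above replaces unchecked strcat calls with a measure-first approach: snprintf(NULL, 0, ...) computes the exact formatted length (including the quoting), the buffer is grown by a fixed margin if the result would not fit, and only then is the text written. The quoting, escaping, and sizing rules can be modeled in a memory-safe language for clarity — this Java sketch mirrors the patch's logic but is not the C code:

```java
// Java model of the fixed quote_and_append_arg()/escape_single_quote() logic.
public class QuoteAppendSketch {
    static final int GROWTH = 1024; // mirrors QUOTE_AND_APPEND_ARG_GROWTH

    /** Mirrors escape_single_quote(): ' becomes '"'"' for POSIX shells. */
    public static String escapeSingleQuote(String s) {
        return s.replace("'", "'\"'\"'");
    }

    /** Mirrors the append format "%s'%s' ": param, quoted arg, trailing space. */
    public static String quoteAndAppend(String str, String param, String arg) {
        return str + param + "'" + escapeSingleQuote(arg) + "' ";
    }

    /**
     * The buffer-sizing rule from the patch: if the appended result plus the
     * terminating NUL no longer fits, grow to the needed size plus GROWTH.
     */
    public static int newCapacity(int currentCapacity, String appendedResult) {
        int needed = appendedResult.length() + 1; // +1 for the terminating NUL
        if (needed > currentCapacity) {
            return needed + GROWTH;
        }
        return currentCapacity; // fits in the original buffer
    }
}
```

Computing the full length before any write is what closes the overflow: the old code concatenated param and the opening quote into the buffer before checking the size.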
http://git-wip-us.apache.org/repos/asf/hadoop/blob/60f95fb7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h
index 8758d90..ed9fba8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h
@@ -141,12 +141,13 @@ char* escape_single_quote(const char *str);
/**
* Helper function to quote the argument to a parameter and then append it to the provided string.
- * @param str Buffer to which the param='arg' string must be appended
+ * @param str Buffer to which the param'arg' string must be appended
* @param size Size of the buffer
* @param param Param name
* @param arg Argument to be quoted
*/
void quote_and_append_arg(char **str, size_t *size, const char* param, const char *arg);
+#define QUOTE_AND_APPEND_ARG_GROWTH (1024) // how much to increase str buffer by if needed
/**
* Helper function to allocate and clear a block of memory. It'll call exit if the allocation fails.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/60f95fb7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc
index b96dea1..8cdbf2f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc
@@ -149,25 +149,155 @@ namespace ContainerExecutor {
}
}
+ /**
+ * Internal function for testing quote_and_append_arg()
+ */
+ void test_quote_and_append_arg_function_internal(char **str, size_t *size, const char* param, const char *arg, const char *expected_result) {
+ const size_t alloc_block_size = QUOTE_AND_APPEND_ARG_GROWTH;
+ size_t orig_size = *size;
+ size_t expected_size = strlen(expected_result) + 1;
+ if (expected_size > orig_size) {
+ expected_size += alloc_block_size;
+ } else {
+ expected_size = orig_size; // fits in original string
+ }
+ quote_and_append_arg(str, size, param, arg);
+ ASSERT_STREQ(*str, expected_result);
+ ASSERT_EQ(*size, expected_size);
+ return;
+ }
+
TEST_F(TestUtil, test_quote_and_append_arg) {
+ size_t str_real_size = 32;
+
+ // Simple test - size = 32, result = 16
+ size_t str_size = str_real_size;
+ char *str = (char *) malloc(str_real_size);
+ memset(str, 0, str_real_size);
+ strcpy(str, "ssss");
+ const char *param = "pppp";
+ const char *arg = "aaaa";
+ const char *expected_result = "sssspppp'aaaa' ";
+ test_quote_and_append_arg_function_internal(&str, &str_size, param, arg, expected_result);
+ free(str);
+
+ // Original test - size = 32, result = 19
+ str_size = str_real_size;
+ str = (char *) malloc(str_real_size);
+ memset(str, 0, str_real_size);
+ param = "param=";
+ arg = "argument1";
+ expected_result = "param='argument1' ";
+ test_quote_and_append_arg_function_internal(&str, &str_size, param, arg, expected_result);
+ free(str);
+
+ // Original test - size = 32 and result = 19
+ str_size = str_real_size;
+ str = (char *) malloc(str_real_size);
+ memset(str, 0, str_real_size);
+ param = "param=";
+ arg = "ab'cd";
+ expected_result = "param='ab'\"'\"'cd' "; // 19 characters
+ test_quote_and_append_arg_function_internal(&str, &str_size, param, arg, expected_result);
+ free(str);
+
+ // Lie about size of buffer so we don't crash from an actual buffer overflow
+ // Original Test - Size = 4 and result = 19
+ str_size = 4;
+ str = (char *) malloc(str_real_size);
+ memset(str, 0, str_real_size);
+ param = "param=";
+ arg = "argument1";
+ expected_result = "param='argument1' "; // 19 characters
+ test_quote_and_append_arg_function_internal(&str, &str_size, param, arg, expected_result);
+ free(str);
+
+ // Size = 8 and result = 7
+ str_size = 8;
+ str = (char *) malloc(str_real_size);
+ memset(str, 0, str_real_size);
+ strcpy(str, "s");
+ param = "p";
+ arg = "a";
+ expected_result = "sp'a' "; // 7 characters
+ test_quote_and_append_arg_function_internal(&str, &str_size, param, arg, expected_result);
+ free(str);
+
+ // Size = 8 and result = 7
+ str_size = 8;
+ str = (char *) malloc(str_real_size);
+ memset(str, 0, str_real_size);
+ strcpy(str, "s");
+ param = "p";
+ arg = "a";
+ expected_result = "sp'a' "; // 7 characters
+ test_quote_and_append_arg_function_internal(&str, &str_size, param, arg, expected_result);
+ free(str);
+
+ // Size = 8 and result = 8
+ str_size = 8;
+ str = (char *) malloc(str_real_size);
+ memset(str, 0, str_real_size);
+ strcpy(str, "ss");
+ param = "p";
+ arg = "a";
+ expected_result = "ssp'a' "; // 8 characters
+ test_quote_and_append_arg_function_internal(&str, &str_size, param, arg, expected_result);
+ free(str);
+
+ // size = 8, result = 9 (should grow buffer)
+ str_size = 8;
+ str = (char *) malloc(str_real_size);
+ memset(str, 0, str_real_size);
+ strcpy(str, "ss");
+ param = "pp";
+ arg = "a";
+ expected_result = "sspp'a' "; // 9 characters
+ test_quote_and_append_arg_function_internal(&str, &str_size, param, arg, expected_result);
+ free(str);
- char *tmp = static_cast<char *>(malloc(4096));
- size_t tmp_size = 4096;
+ // size = 8, result = 10 (should grow buffer)
+ str_size = 8;
+ str = (char *) malloc(str_real_size);
+ memset(str, 0, str_real_size);
+ strcpy(str, "ss");
+ param = "pp";
+ arg = "aa";
+ expected_result = "sspp'aa' "; // 10 characters
+ test_quote_and_append_arg_function_internal(&str, &str_size, param, arg, expected_result);
+ free(str);
- memset(tmp, 0, tmp_size);
- quote_and_append_arg(&tmp, &tmp_size, "param=", "argument1");
- ASSERT_STREQ("param='argument1' ", tmp);
+ // size = 8, result = 11 (should grow buffer)
+ str_size = 8;
+ str = (char *) malloc(str_real_size);
+ memset(str, 0, str_real_size);
+ strcpy(str, "sss");
+ param = "pp";
+ arg = "aa";
+ expected_result = "ssspp'aa' "; // 11 characters
+ test_quote_and_append_arg_function_internal(&str, &str_size, param, arg, expected_result);
+ free(str);
- memset(tmp, 0, tmp_size);
- quote_and_append_arg(&tmp, &tmp_size, "param=", "ab'cd");
- ASSERT_STREQ("param='ab'\"'\"'cd' ", tmp);
- free(tmp);
+ // One with quotes - size = 32, result = 17
+ str_size = 32;
+ str = (char *) malloc(str_real_size);
+ memset(str, 0, str_real_size);
+ strcpy(str, "s");
+ param = "p";
+ arg = "'a'";
+ expected_result = "sp''\"'\"'a'\"'\"'' "; // 17 characters
+ test_quote_and_append_arg_function_internal(&str, &str_size, param, arg, expected_result);
+ free(str);
- tmp = static_cast<char *>(malloc(4));
- tmp_size = 4;
- memset(tmp, 0, tmp_size);
- quote_and_append_arg(&tmp, &tmp_size, "param=", "argument1");
- ASSERT_STREQ("param='argument1' ", tmp);
- ASSERT_EQ(1040, tmp_size);
+ // One with quotes - size = 16, result = 17
+ str_size = 16;
+ str = (char *) malloc(str_real_size);
+ memset(str, 0, str_real_size);
+ strcpy(str, "s");
+ param = "p";
+ arg = "'a'";
+ expected_result = "sp''\"'\"'a'\"'\"'' "; // 17 characters
+ test_quote_and_append_arg_function_internal(&str, &str_size, param, arg, expected_result);
+ free(str);
}
}
[34/50] [abbrv] hadoop git commit: HDFS-11902. [READ] Merge BlockFormatProvider and FileRegionProvider.
Posted by vi...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
index 8782e71..40d77f7a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
@@ -52,11 +52,12 @@ import org.apache.hadoop.fs.FileSystemTestHelper;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;
import org.apache.hadoop.hdfs.server.common.FileRegion;
-import org.apache.hadoop.hdfs.server.common.FileRegionProvider;
import org.apache.hadoop.hdfs.server.common.Storage;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
import org.apache.hadoop.hdfs.server.datanode.BlockScanner;
import org.apache.hadoop.hdfs.server.datanode.DNConf;
import org.apache.hadoop.hdfs.server.datanode.DataNode;
@@ -168,49 +169,66 @@ public class TestProvidedImpl {
}
/**
- * A simple FileRegion provider for tests.
+ * A simple FileRegion BlockAliasMap for tests.
*/
- public static class TestFileRegionProvider
- extends FileRegionProvider implements Configurable {
+ public static class TestFileRegionBlockAliasMap
+ extends BlockAliasMap<FileRegion> {
private Configuration conf;
private int minId;
private int numBlocks;
private Iterator<FileRegion> suppliedIterator;
- TestFileRegionProvider() {
+ TestFileRegionBlockAliasMap() {
this(null, MIN_BLK_ID, NUM_PROVIDED_BLKS);
}
- TestFileRegionProvider(Iterator<FileRegion> iterator, int minId,
- int numBlocks) {
+ TestFileRegionBlockAliasMap(Iterator<FileRegion> iterator, int minId,
+ int numBlocks) {
this.suppliedIterator = iterator;
this.minId = minId;
this.numBlocks = numBlocks;
}
@Override
- public Iterator<FileRegion> iterator() {
- if (suppliedIterator == null) {
- return new TestFileRegionIterator(providedBasePath, minId, numBlocks);
- } else {
- return suppliedIterator;
- }
- }
+ public Reader<FileRegion> getReader(Reader.Options opts)
+ throws IOException {
+
+ BlockAliasMap.Reader<FileRegion> reader =
+ new BlockAliasMap.Reader<FileRegion>() {
+ @Override
+ public Iterator<FileRegion> iterator() {
+ if (suppliedIterator == null) {
+ return new TestFileRegionIterator(providedBasePath, minId,
+ numBlocks);
+ } else {
+ return suppliedIterator;
+ }
+ }
- @Override
- public void setConf(Configuration conf) {
- this.conf = conf;
+ @Override
+ public void close() throws IOException {
+
+ }
+
+ @Override
+ public FileRegion resolve(Block ident) throws IOException {
+ return null;
+ }
+ };
+ return reader;
}
@Override
- public Configuration getConf() {
- return conf;
+ public Writer<FileRegion> getWriter(Writer.Options opts)
+ throws IOException {
+ // not implemented
+ return null;
}
@Override
- public void refresh() {
- //do nothing!
+ public void refresh() throws IOException {
+ // do nothing!
}
public void setMinBlkId(int minId) {
@@ -359,8 +377,8 @@ public class TestProvidedImpl {
new ShortCircuitRegistry(conf);
when(datanode.getShortCircuitRegistry()).thenReturn(shortCircuitRegistry);
- conf.setClass(DFSConfigKeys.DFS_PROVIDER_CLASS,
- TestFileRegionProvider.class, FileRegionProvider.class);
+ this.conf.setClass(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_CLASS,
+ TestFileRegionBlockAliasMap.class, BlockAliasMap.class);
conf.setClass(DFSConfigKeys.DFS_PROVIDER_DF_CLASS,
TestProvidedVolumeDF.class, ProvidedVolumeDF.class);
@@ -496,12 +514,13 @@ public class TestProvidedImpl {
conf.setInt(DFSConfigKeys.DFS_DATANODE_DIRECTORYSCAN_THREADS_KEY, 1);
for (int i = 0; i < providedVolumes.size(); i++) {
ProvidedVolumeImpl vol = (ProvidedVolumeImpl) providedVolumes.get(i);
- TestFileRegionProvider provider = (TestFileRegionProvider)
- vol.getFileRegionProvider(BLOCK_POOL_IDS[CHOSEN_BP_ID]);
+ TestFileRegionBlockAliasMap testBlockFormat =
+ (TestFileRegionBlockAliasMap) vol
+ .getBlockFormat(BLOCK_POOL_IDS[CHOSEN_BP_ID]);
//equivalent to two new blocks appearing
- provider.setBlockCount(NUM_PROVIDED_BLKS + 2);
+ testBlockFormat.setBlockCount(NUM_PROVIDED_BLKS + 2);
//equivalent to deleting the first block
- provider.setMinBlkId(MIN_BLK_ID + 1);
+ testBlockFormat.setMinBlkId(MIN_BLK_ID + 1);
DirectoryScanner scanner = new DirectoryScanner(datanode, dataset, conf);
scanner.reconcile();
@@ -525,7 +544,7 @@ public class TestProvidedImpl {
for (int i = 0; i < providedVolumes.size(); i++) {
ProvidedVolumeImpl vol = (ProvidedVolumeImpl) providedVolumes.get(i);
vol.setFileRegionProvider(BLOCK_POOL_IDS[CHOSEN_BP_ID],
- new TestFileRegionProvider(fileRegionIterator, minBlockId,
+ new TestFileRegionBlockAliasMap(fileRegionIterator, minBlockId,
numBlocks));
ReplicaMap volumeMap = new ReplicaMap(new AutoCloseableLock());
vol.getVolumeMap(BLOCK_POOL_IDS[CHOSEN_BP_ID], volumeMap, null);
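The test map above adapts the old iterator-based provider to the new BlockAliasMap.Reader interface with an anonymous class: if the test supplied an iterator, getReader() hands it back; otherwise it generates a default sequence. A reduced sketch of that adapter pattern — the Reader interface shape here is a simplified assumption, not the HDFS one:

```java
import java.util.Iterator;

// Sketch of the anonymous-Reader adapter used by TestFileRegionBlockAliasMap.
public class AliasMapReaderSketch {
    /** Minimal stand-in for BlockAliasMap.Reader: iterable plus resolve(). */
    public interface Reader<T> extends Iterable<T> {
        T resolve(long id);
    }

    /**
     * If a test supplied an iterator, hand it back; otherwise generate the
     * default block-id sequence [minId, minId + numBlocks).
     */
    public static Reader<Long> getReader(Iterator<Long> supplied,
                                         long minId, int numBlocks) {
        return new Reader<Long>() {
            @Override
            public Iterator<Long> iterator() {
                if (supplied == null) {
                    return java.util.stream.LongStream
                        .range(minId, minId + numBlocks).boxed().iterator();
                }
                return supplied;
            }

            @Override
            public Long resolve(long id) {
                return null; // not implemented, as in the test map above
            }
        };
    }
}
```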
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
index e1e85c1..2e57c9f 100644
--- a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
@@ -29,7 +29,7 @@ import org.apache.commons.cli.PosixParser;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hdfs.server.common.BlockFormat;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
import org.apache.hadoop.util.ReflectionUtils;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
@@ -103,7 +103,7 @@ public class FileSystemImage implements Tool {
break;
case "b":
opts.blocks(
- Class.forName(o.getValue()).asSubclass(BlockFormat.class));
+ Class.forName(o.getValue()).asSubclass(BlockAliasMap.class));
break;
case "i":
opts.blockIds(
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
index a3603a1..ea1888a 100644
--- a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
@@ -44,8 +44,8 @@ import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSConfigKeys;
-import org.apache.hadoop.hdfs.server.common.BlockFormat;
import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
import org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf.SectionName;
import org.apache.hadoop.hdfs.server.namenode.FsImageProto.CacheManagerSection;
import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary;
@@ -88,7 +88,7 @@ public class ImageWriter implements Closeable {
private final long startBlock;
private final long startInode;
private final UGIResolver ugis;
- private final BlockFormat.Writer<FileRegion> blocks;
+ private final BlockAliasMap.Writer<FileRegion> blocks;
private final BlockResolver blockIds;
private final Map<Long, DirEntry.Builder> dircache;
private final TrackedOutputStream<DigestOutputStream> raw;
@@ -155,8 +155,8 @@ public class ImageWriter implements Closeable {
ugis = null == opts.ugis
? ReflectionUtils.newInstance(opts.ugisClass, opts.getConf())
: opts.ugis;
- BlockFormat<FileRegion> fmt = null == opts.blocks
- ? ReflectionUtils.newInstance(opts.blockFormatClass, opts.getConf())
+ BlockAliasMap<FileRegion> fmt = null == opts.blocks
+ ? ReflectionUtils.newInstance(opts.aliasMap, opts.getConf())
: opts.blocks;
blocks = fmt.getWriter(null);
blockIds = null == opts.blockIds
@@ -509,10 +509,10 @@ public class ImageWriter implements Closeable {
private long startInode;
private UGIResolver ugis;
private Class<? extends UGIResolver> ugisClass;
- private BlockFormat<FileRegion> blocks;
+ private BlockAliasMap<FileRegion> blocks;
@SuppressWarnings("rawtypes")
- private Class<? extends BlockFormat> blockFormatClass;
+ private Class<? extends BlockAliasMap> aliasMap;
private BlockResolver blockIds;
private Class<? extends BlockResolver> blockIdsClass;
private FSImageCompression compress =
@@ -524,7 +524,6 @@ public class ImageWriter implements Closeable {
@Override
public void setConf(Configuration conf) {
this.conf = conf;
- //long lastTxn = conf.getLong(LAST_TXN, 0L);
String def = new File("hdfs/name").toURI().toString();
outdir = new Path(conf.get(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, def));
startBlock = conf.getLong(FixedBlockResolver.START_BLOCK, (1L << 30) + 1);
@@ -532,9 +531,9 @@ public class ImageWriter implements Closeable {
maxdircache = conf.getInt(CACHE_ENTRY, 100);
ugisClass = conf.getClass(UGI_CLASS,
SingleUGIResolver.class, UGIResolver.class);
- blockFormatClass = conf.getClass(
- DFSConfigKeys.DFS_PROVIDER_BLK_FORMAT_CLASS,
- NullBlockFormat.class, BlockFormat.class);
+ aliasMap = conf.getClass(
+ DFSConfigKeys.DFS_PROVIDED_ALIASMAP_CLASS,
+ NullBlockAliasMap.class, BlockAliasMap.class);
blockIdsClass = conf.getClass(BLOCK_RESOLVER_CLASS,
FixedBlockResolver.class, BlockResolver.class);
}
@@ -584,14 +583,14 @@ public class ImageWriter implements Closeable {
return this;
}
- public Options blocks(BlockFormat<FileRegion> blocks) {
+ public Options blocks(BlockAliasMap<FileRegion> blocks) {
this.blocks = blocks;
return this;
}
@SuppressWarnings("rawtypes")
- public Options blocks(Class<? extends BlockFormat> blocksClass) {
- this.blockFormatClass = blocksClass;
+ public Options blocks(Class<? extends BlockAliasMap> blocksClass) {
+ this.aliasMap = blocksClass;
return this;
}
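ImageWriter falls back to NullBlockAliasMap when no alias map class is configured — an instance of the Null Object pattern: rather than handing callers a null and forcing null checks, the default is a real object whose operations simply do nothing. A generic sketch under assumed, simplified interfaces:

```java
import java.util.Collections;
import java.util.Iterator;

// Sketch of the Null Object pattern behind NullBlockAliasMap.
// The AliasMap interface here is an illustrative stand-in.
public class NullObjectSketch {
    public interface AliasMap<T> {
        Iterator<T> reader();
    }

    /** Null Object: a valid AliasMap that simply has no entries. */
    public static class NullAliasMap<T> implements AliasMap<T> {
        @Override
        public Iterator<T> reader() {
            return Collections.<T>emptyList().iterator();
        }
    }

    /** Callers iterate unconditionally; no null checks needed. */
    public static <T> int count(AliasMap<T> map) {
        int n = 0;
        for (Iterator<T> it = map.reader(); it.hasNext(); it.next()) {
            n++;
        }
        return n;
    }
}
```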
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockAliasMap.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockAliasMap.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockAliasMap.java
new file mode 100644
index 0000000..4cdf473
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockAliasMap.java
@@ -0,0 +1,86 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.NoSuchElementException;
+
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
+
+/**
+ * Null sink for region information emitted from FSImage.
+ */
+public class NullBlockAliasMap extends BlockAliasMap<FileRegion> {
+
+ @Override
+ public Reader<FileRegion> getReader(Reader.Options opts) throws IOException {
+ return new Reader<FileRegion>() {
+ @Override
+ public Iterator<FileRegion> iterator() {
+ return new Iterator<FileRegion>() {
+ @Override
+ public boolean hasNext() {
+ return false;
+ }
+ @Override
+ public FileRegion next() {
+ throw new NoSuchElementException();
+ }
+ @Override
+ public void remove() {
+ throw new UnsupportedOperationException();
+ }
+ };
+ }
+
+ @Override
+ public void close() throws IOException {
+ // do nothing
+ }
+
+ @Override
+ public FileRegion resolve(Block ident) throws IOException {
+ throw new UnsupportedOperationException();
+ }
+ };
+ }
+
+ @Override
+ public Writer<FileRegion> getWriter(Writer.Options opts) throws IOException {
+ return new Writer<FileRegion>() {
+ @Override
+ public void store(FileRegion token) throws IOException {
+ // do nothing
+ }
+
+ @Override
+ public void close() throws IOException {
+ // do nothing
+ }
+ };
+ }
+
+ @Override
+ public void refresh() throws IOException {
+ // do nothing
+ }
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockFormat.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockFormat.java
deleted file mode 100644
index aabdf74..0000000
--- a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockFormat.java
+++ /dev/null
@@ -1,87 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hdfs.server.namenode;
-
-import java.io.IOException;
-import java.util.Iterator;
-import java.util.NoSuchElementException;
-
-import org.apache.hadoop.hdfs.protocol.Block;
-import org.apache.hadoop.hdfs.server.common.BlockFormat;
-import org.apache.hadoop.hdfs.server.common.BlockFormat.Reader.Options;
-import org.apache.hadoop.hdfs.server.common.FileRegion;
-
-/**
- * Null sink for region information emitted from FSImage.
- */
-public class NullBlockFormat extends BlockFormat<FileRegion> {
-
- @Override
- public Reader<FileRegion> getReader(Options opts) throws IOException {
- return new Reader<FileRegion>() {
- @Override
- public Iterator<FileRegion> iterator() {
- return new Iterator<FileRegion>() {
- @Override
- public boolean hasNext() {
- return false;
- }
- @Override
- public FileRegion next() {
- throw new NoSuchElementException();
- }
- @Override
- public void remove() {
- throw new UnsupportedOperationException();
- }
- };
- }
-
- @Override
- public void close() throws IOException {
- // do nothing
- }
-
- @Override
- public FileRegion resolve(Block ident) throws IOException {
- throw new UnsupportedOperationException();
- }
- };
- }
-
- @Override
- public Writer<FileRegion> getWriter(Writer.Options opts) throws IOException {
- return new Writer<FileRegion>() {
- @Override
- public void store(FileRegion token) throws IOException {
- // do nothing
- }
-
- @Override
- public void close() throws IOException {
- // do nothing
- }
- };
- }
-
- @Override
- public void refresh() throws IOException {
- // do nothing
- }
-
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java
index 14e6bed..d327363 100644
--- a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java
@@ -24,8 +24,8 @@ import com.google.protobuf.ByteString;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto;
-import org.apache.hadoop.hdfs.server.common.BlockFormat;
import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
import org.apache.hadoop.hdfs.server.namenode.FsImageProto.INodeSection.INode;
import org.apache.hadoop.hdfs.server.namenode.FsImageProto.INodeSection.INodeDirectory;
import org.apache.hadoop.hdfs.server.namenode.FsImageProto.INodeSection.INodeFile;
@@ -70,7 +70,7 @@ public class TreePath {
}
public INode toINode(UGIResolver ugi, BlockResolver blk,
- BlockFormat.Writer<FileRegion> out, String blockPoolID)
+ BlockAliasMap.Writer<FileRegion> out, String blockPoolID)
throws IOException {
if (stat.isFile()) {
return toFile(ugi, blk, out, blockPoolID);
@@ -101,14 +101,14 @@ public class TreePath {
void writeBlock(long blockId, long offset, long length,
long genStamp, String blockPoolID,
- BlockFormat.Writer<FileRegion> out) throws IOException {
+ BlockAliasMap.Writer<FileRegion> out) throws IOException {
FileStatus s = getFileStatus();
out.store(new FileRegion(blockId, s.getPath(), offset, length,
blockPoolID, genStamp));
}
INode toFile(UGIResolver ugi, BlockResolver blk,
- BlockFormat.Writer<FileRegion> out, String blockPoolID)
+ BlockAliasMap.Writer<FileRegion> out, String blockPoolID)
throws IOException {
final FileStatus s = getFileStatus();
// TODO should this store resolver's user/group?
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index d622b9e..2170baa 100644
--- a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -44,13 +44,9 @@ import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
-import org.apache.hadoop.hdfs.server.blockmanagement.BlockFormatProvider;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerTestUtil;
-import org.apache.hadoop.hdfs.server.blockmanagement.BlockProvider;
-import org.apache.hadoop.hdfs.server.common.BlockFormat;
-import org.apache.hadoop.hdfs.server.common.FileRegionProvider;
-import org.apache.hadoop.hdfs.server.common.TextFileRegionFormat;
-import org.apache.hadoop.hdfs.server.common.TextFileRegionProvider;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap;
import org.apache.hadoop.hdfs.server.datanode.DataNode;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY;
@@ -103,18 +99,13 @@ public class TestNameNodeProvidedImplementation {
DFSConfigKeys.DFS_PROVIDER_STORAGEUUID_DEFAULT);
conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_PROVIDED_ENABLED, true);
- conf.setClass(DFSConfigKeys.DFS_NAMENODE_BLOCK_PROVIDER_CLASS,
- BlockFormatProvider.class, BlockProvider.class);
- conf.setClass(DFSConfigKeys.DFS_PROVIDER_CLASS,
- TextFileRegionProvider.class, FileRegionProvider.class);
- conf.setClass(DFSConfigKeys.DFS_PROVIDER_BLK_FORMAT_CLASS,
- TextFileRegionFormat.class, BlockFormat.class);
-
- conf.set(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_WRITE_PATH,
+ conf.setClass(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_CLASS,
+ TextFileRegionAliasMap.class, BlockAliasMap.class);
+ conf.set(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_WRITE_PATH,
BLOCKFILE.toString());
- conf.set(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_READ_PATH,
+ conf.set(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_READ_PATH,
BLOCKFILE.toString());
- conf.set(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_DELIMITER, ",");
+ conf.set(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_DELIMITER, ",");
conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR_PROVIDED,
new File(NAMEPATH.toUri()).toString());
@@ -167,7 +158,7 @@ public class TestNameNodeProvidedImplementation {
ImageWriter.Options opts = ImageWriter.defaults();
opts.setConf(conf);
opts.output(out.toString())
- .blocks(TextFileRegionFormat.class)
+ .blocks(TextFileRegionAliasMap.class)
.blockIds(blockIdsClass);
try (ImageWriter w = new ImageWriter(opts)) {
for (TreePath e : t) {
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
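The NullBlockAliasMap introduced above is a classic null-object: every reader it hands out has an empty iterator, and resolution fails fast. The same shape can be sketched with only the JDK (class and method names here are illustrative, not Hadoop APIs):

```java
import java.util.Collections;
import java.util.Iterator;

/**
 * Minimal sketch of the null-object pattern used by NullBlockAliasMap:
 * a reader whose iterator yields nothing and whose lookups fail fast,
 * so callers need no special-casing when no alias map is configured.
 */
class EmptyReader<T> implements Iterable<T> {

    @Override
    public Iterator<T> iterator() {
        // hasNext() is always false; next() throws NoSuchElementException
        return Collections.emptyIterator();
    }

    public T resolve(Object ident) {
        // A null sink has nothing to resolve against.
        throw new UnsupportedOperationException(
            "null sink cannot resolve " + ident);
    }
}
```

Callers iterate the reader exactly as they would a populated one; the empty case simply produces zero iterations instead of a null check.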
[18/50] [abbrv] hadoop git commit: YARN-4813.
TestRMWebServicesDelegationTokenAuthentication.testDoAs fails intermittently
(grepas via rkanter)
Posted by vi...@apache.org.
YARN-4813. TestRMWebServicesDelegationTokenAuthentication.testDoAs fails intermittently (grepas via rkanter)
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c83fe449
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c83fe449
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c83fe449
Branch: refs/heads/HDFS-9806
Commit: c83fe4491731c994a4867759d80db31d9c1cab60
Parents: 3b78607
Author: Robert Kanter <rk...@apache.org>
Authored: Fri Dec 1 12:18:13 2017 -0800
Committer: Robert Kanter <rk...@apache.org>
Committed: Fri Dec 1 12:18:13 2017 -0800
----------------------------------------------------------------------
...stRMWebServicesDelegationTokenAuthentication.java | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c83fe449/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesDelegationTokenAuthentication.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesDelegationTokenAuthentication.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesDelegationTokenAuthentication.java
index b406fdb..41e56ae 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesDelegationTokenAuthentication.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesDelegationTokenAuthentication.java
@@ -76,6 +76,8 @@ public class TestRMWebServicesDelegationTokenAuthentication {
TestRMWebServicesDelegationTokenAuthentication.class.getName() + "-root");
private static File httpSpnegoKeytabFile = new File(
KerberosTestUtils.getKeytabFile());
+ private static final String SUN_SECURITY_KRB5_RCACHE_KEY =
+ "sun.security.krb5.rcache";
private static String httpSpnegoPrincipal = KerberosTestUtils
.getServerPrincipal();
@@ -83,7 +85,7 @@ public class TestRMWebServicesDelegationTokenAuthentication {
private static boolean miniKDCStarted = false;
private static MiniKdc testMiniKDC;
private static MockRM rm;
-
+ private static String sunSecurityKrb5RcacheValue;
String delegationTokenHeader;
@@ -98,6 +100,11 @@ public class TestRMWebServicesDelegationTokenAuthentication {
@BeforeClass
public static void setUp() {
try {
+ // Disabling kerberos replay cache to avoid "Request is a replay" errors
+ // caused by frequent webservice calls
+ sunSecurityKrb5RcacheValue =
+ System.getProperty(SUN_SECURITY_KRB5_RCACHE_KEY);
+ System.setProperty(SUN_SECURITY_KRB5_RCACHE_KEY, "none");
testMiniKDC = new MiniKdc(MiniKdc.createConf(), testRootDir);
setupKDC();
setupAndStartRM();
@@ -114,6 +121,12 @@ public class TestRMWebServicesDelegationTokenAuthentication {
if (rm != null) {
rm.stop();
}
+ if (sunSecurityKrb5RcacheValue == null) {
+ System.clearProperty(SUN_SECURITY_KRB5_RCACHE_KEY);
+ } else {
+ System.setProperty(SUN_SECURITY_KRB5_RCACHE_KEY,
+ sunSecurityKrb5RcacheValue);
+ }
}
@Parameterized.Parameters
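The fix above captures the old value of the JVM-wide `sun.security.krb5.rcache` property before overriding it, then restores (or clears) it in tearDown so other tests are unaffected. That save/restore idiom can be sketched as a small guard (a JDK-only illustration, not a class from the patch):

```java
/**
 * Sketch of the save/restore idiom the test applies to
 * sun.security.krb5.rcache: capture the prior value when overriding a
 * system property, and put the JVM back exactly as it was afterwards.
 */
final class SystemPropertyGuard implements AutoCloseable {
    private final String key;
    private final String oldValue;

    SystemPropertyGuard(String key, String newValue) {
        this.key = key;
        this.oldValue = System.getProperty(key); // may be null if unset
        System.setProperty(key, newValue);
    }

    @Override
    public void close() {
        if (oldValue == null) {
            System.clearProperty(key);           // property was unset before
        } else {
            System.setProperty(key, oldValue);   // restore prior value
        }
    }
}
```

The distinction between `clearProperty` and `setProperty(key, oldValue)` matters: restoring a `null` via `setProperty` would throw, which is why the patch keeps the null check in its tearDown.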
[15/50] [abbrv] hadoop git commit: HDFS-12836. startTxId could be
greater than endTxId when tailing in-progress edit log. Contributed by Chao
Sun.
Posted by vi...@apache.org.
HDFS-12836. startTxId could be greater than endTxId when tailing in-progress edit log. Contributed by Chao Sun.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0faf5062
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0faf5062
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0faf5062
Branch: refs/heads/HDFS-9806
Commit: 0faf50624580b86b64a828cdbbb630ae8994e2cd
Parents: 53bbef3
Author: Wei-Chiu Chuang <we...@apache.org>
Authored: Fri Dec 1 12:01:21 2017 -0800
Committer: Wei-Chiu Chuang <we...@apache.org>
Committed: Fri Dec 1 12:01:21 2017 -0800
----------------------------------------------------------------------
.../qjournal/client/QuorumJournalManager.java | 6 ++++++
.../namenode/ha/TestStandbyInProgressTail.java | 19 +++++++++++++++++++
2 files changed, 25 insertions(+)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0faf5062/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java
index d30625b..7dff9b4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java
@@ -498,6 +498,12 @@ public class QuorumJournalManager implements JournalManager {
// than committedTxnId. This ensures the consistency.
if (onlyDurableTxns && inProgressOk) {
endTxId = Math.min(endTxId, committedTxnId);
+ if (endTxId < remoteLog.getStartTxId()) {
+ LOG.warn("Found endTxId (" + endTxId + ") that is less than " +
+ "the startTxId (" + remoteLog.getStartTxId() +
+ ") - setting it to startTxId.");
+ endTxId = remoteLog.getStartTxId();
+ }
}
EditLogInputStream elis = EditLogFileInputStream.fromUrl(
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0faf5062/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyInProgressTail.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyInProgressTail.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyInProgressTail.java
index 9201cda..b1cd037 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyInProgressTail.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyInProgressTail.java
@@ -309,6 +309,25 @@ public class TestStandbyInProgressTail {
assertNotNull(NameNodeAdapter.getFileInfo(nn1, "/test3", true));
}
+ @Test
+ public void testNonUniformConfig() throws Exception {
+ // Test case where some NNs (in this case the active NN) in the cluster
+ // do not have in-progress tailing enabled.
+ Configuration newConf = cluster.getNameNode(0).getConf();
+ newConf.setBoolean(
+ DFSConfigKeys.DFS_HA_TAILEDITS_INPROGRESS_KEY,
+ false);
+ cluster.restartNameNode(0);
+ cluster.transitionToActive(0);
+
+ cluster.getNameNode(0).getRpcServer().mkdirs("/test",
+ FsPermission.createImmutable((short) 0755), true);
+ cluster.getNameNode(0).getRpcServer().rollEdits();
+
+ cluster.getNameNode(1).getNamesystem().getEditLogTailer().doTailEdits();
+ assertNotNull(NameNodeAdapter.getFileInfo(nn1, "/test", true));
+ }
+
/**
* Check that no edits files are present in the given storage dirs.
*/
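The core of the HDFS-12836 fix is a clamp: limiting `endTxId` to `committedTxnId` must never push it below the segment's `startTxId`, which would yield an invalid range where `startTxId > endTxId`. The combined bound can be sketched as (helper name is illustrative):

```java
/**
 * Sketch of the HDFS-12836 bound: cap endTxId at the committed txn id,
 * but never let it fall below the segment's startTxId.
 */
final class TxIdRange {
    static long clampEndTxId(long endTxId, long committedTxnId,
                             long startTxId) {
        long end = Math.min(endTxId, committedTxnId);
        // Equivalent to the patch's warn-and-reset branch.
        return Math.max(end, startTxId);
    }
}
```

With `startTxId=5`: a committed id of 3 clamps the end back up to 5, a committed id of 8 yields 8, and a committed id past the segment leaves `endTxId` untouched.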
[27/50] [abbrv] hadoop git commit: HDFS-11653. [READ] ProvidedReplica
should return an InputStream that is bounded by its length
Posted by vi...@apache.org.
HDFS-11653. [READ] ProvidedReplica should return an InputStream that is bounded by its length
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f63ec953
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f63ec953
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f63ec953
Branch: refs/heads/HDFS-9806
Commit: f63ec953407047644454448d647ad575bede1e85
Parents: 1cc1f21
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Thu May 4 12:43:48 2017 -0700
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:57 2017 -0800
----------------------------------------------------------------------
.../hdfs/server/datanode/ProvidedReplica.java | 5 +-
.../datanode/TestProvidedReplicaImpl.java | 163 +++++++++++++++++++
2 files changed, 167 insertions(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f63ec953/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
index b021ea2..946ab5a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
@@ -22,6 +22,8 @@ import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
+
+import org.apache.commons.io.input.BoundedInputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
@@ -98,7 +100,8 @@ public abstract class ProvidedReplica extends ReplicaInfo {
if (remoteFS != null) {
FSDataInputStream ins = remoteFS.open(new Path(fileURI));
ins.seek(fileOffset + seekOffset);
- return new FSDataInputStream(ins);
+ return new BoundedInputStream(
+ new FSDataInputStream(ins), getBlockDataLength());
} else {
throw new IOException("Remote filesystem for provided replica " + this +
" does not exist");
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f63ec953/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestProvidedReplicaImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestProvidedReplicaImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestProvidedReplicaImpl.java
new file mode 100644
index 0000000..8258c21
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestProvidedReplicaImpl.java
@@ -0,0 +1,163 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.datanode;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.ByteBuffer;
+import java.nio.channels.Channels;
+import java.nio.channels.ReadableByteChannel;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.io.input.BoundedInputStream;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystemTestHelper;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Tests the implementation of {@link ProvidedReplica}.
+ */
+public class TestProvidedReplicaImpl {
+
+ private static final Logger LOG =
+ LoggerFactory.getLogger(TestProvidedReplicaImpl.class);
+ private static final String BASE_DIR =
+ new FileSystemTestHelper().getTestRootDir();
+ private static final String FILE_NAME = "provided-test";
+ //length of the file that is associated with the provided blocks.
+ private static final long FILE_LEN = 128 * 1024 * 10L + 64 * 1024;
+ //length of each provided block.
+ private static final long BLK_LEN = 128 * 1024L;
+
+ private static List<ProvidedReplica> replicas;
+
+ private static void createFileIfNotExists(String baseDir) throws IOException {
+ File newFile = new File(baseDir, FILE_NAME);
+ newFile.getParentFile().mkdirs();
+ if(!newFile.exists()) {
+ newFile.createNewFile();
+ OutputStream writer = new FileOutputStream(newFile.getAbsolutePath());
+ //FILE_LEN is length in bytes.
+ byte[] bytes = new byte[1];
+ bytes[0] = (byte) 0;
+ for(int i=0; i< FILE_LEN; i++) {
+ writer.write(bytes);
+ }
+ writer.flush();
+ writer.close();
+ LOG.info("Created provided file " + newFile +
+ " of length " + newFile.length());
+ }
+ }
+
+ private static void createProvidedReplicas(Configuration conf) {
+ long numReplicas = (long) Math.ceil((double) FILE_LEN/BLK_LEN);
+ File providedFile = new File(BASE_DIR, FILE_NAME);
+ replicas = new ArrayList<ProvidedReplica>();
+
+ LOG.info("Creating " + numReplicas + " provided replicas");
+ for (int i=0; i<numReplicas; i++) {
+ long currentReplicaLength =
+ FILE_LEN >= (i+1)*BLK_LEN ? BLK_LEN : FILE_LEN - i*BLK_LEN;
+ replicas.add(
+ new FinalizedProvidedReplica(i, providedFile.toURI(), i*BLK_LEN,
+ currentReplicaLength, 0, null, conf));
+ }
+ }
+
+ @Before
+ public void setUp() throws IOException {
+ createFileIfNotExists(new File(BASE_DIR).getAbsolutePath());
+ createProvidedReplicas(new Configuration());
+ }
+
+ /**
+ * Checks if {@code ins} matches the provided file from offset
+ * {@code fileOffset} for length {@code dataLength}.
+ * @param file the local file
+ * @param ins input stream to compare against
+ * @param fileOffset offset
+ * @param dataLength length
+ * @throws IOException
+ */
+ private void verifyReplicaContents(File file,
+ InputStream ins, long fileOffset, long dataLength)
+ throws IOException {
+
+ InputStream fileIns = new FileInputStream(file);
+ fileIns.skip(fileOffset);
+
+ try (ReadableByteChannel i =
+ Channels.newChannel(new BoundedInputStream(fileIns, dataLength))) {
+ try (ReadableByteChannel j = Channels.newChannel(ins)) {
+ ByteBuffer ib = ByteBuffer.allocate(4096);
+ ByteBuffer jb = ByteBuffer.allocate(4096);
+ while (true) {
+ int il = i.read(ib);
+ int jl = j.read(jb);
+ if (il < 0 || jl < 0) {
+ assertEquals(il, jl);
+ break;
+ }
+ ib.flip();
+ jb.flip();
+ int cmp = Math.min(ib.remaining(), jb.remaining());
+ for (int k = 0; k < cmp; ++k) {
+ assertEquals(ib.get(), jb.get());
+ }
+ ib.compact();
+ jb.compact();
+ }
+ }
+ }
+ }
+
+ @Test
+ public void testProvidedReplicaRead() throws IOException {
+
+ File providedFile = new File(BASE_DIR, FILE_NAME);
+ for(int i=0; i < replicas.size(); i++) {
+ ProvidedReplica replica = replicas.get(i);
+ //block data should exist!
+ assertTrue(replica.blockDataExists());
+ assertEquals(providedFile.toURI(), replica.getBlockURI());
+ verifyReplicaContents(providedFile, replica.getDataInputStream(0),
+ BLK_LEN*i, replica.getBlockDataLength());
+ }
+ LOG.info("All replica contents verified");
+
+ providedFile.delete();
+ //the block data should no longer be found!
+ for(int i=0; i < replicas.size(); i++) {
+ ProvidedReplica replica = replicas.get(i);
+ assertTrue(!replica.blockDataExists());
+ }
+ }
+
+}
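The patch wraps the remote stream in commons-io's `BoundedInputStream` so a provided replica cannot read past its own length into neighboring blocks of the shared file. A minimal JDK-only stand-in for that wrapper looks like this (a sketch of the idea, not the commons-io class itself):

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * Sketch of the bound HDFS-11653 adds: reads report EOF once `limit`
 * bytes have been consumed, even if the underlying stream (the shared
 * remote file) continues past the replica's region.
 */
class LimitedInputStream extends FilterInputStream {
    private long remaining;

    LimitedInputStream(InputStream in, long limit) {
        super(in);
        this.remaining = limit;
    }

    @Override
    public int read() throws IOException {
        if (remaining <= 0) {
            return -1; // bound reached: report EOF, leave delegate alone
        }
        int b = in.read();
        if (b >= 0) {
            remaining--;
        }
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        if (remaining <= 0) {
            return -1;
        }
        // Never request more than the bytes left in the bound.
        int n = in.read(buf, off, (int) Math.min(len, remaining));
        if (n > 0) {
            remaining -= n;
        }
        return n;
    }
}
```

Since the caller seeks to `fileOffset + seekOffset` first, bounding by `getBlockDataLength()` is exactly what confines each replica to its own byte range.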
[46/50] [abbrv] hadoop git commit: HDFS-12671. [READ] Test NameNode
restarts when PROVIDED is configured
Posted by vi...@apache.org.
HDFS-12671. [READ] Test NameNode restarts when PROVIDED is configured
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f0805c85
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f0805c85
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f0805c85
Branch: refs/heads/HDFS-9806
Commit: f0805c85c80410682b35a215c9dde5f4611ab649
Parents: dacc6bc
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Tue Nov 7 12:54:27 2017 -0800
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:59 2017 -0800
----------------------------------------------------------------------
.../TestNameNodeProvidedImplementation.java | 52 +++++++++++++++-----
1 file changed, 39 insertions(+), 13 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f0805c85/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index aae04be..f0303b5 100644
--- a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -507,16 +507,10 @@ public class TestNameNodeProvidedImplementation {
DataNode providedDatanode = cluster.getDataNodes().get(0);
DFSClient client = new DFSClient(new InetSocketAddress("localhost",
- cluster.getNameNodePort()), cluster.getConfiguration(0));
+ cluster.getNameNodePort()), cluster.getConfiguration(0));
for (int i= 0; i < numFiles; i++) {
- String filename = "/" + filePrefix + i + fileSuffix;
-
- DatanodeInfo[] dnInfos = getAndCheckBlockLocations(client, filename, 1);
- // location should be the provided DN.
- assertTrue(dnInfos[0].getDatanodeUuid()
- .equals(providedDatanode.getDatanodeUuid()));
-
+ verifyFileLocation(i);
// NameNode thinks the datanode is down
BlockManagerTestUtil.noticeDeadDatanode(
cluster.getNameNode(),
@@ -524,12 +518,44 @@ public class TestNameNodeProvidedImplementation {
cluster.waitActive();
cluster.triggerHeartbeats();
Thread.sleep(1000);
+ verifyFileLocation(i);
+ }
+ }
- // should find the block on the 2nd provided datanode.
- dnInfos = getAndCheckBlockLocations(client, filename, 1);
- assertTrue(
- dnInfos[0].getDatanodeUuid()
- .equals(providedDatanode.getDatanodeUuid()));
+ @Test(timeout=30000)
+ public void testNamenodeRestart() throws Exception {
+ createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
+ FixedBlockResolver.class);
+ // 2 Datanodes, 1 PROVIDED and other DISK
+ startCluster(NNDIRPATH, 2, null,
+ new StorageType[][] {
+ {StorageType.PROVIDED},
+ {StorageType.DISK}},
+ false);
+
+ verifyFileLocation(numFiles - 1);
+ cluster.restartNameNodes();
+ cluster.waitActive();
+ verifyFileLocation(numFiles - 1);
+ }
+
+ /**
+ * verify that the specified file has a valid provided location.
+ * @param fileIndex the index of the file to verify.
+ * @throws Exception
+ */
+ private void verifyFileLocation(int fileIndex)
+ throws Exception {
+ DataNode providedDatanode = cluster.getDataNodes().get(0);
+ DFSClient client = new DFSClient(
+ new InetSocketAddress("localhost", cluster.getNameNodePort()),
+ cluster.getConfiguration(0));
+ if (fileIndex <= numFiles && fileIndex >= 0) {
+ String filename = "/" + filePrefix + fileIndex + fileSuffix;
+ DatanodeInfo[] dnInfos = getAndCheckBlockLocations(client, filename, 1);
+ // location should be the provided DN
+ assertEquals(providedDatanode.getDatanodeUuid(),
+ dnInfos[0].getDatanodeUuid());
}
}
}
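The hunk above folds two copies of `assertTrue(dnInfos[0].getDatanodeUuid().equals(...))` into a shared `verifyFileLocation` helper built on `assertEquals`. A minimal, dependency-free sketch of why the `assertEquals` form is preferable on failure (the class and UUID values below are illustrative, not from the Hadoop test):

```java
public class AssertDemo {
    // Build a JUnit-style failure message. assertTrue(a.equals(b)) can only
    // report "expected true", hiding both values from the failure output.
    static String mismatchMessage(Object expected, Object actual) {
        return "expected:<" + expected + "> but was:<" + actual + ">";
    }

    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError(mismatchMessage(expected, actual));
        }
    }

    public static void main(String[] args) {
        try {
            assertEquals("DS-1234", "DS-5678"); // hypothetical datanode UUIDs
        } catch (AssertionError e) {
            // prints both sides of the mismatch, unlike a bare assertTrue
            System.out.println(e.getMessage());
        }
    }
}
```

On a mismatch the error names both UUIDs, which is exactly the debugging information the old `assertTrue(...equals(...))` form threw away.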
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[05/50] [abbrv] hadoop git commit: YARN-7558. yarn logs command fails to get logs for running containers if UI authentication is enabled. Contributed by Xuan Gong.
Posted by vi...@apache.org.
YARN-7558. yarn logs command fails to get logs for running containers if UI authentication is enabled. Contributed by Xuan Gong.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a4094259
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a4094259
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a4094259
Branch: refs/heads/HDFS-9806
Commit: a409425986fc128bb54f656b05373201545f7213
Parents: b1c7654
Author: Junping Du <ju...@apache.org>
Authored: Thu Nov 30 13:47:47 2017 -0800
Committer: Junping Du <ju...@apache.org>
Committed: Thu Nov 30 13:47:47 2017 -0800
----------------------------------------------------------------------
.../apache/hadoop/yarn/client/cli/LogsCLI.java | 41 ++++++++++++++------
1 file changed, 29 insertions(+), 12 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a4094259/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
index 74497ce..6953a4d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
@@ -18,13 +18,25 @@
package org.apache.hadoop.yarn.client.cli;
+import com.google.common.annotations.VisibleForTesting;
+import com.sun.jersey.api.client.Client;
+import com.sun.jersey.api.client.ClientHandlerException;
+import com.sun.jersey.api.client.ClientRequest;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.UniformInterfaceException;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.api.client.filter.ClientFilter;
+import com.sun.jersey.client.urlconnection.HttpURLConnectionFactory;
+import com.sun.jersey.client.urlconnection.URLConnectionClientHandler;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.PrintStream;
import java.net.ConnectException;
+import java.net.HttpURLConnection;
import java.net.SocketException;
import java.net.SocketTimeoutException;
+import java.net.URL;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
@@ -38,9 +50,7 @@ import java.util.Map.Entry;
import java.util.Set;
import java.util.TreeMap;
import java.util.regex.Pattern;
-
import javax.ws.rs.core.MediaType;
-
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.CommandLineParser;
import org.apache.commons.cli.GnuParser;
@@ -57,6 +67,8 @@ import org.apache.hadoop.classification.InterfaceStability.Evolving;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.authentication.client.AuthenticatedURL;
+import org.apache.hadoop.security.authentication.client.AuthenticationException;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.yarn.api.records.ApplicationAttemptReport;
import org.apache.hadoop.yarn.api.records.ApplicationId;
@@ -78,15 +90,6 @@ import org.codehaus.jettison.json.JSONArray;
import org.codehaus.jettison.json.JSONException;
import org.codehaus.jettison.json.JSONObject;
-import com.google.common.annotations.VisibleForTesting;
-import com.sun.jersey.api.client.Client;
-import com.sun.jersey.api.client.ClientHandlerException;
-import com.sun.jersey.api.client.ClientRequest;
-import com.sun.jersey.api.client.ClientResponse;
-import com.sun.jersey.api.client.UniformInterfaceException;
-import com.sun.jersey.api.client.WebResource;
-import com.sun.jersey.api.client.filter.ClientFilter;
-
@Public
@Evolving
public class LogsCLI extends Configured implements Tool {
@@ -132,7 +135,21 @@ public class LogsCLI extends Configured implements Tool {
public int run(String[] args) throws Exception {
try {
yarnClient = createYarnClient();
- webServiceClient = Client.create();
+ webServiceClient = new Client(new URLConnectionClientHandler(
+ new HttpURLConnectionFactory() {
+ @Override
+ public HttpURLConnection getHttpURLConnection(URL url)
+ throws IOException {
+ AuthenticatedURL.Token token = new AuthenticatedURL.Token();
+ HttpURLConnection conn = null;
+ try {
+ conn = new AuthenticatedURL().openConnection(url, token);
+ } catch (AuthenticationException e) {
+ throw new IOException(e);
+ }
+ return conn;
+ }
+ }));
return runCommand(args);
} finally {
if (yarnClient != null) {
[42/50] [abbrv] hadoop git commit: HDFS-12775. [READ] Fix reporting of Provided volumes
Posted by vi...@apache.org.
HDFS-12775. [READ] Fix reporting of Provided volumes
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ecb56029
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ecb56029
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ecb56029
Branch: refs/heads/HDFS-9806
Commit: ecb5602994b9b161e9ca54aa3a50b6284688f1c1
Parents: 4310e05
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Thu Nov 16 03:52:12 2017 -0800
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:59 2017 -0800
----------------------------------------------------------------------
.../org/apache/hadoop/hdfs/DFSConfigKeys.java | 1 -
.../server/blockmanagement/BlockManager.java | 19 ++-
.../blockmanagement/DatanodeDescriptor.java | 24 ++--
.../blockmanagement/DatanodeStatistics.java | 3 +
.../server/blockmanagement/DatanodeStats.java | 4 +-
.../blockmanagement/HeartbeatManager.java | 9 +-
.../blockmanagement/ProvidedStorageMap.java | 60 +++++++--
.../blockmanagement/StorageTypeStats.java | 33 ++++-
.../fsdataset/impl/DefaultProvidedVolumeDF.java | 58 ---------
.../fsdataset/impl/ProvidedVolumeDF.java | 34 -----
.../fsdataset/impl/ProvidedVolumeImpl.java | 101 ++++++++++++---
.../federation/metrics/FederationMBean.java | 6 +
.../federation/metrics/FederationMetrics.java | 5 +
.../federation/metrics/NamenodeBeanMetrics.java | 10 ++
.../resolver/MembershipNamenodeResolver.java | 1 +
.../resolver/NamenodeStatusReport.java | 12 +-
.../router/NamenodeHeartbeatService.java | 3 +-
.../store/records/MembershipStats.java | 4 +
.../records/impl/pb/MembershipStatsPBImpl.java | 10 ++
.../hdfs/server/namenode/FSNamesystem.java | 12 ++
.../hdfs/server/namenode/NameNodeMXBean.java | 10 +-
.../namenode/metrics/FSNamesystemMBean.java | 7 +-
.../src/main/proto/FederationProtocol.proto | 1 +
.../src/main/resources/hdfs-default.xml | 8 --
.../src/main/webapps/hdfs/dfshealth.html | 1 +
.../blockmanagement/TestProvidedStorageMap.java | 39 +++---
.../fsdataset/impl/TestProvidedImpl.java | 55 ++------
.../metrics/TestFederationMetrics.java | 2 +
.../TestNameNodeProvidedImplementation.java | 125 ++++++++++++++++---
29 files changed, 425 insertions(+), 232 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index cb57675..fbdc859 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -331,7 +331,6 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
public static final String DFS_NAMENODE_PROVIDED_ENABLED = "dfs.namenode.provided.enabled";
public static final boolean DFS_NAMENODE_PROVIDED_ENABLED_DEFAULT = false;
- public static final String DFS_PROVIDER_DF_CLASS = "dfs.provided.df.class";
public static final String DFS_PROVIDER_STORAGEUUID = "dfs.provided.storage.id";
public static final String DFS_PROVIDER_STORAGEUUID_DEFAULT = "DS-PROVIDED";
public static final String DFS_PROVIDED_ALIASMAP_CLASS = "dfs.provided.aliasmap.class";
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 38dcad2..3c2822d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -103,6 +103,8 @@ import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State;
import org.apache.hadoop.hdfs.server.protocol.KeyUpdateCommand;
import org.apache.hadoop.hdfs.server.protocol.ReceivedDeletedBlockInfo;
import org.apache.hadoop.hdfs.server.protocol.StorageReceivedDeletedBlocks;
+import org.apache.hadoop.hdfs.server.protocol.StorageReport;
+import org.apache.hadoop.hdfs.server.protocol.VolumeFailureSummary;
import org.apache.hadoop.hdfs.util.FoldedTreeSet;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
import org.apache.hadoop.hdfs.server.namenode.CacheManager;
@@ -2392,6 +2394,21 @@ public class BlockManager implements BlockStatsMXBean {
}
}
+ public long getProvidedCapacity() {
+ return providedStorageMap.getCapacity();
+ }
+
+ public void updateHeartbeat(DatanodeDescriptor node, StorageReport[] reports,
+ long cacheCapacity, long cacheUsed, int xceiverCount, int failedVolumes,
+ VolumeFailureSummary volumeFailureSummary) {
+
+ for (StorageReport report: reports) {
+ providedStorageMap.updateStorage(node, report.getStorage());
+ }
+ node.updateHeartbeat(reports, cacheCapacity, cacheUsed, xceiverCount,
+ failedVolumes, volumeFailureSummary);
+ }
+
/**
* StatefulBlockInfo is used to build the "toUC" list, which is a list of
* updates to the information about under-construction blocks.
@@ -2453,7 +2470,7 @@ public class BlockManager implements BlockStatsMXBean {
// !#! Register DN with provided storage, not with storage owned by DN
// !#! DN should still have a ref to the DNStorageInfo
DatanodeStorageInfo storageInfo =
- providedStorageMap.getStorage(node, storage, context);
+ providedStorageMap.getStorage(node, storage);
if (storageInfo == null) {
// We handle this for backwards compatibility.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index c17ab4c..83c608f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -449,24 +449,24 @@ public class DatanodeDescriptor extends DatanodeInfo {
this.volumeFailures = volFailures;
this.volumeFailureSummary = volumeFailureSummary;
for (StorageReport report : reports) {
- totalCapacity += report.getCapacity();
- totalRemaining += report.getRemaining();
- totalBlockPoolUsed += report.getBlockPoolUsed();
- totalDfsUsed += report.getDfsUsed();
- totalNonDfsUsed += report.getNonDfsUsed();
- // for PROVIDED storages, do not call updateStorage() unless
- // DatanodeStorageInfo already exists!
- if (StorageType.PROVIDED.equals(report.getStorage().getStorageType())
- && storageMap.get(report.getStorage().getStorageID()) == null) {
- continue;
- }
- DatanodeStorageInfo storage = updateStorage(report.getStorage());
+ DatanodeStorageInfo storage =
+ storageMap.get(report.getStorage().getStorageID());
if (checkFailedStorages) {
failedStorageInfos.remove(storage);
}
storage.receivedHeartbeat(report);
+ // skip accounting for capacity of PROVIDED storages!
+ if (StorageType.PROVIDED.equals(storage.getStorageType())) {
+ continue;
+ }
+
+ totalCapacity += report.getCapacity();
+ totalRemaining += report.getRemaining();
+ totalBlockPoolUsed += report.getBlockPoolUsed();
+ totalDfsUsed += report.getDfsUsed();
+ totalNonDfsUsed += report.getNonDfsUsed();
}
// Update total metrics for the node.
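The reordered loop above now skips PROVIDED storages when accumulating a datanode's totals, since remote capacity is shared by every node and would otherwise be counted once per datanode. A self-contained sketch of that accounting rule (types and values are simplified stand-ins for `StorageReport` and friends):

```java
import java.util.List;

public class CapacityAggregation {
    enum StorageType { DISK, SSD, PROVIDED }

    static final class StorageReport {
        final StorageType type;
        final long capacity;
        StorageReport(StorageType type, long capacity) {
            this.type = type;
            this.capacity = capacity;
        }
    }

    // Sum only locally attached capacity, as the heartbeat loop now does.
    static long totalLocalCapacity(List<StorageReport> reports) {
        long total = 0;
        for (StorageReport r : reports) {
            if (r.type == StorageType.PROVIDED) {
                continue; // remote capacity is accounted once, cluster-wide
            }
            total += r.capacity;
        }
        return total;
    }

    public static void main(String[] args) {
        List<StorageReport> reports = List.of(
            new StorageReport(StorageType.DISK, 100L),
            new StorageReport(StorageType.PROVIDED, 1_000_000L),
            new StorageReport(StorageType.SSD, 50L));
        System.out.println(totalLocalCapacity(reports)); // prints 150
    }
}
```

Note the PROVIDED report still reaches `storage.receivedHeartbeat(report)` in the real code; only the capacity arithmetic excludes it.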
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStatistics.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStatistics.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStatistics.java
index 33eca2e..36a9c2b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStatistics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStatistics.java
@@ -77,4 +77,7 @@ public interface DatanodeStatistics {
/** @return Storage Tier statistics*/
Map<StorageType, StorageTypeStats> getStorageTypeStats();
+
+ /** @return the provided capacity */
+ public long getProvidedCapacity();
}
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStats.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStats.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStats.java
index 8386b27..912d4d2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStats.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStats.java
@@ -183,7 +183,7 @@ class DatanodeStats {
StorageTypeStats storageTypeStats =
storageTypeStatsMap.get(storageType);
if (storageTypeStats == null) {
- storageTypeStats = new StorageTypeStats();
+ storageTypeStats = new StorageTypeStats(storageType);
storageTypeStatsMap.put(storageType, storageTypeStats);
}
storageTypeStats.addNode(node);
@@ -194,7 +194,7 @@ class DatanodeStats {
StorageTypeStats storageTypeStats =
storageTypeStatsMap.get(info.getStorageType());
if (storageTypeStats == null) {
- storageTypeStats = new StorageTypeStats();
+ storageTypeStats = new StorageTypeStats(info.getStorageType());
storageTypeStatsMap.put(info.getStorageType(), storageTypeStats);
}
storageTypeStats.addStorage(info, node);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
index a72ad64..1972a6d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/HeartbeatManager.java
@@ -195,6 +195,11 @@ class HeartbeatManager implements DatanodeStatistics {
return stats.getStatsMap();
}
+ @Override
+ public long getProvidedCapacity() {
+ return blockManager.getProvidedCapacity();
+ }
+
synchronized void register(final DatanodeDescriptor d) {
if (!d.isAlive()) {
addDatanode(d);
@@ -232,8 +237,8 @@ class HeartbeatManager implements DatanodeStatistics {
int xceiverCount, int failedVolumes,
VolumeFailureSummary volumeFailureSummary) {
stats.subtract(node);
- node.updateHeartbeat(reports, cacheCapacity, cacheUsed,
- xceiverCount, failedVolumes, volumeFailureSummary);
+ blockManager.updateHeartbeat(node, reports, cacheCapacity, cacheUsed,
+ xceiverCount, failedVolumes, volumeFailureSummary);
stats.add(node);
}
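The heartbeat hunk above keeps the subtract/mutate/add discipline while routing the mutation through `BlockManager.updateHeartbeat`: the node's old contribution is removed from the aggregate, the node is updated, then the new contribution is added back. A minimal sketch of why that ordering keeps the aggregate consistent (the classes here are simplified stand-ins for `DatanodeStats` and `DatanodeDescriptor`):

```java
public class StatsUpdate {
    static class Node {
        long capacity;
        Node(long capacity) { this.capacity = capacity; }
    }

    static class Stats {
        long totalCapacity;
        void subtract(Node n) { totalCapacity -= n.capacity; }
        void add(Node n) { totalCapacity += n.capacity; }
    }

    // Mirrors HeartbeatManager.updateHeartbeat: subtract, mutate, add.
    static void updateHeartbeat(Stats stats, Node node, long newCapacity) {
        stats.subtract(node);
        node.capacity = newCapacity; // stand-in for blockManager.updateHeartbeat
        stats.add(node);
    }

    public static void main(String[] args) {
        Stats stats = new Stats();
        Node n = new Node(100);
        stats.add(n);
        updateHeartbeat(stats, n, 250);
        System.out.println(stats.totalCapacity); // prints 250, not 350
    }
}
```

Mutating the node before subtracting would remove the new value instead of the old one and permanently skew the totals; the fixed ordering makes the update idempotent with respect to the aggregate.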
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
index 3d19775..2bc8faa 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
@@ -42,7 +42,6 @@ import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
import org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap;
-import org.apache.hadoop.hdfs.server.protocol.BlockReportContext;
import org.apache.hadoop.hdfs.server.common.BlockAlias;
import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State;
@@ -72,6 +71,7 @@ public class ProvidedStorageMap {
private final ProvidedDescriptor providedDescriptor;
private final DatanodeStorageInfo providedStorageInfo;
private boolean providedEnabled;
+ private long capacity;
ProvidedStorageMap(RwLock lock, BlockManager bm, Configuration conf)
throws IOException {
@@ -112,14 +112,13 @@ public class ProvidedStorageMap {
/**
* @param dn datanode descriptor
* @param s data node storage
- * @param context the block report context
* @return the {@link DatanodeStorageInfo} for the specified datanode.
* If {@code s} corresponds to a provided storage, the storage info
* representing provided storage is returned.
* @throws IOException
*/
- DatanodeStorageInfo getStorage(DatanodeDescriptor dn, DatanodeStorage s,
- BlockReportContext context) throws IOException {
+ DatanodeStorageInfo getStorage(DatanodeDescriptor dn, DatanodeStorage s)
+ throws IOException {
if (providedEnabled && storageId.equals(s.getStorageID())) {
if (StorageType.PROVIDED.equals(s.getStorageType())) {
if (providedStorageInfo.getState() == State.FAILED
@@ -127,8 +126,10 @@ public class ProvidedStorageMap {
providedStorageInfo.setState(State.NORMAL);
LOG.info("Provided storage transitioning to state " + State.NORMAL);
}
- processProvidedStorageReport(context);
- dn.injectStorage(providedStorageInfo);
+ if (dn.getStorageInfo(s.getStorageID()) == null) {
+ dn.injectStorage(providedStorageInfo);
+ }
+ processProvidedStorageReport();
return providedDescriptor.getProvidedStorage(dn, s);
}
LOG.warn("Reserved storage {} reported as non-provided from {}", s, dn);
@@ -136,7 +137,7 @@ public class ProvidedStorageMap {
return dn.getStorageInfo(s.getStorageID());
}
- private void processProvidedStorageReport(BlockReportContext context)
+ private void processProvidedStorageReport()
throws IOException {
assert lock.hasWriteLock() : "Not holding write lock";
if (providedStorageInfo.getBlockReportCount() == 0
@@ -172,6 +173,26 @@ public class ProvidedStorageMap {
}
}
+ public long getCapacity() {
+ if (providedStorageInfo == null) {
+ return 0;
+ }
+ return providedStorageInfo.getCapacity();
+ }
+
+ public void updateStorage(DatanodeDescriptor node, DatanodeStorage storage) {
+ if (providedEnabled && storageId.equals(storage.getStorageID())) {
+ if (StorageType.PROVIDED.equals(storage.getStorageType())) {
+ node.injectStorage(providedStorageInfo);
+ return;
+ } else {
+ LOG.warn("Reserved storage {} reported as non-provided from {}",
+ storage, node);
+ }
+ }
+ node.updateStorage(storage);
+ }
+
/**
* Builder used for creating {@link LocatedBlocks} when a block is provided.
*/
@@ -295,10 +316,12 @@ public class ProvidedStorageMap {
* An abstract DatanodeDescriptor to track datanodes with provided storages.
* NOTE: never resolved through registerDatanode, so not in the topology.
*/
- static class ProvidedDescriptor extends DatanodeDescriptor {
+ public static class ProvidedDescriptor extends DatanodeDescriptor {
private final NavigableMap<String, DatanodeDescriptor> dns =
new ConcurrentSkipListMap<>();
+ public final static String NETWORK_LOCATION = "/REMOTE";
+ public final static String NAME = "PROVIDED";
ProvidedDescriptor() {
super(new DatanodeID(
@@ -444,6 +467,21 @@ public class ProvidedStorageMap {
public int hashCode() {
return super.hashCode();
}
+
+ @Override
+ public String toString() {
+ return "PROVIDED-LOCATION";
+ }
+
+ @Override
+ public String getNetworkLocation() {
+ return NETWORK_LOCATION;
+ }
+
+ @Override
+ public String getName() {
+ return NAME;
+ }
}
/**
@@ -480,7 +518,13 @@ public class ProvidedStorageMap {
super.setState(state);
}
}
+
+ @Override
+ public String toString() {
+ return "PROVIDED-STORAGE";
+ }
}
+
/**
* Used to emulate block reports for provided blocks.
*/
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/StorageTypeStats.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/StorageTypeStats.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/StorageTypeStats.java
index 978009e..c335ec6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/StorageTypeStats.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/StorageTypeStats.java
@@ -22,6 +22,7 @@ import java.beans.ConstructorProperties;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.fs.StorageType;
/**
* Statistics per StorageType.
@@ -36,6 +37,7 @@ public class StorageTypeStats {
private long capacityRemaining = 0L;
private long blockPoolUsed = 0L;
private int nodesInService = 0;
+ private StorageType storageType;
@ConstructorProperties({"capacityTotal", "capacityUsed", "capacityNonDfsUsed",
"capacityRemaining", "blockPoolUsed", "nodesInService"})
@@ -51,22 +53,47 @@ public class StorageTypeStats {
}
public long getCapacityTotal() {
+ // for PROVIDED storage, avoid counting the same storage
+ // across multiple datanodes
+ if (storageType == StorageType.PROVIDED && nodesInService > 0) {
+ return capacityTotal/nodesInService;
+ }
return capacityTotal;
}
public long getCapacityUsed() {
+ // for PROVIDED storage, avoid counting the same storage
+ // across multiple datanodes
+ if (storageType == StorageType.PROVIDED && nodesInService > 0) {
+ return capacityUsed/nodesInService;
+ }
return capacityUsed;
}
public long getCapacityNonDfsUsed() {
+ // for PROVIDED storage, avoid counting the same storage
+ // across multiple datanodes
+ if (storageType == StorageType.PROVIDED && nodesInService > 0) {
+ return capacityNonDfsUsed/nodesInService;
+ }
return capacityNonDfsUsed;
}
public long getCapacityRemaining() {
+ // for PROVIDED storage, avoid counting the same storage
+ // across multiple datanodes
+ if (storageType == StorageType.PROVIDED && nodesInService > 0) {
+ return capacityRemaining/nodesInService;
+ }
return capacityRemaining;
}
public long getBlockPoolUsed() {
+ // for PROVIDED storage, avoid counting the same storage
+ // across multiple datanodes
+ if (storageType == StorageType.PROVIDED && nodesInService > 0) {
+ return blockPoolUsed/nodesInService;
+ }
return blockPoolUsed;
}
@@ -74,7 +101,9 @@ public class StorageTypeStats {
return nodesInService;
}
- StorageTypeStats() {}
+ StorageTypeStats(StorageType storageType) {
+ this.storageType = storageType;
+ }
StorageTypeStats(StorageTypeStats other) {
capacityTotal = other.capacityTotal;
@@ -87,6 +116,7 @@ public class StorageTypeStats {
void addStorage(final DatanodeStorageInfo info,
final DatanodeDescriptor node) {
+ assert storageType == info.getStorageType();
capacityUsed += info.getDfsUsed();
capacityNonDfsUsed += info.getNonDfsUsed();
blockPoolUsed += info.getBlockPoolUsed();
@@ -106,6 +136,7 @@ public class StorageTypeStats {
void subtractStorage(final DatanodeStorageInfo info,
final DatanodeDescriptor node) {
+ assert storageType == info.getStorageType();
capacityUsed -= info.getDfsUsed();
capacityNonDfsUsed -= info.getNonDfsUsed();
blockPoolUsed -= info.getBlockPoolUsed();
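The `StorageTypeStats` getters above divide the aggregate by `nodesInService` for PROVIDED storage because every datanode reports the same shared remote store, so the raw sum overcounts it by the node count. A self-contained sketch of that de-duplication (method and variable names are illustrative):

```java
public class ProvidedStats {
    // For PROVIDED storage the same remote capacity is reported by every
    // node in service, so dividing recovers the single underlying value.
    static long dedupedCapacity(long aggregate, int nodesInService,
            boolean provided) {
        if (provided && nodesInService > 0) {
            return aggregate / nodesInService;
        }
        return aggregate; // local storage types sum normally
    }

    public static void main(String[] args) {
        // 3 datanodes each reported the same 1 TiB provided store.
        long aggregate = 3L * (1L << 40);
        System.out.println(
            dedupedCapacity(aggregate, 3, true) == (1L << 40)); // prints true
    }
}
```

This works only while all in-service datanodes expose the same provided store; heterogeneous provided volumes would need per-storage tracking rather than a division.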
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/DefaultProvidedVolumeDF.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/DefaultProvidedVolumeDF.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/DefaultProvidedVolumeDF.java
deleted file mode 100644
index 24921c4..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/DefaultProvidedVolumeDF.java
+++ /dev/null
@@ -1,58 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hdfs.server.datanode.fsdataset.impl;
-
-import org.apache.hadoop.conf.Configurable;
-import org.apache.hadoop.conf.Configuration;
-
-/**
- * The default usage statistics for a provided volume.
- */
-public class DefaultProvidedVolumeDF
- implements ProvidedVolumeDF, Configurable {
-
- @Override
- public void setConf(Configuration conf) {
- }
-
- @Override
- public Configuration getConf() {
- return null;
- }
-
- @Override
- public long getCapacity() {
- return Long.MAX_VALUE;
- }
-
- @Override
- public long getSpaceUsed() {
- return 0;
- }
-
- @Override
- public long getBlockPoolUsed(String bpid) {
- return 0;
- }
-
- @Override
- public long getAvailable() {
- return Long.MAX_VALUE;
- }
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeDF.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeDF.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeDF.java
deleted file mode 100644
index 4d28883..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeDF.java
+++ /dev/null
@@ -1,34 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hdfs.server.datanode.fsdataset.impl;
-
-/**
- * This interface is used to define the usage statistics
- * of the provided storage.
- */
-public interface ProvidedVolumeDF {
-
- long getCapacity();
-
- long getSpaceUsed();
-
- long getBlockPoolUsed(String bpid);
-
- long getAvailable();
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
index d103b64..65487f9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
@@ -26,6 +26,7 @@ import java.util.Map;
import java.util.Set;
import java.util.Map.Entry;
import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicLong;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
@@ -89,6 +90,30 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
return suffix;
}
+ /**
+ * Class to keep track of the capacity usage statistics for provided volumes.
+ */
+ public static class ProvidedVolumeDF {
+
+ private AtomicLong used = new AtomicLong();
+
+ public long getSpaceUsed() {
+ return used.get();
+ }
+
+ public void decDfsUsed(long value) {
+ used.addAndGet(-value);
+ }
+
+ public void incDfsUsed(long value) {
+ used.addAndGet(value);
+ }
+
+ public long getCapacity() {
+ return getSpaceUsed();
+ }
+ }
+
static class ProvidedBlockPoolSlice {
private ProvidedVolumeImpl providedVolume;
@@ -96,6 +121,8 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
private Configuration conf;
private String bpid;
private ReplicaMap bpVolumeMap;
+ private ProvidedVolumeDF df;
+ private AtomicLong numOfBlocks = new AtomicLong();
ProvidedBlockPoolSlice(String bpid, ProvidedVolumeImpl volume,
Configuration conf) {
@@ -107,6 +134,7 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
aliasMap = ReflectionUtils.newInstance(fmt, conf);
this.conf = conf;
this.bpid = bpid;
+ this.df = new ProvidedVolumeDF();
bpVolumeMap.initBlockPool(bpid);
LOG.info("Created alias map using class: " + aliasMap.getClass());
}
@@ -155,6 +183,8 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
if (oldReplica == null) {
volumeMap.add(bpid, newReplica);
bpVolumeMap.add(bpid, newReplica);
+ incrNumBlocks();
+ incDfsUsed(region.getBlock().getNumBytes());
} else {
throw new IOException("A block with id " + newReplica.getBlockId()
+ " already exists in the volumeMap");
@@ -163,6 +193,10 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
}
}
+ private void incrNumBlocks() {
+ numOfBlocks.incrementAndGet();
+ }
+
public boolean isEmpty() {
return bpVolumeMap.replicas(bpid).size() == 0;
}
@@ -199,6 +233,18 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
}
}
}
+
+ public long getNumOfBlocks() {
+ return numOfBlocks.get();
+ }
+
+ long getDfsUsed() throws IOException {
+ return df.getSpaceUsed();
+ }
+
+ void incDfsUsed(long value) {
+ df.incDfsUsed(value);
+ }
}
private URI baseURI;
@@ -217,10 +263,7 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
"Only provided storages must use ProvidedVolume";
baseURI = getStorageLocation().getUri();
- Class<? extends ProvidedVolumeDF> dfClass =
- conf.getClass(DFSConfigKeys.DFS_PROVIDER_DF_CLASS,
- DefaultProvidedVolumeDF.class, ProvidedVolumeDF.class);
- df = ReflectionUtils.newInstance(dfClass, conf);
+ df = new ProvidedVolumeDF();
remoteFS = FileSystem.get(baseURI, conf);
}
@@ -231,34 +274,47 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
@Override
public long getCapacity() {
- if (configuredCapacity < 0) {
- return df.getCapacity();
+ try {
+ // with no locally configured capacity, default to the space used.
+ return getDfsUsed();
+ } catch (IOException e) {
+ LOG.warn("Exception when trying to get capacity of ProvidedVolume: {}",
+ e);
}
- return configuredCapacity;
+ return 0L;
}
@Override
public long getDfsUsed() throws IOException {
- return df.getSpaceUsed();
+ long dfsUsed = 0;
+ synchronized(getDataset()) {
+ for(ProvidedBlockPoolSlice s : bpSlices.values()) {
+ dfsUsed += s.getDfsUsed();
+ }
+ }
+ return dfsUsed;
}
@Override
long getBlockPoolUsed(String bpid) throws IOException {
- if (bpSlices.containsKey(bpid)) {
- return df.getBlockPoolUsed(bpid);
- } else {
- throw new IOException("block pool " + bpid + " is not found");
- }
+ return getProvidedBlockPoolSlice(bpid).getDfsUsed();
}
@Override
public long getAvailable() throws IOException {
- return df.getAvailable();
+ long remaining = getCapacity() - getDfsUsed();
+ // do not report less than 0 remaining space for PROVIDED storage,
+ // to prevent the NameNode from marking it as over capacity
+ if (remaining < 0L) {
+ LOG.warn("Volume {} has less than 0 available space", this);
+ return 0L;
+ }
+ return remaining;
}
@Override
long getActualNonDfsUsed() throws IOException {
- return df.getSpaceUsed();
+ return 0L;
}
@Override
@@ -267,6 +323,21 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
}
@Override
+ long getNumBlocks() {
+ long numBlocks = 0;
+ for (ProvidedBlockPoolSlice s : bpSlices.values()) {
+ numBlocks += s.getNumOfBlocks();
+ }
+ return numBlocks;
+ }
+
+ @Override
+ void incDfsUsedAndNumBlocks(String bpid, long value) {
+ throw new UnsupportedOperationException(
+ "ProvidedVolume does not yet support writes");
+ }
+
+ @Override
public URI getBaseURI() {
return baseURI;
}
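[Editor's note: the hunk above replaces the pluggable ProvidedVolumeDF interface with a simple AtomicLong-backed counter whose capacity defaults to space used. A minimal standalone sketch of that pattern, with a hypothetical class name (not the exact class from the patch):

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch of the usage-tracking pattern introduced for PROVIDED volumes:
 * a thread-safe counter of bytes used, where reported capacity defaults
 * to the space used because no local capacity is configured.
 */
class UsageTracker {
  private final AtomicLong used = new AtomicLong();

  long getSpaceUsed() {
    return used.get();
  }

  void incDfsUsed(long value) {
    used.addAndGet(value);
  }

  void decDfsUsed(long value) {
    used.addAndGet(-value);
  }

  // Mirrors the new ProvidedVolumeDF: capacity is whatever is used.
  long getCapacity() {
    return getSpaceUsed();
  }
}
```

With this convention, available space (capacity minus used) is naturally zero, which is why the patched getAvailable() clamps negative remainders to 0.]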
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java
index cb4245a..8abfc6e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMBean.java
@@ -65,6 +65,12 @@ public interface FederationMBean {
long getRemainingCapacity();
/**
+ * Get the total remote storage capacity mounted in the federated cluster.
+ * @return Remote capacity of the federated cluster.
+ */
+ long getProvidedSpace();
+
+ /**
* Get the number of nameservices in the federation.
* @return Number of nameservices in the federation.
*/
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
index 7844a2e..4582825 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/FederationMetrics.java
@@ -272,6 +272,11 @@ public class FederationMetrics implements FederationMBean {
}
@Override
+ public long getProvidedSpace() {
+ return getNameserviceAggregatedLong(MembershipStats::getProvidedSpace);
+ }
+
+ @Override
public long getUsedCapacity() {
return getTotalCapacity() - getRemainingCapacity();
}
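[Editor's note: getProvidedSpace() above delegates to getNameserviceAggregatedLong(MembershipStats::getProvidedSpace), i.e. a per-nameservice long stat summed across the federation. A simplified sketch of that aggregation, using hypothetical stand-in classes (Stats, Aggregator) rather than the actual federation records:

```java
import java.util.List;
import java.util.function.ToLongFunction;

// Stand-in for a MembershipStats record; only providedSpace is modeled.
class Stats {
  private final long providedSpace;

  Stats(long providedSpace) {
    this.providedSpace = providedSpace;
  }

  long getProvidedSpace() {
    return providedSpace;
  }
}

class Aggregator {
  // Sum one long field across all records, selected by a method reference,
  // as getNameserviceAggregatedLong does for each nameservice's stats.
  static long aggregatedLong(List<Stats> records, ToLongFunction<Stats> f) {
    return records.stream().mapToLong(f).sum();
  }
}
```

For example, Aggregator.aggregatedLong(records, Stats::getProvidedSpace) totals the remote capacity reported by every nameservice.]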
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
index 23cd675..c4e5b5b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java
@@ -169,6 +169,11 @@ public class NamenodeBeanMetrics
}
@Override
+ public long getProvidedCapacity() {
+ return getFederationMetrics().getProvidedSpace();
+ }
+
+ @Override
public String getSafemode() {
// We assume that the global federated view is never in safe mode
return "";
@@ -450,6 +455,11 @@ public class NamenodeBeanMetrics
}
@Override
+ public long getProvidedCapacityTotal() {
+ return getProvidedCapacity();
+ }
+
+ @Override
public long getFilesTotal() {
return getFederationMetrics().getNumFiles();
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java
index 98ddd22..b87eeec 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java
@@ -236,6 +236,7 @@ public class MembershipNamenodeResolver
report.getNumOfBlocksPendingDeletion());
stats.setAvailableSpace(report.getAvailableSpace());
stats.setTotalSpace(report.getTotalSpace());
+ stats.setProvidedSpace(report.getProvidedSpace());
stats.setNumOfDecommissioningDatanodes(
report.getNumDecommissioningDatanodes());
stats.setNumOfActiveDatanodes(report.getNumLiveDatanodes());
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/NamenodeStatusReport.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/NamenodeStatusReport.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/NamenodeStatusReport.java
index 555e2ee..d3c6d87 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/NamenodeStatusReport.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/NamenodeStatusReport.java
@@ -58,6 +58,7 @@ public class NamenodeStatusReport {
private long numOfBlocksUnderReplicated = -1;
private long numOfBlocksPendingDeletion = -1;
private long totalSpace = -1;
+ private long providedSpace = -1;
/** If the fields are valid. */
private boolean registrationValid = false;
@@ -296,7 +297,7 @@ public class NamenodeStatusReport {
public void setNamesystemInfo(long available, long total,
long numFiles, long numBlocks, long numBlocksMissing,
long numBlocksPendingReplication, long numBlocksUnderReplicated,
- long numBlocksPendingDeletion) {
+ long numBlocksPendingDeletion, long providedSpace) {
this.totalSpace = total;
this.availableSpace = available;
this.numOfBlocks = numBlocks;
@@ -306,6 +307,7 @@ public class NamenodeStatusReport {
this.numOfBlocksPendingDeletion = numBlocksPendingDeletion;
this.numOfFiles = numFiles;
this.statsValid = true;
+ this.providedSpace = providedSpace;
}
/**
@@ -345,6 +347,14 @@ public class NamenodeStatusReport {
}
/**
+ * Get the space occupied by provided storage.
+ *
+ * @return the provided capacity.
+ */
+ public long getProvidedSpace() {
+ return this.providedSpace;
+ }
+ /**
* Get the number of missing blocks.
*
* @return Number of missing blocks.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
index 7d69a26..aaf2817 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
@@ -350,7 +350,8 @@ public class NamenodeHeartbeatService extends PeriodicService {
jsonObject.getLong("MissingBlocks"),
jsonObject.getLong("PendingReplicationBlocks"),
jsonObject.getLong("UnderReplicatedBlocks"),
- jsonObject.getLong("PendingDeletionBlocks"));
+ jsonObject.getLong("PendingDeletionBlocks"),
+ jsonObject.getLong("ProvidedCapacityTotal"));
}
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/MembershipStats.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/MembershipStats.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/MembershipStats.java
index 0bd19d9..654140c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/MembershipStats.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/MembershipStats.java
@@ -45,6 +45,10 @@ public abstract class MembershipStats extends BaseRecord {
public abstract long getAvailableSpace();
+ public abstract void setProvidedSpace(long capacity);
+
+ public abstract long getProvidedSpace();
+
public abstract void setNumOfFiles(long files);
public abstract long getNumOfFiles();
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/MembershipStatsPBImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/MembershipStatsPBImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/MembershipStatsPBImpl.java
index 9f0a167..3347bc6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/MembershipStatsPBImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/MembershipStatsPBImpl.java
@@ -78,6 +78,16 @@ public class MembershipStatsPBImpl extends MembershipStats
}
@Override
+ public void setProvidedSpace(long capacity) {
+ this.translator.getBuilder().setProvidedSpace(capacity);
+ }
+
+ @Override
+ public long getProvidedSpace() {
+ return this.translator.getProtoOrBuilder().getProvidedSpace();
+ }
+
+ @Override
public void setNumOfFiles(long files) {
this.translator.getBuilder().setNumOfFiles(files);
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index d3d9cdc..23bcc3a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -4154,6 +4154,13 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
return datanodeStatistics.getCapacityRemaining();
}
+ @Override // FSNamesystemMBean
+ @Metric({"ProvidedCapacityTotal",
+ "Total space used in PROVIDED storage in bytes" })
+ public long getProvidedCapacityTotal() {
+ return datanodeStatistics.getProvidedCapacity();
+ }
+
@Metric({"CapacityRemainingGB", "Remaining capacity in GB"})
public float getCapacityRemainingGB() {
return DFSUtil.roundBytesToGB(getCapacityRemaining());
@@ -5718,6 +5725,11 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
}
@Override // NameNodeMXBean
+ public long getProvidedCapacity() {
+ return this.getProvidedCapacityTotal();
+ }
+
+ @Override // NameNodeMXBean
public String getSafemode() {
if (!this.isInSafeMode())
return "";
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java
index 82cec33..e4ed3a9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeMXBean.java
@@ -65,8 +65,14 @@ public interface NameNodeMXBean {
* @return the total raw bytes including non-dfs used space
*/
public long getTotal();
-
-
+
+ /**
+ * Gets the total capacity of the provided storages mounted, in bytes.
+ *
+ * @return the total raw bytes present in the provided storage.
+ */
+ public long getProvidedCapacity();
+
/**
* Gets the safemode status
*
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/FSNamesystemMBean.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/FSNamesystemMBean.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/FSNamesystemMBean.java
index ebdbc12..c25bafd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/FSNamesystemMBean.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/FSNamesystemMBean.java
@@ -69,7 +69,12 @@ public interface FSNamesystemMBean {
* @return - used capacity in bytes
*/
public long getCapacityUsed();
-
+
+ /**
+ * Total PROVIDED storage capacity.
+ * @return - total PROVIDED storage capacity in bytes
+ */
+ public long getProvidedCapacityTotal();
/**
* Total number of files and directories
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/FederationProtocol.proto
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/FederationProtocol.proto b/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/FederationProtocol.proto
index 32a6250..26b3111 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/FederationProtocol.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/FederationProtocol.proto
@@ -30,6 +30,7 @@ package hadoop.hdfs;
message NamenodeMembershipStatsRecordProto {
optional uint64 totalSpace = 1;
optional uint64 availableSpace = 2;
+ optional uint64 providedSpace = 3;
optional uint64 numOfFiles = 10;
optional uint64 numOfBlocks = 11;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 835d8c4..655f9cb 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -4630,14 +4630,6 @@
</property>
<property>
- <name>dfs.provided.df.class</name>
- <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.DefaultProvidedVolumeDF</value>
- <description>
- The class that is used to measure usage statistics of provided stores.
- </description>
- </property>
-
- <property>
<name>dfs.provided.storage.id</name>
<value>DS-PROVIDED</value>
<description>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
index 6ae3960..45aee1e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
@@ -162,6 +162,7 @@
{#nn}
<table class="table table-bordered table-striped">
<tr><th> Configured Capacity:</th><td>{Total|fmt_bytes}</td></tr>
+ <tr><th> Configured Remote Capacity:</th><td>{ProvidedCapacity|fmt_bytes}</td></tr>
<tr><th> DFS Used:</th><td>{Used|fmt_bytes} ({PercentUsed|fmt_percentage})</td></tr>
<tr><th> Non DFS Used:</th><td>{NonDfsUsedSpace|fmt_bytes}</td></tr>
<tr><th> DFS Remaining:</th><td>{Free|fmt_bytes} ({PercentRemaining|fmt_percentage})</td></tr>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
index 89741b5..1ef2f2b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
@@ -63,15 +63,15 @@ public class TestProvidedStorageMap {
private DatanodeDescriptor createDatanodeDescriptor(int port) {
return DFSTestUtil.getDatanodeDescriptor("127.0.0.1", port, "defaultRack",
- "localhost");
+ "localhost");
}
@Test
public void testProvidedStorageMap() throws IOException {
ProvidedStorageMap providedMap = new ProvidedStorageMap(
- nameSystemLock, bm, conf);
+ nameSystemLock, bm, conf);
DatanodeStorageInfo providedMapStorage =
- providedMap.getProvidedStorageInfo();
+ providedMap.getProvidedStorageInfo();
//the provided storage cannot be null
assertNotNull(providedMapStorage);
@@ -80,41 +80,40 @@ public class TestProvidedStorageMap {
//associate two storages to the datanode
DatanodeStorage dn1ProvidedStorage = new DatanodeStorage(
- providedStorageID,
- DatanodeStorage.State.NORMAL,
- StorageType.PROVIDED);
+ providedStorageID,
+ DatanodeStorage.State.NORMAL,
+ StorageType.PROVIDED);
DatanodeStorage dn1DiskStorage = new DatanodeStorage(
- "sid-1", DatanodeStorage.State.NORMAL, StorageType.DISK);
+ "sid-1", DatanodeStorage.State.NORMAL, StorageType.DISK);
when(nameSystemLock.hasWriteLock()).thenReturn(true);
- DatanodeStorageInfo dns1Provided = providedMap.getStorage(dn1,
- dn1ProvidedStorage, null);
- DatanodeStorageInfo dns1Disk = providedMap.getStorage(dn1,
- dn1DiskStorage, null);
+ DatanodeStorageInfo dns1Provided =
+ providedMap.getStorage(dn1, dn1ProvidedStorage);
+ DatanodeStorageInfo dns1Disk = providedMap.getStorage(dn1, dn1DiskStorage);
assertTrue("The provided storages should be equal",
- dns1Provided == providedMapStorage);
+ dns1Provided == providedMapStorage);
assertTrue("Disk storage has not yet been registered with block manager",
- dns1Disk == null);
+ dns1Disk == null);
//add the disk storage to the datanode.
DatanodeStorageInfo dnsDisk = new DatanodeStorageInfo(dn1, dn1DiskStorage);
dn1.injectStorage(dnsDisk);
assertTrue("Disk storage must match the injected storage info",
- dnsDisk == providedMap.getStorage(dn1, dn1DiskStorage, null));
+ dnsDisk == providedMap.getStorage(dn1, dn1DiskStorage));
//create a 2nd datanode
DatanodeDescriptor dn2 = createDatanodeDescriptor(5010);
//associate a provided storage with the datanode
DatanodeStorage dn2ProvidedStorage = new DatanodeStorage(
- providedStorageID,
- DatanodeStorage.State.NORMAL,
- StorageType.PROVIDED);
+ providedStorageID,
+ DatanodeStorage.State.NORMAL,
+ StorageType.PROVIDED);
DatanodeStorageInfo dns2Provided = providedMap.getStorage(
- dn2, dn2ProvidedStorage, null);
+ dn2, dn2ProvidedStorage);
assertTrue("The provided storages should be equal",
- dns2Provided == providedMapStorage);
+ dns2Provided == providedMapStorage);
assertTrue("The DatanodeDescriptor should contain the provided storage",
- dn2.getStorageInfo(providedStorageID) == providedMapStorage);
+ dn2.getStorageInfo(providedStorageID) == providedMapStorage);
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
index ecab06b..52112f7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
@@ -46,7 +46,6 @@ import java.util.Map;
import java.util.Set;
import org.apache.commons.io.FileUtils;
-import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystemTestHelper;
import org.apache.hadoop.fs.Path;
@@ -102,6 +101,7 @@ public class TestProvidedImpl {
private FsDatasetImpl dataset;
private static Map<Long, String> blkToPathMap;
private static List<FsVolumeImpl> providedVolumes;
+ private static long spaceUsed = 0;
/**
* A simple FileRegion iterator for tests.
@@ -142,6 +142,7 @@ public class TestProvidedImpl {
}
writer.flush();
writer.close();
+ spaceUsed += BLK_LEN;
} catch (IOException e) {
e.printStackTrace();
}
@@ -240,39 +241,6 @@ public class TestProvidedImpl {
}
}
- public static class TestProvidedVolumeDF
- implements ProvidedVolumeDF, Configurable {
-
- @Override
- public void setConf(Configuration conf) {
- }
-
- @Override
- public Configuration getConf() {
- return null;
- }
-
- @Override
- public long getCapacity() {
- return Long.MAX_VALUE;
- }
-
- @Override
- public long getSpaceUsed() {
- return -1;
- }
-
- @Override
- public long getBlockPoolUsed(String bpid) {
- return -1;
- }
-
- @Override
- public long getAvailable() {
- return Long.MAX_VALUE;
- }
- }
-
private static Storage.StorageDirectory createLocalStorageDirectory(
File root, Configuration conf)
throws SecurityException, IOException {
@@ -370,6 +338,8 @@ public class TestProvidedImpl {
when(datanode.getConf()).thenReturn(conf);
final DNConf dnConf = new DNConf(datanode);
when(datanode.getDnConf()).thenReturn(dnConf);
+ // reset the space used
+ spaceUsed = 0;
final BlockScanner disabledBlockScanner = new BlockScanner(datanode, conf);
when(datanode.getBlockScanner()).thenReturn(disabledBlockScanner);
@@ -379,8 +349,6 @@ public class TestProvidedImpl {
this.conf.setClass(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_CLASS,
TestFileRegionBlockAliasMap.class, BlockAliasMap.class);
- conf.setClass(DFSConfigKeys.DFS_PROVIDER_DF_CLASS,
- TestProvidedVolumeDF.class, ProvidedVolumeDF.class);
blkToPathMap = new HashMap<Long, String>();
providedVolumes = new LinkedList<FsVolumeImpl>();
@@ -410,8 +378,6 @@ public class TestProvidedImpl {
assertEquals(NUM_PROVIDED_INIT_VOLUMES, providedVolumes.size());
assertEquals(0, dataset.getNumFailedVolumes());
- TestProvidedVolumeDF df = new TestProvidedVolumeDF();
-
for (int i = 0; i < providedVolumes.size(); i++) {
//check basic information about provided volume
assertEquals(DFSConfigKeys.DFS_PROVIDER_STORAGEUUID_DEFAULT,
@@ -419,18 +385,17 @@ public class TestProvidedImpl {
assertEquals(StorageType.PROVIDED,
providedVolumes.get(i).getStorageType());
+ long space = providedVolumes.get(i).getBlockPoolUsed(
+ BLOCK_POOL_IDS[CHOSEN_BP_ID]);
//check the df stats of the volume
- assertEquals(df.getAvailable(), providedVolumes.get(i).getAvailable());
- assertEquals(df.getBlockPoolUsed(BLOCK_POOL_IDS[CHOSEN_BP_ID]),
- providedVolumes.get(i).getBlockPoolUsed(
- BLOCK_POOL_IDS[CHOSEN_BP_ID]));
+ assertEquals(spaceUsed, space);
+ assertEquals(NUM_PROVIDED_BLKS, providedVolumes.get(i).getNumBlocks());
providedVolumes.get(i).shutdownBlockPool(
BLOCK_POOL_IDS[1 - CHOSEN_BP_ID], null);
try {
- assertEquals(df.getBlockPoolUsed(BLOCK_POOL_IDS[1 - CHOSEN_BP_ID]),
- providedVolumes.get(i).getBlockPoolUsed(
- BLOCK_POOL_IDS[1 - CHOSEN_BP_ID]));
+ assertEquals(0, providedVolumes.get(i)
+ .getBlockPoolUsed(BLOCK_POOL_IDS[1 - CHOSEN_BP_ID]));
//should not be triggered
assertTrue(false);
} catch (IOException e) {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestFederationMetrics.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestFederationMetrics.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestFederationMetrics.java
index d6a194f..99dcd40 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestFederationMetrics.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestFederationMetrics.java
@@ -187,6 +187,8 @@ public class TestFederationMetrics extends TestMetricsBase {
json.getLong("numOfDecomActiveDatanodes"));
assertEquals(stats.getNumOfDecomDeadDatanodes(),
json.getLong("numOfDecomDeadDatanodes"));
+ assertEquals(stats.getProvidedSpace(),
+ json.getLong("providedSpace"));
nameservicesFound++;
}
assertEquals(getNameservices().size(), nameservicesFound);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ecb56029/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index 22f00aa..f6d38f6 100644
--- a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -27,6 +27,7 @@ import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;
+import java.util.Iterator;
import java.util.Random;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.conf.Configuration;
@@ -44,13 +45,23 @@ import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerTestUtil;
+import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor;
+import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStatistics;
+import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
+import org.apache.hadoop.hdfs.server.blockmanagement.ProvidedStorageMap;
import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
import org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap;
import org.apache.hadoop.hdfs.server.datanode.DataNode;
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl;
+import org.apache.hadoop.hdfs.server.protocol.StorageReport;
+import org.apache.hadoop.net.NodeBase;
import org.junit.After;
import org.junit.Before;
import org.junit.Rule;
@@ -59,6 +70,7 @@ import org.junit.rules.TestName;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
+import static org.apache.hadoop.net.NodeBase.PATH_SEPARATOR_STR;
import static org.junit.Assert.*;
public class TestNameNodeProvidedImplementation {
@@ -79,6 +91,7 @@ public class TestNameNodeProvidedImplementation {
private final String filePrefix = "file";
private final String fileSuffix = ".dat";
private final int baseFileLen = 1024;
+ private long providedDataSize = 0;
Configuration conf;
MiniDFSCluster cluster;
@@ -135,6 +148,7 @@ public class TestNameNodeProvidedImplementation {
}
writer.flush();
writer.close();
+ providedDataSize += newFile.length();
} catch (IOException e) {
e.printStackTrace();
}
@@ -206,13 +220,14 @@ public class TestNameNodeProvidedImplementation {
cluster.waitActive();
}
- @Test(timeout = 20000)
+ @Test(timeout=20000)
public void testLoadImage() throws Exception {
final long seed = r.nextLong();
LOG.info("NAMEPATH: " + NAMEPATH);
createImage(new RandomTreeWalk(seed), NNDIRPATH, FixedBlockResolver.class);
- startCluster(NNDIRPATH, 0, new StorageType[] {StorageType.PROVIDED},
- null, false);
+ startCluster(NNDIRPATH, 0,
+ new StorageType[] {StorageType.PROVIDED, StorageType.DISK}, null,
+ false);
FileSystem fs = cluster.getFileSystem();
for (TreePath e : new RandomTreeWalk(seed)) {
@@ -231,14 +246,83 @@ public class TestNameNodeProvidedImplementation {
}
}
- @Test(timeout=20000)
- public void testBlockLoad() throws Exception {
+ @Test(timeout=30000)
+ public void testProvidedReporting() throws Exception {
conf.setClass(ImageWriter.Options.UGI_CLASS,
SingleUGIResolver.class, UGIResolver.class);
createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
FixedBlockResolver.class);
- startCluster(NNDIRPATH, 1, new StorageType[] {StorageType.PROVIDED},
- null, false);
+ int numDatanodes = 10;
+ startCluster(NNDIRPATH, numDatanodes,
+ new StorageType[] {StorageType.PROVIDED, StorageType.DISK}, null,
+ false);
+ long diskCapacity = 1000;
+ // set the DISK capacity for testing
+ for (DataNode dn: cluster.getDataNodes()) {
+ for (FsVolumeSpi ref : dn.getFSDataset().getFsVolumeReferences()) {
+ if (ref.getStorageType() == StorageType.DISK) {
+ ((FsVolumeImpl) ref).setCapacityForTesting(diskCapacity);
+ }
+ }
+ }
+ // trigger heartbeats to update the capacities
+ cluster.triggerHeartbeats();
+ Thread.sleep(10000);
+ // verify namenode stats
+ FSNamesystem namesystem = cluster.getNameNode().getNamesystem();
+ DatanodeStatistics dnStats = namesystem.getBlockManager()
+ .getDatanodeManager().getDatanodeStatistics();
+
+ // total capacity reported includes only the local volumes and
+ // not the provided capacity
+ assertEquals(diskCapacity * numDatanodes, namesystem.getTotal());
+
+ // total storage used should be equal to the totalProvidedStorage
+ // no capacity should be remaining!
+ assertEquals(providedDataSize, dnStats.getProvidedCapacity());
+ assertEquals(providedDataSize, namesystem.getProvidedCapacityTotal());
+ assertEquals(providedDataSize, dnStats.getStorageTypeStats()
+ .get(StorageType.PROVIDED).getCapacityTotal());
+ assertEquals(providedDataSize, dnStats.getStorageTypeStats()
+ .get(StorageType.PROVIDED).getCapacityUsed());
+
+ // verify datanode stats
+ for (DataNode dn: cluster.getDataNodes()) {
+ for (StorageReport report : dn.getFSDataset()
+ .getStorageReports(namesystem.getBlockPoolId())) {
+ if (report.getStorage().getStorageType() == StorageType.PROVIDED) {
+ assertEquals(providedDataSize, report.getCapacity());
+ assertEquals(providedDataSize, report.getDfsUsed());
+ assertEquals(providedDataSize, report.getBlockPoolUsed());
+ assertEquals(0, report.getNonDfsUsed());
+ assertEquals(0, report.getRemaining());
+ }
+ }
+ }
+
+ DFSClient client = new DFSClient(new InetSocketAddress("localhost",
+ cluster.getNameNodePort()), cluster.getConfiguration(0));
+ BlockManager bm = namesystem.getBlockManager();
+ for (int fileId = 0; fileId < numFiles; fileId++) {
+ String filename = "/" + filePrefix + fileId + fileSuffix;
+ LocatedBlocks locatedBlocks = client.getLocatedBlocks(
+ filename, 0, baseFileLen);
+ for (LocatedBlock locatedBlock : locatedBlocks.getLocatedBlocks()) {
+ BlockInfo blockInfo =
+ bm.getStoredBlock(locatedBlock.getBlock().getLocalBlock());
+ Iterator<DatanodeStorageInfo> storagesItr = blockInfo.getStorageInfos();
+
+ DatanodeStorageInfo info = storagesItr.next();
+ assertEquals(StorageType.PROVIDED, info.getStorageType());
+ DatanodeDescriptor dnDesc = info.getDatanodeDescriptor();
+ // check the locations that are returned by FSCK have the right name
+ assertEquals(ProvidedStorageMap.ProvidedDescriptor.NETWORK_LOCATION
+ + PATH_SEPARATOR_STR + ProvidedStorageMap.ProvidedDescriptor.NAME,
+ NodeBase.getPath(dnDesc));
+ // no DatanodeStorageInfos should remain
+ assertFalse(storagesItr.hasNext());
+ }
+ }
}
@Test(timeout=500000)
@@ -250,8 +334,8 @@ public class TestNameNodeProvidedImplementation {
// make the last Datanode with only DISK
startCluster(NNDIRPATH, 3, null,
new StorageType[][] {
- {StorageType.PROVIDED},
- {StorageType.PROVIDED},
+ {StorageType.PROVIDED, StorageType.DISK},
+ {StorageType.PROVIDED, StorageType.DISK},
{StorageType.DISK}},
false);
// wait for the replication to finish
@@ -308,8 +392,9 @@ public class TestNameNodeProvidedImplementation {
FsUGIResolver.class, UGIResolver.class);
createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
FixedBlockResolver.class);
- startCluster(NNDIRPATH, 3, new StorageType[] {StorageType.PROVIDED},
- null, false);
+ startCluster(NNDIRPATH, 3,
+ new StorageType[] {StorageType.PROVIDED, StorageType.DISK}, null,
+ false);
FileSystem fs = cluster.getFileSystem();
Thread.sleep(2000);
int count = 0;
@@ -371,7 +456,7 @@ public class TestNameNodeProvidedImplementation {
return fs.getFileBlockLocations(path, 0, fileLen);
}
- @Test
+ @Test(timeout=30000)
public void testClusterWithEmptyImage() throws IOException {
// start a cluster with 2 datanodes without any provided storage
startCluster(NNDIRPATH, 2, null,
@@ -404,7 +489,7 @@ public class TestNameNodeProvidedImplementation {
* Tests setting replication of provided files.
* @throws Exception
*/
- @Test
+ @Test(timeout=30000)
public void testSetReplicationForProvidedFiles() throws Exception {
createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
FixedBlockResolver.class);
@@ -441,14 +526,14 @@ public class TestNameNodeProvidedImplementation {
getAndCheckBlockLocations(client, filename, newReplication);
}
- @Test
+ @Test(timeout=30000)
public void testProvidedDatanodeFailures() throws Exception {
createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
FixedBlockResolver.class);
startCluster(NNDIRPATH, 3, null,
new StorageType[][] {
- {StorageType.PROVIDED},
- {StorageType.PROVIDED},
+ {StorageType.PROVIDED, StorageType.DISK},
+ {StorageType.PROVIDED, StorageType.DISK},
{StorageType.DISK}},
false);
@@ -511,7 +596,7 @@ public class TestNameNodeProvidedImplementation {
// 2 Datanodes, 1 PROVIDED and other DISK
startCluster(NNDIRPATH, 2, null,
new StorageType[][] {
- {StorageType.PROVIDED},
+ {StorageType.PROVIDED, StorageType.DISK},
{StorageType.DISK}},
false);
@@ -540,7 +625,7 @@ public class TestNameNodeProvidedImplementation {
// 2 Datanodes, 1 PROVIDED and other DISK
startCluster(NNDIRPATH, 2, null,
new StorageType[][] {
- {StorageType.PROVIDED},
+ {StorageType.PROVIDED, StorageType.DISK},
{StorageType.DISK}},
false);
@@ -570,7 +655,7 @@ public class TestNameNodeProvidedImplementation {
}
}
- @Test
+ @Test(timeout=30000)
public void testSetClusterID() throws Exception {
String clusterID = "PROVIDED-CLUSTER";
createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
@@ -578,7 +663,7 @@ public class TestNameNodeProvidedImplementation {
// 2 Datanodes, 1 PROVIDED and other DISK
startCluster(NNDIRPATH, 2, null,
new StorageType[][] {
- {StorageType.PROVIDED},
+ {StorageType.PROVIDED, StorageType.DISK},
{StorageType.DISK}},
false);
NameNode nn = cluster.getNameNode();
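The testProvidedReporting changes above assert a specific accounting model: cluster-wide capacity counts only local DISK volumes, while each PROVIDED storage report shows capacity fully consumed by the provided data with nothing remaining. A minimal standalone sketch of those invariants (plain Java, illustrative names only, not Hadoop code):

```java
// Illustrative model (not Hadoop code) of the capacity accounting asserted
// by testProvidedReporting: PROVIDED storage reports
// capacity == dfsUsed == total provided data size, remaining == 0, while
// cluster-wide capacity counts only the local DISK volumes.
public class ProvidedCapacityModel {

    /** Cluster capacity counts only local DISK volumes, never PROVIDED. */
    static long clusterCapacity(long diskCapacityPerNode, int numDatanodes) {
        return diskCapacityPerNode * numDatanodes;
    }

    /** A PROVIDED storage report: {capacity, dfsUsed, remaining, nonDfsUsed}. */
    static long[] providedReport(long providedDataSize) {
        return new long[] {providedDataSize, providedDataSize, 0, 0};
    }

    public static void main(String[] args) {
        // mirrors the test: 10 datanodes, DISK capacity 1000 each
        assert clusterCapacity(1000, 10) == 10000;
        long[] r = providedReport(4096);
        assert r[0] == r[1];           // capacity == dfsUsed
        assert r[2] == 0 && r[3] == 0; // remaining == nonDfsUsed == 0
        System.out.println("ok");
    }
}
```

The sketch is only a model of the assertions in the diff; the real values come from `DatanodeStatistics` and `StorageReport` at runtime.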
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[36/50] [abbrv] hadoop git commit: HDFS-11792. [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl
Posted by vi...@apache.org.
HDFS-11792. [READ] Test cases for ProvidedVolumeDF and ProviderBlockIteratorImpl
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aa5b1546
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aa5b1546
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aa5b1546
Branch: refs/heads/HDFS-9806
Commit: aa5b1546338b8aa51c579b86e0a3d9726ffd2b00
Parents: 90f4a78
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Wed May 31 15:17:12 2017 -0700
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:58 2017 -0800
----------------------------------------------------------------------
.../fsdataset/impl/ProvidedVolumeImpl.java | 6 +-
.../fsdataset/impl/TestProvidedImpl.java | 94 ++++++++++++++++++--
2 files changed, 92 insertions(+), 8 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa5b1546/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
index a48e117..421b9cc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
@@ -191,7 +191,11 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
@Override
long getBlockPoolUsed(String bpid) throws IOException {
- return df.getBlockPoolUsed(bpid);
+ if (bpSlices.containsKey(bpid)) {
+ return df.getBlockPoolUsed(bpid);
+ } else {
+ throw new IOException("block pool " + bpid + " is not found");
+ }
}
@Override
http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa5b1546/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
index 2c119fe..4753235 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
@@ -83,6 +83,7 @@ public class TestProvidedImpl {
private static final String BASE_DIR =
new FileSystemTestHelper().getTestRootDir();
private static final int NUM_LOCAL_INIT_VOLUMES = 1;
+ // only one provided volume is supported for now.
private static final int NUM_PROVIDED_INIT_VOLUMES = 1;
private static final String[] BLOCK_POOL_IDS = {"bpid-0", "bpid-1"};
private static final int NUM_PROVIDED_BLKS = 10;
@@ -208,6 +209,39 @@ public class TestProvidedImpl {
}
}
+ public static class TestProvidedVolumeDF
+ implements ProvidedVolumeDF, Configurable {
+
+ @Override
+ public void setConf(Configuration conf) {
+ }
+
+ @Override
+ public Configuration getConf() {
+ return null;
+ }
+
+ @Override
+ public long getCapacity() {
+ return Long.MAX_VALUE;
+ }
+
+ @Override
+ public long getSpaceUsed() {
+ return -1;
+ }
+
+ @Override
+ public long getBlockPoolUsed(String bpid) {
+ return -1;
+ }
+
+ @Override
+ public long getAvailable() {
+ return Long.MAX_VALUE;
+ }
+ }
+
private static Storage.StorageDirectory createLocalStorageDirectory(
File root, Configuration conf)
throws SecurityException, IOException {
@@ -299,8 +333,8 @@ public class TestProvidedImpl {
public void setUp() throws IOException {
datanode = mock(DataNode.class);
storage = mock(DataStorage.class);
- this.conf = new Configuration();
- this.conf.setLong(DFS_DATANODE_SCAN_PERIOD_HOURS_KEY, 0);
+ conf = new Configuration();
+ conf.setLong(DFS_DATANODE_SCAN_PERIOD_HOURS_KEY, 0);
when(datanode.getConf()).thenReturn(conf);
final DNConf dnConf = new DNConf(datanode);
@@ -312,8 +346,10 @@ public class TestProvidedImpl {
new ShortCircuitRegistry(conf);
when(datanode.getShortCircuitRegistry()).thenReturn(shortCircuitRegistry);
- this.conf.setClass(DFSConfigKeys.DFS_PROVIDER_CLASS,
+ conf.setClass(DFSConfigKeys.DFS_PROVIDER_CLASS,
TestFileRegionProvider.class, FileRegionProvider.class);
+ conf.setClass(DFSConfigKeys.DFS_PROVIDER_DF_CLASS,
+ TestProvidedVolumeDF.class, ProvidedVolumeDF.class);
blkToPathMap = new HashMap<Long, String>();
providedVolumes = new LinkedList<FsVolumeImpl>();
@@ -333,17 +369,43 @@ public class TestProvidedImpl {
for (String bpid : BLOCK_POOL_IDS) {
dataset.addBlockPool(bpid, conf);
}
+ }
+
+ @Test
+ public void testProvidedVolumeImpl() throws IOException {
assertEquals(NUM_LOCAL_INIT_VOLUMES + NUM_PROVIDED_INIT_VOLUMES,
getNumVolumes());
+ assertEquals(NUM_PROVIDED_INIT_VOLUMES, providedVolumes.size());
assertEquals(0, dataset.getNumFailedVolumes());
- }
- @Test
- public void testProvidedStorageID() throws IOException {
+ TestProvidedVolumeDF df = new TestProvidedVolumeDF();
+
for (int i = 0; i < providedVolumes.size(); i++) {
+ //check basic information about provided volume
assertEquals(DFSConfigKeys.DFS_PROVIDER_STORAGEUUID_DEFAULT,
providedVolumes.get(i).getStorageID());
+ assertEquals(StorageType.PROVIDED,
+ providedVolumes.get(i).getStorageType());
+
+ //check the df stats of the volume
+ assertEquals(df.getAvailable(), providedVolumes.get(i).getAvailable());
+ assertEquals(df.getBlockPoolUsed(BLOCK_POOL_IDS[CHOSEN_BP_ID]),
+ providedVolumes.get(i).getBlockPoolUsed(
+ BLOCK_POOL_IDS[CHOSEN_BP_ID]));
+
+ providedVolumes.get(i).shutdownBlockPool(
+ BLOCK_POOL_IDS[1 - CHOSEN_BP_ID], null);
+ try {
+ assertEquals(df.getBlockPoolUsed(BLOCK_POOL_IDS[1 - CHOSEN_BP_ID]),
+ providedVolumes.get(i).getBlockPoolUsed(
+ BLOCK_POOL_IDS[1 - CHOSEN_BP_ID]));
+ //should not be triggered
+ assertTrue(false);
+ } catch (IOException e) {
+ LOG.info("Expected exception: " + e);
+ }
+
}
}
@@ -385,6 +447,8 @@ public class TestProvidedImpl {
BlockIterator iter =
vol.newBlockIterator(BLOCK_POOL_IDS[CHOSEN_BP_ID], "temp");
Set<Long> blockIdsUsed = new HashSet<Long>();
+
+ assertEquals(BLOCK_POOL_IDS[CHOSEN_BP_ID], iter.getBlockPoolId());
while(!iter.atEnd()) {
ExtendedBlock eb = iter.nextBlock();
long blkId = eb.getBlockId();
@@ -394,10 +458,26 @@ public class TestProvidedImpl {
blockIdsUsed.add(blkId);
}
assertEquals(NUM_PROVIDED_BLKS, blockIdsUsed.size());
+
+ // rewind the block iterator
+ iter.rewind();
+ while(!iter.atEnd()) {
+ ExtendedBlock eb = iter.nextBlock();
+ long blkId = eb.getBlockId();
+ //the block should have already appeared in the first scan.
+ assertTrue(blockIdsUsed.contains(blkId));
+ blockIdsUsed.remove(blkId);
+ }
+ //none of the blocks should remain in blockIdsUsed
+ assertEquals(0, blockIdsUsed.size());
+
+ //the other block pool should not contain any blocks!
+ BlockIterator nonProvidedBpIter =
+ vol.newBlockIterator(BLOCK_POOL_IDS[1 - CHOSEN_BP_ID], "temp");
+ assertEquals(null, nonProvidedBpIter.nextBlock());
}
}
-
@Test
public void testRefresh() throws IOException {
conf.setInt(DFSConfigKeys.DFS_DATANODE_DIRECTORYSCAN_THREADS_KEY, 1);
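The rewind check added above scans the block iterator once collecting IDs, rewinds, and verifies the second pass yields exactly the same set with nothing left over. A standalone sketch of that verification pattern (plain Java, no Hadoop dependencies; a duplicate ID makes the check fail, matching the test's uniqueness assertion):

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

// Standalone sketch (not Hadoop's BlockIterator) of the rewind check in the
// diff above: scan once collecting IDs, "rewind" by re-iterating, and verify
// the second pass yields exactly the same set of IDs.
public class RewindCheck {

    /** Returns true iff two full passes over the IDs agree exactly. */
    static boolean secondPassMatchesFirst(List<Long> blockIds) {
        Set<Long> seen = new HashSet<>();
        for (Iterator<Long> it = blockIds.iterator(); it.hasNext();) {
            seen.add(it.next());               // first scan: collect IDs
        }
        for (Iterator<Long> it = blockIds.iterator(); it.hasNext();) {
            if (!seen.remove(it.next())) {     // must have appeared in scan one
                return false;
            }
        }
        return seen.isEmpty();                 // nothing may be left over
    }

    public static void main(String[] args) {
        assert secondPassMatchesFirst(List.of(1L, 2L, 3L));
        assert !secondPassMatchesFirst(List.of(1L, 1L)); // duplicate rejected
        System.out.println("ok");
    }
}
```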
[03/50] [abbrv] hadoop git commit: HDFS-12594. snapshotDiff fails if the report exceeds the RPC response limit. Contributed by Shashikant Banerjee
Posted by vi...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b1c7654e/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDiffReport.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDiffReport.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDiffReport.java
index e0a7b5b..a4fb8ab 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDiffReport.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDiffReport.java
@@ -90,6 +90,7 @@ public class TestSnapshotDiffReport {
conf.setBoolean(
DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_DIFF_ALLOW_SNAP_ROOT_DESCENDANT,
true);
+ conf.setInt(DFSConfigKeys.DFS_NAMENODE_SNAPSHOT_DIFF_LISTING_LIMIT, 3);
cluster = new MiniDFSCluster.Builder(conf).numDataNodes(REPLICATION)
.format(true).build();
cluster.waitActive();
@@ -1293,4 +1294,119 @@ public class TestSnapshotDiffReport {
assertAtimeNotEquals(filePostSS, root, "s2", "s3");
}
+
+ /**
+ * Tests to verify the diff report with the maximum number of
+ * snapshot diff report entries per RPC set to 3.
+ * @throws Exception
+ */
+ @Test
+ public void testDiffReportWithRpcLimit() throws Exception {
+ final Path root = new Path("/");
+ hdfs.mkdirs(root);
+ for (int i = 1; i < 4; i++) {
+ final Path path = new Path(root, "dir" + i);
+ hdfs.mkdirs(path);
+ }
+ SnapshotTestHelper.createSnapshot(hdfs, root, "s0");
+ for (int i = 1; i < 4; i++) {
+ final Path path = new Path(root, "dir" + i);
+ for (int j = 1; j < 4; j++) {
+ final Path file = new Path(path, "file" + j);
+ DFSTestUtil.createFile(hdfs, file, BLOCKSIZE, REPLICATION, SEED);
+ }
+ }
+
+ SnapshotTestHelper.createSnapshot(hdfs, root, "s1");
+ verifyDiffReport(root, "s0", "s1",
+ new DiffReportEntry(DiffType.MODIFY, DFSUtil.string2Bytes("")),
+ new DiffReportEntry(DiffType.MODIFY, DFSUtil.string2Bytes("dir1")),
+ new DiffReportEntry(DiffType.CREATE,
+ DFSUtil.string2Bytes("dir1/file1")),
+ new DiffReportEntry(DiffType.CREATE,
+ DFSUtil.string2Bytes("dir1/file2")),
+ new DiffReportEntry(DiffType.CREATE,
+ DFSUtil.string2Bytes("dir1/file3")),
+ new DiffReportEntry(DiffType.MODIFY, DFSUtil.string2Bytes("dir2")),
+ new DiffReportEntry(DiffType.CREATE,
+ DFSUtil.string2Bytes("dir2/file1")),
+ new DiffReportEntry(DiffType.CREATE,
+ DFSUtil.string2Bytes("dir2/file2")),
+ new DiffReportEntry(DiffType.CREATE,
+ DFSUtil.string2Bytes("dir2/file3")),
+ new DiffReportEntry(DiffType.MODIFY, DFSUtil.string2Bytes("dir3")),
+ new DiffReportEntry(DiffType.CREATE,
+ DFSUtil.string2Bytes("dir3/file1")),
+ new DiffReportEntry(DiffType.CREATE,
+ DFSUtil.string2Bytes("dir3/file2")),
+ new DiffReportEntry(DiffType.CREATE,
+ DFSUtil.string2Bytes("dir3/file3")));
+ }
+
+ @Test
+ public void testDiffReportWithRpcLimit2() throws Exception {
+ final Path root = new Path("/");
+ hdfs.mkdirs(root);
+ for (int i = 1; i <=3; i++) {
+ final Path path = new Path(root, "dir" + i);
+ hdfs.mkdirs(path);
+ }
+ for (int i = 1; i <= 3; i++) {
+ final Path path = new Path(root, "dir" + i);
+ for (int j = 1; j < 4; j++) {
+ final Path file = new Path(path, "file" + j);
+ DFSTestUtil.createFile(hdfs, file, BLOCKSIZE, REPLICATION, SEED);
+ }
+ }
+ SnapshotTestHelper.createSnapshot(hdfs, root, "s0");
+ Path targetDir = new Path(root, "dir4");
+ //create directory dir4
+ hdfs.mkdirs(targetDir);
+ //moves files from dir1 to dir4
+ Path path = new Path(root, "dir1");
+ for (int j = 1; j < 4; j++) {
+ final Path srcPath = new Path(path, "file" + j);
+ final Path targetPath = new Path(targetDir, "file" + j);
+ hdfs.rename(srcPath, targetPath);
+ }
+ targetDir = new Path(root, "dir3");
+ //overwrite existing files in dir3 from files in dir1
+ path = new Path(root, "dir2");
+ for (int j = 1; j < 4; j++) {
+ final Path srcPath = new Path(path, "file" + j);
+ final Path targetPath = new Path(targetDir, "file" + j);
+ hdfs.rename(srcPath, targetPath, Rename.OVERWRITE);
+ }
+ final Path pathToRename = new Path(root, "dir2");
+ //move dir2 inside dir3
+ hdfs.rename(pathToRename, targetDir);
+ SnapshotTestHelper.createSnapshot(hdfs, root, "s1");
+ verifyDiffReport(root, "s0", "s1",
+ new DiffReportEntry(DiffType.MODIFY, DFSUtil.string2Bytes("")),
+ new DiffReportEntry(DiffType.CREATE,
+ DFSUtil.string2Bytes("dir4")),
+ new DiffReportEntry(DiffType.RENAME, DFSUtil.string2Bytes("dir2"),
+ DFSUtil.string2Bytes("dir3/dir2")),
+ new DiffReportEntry(DiffType.MODIFY, DFSUtil.string2Bytes("dir1")),
+ new DiffReportEntry(DiffType.RENAME, DFSUtil.string2Bytes("dir1/file1"),
+ DFSUtil.string2Bytes("dir4/file1")),
+ new DiffReportEntry(DiffType.RENAME, DFSUtil.string2Bytes("dir1/file2"),
+ DFSUtil.string2Bytes("dir4/file2")),
+ new DiffReportEntry(DiffType.RENAME, DFSUtil.string2Bytes("dir1/file3"),
+ DFSUtil.string2Bytes("dir4/file3")),
+ new DiffReportEntry(DiffType.MODIFY, DFSUtil.string2Bytes("dir2")),
+ new DiffReportEntry(DiffType.RENAME, DFSUtil.string2Bytes("dir2/file1"),
+ DFSUtil.string2Bytes("dir3/file1")),
+ new DiffReportEntry(DiffType.RENAME, DFSUtil.string2Bytes("dir2/file2"),
+ DFSUtil.string2Bytes("dir3/file2")),
+ new DiffReportEntry(DiffType.RENAME, DFSUtil.string2Bytes("dir2/file3"),
+ DFSUtil.string2Bytes("dir3/file3")),
+ new DiffReportEntry(DiffType.MODIFY, DFSUtil.string2Bytes("dir3")),
+ new DiffReportEntry(DiffType.DELETE,
+ DFSUtil.string2Bytes("dir3/file1")),
+ new DiffReportEntry(DiffType.DELETE,
+ DFSUtil.string2Bytes("dir3/file2")),
+ new DiffReportEntry(DiffType.DELETE,
+ DFSUtil.string2Bytes("dir3/file3")));
+ }
}
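The `DFS_NAMENODE_SNAPSHOT_DIFF_LISTING_LIMIT` setting above caps how many diff entries one RPC response may carry, so a full report is assembled over several calls. A hedged sketch of that pagination idea (plain Java; the names and batching scheme are illustrative, not the HDFS implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative pagination sketch: split a diff report into batches of at
// most `limit` entries, as a size-capped RPC response would.
public class DiffPagination {

    static List<List<String>> paginate(List<String> entries, int limit) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < entries.size(); i += limit) {
            batches.add(entries.subList(i, Math.min(i + limit, entries.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        // 13 entries with a per-RPC limit of 3 -> 5 calls (4 full + 1 partial)
        List<String> entries = new ArrayList<>();
        for (int i = 0; i < 13; i++) {
            entries.add("entry" + i);
        }
        List<List<String>> batches = paginate(entries, 3);
        assert batches.size() == 5;
        assert batches.get(4).size() == 1;
        System.out.println("ok");
    }
}
```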
[32/50] [abbrv] hadoop git commit: HDFS-11791. [READ] Test for increasing replication of provided files.
Posted by vi...@apache.org.
HDFS-11791. [READ] Test for increasing replication of provided files.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/90f4a78d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/90f4a78d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/90f4a78d
Branch: refs/heads/HDFS-9806
Commit: 90f4a78d83cb50af58c35c80110bd9e8fb72bb22
Parents: 3f008df
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Wed May 31 10:29:53 2017 -0700
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:58 2017 -0800
----------------------------------------------------------------------
.../TestNameNodeProvidedImplementation.java | 55 ++++++++++++++++++++
1 file changed, 55 insertions(+)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/90f4a78d/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index 5062439..e171557 100644
--- a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -23,6 +23,7 @@ import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
+import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;
@@ -34,10 +35,15 @@ import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.DFSClient;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.DFSTestUtil;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockFormatProvider;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockProvider;
import org.apache.hadoop.hdfs.server.common.BlockFormat;
@@ -378,4 +384,53 @@ public class TestNameNodeProvidedImplementation {
assertEquals(1, locations.length);
assertEquals(2, locations[0].getHosts().length);
}
+
+ private DatanodeInfo[] getAndCheckBlockLocations(DFSClient client,
+ String filename, int expectedLocations) throws IOException {
+ LocatedBlocks locatedBlocks = client.getLocatedBlocks(
+ filename, 0, baseFileLen);
+ //given the start and length in the above call,
+ //only one LocatedBlock in LocatedBlocks
+ assertEquals(1, locatedBlocks.getLocatedBlocks().size());
+ LocatedBlock locatedBlock = locatedBlocks.getLocatedBlocks().get(0);
+ assertEquals(expectedLocations, locatedBlock.getLocations().length);
+ return locatedBlock.getLocations();
+ }
+
+ /**
+ * Tests setting replication of provided files.
+ * @throws Exception
+ */
+ @Test
+ public void testSetReplicationForProvidedFiles() throws Exception {
+ createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
+ FixedBlockResolver.class);
+ startCluster(NNDIRPATH, 2, null,
+ new StorageType[][] {
+ {StorageType.PROVIDED},
+ {StorageType.DISK}},
+ false);
+
+ String filename = "/" + filePrefix + (numFiles - 1) + fileSuffix;
+ Path file = new Path(filename);
+ FileSystem fs = cluster.getFileSystem();
+
+ //set the replication to 2, and test that the file has
+ //the required replication.
+ fs.setReplication(file, (short) 2);
+ DFSTestUtil.waitForReplication((DistributedFileSystem) fs,
+ file, (short) 2, 10000);
+ DFSClient client = new DFSClient(new InetSocketAddress("localhost",
+ cluster.getNameNodePort()), cluster.getConfiguration(0));
+ getAndCheckBlockLocations(client, filename, 2);
+
+ //set the replication back to 1
+ fs.setReplication(file, (short) 1);
+ DFSTestUtil.waitForReplication((DistributedFileSystem) fs,
+ file, (short) 1, 10000);
+ //the only replica left should be the PROVIDED datanode
+ DatanodeInfo[] infos = getAndCheckBlockLocations(client, filename, 1);
+ assertEquals(cluster.getDataNodes().get(0).getDatanodeUuid(),
+ infos[0].getDatanodeUuid());
+ }
}
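testSetReplicationForProvidedFiles above raises replication to 2, drops it back to 1, and expects the surviving replica to be the one on PROVIDED storage. A minimal sketch of that "remove local replicas first" selection (plain Java; the enum and helper are illustrative names, not HDFS's replica-choosing code):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: when excess replicas must be removed, drop local
// (DISK) replicas first so the PROVIDED replica is the last one standing.
public class ReplicaTrim {

    enum StorageKind { PROVIDED, DISK }

    /** Keep `target` replicas, removing DISK replicas before PROVIDED ones. */
    static List<StorageKind> trim(List<StorageKind> replicas, int target) {
        List<StorageKind> kept = new ArrayList<>(replicas);
        while (kept.size() > target && kept.remove(StorageKind.DISK)) {
            // removed one DISK replica; loop until at target or no DISK left
        }
        return kept;
    }

    public static void main(String[] args) {
        List<StorageKind> reps =
            List.of(StorageKind.PROVIDED, StorageKind.DISK);
        // dropping from 2 replicas to 1 leaves the PROVIDED replica
        assert trim(reps, 1).equals(List.of(StorageKind.PROVIDED));
        System.out.println("ok");
    }
}
```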
[26/50] [abbrv] hadoop git commit: HDFS-11190. [READ] Namenode support for data stored in external stores.
Posted by vi...@apache.org.
HDFS-11190. [READ] Namenode support for data stored in external stores.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1cc1f214
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1cc1f214
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1cc1f214
Branch: refs/heads/HDFS-9806
Commit: 1cc1f21447f6eb2df76be075b77aa505ef078f50
Parents: e189df2
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Fri Apr 21 11:12:36 2017 -0700
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:57 2017 -0800
----------------------------------------------------------------------
.../hadoop/hdfs/protocol/LocatedBlock.java | 96 ++++-
.../org/apache/hadoop/hdfs/DFSConfigKeys.java | 5 +
.../blockmanagement/BlockFormatProvider.java | 91 ++++
.../server/blockmanagement/BlockManager.java | 95 +++--
.../server/blockmanagement/BlockProvider.java | 65 +++
.../BlockStoragePolicySuite.java | 6 +
.../blockmanagement/DatanodeDescriptor.java | 34 +-
.../server/blockmanagement/DatanodeManager.java | 2 +
.../blockmanagement/DatanodeStorageInfo.java | 4 +
.../blockmanagement/LocatedBlockBuilder.java | 109 +++++
.../blockmanagement/ProvidedStorageMap.java | 427 +++++++++++++++++++
.../src/main/resources/hdfs-default.xml | 30 +-
.../hadoop/hdfs/TestBlockStoragePolicy.java | 4 +
.../blockmanagement/TestDatanodeManager.java | 65 ++-
.../TestNameNodeProvidedImplementation.java | 345 +++++++++++++++
15 files changed, 1292 insertions(+), 86 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1cc1f214/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
index 85bec92..5ad0bca 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
@@ -18,6 +18,7 @@
package org.apache.hadoop.hdfs.protocol;
import java.util.Arrays;
+import java.util.Comparator;
import java.util.List;
import com.google.common.base.Preconditions;
@@ -62,40 +63,50 @@ public class LocatedBlock {
public LocatedBlock(ExtendedBlock b, DatanodeInfo[] locs) {
// By default, startOffset is unknown(-1) and corrupt is false.
- this(b, locs, null, null, -1, false, EMPTY_LOCS);
+ this(b, convert(locs, null, null), null, null, -1, false, EMPTY_LOCS);
}
public LocatedBlock(ExtendedBlock b, DatanodeInfo[] locs,
String[] storageIDs, StorageType[] storageTypes) {
- this(b, locs, storageIDs, storageTypes, -1, false, EMPTY_LOCS);
+ this(b, convert(locs, storageIDs, storageTypes),
+ storageIDs, storageTypes, -1, false, EMPTY_LOCS);
}
- public LocatedBlock(ExtendedBlock b, DatanodeInfo[] locs, String[] storageIDs,
- StorageType[] storageTypes, long startOffset,
+ public LocatedBlock(ExtendedBlock b, DatanodeInfo[] locs,
+ String[] storageIDs, StorageType[] storageTypes, long startOffset,
+ boolean corrupt, DatanodeInfo[] cachedLocs) {
+ this(b, convert(locs, storageIDs, storageTypes),
+ storageIDs, storageTypes, startOffset, corrupt,
+ null == cachedLocs || 0 == cachedLocs.length ? EMPTY_LOCS : cachedLocs);
+ }
+
+ public LocatedBlock(ExtendedBlock b, DatanodeInfoWithStorage[] locs,
+ String[] storageIDs, StorageType[] storageTypes, long startOffset,
boolean corrupt, DatanodeInfo[] cachedLocs) {
this.b = b;
this.offset = startOffset;
this.corrupt = corrupt;
- if (locs==null) {
- this.locs = EMPTY_LOCS;
- } else {
- this.locs = new DatanodeInfoWithStorage[locs.length];
- for(int i = 0; i < locs.length; i++) {
- DatanodeInfo di = locs[i];
- DatanodeInfoWithStorage storage = new DatanodeInfoWithStorage(di,
- storageIDs != null ? storageIDs[i] : null,
- storageTypes != null ? storageTypes[i] : null);
- this.locs[i] = storage;
- }
- }
+ this.locs = null == locs ? EMPTY_LOCS : locs;
this.storageIDs = storageIDs;
this.storageTypes = storageTypes;
+ this.cachedLocs = null == cachedLocs || 0 == cachedLocs.length
+ ? EMPTY_LOCS
+ : cachedLocs;
+ }
+
+ private static DatanodeInfoWithStorage[] convert(
+ DatanodeInfo[] infos, String[] storageIDs, StorageType[] storageTypes) {
+ if (null == infos) {
+ return EMPTY_LOCS;
+ }
- if (cachedLocs == null || cachedLocs.length == 0) {
- this.cachedLocs = EMPTY_LOCS;
- } else {
- this.cachedLocs = cachedLocs;
+ DatanodeInfoWithStorage[] ret = new DatanodeInfoWithStorage[infos.length];
+ for(int i = 0; i < infos.length; i++) {
+ ret[i] = new DatanodeInfoWithStorage(infos[i],
+ storageIDs != null ? storageIDs[i] : null,
+ storageTypes != null ? storageTypes[i] : null);
}
+ return ret;
}
public Token<BlockTokenIdentifier> getBlockToken() {
@@ -145,6 +156,51 @@ public class LocatedBlock {
}
}
+ /**
+ * Comparator that ensures that a PROVIDED storage type is greater than
+ * any other storage type. Any other storage types are considered equal.
+ */
+ private class ProvidedLastComparator
+ implements Comparator<DatanodeInfoWithStorage> {
+ @Override
+ public int compare(DatanodeInfoWithStorage dns1,
+ DatanodeInfoWithStorage dns2) {
+ if (StorageType.PROVIDED.equals(dns1.getStorageType())
+ && !StorageType.PROVIDED.equals(dns2.getStorageType())) {
+ return 1;
+ }
+ if (!StorageType.PROVIDED.equals(dns1.getStorageType())
+ && StorageType.PROVIDED.equals(dns2.getStorageType())) {
+ return -1;
+ }
+ // Storage types of dns1 and dns2 are now both provided or not provided;
+ // thus, they are essentially equal for the purposes of this comparator.
+ return 0;
+ }
+ }
+
+ /**
+ * Moves all locations that have {@link StorageType}
+ * {@code PROVIDED} to the end of the locations array without
+ * changing the relative ordering of the remaining locations.
+ * Only the first {@code activeLen} locations are considered.
+ * The caller must immediately invoke {@link
+ * org.apache.hadoop.hdfs.protocol.LocatedBlock#updateCachedStorageInfo}
+ * to update the cached Storage ID/Type arrays.
+ * @param activeLen the number of locations to consider
+ */
+ public void moveProvidedToEnd(int activeLen) {
+
+ if (activeLen <= 0) {
+ return;
+ }
+ // as this is a stable sort, for elements that are equal,
+ // the current order of the elements is maintained
+ Arrays.sort(locs, 0,
+ (activeLen < locs.length) ? activeLen : locs.length,
+ new ProvidedLastComparator());
+ }
+
public long getStartOffset() {
return offset;
}
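The `ProvidedLastComparator` above relies on a stable sort: all non-PROVIDED locations compare equal, so `Arrays.sort` moves PROVIDED entries to the end without disturbing the relative order the NameNode already chose. A minimal, self-contained sketch of that idiom (with a simplified `Loc` record standing in for `DatanodeInfoWithStorage`):

```java
import java.util.Arrays;
import java.util.Comparator;

public class ProvidedLastDemo {
    // Simplified stand-in for DatanodeInfoWithStorage: a name plus a flag.
    record Loc(String name, boolean provided) {}

    // PROVIDED locations order after all others; everything else compares
    // equal, so the stable sort preserves their existing relative order.
    static final Comparator<Loc> PROVIDED_LAST =
        (a, b) -> Boolean.compare(a.provided(), b.provided());

    // Mirrors moveProvidedToEnd: only the first activeLen entries are sorted.
    static void moveProvidedToEnd(Loc[] locs, int activeLen) {
        if (activeLen > 0) {
            Arrays.sort(locs, 0, Math.min(activeLen, locs.length),
                PROVIDED_LAST);
        }
    }

    public static void main(String[] args) {
        Loc[] locs = {
            new Loc("remote-store", true),
            new Loc("dn1", false),
            new Loc("dn2", false),
        };
        moveProvidedToEnd(locs, locs.length);
        // dn1 and dn2 keep their relative order; remote-store moves last
        System.out.println(Arrays.toString(
            Arrays.stream(locs).map(Loc::name).toArray()));
        // prints [dn1, dn2, remote-store]
    }
}
```

`Arrays.sort` on object arrays is guaranteed stable, which is what makes this safe to run after `sortByDistance` has already ordered the local replicas.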
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1cc1f214/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index ca753ce..7449987 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -328,6 +328,11 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
"dfs.namenode.edits.asynclogging";
public static final boolean DFS_NAMENODE_EDITS_ASYNC_LOGGING_DEFAULT = true;
+ public static final String DFS_NAMENODE_PROVIDED_ENABLED = "dfs.namenode.provided.enabled";
+ public static final boolean DFS_NAMENODE_PROVIDED_ENABLED_DEFAULT = false;
+
+ public static final String DFS_NAMENODE_BLOCK_PROVIDER_CLASS = "dfs.namenode.block.provider.class";
+
public static final String DFS_PROVIDER_CLASS = "dfs.provider.class";
public static final String DFS_PROVIDER_DF_CLASS = "dfs.provided.df.class";
public static final String DFS_PROVIDER_STORAGEUUID = "dfs.provided.storage.id";
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1cc1f214/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockFormatProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockFormatProvider.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockFormatProvider.java
new file mode 100644
index 0000000..930263d
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockFormatProvider.java
@@ -0,0 +1,91 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.blockmanagement;
+
+import java.io.IOException;
+import java.util.Iterator;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.server.common.BlockAlias;
+import org.apache.hadoop.hdfs.server.common.BlockFormat;
+import org.apache.hadoop.hdfs.server.common.TextFileRegionFormat;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Loads provided blocks from a {@link BlockFormat}.
+ */
+public class BlockFormatProvider extends BlockProvider
+ implements Configurable {
+
+ private Configuration conf;
+ private BlockFormat<? extends BlockAlias> blockFormat;
+ public static final Logger LOG =
+ LoggerFactory.getLogger(BlockFormatProvider.class);
+
+ @Override
+ @SuppressWarnings({ "rawtypes", "unchecked" })
+ public void setConf(Configuration conf) {
+ Class<? extends BlockFormat> c = conf.getClass(
+ DFSConfigKeys.DFS_PROVIDER_BLK_FORMAT_CLASS,
+ TextFileRegionFormat.class, BlockFormat.class);
+ blockFormat = ReflectionUtils.newInstance(c, conf);
+ LOG.info("Loaded BlockFormat class : " + c.getName());
+ this.conf = conf;
+ }
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ @Override
+ public Iterator<Block> iterator() {
+ try {
+ final BlockFormat.Reader<? extends BlockAlias> reader =
+ blockFormat.getReader(null);
+
+ return new Iterator<Block>() {
+
+ private final Iterator<? extends BlockAlias> inner = reader.iterator();
+
+ @Override
+ public boolean hasNext() {
+ return inner.hasNext();
+ }
+
+ @Override
+ public Block next() {
+ return inner.next().getBlock();
+ }
+
+ @Override
+ public void remove() {
+ throw new UnsupportedOperationException();
+ }
+ };
+ } catch (IOException e) {
+ throw new RuntimeException("Failed to read provided blocks", e);
+ }
+ }
+
+}
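`BlockFormatProvider.iterator()` above is an adapter: it wraps the reader's iterator of `BlockAlias` entries and maps each element to its `Block` on the fly, forbidding removal. A generic sketch of that idiom (the `MappingIterator` name is illustrative, not an HDFS class):

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

// Wraps an inner iterator and applies a mapping function lazily,
// the same shape as the anonymous Iterator<Block> in the diff above.
class MappingIterator<A, B> implements Iterator<B> {
    private final Iterator<A> inner;
    private final Function<A, B> fn;

    MappingIterator(Iterator<A> inner, Function<A, B> fn) {
        this.inner = inner;
        this.fn = fn;
    }

    @Override
    public boolean hasNext() { return inner.hasNext(); }

    @Override
    public B next() { return fn.apply(inner.next()); }

    @Override
    public void remove() {
        // provided blocks are read-only, mirroring the diff above
        throw new UnsupportedOperationException();
    }
}
```

Usage: `new MappingIterator<>(aliases.iterator(), BlockAlias::getBlock)` would reproduce the anonymous class in the patch without the boilerplate.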
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1cc1f214/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 4986027..df5d23a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -430,6 +430,9 @@ public class BlockManager implements BlockStatsMXBean {
*/
private final short minReplicationToBeInMaintenance;
+ /** Storages accessible from multiple DNs. */
+ private final ProvidedStorageMap providedStorageMap;
+
public BlockManager(final Namesystem namesystem, boolean haEnabled,
final Configuration conf) throws IOException {
this.namesystem = namesystem;
@@ -462,6 +465,8 @@ public class BlockManager implements BlockStatsMXBean {
blockTokenSecretManager = createBlockTokenSecretManager(conf);
+ providedStorageMap = new ProvidedStorageMap(namesystem, this, conf);
+
this.maxCorruptFilesReturned = conf.getInt(
DFSConfigKeys.DFS_DEFAULT_MAX_CORRUPT_FILES_RETURNED_KEY,
DFSConfigKeys.DFS_DEFAULT_MAX_CORRUPT_FILES_RETURNED);
@@ -1133,7 +1138,7 @@ public class BlockManager implements BlockStatsMXBean {
final long fileLength = bc.computeContentSummary(
getStoragePolicySuite()).getLength();
final long pos = fileLength - lastBlock.getNumBytes();
- return createLocatedBlock(lastBlock, pos,
+ return createLocatedBlock(null, lastBlock, pos,
BlockTokenIdentifier.AccessMode.WRITE);
}
@@ -1154,8 +1159,10 @@ public class BlockManager implements BlockStatsMXBean {
return locations;
}
- private List<LocatedBlock> createLocatedBlockList(final BlockInfo[] blocks,
- final long offset, final long length, final int nrBlocksToReturn,
+ private void createLocatedBlockList(
+ LocatedBlockBuilder locatedBlocks,
+ final BlockInfo[] blocks,
+ final long offset, final long length,
final AccessMode mode) throws IOException {
int curBlk;
long curPos = 0, blkSize = 0;
@@ -1170,21 +1177,22 @@ public class BlockManager implements BlockStatsMXBean {
}
if (nrBlocks > 0 && curBlk == nrBlocks) // offset >= end of file
- return Collections.emptyList();
+ return;
long endOff = offset + length;
- List<LocatedBlock> results = new ArrayList<>(blocks.length);
do {
- results.add(createLocatedBlock(blocks[curBlk], curPos, mode));
+ locatedBlocks.addBlock(
+ createLocatedBlock(locatedBlocks, blocks[curBlk], curPos, mode));
curPos += blocks[curBlk].getNumBytes();
curBlk++;
} while (curPos < endOff
&& curBlk < blocks.length
- && results.size() < nrBlocksToReturn);
- return results;
+ && !locatedBlocks.isBlockMax());
+ return;
}
- private LocatedBlock createLocatedBlock(final BlockInfo[] blocks,
+ private LocatedBlock createLocatedBlock(LocatedBlockBuilder locatedBlocks,
+ final BlockInfo[] blocks,
final long endPos, final AccessMode mode) throws IOException {
int curBlk;
long curPos = 0;
@@ -1197,12 +1205,13 @@ public class BlockManager implements BlockStatsMXBean {
curPos += blkSize;
}
- return createLocatedBlock(blocks[curBlk], curPos, mode);
+ return createLocatedBlock(locatedBlocks, blocks[curBlk], curPos, mode);
}
- private LocatedBlock createLocatedBlock(final BlockInfo blk, final long pos,
- final AccessMode mode) throws IOException {
- final LocatedBlock lb = createLocatedBlock(blk, pos);
+ private LocatedBlock createLocatedBlock(LocatedBlockBuilder locatedBlocks,
+ final BlockInfo blk, final long pos, final AccessMode mode)
+ throws IOException {
+ final LocatedBlock lb = createLocatedBlock(locatedBlocks, blk, pos);
if (mode != null) {
setBlockToken(lb, mode);
}
@@ -1210,21 +1219,24 @@ public class BlockManager implements BlockStatsMXBean {
}
/** @return a LocatedBlock for the given block */
- private LocatedBlock createLocatedBlock(final BlockInfo blk, final long pos)
- throws IOException {
+ private LocatedBlock createLocatedBlock(LocatedBlockBuilder locatedBlocks,
+ final BlockInfo blk, final long pos) throws IOException {
if (!blk.isComplete()) {
final BlockUnderConstructionFeature uc = blk.getUnderConstructionFeature();
if (blk.isStriped()) {
final DatanodeStorageInfo[] storages = uc.getExpectedStorageLocations();
final ExtendedBlock eb = new ExtendedBlock(getBlockPoolId(),
blk);
+ //TODO use locatedBlocks builder??
return newLocatedStripedBlock(eb, storages, uc.getBlockIndices(), pos,
false);
} else {
final DatanodeStorageInfo[] storages = uc.getExpectedStorageLocations();
final ExtendedBlock eb = new ExtendedBlock(getBlockPoolId(),
blk);
- return newLocatedBlock(eb, storages, pos, false);
+ return null == locatedBlocks
+ ? newLocatedBlock(eb, storages, pos, false)
+ : locatedBlocks.newLocatedBlock(eb, storages, pos, false);
}
}
@@ -1288,9 +1300,10 @@ public class BlockManager implements BlockStatsMXBean {
" numCorrupt: " + numCorruptNodes +
" numCorruptRepls: " + numCorruptReplicas;
final ExtendedBlock eb = new ExtendedBlock(getBlockPoolId(), blk);
- return blockIndices == null ?
- newLocatedBlock(eb, machines, pos, isCorrupt) :
- newLocatedStripedBlock(eb, machines, blockIndices, pos, isCorrupt);
+ return blockIndices == null
+ ? null == locatedBlocks ? newLocatedBlock(eb, machines, pos, isCorrupt)
+ : locatedBlocks.newLocatedBlock(eb, machines, pos, isCorrupt)
+ : newLocatedStripedBlock(eb, machines, blockIndices, pos, isCorrupt);
}
/** Create a LocatedBlocks. */
@@ -1312,27 +1325,31 @@ public class BlockManager implements BlockStatsMXBean {
LOG.debug("blocks = {}", java.util.Arrays.asList(blocks));
}
final AccessMode mode = needBlockToken? BlockTokenIdentifier.AccessMode.READ: null;
- final List<LocatedBlock> locatedblocks = createLocatedBlockList(
- blocks, offset, length, Integer.MAX_VALUE, mode);
- final LocatedBlock lastlb;
- final boolean isComplete;
+ LocatedBlockBuilder locatedBlocks = providedStorageMap
+ .newLocatedBlocks(Integer.MAX_VALUE)
+ .fileLength(fileSizeExcludeBlocksUnderConstruction)
+ .lastUC(isFileUnderConstruction)
+ .encryption(feInfo)
+ .erasureCoding(ecPolicy);
+
+ createLocatedBlockList(locatedBlocks, blocks, offset, length, mode);
if (!inSnapshot) {
final BlockInfo last = blocks[blocks.length - 1];
final long lastPos = last.isComplete()?
fileSizeExcludeBlocksUnderConstruction - last.getNumBytes()
: fileSizeExcludeBlocksUnderConstruction;
- lastlb = createLocatedBlock(last, lastPos, mode);
- isComplete = last.isComplete();
+
+ locatedBlocks
+ .lastBlock(createLocatedBlock(locatedBlocks, last, lastPos, mode))
+ .lastComplete(last.isComplete());
} else {
- lastlb = createLocatedBlock(blocks,
- fileSizeExcludeBlocksUnderConstruction, mode);
- isComplete = true;
+ locatedBlocks
+ .lastBlock(createLocatedBlock(locatedBlocks, blocks,
+ fileSizeExcludeBlocksUnderConstruction, mode))
+ .lastComplete(true);
}
- LocatedBlocks locations = new LocatedBlocks(
- fileSizeExcludeBlocksUnderConstruction,
- isFileUnderConstruction, locatedblocks, lastlb, isComplete, feInfo,
- ecPolicy);
+ LocatedBlocks locations = locatedBlocks.build();
// Set caching information for the located blocks.
CacheManager cm = namesystem.getCacheManager();
if (cm != null) {
@@ -2432,7 +2449,10 @@ public class BlockManager implements BlockStatsMXBean {
// To minimize startup time, we discard any second (or later) block reports
// that we receive while still in startup phase.
- DatanodeStorageInfo storageInfo = node.getStorageInfo(storage.getStorageID());
+ // !#! Register DN with provided storage, not with storage owned by DN
+ // !#! DN should still have a ref to the DNStorageInfo
+ DatanodeStorageInfo storageInfo =
+ providedStorageMap.getStorage(node, storage);
if (storageInfo == null) {
// We handle this for backwards compatibility.
@@ -2464,9 +2484,12 @@ public class BlockManager implements BlockStatsMXBean {
nodeID.getDatanodeUuid());
processFirstBlockReport(storageInfo, newReport);
} else {
- invalidatedBlocks = processReport(storageInfo, newReport, context);
+ // Block reports for provided storage are not
+ // maintained by DN heartbeats
+ if (!StorageType.PROVIDED.equals(storageInfo.getStorageType())) {
+ invalidatedBlocks = processReport(storageInfo, newReport, context);
+ }
}
-
storageInfo.receivedBlockReport();
} finally {
endTime = Time.monotonicNow();
@@ -2680,7 +2703,7 @@ public class BlockManager implements BlockStatsMXBean {
* @param report - the initial block report, to be processed
* @throws IOException
*/
- private void processFirstBlockReport(
+ void processFirstBlockReport(
final DatanodeStorageInfo storageInfo,
final BlockListAsLongs report) throws IOException {
if (report == null) return;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1cc1f214/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java
new file mode 100644
index 0000000..d8bed16
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java
@@ -0,0 +1,65 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.blockmanagement;
+
+import java.io.IOException;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.server.blockmanagement.ProvidedStorageMap.ProvidedBlockList;
+import org.apache.hadoop.hdfs.util.RwLock;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Used to load provided blocks in the {@link BlockManager}.
+ */
+public abstract class BlockProvider implements Iterable<Block> {
+
+ private static final Logger LOG =
+ LoggerFactory.getLogger(ProvidedStorageMap.class);
+
+ private RwLock lock;
+ private BlockManager bm;
+ private DatanodeStorageInfo storage;
+ private boolean hasDNs = false;
+
+ /**
+ * @param lock the namesystem lock
+ * @param bm block manager
+ * @param storage storage for provided blocks
+ */
+ void init(RwLock lock, BlockManager bm, DatanodeStorageInfo storage) {
+ this.bm = bm;
+ this.lock = lock;
+ this.storage = storage;
+ }
+
+ /**
+ * Start processing the block report for provided blocks.
+ * @throws IOException if the block report cannot be processed
+ */
+ void start() throws IOException {
+ assert lock.hasWriteLock() : "Not holding write lock";
+ if (hasDNs) {
+ return;
+ }
+ LOG.info("Processing first block report from storage: " + storage);
+ // first pass; periodic refresh should call bm.processReport
+ bm.processFirstBlockReport(storage, new ProvidedBlockList(iterator()));
+ hasDNs = true;
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1cc1f214/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
index c8923da..6ea5198 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
@@ -82,6 +82,12 @@ public class BlockStoragePolicySuite {
HdfsConstants.COLD_STORAGE_POLICY_NAME,
new StorageType[]{StorageType.ARCHIVE}, StorageType.EMPTY_ARRAY,
StorageType.EMPTY_ARRAY);
+ final byte providedId = HdfsConstants.PROVIDED_STORAGE_POLICY_ID;
+ policies[providedId] = new BlockStoragePolicy(providedId,
+ HdfsConstants.PROVIDED_STORAGE_POLICY_NAME,
+ new StorageType[]{StorageType.PROVIDED, StorageType.DISK},
+ new StorageType[]{StorageType.PROVIDED, StorageType.DISK},
+ new StorageType[]{StorageType.PROVIDED, StorageType.DISK});
return new BlockStoragePolicySuite(hotId, policies);
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1cc1f214/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index d35894c..28a3d1a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -151,7 +151,7 @@ public class DatanodeDescriptor extends DatanodeInfo {
private final LeavingServiceStatus leavingServiceStatus =
new LeavingServiceStatus();
- private final Map<String, DatanodeStorageInfo> storageMap =
+ protected final Map<String, DatanodeStorageInfo> storageMap =
new HashMap<>();
/**
@@ -322,6 +322,12 @@ public class DatanodeDescriptor extends DatanodeInfo {
boolean hasStaleStorages() {
synchronized (storageMap) {
for (DatanodeStorageInfo storage : storageMap.values()) {
+ if (StorageType.PROVIDED.equals(storage.getStorageType())) {
+ // Verifying that the provided storage participated in this heartbeat
+ // would require passing the DatanodeDescriptor down, e.g.
+ // storageInfo.verifyBlockReportId(this, curBlockReportId)
+ continue;
+ }
if (storage.areBlockContentsStale()) {
return true;
}
@@ -443,17 +449,22 @@ public class DatanodeDescriptor extends DatanodeInfo {
this.volumeFailures = volFailures;
this.volumeFailureSummary = volumeFailureSummary;
for (StorageReport report : reports) {
+ totalCapacity += report.getCapacity();
+ totalRemaining += report.getRemaining();
+ totalBlockPoolUsed += report.getBlockPoolUsed();
+ totalDfsUsed += report.getDfsUsed();
+ totalNonDfsUsed += report.getNonDfsUsed();
+
+ if (StorageType.PROVIDED.equals(
+ report.getStorage().getStorageType())) {
+ continue;
+ }
DatanodeStorageInfo storage = updateStorage(report.getStorage());
if (checkFailedStorages) {
failedStorageInfos.remove(storage);
}
storage.receivedHeartbeat(report);
- totalCapacity += report.getCapacity();
- totalRemaining += report.getRemaining();
- totalBlockPoolUsed += report.getBlockPoolUsed();
- totalDfsUsed += report.getDfsUsed();
- totalNonDfsUsed += report.getNonDfsUsed();
}
// Update total metrics for the node.
@@ -474,6 +485,17 @@ public class DatanodeDescriptor extends DatanodeInfo {
}
}
+ void injectStorage(DatanodeStorageInfo s) {
+ synchronized (storageMap) {
+ DatanodeStorageInfo storage = storageMap.get(s.getStorageID());
+ if (null == storage) {
+ storageMap.put(s.getStorageID(), s);
+ } else {
+ assert storage == s : "found " + storage + " expected " + s;
+ }
+ }
+ }
+
/**
* Remove stale storages from storageMap. We must not remove any storages
* as long as they have associated block replicas.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1cc1f214/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
index c75bcea..a7e31a2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
@@ -532,6 +532,8 @@ public class DatanodeManager {
} else {
networktopology.sortByDistance(client, lb.getLocations(), activeLen);
}
+ //move PROVIDED storage to the end to prefer local replicas.
+ lb.moveProvidedToEnd(activeLen);
// must update cache since we modified locations array
lb.updateCachedStorageInfo();
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1cc1f214/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
index b1ccea2..76bf915 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java
@@ -172,6 +172,10 @@ public class DatanodeStorageInfo {
this.state = state;
}
+ void setHeartbeatedSinceFailover(boolean value) {
+ heartbeatedSinceFailover = value;
+ }
+
boolean areBlocksOnFailedStorage() {
return getState() == State.FAILED && !blocks.isEmpty();
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1cc1f214/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/LocatedBlockBuilder.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/LocatedBlockBuilder.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/LocatedBlockBuilder.java
new file mode 100644
index 0000000..0056887
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/LocatedBlockBuilder.java
@@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.blockmanagement;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.hadoop.fs.FileEncryptionInfo;
+import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
+import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+class LocatedBlockBuilder {
+
+ protected long flen;
+ protected List<LocatedBlock> blocks = Collections.<LocatedBlock>emptyList();
+ protected boolean isUC;
+ protected LocatedBlock last;
+ protected boolean lastComplete;
+ protected FileEncryptionInfo feInfo;
+ private final int maxBlocks;
+ protected ErasureCodingPolicy ecPolicy;
+
+ LocatedBlockBuilder(int maxBlocks) {
+ this.maxBlocks = maxBlocks;
+ }
+
+ boolean isBlockMax() {
+ return blocks.size() >= maxBlocks;
+ }
+
+ LocatedBlockBuilder fileLength(long fileLength) {
+ flen = fileLength;
+ return this;
+ }
+
+ LocatedBlockBuilder addBlock(LocatedBlock block) {
+ if (blocks.isEmpty()) {
+ blocks = new ArrayList<>();
+ }
+ blocks.add(block);
+ return this;
+ }
+
+ // return new block so tokens can be set
+ LocatedBlock newLocatedBlock(ExtendedBlock eb,
+ DatanodeStorageInfo[] storage,
+ long pos, boolean isCorrupt) {
+ LocatedBlock blk =
+ BlockManager.newLocatedBlock(eb, storage, pos, isCorrupt);
+ return blk;
+ }
+
+ LocatedBlockBuilder lastUC(boolean underConstruction) {
+ isUC = underConstruction;
+ return this;
+ }
+
+ LocatedBlockBuilder lastBlock(LocatedBlock block) {
+ last = block;
+ return this;
+ }
+
+ LocatedBlockBuilder lastComplete(boolean complete) {
+ lastComplete = complete;
+ return this;
+ }
+
+ LocatedBlockBuilder encryption(FileEncryptionInfo fileEncryptionInfo) {
+ feInfo = fileEncryptionInfo;
+ return this;
+ }
+
+ LocatedBlockBuilder erasureCoding(ErasureCodingPolicy codingPolicy) {
+ ecPolicy = codingPolicy;
+ return this;
+ }
+
+ LocatedBlocks build(DatanodeDescriptor client) {
+ return build();
+ }
+
+ LocatedBlocks build() {
+ return new LocatedBlocks(flen, isUC, blocks, last,
+ lastComplete, feInfo, ecPolicy);
+ }
+
+}
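`LocatedBlockBuilder` above is a fluent builder: each setter returns `this` so calls chain, `isBlockMax()` lets the caller stop adding blocks at the cap, and `build()` assembles the final `LocatedBlocks`. A compact sketch of the same idiom, with simplified stand-in names rather than the real HDFS types (this sketch enforces the cap inside `addBlock`, whereas the patch leaves that check to the caller):

```java
import java.util.ArrayList;
import java.util.List;

// Fluent builder: setters return `this` for chaining; build() assembles
// the result once all fields are supplied.
class BlockListBuilder {
    private long fileLength;
    private boolean lastComplete;
    private final List<String> blocks = new ArrayList<>();
    private final int maxBlocks;

    BlockListBuilder(int maxBlocks) { this.maxBlocks = maxBlocks; }

    // mirrors isBlockMax() in the patch
    boolean isBlockMax() { return blocks.size() >= maxBlocks; }

    BlockListBuilder fileLength(long len) { fileLength = len; return this; }
    BlockListBuilder lastComplete(boolean c) { lastComplete = c; return this; }

    BlockListBuilder addBlock(String b) {
        if (!isBlockMax()) { blocks.add(b); } // sketch-only cap enforcement
        return this;
    }

    String build() {
        return "len=" + fileLength + " complete=" + lastComplete
            + " blocks=" + blocks;
    }
}
```

The `build(DatanodeDescriptor client)` overload in the patch exists so a subclass (e.g. one created by `ProvidedStorageMap.newLocatedBlocks`) can tailor the result per client while callers keep a single call site.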
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1cc1f214/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
new file mode 100644
index 0000000..d222344
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
@@ -0,0 +1,427 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.blockmanagement;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.ConcurrentSkipListMap;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;
+import org.apache.hadoop.hdfs.protocol.DatanodeID;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfoWithStorage;
+import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
+import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State;
+import org.apache.hadoop.hdfs.util.RwLock;
+import org.apache.hadoop.util.ReflectionUtils;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.protobuf.ByteString;
+
+/**
+ * This class allows us to manage and multiplex between storages local to
+ * datanodes, and provided storage.
+ */
+public class ProvidedStorageMap {
+
+ private static final Logger LOG =
+ LoggerFactory.getLogger(ProvidedStorageMap.class);
+
+ // limit to a single provider for now
+ private final BlockProvider blockProvider;
+ private final String storageId;
+ private final ProvidedDescriptor providedDescriptor;
+ private final DatanodeStorageInfo providedStorageInfo;
+ private boolean providedEnabled;
+
+ ProvidedStorageMap(RwLock lock, BlockManager bm, Configuration conf)
+ throws IOException {
+
+ storageId = conf.get(DFSConfigKeys.DFS_PROVIDER_STORAGEUUID,
+ DFSConfigKeys.DFS_PROVIDER_STORAGEUUID_DEFAULT);
+
+ providedEnabled = conf.getBoolean(
+ DFSConfigKeys.DFS_NAMENODE_PROVIDED_ENABLED,
+ DFSConfigKeys.DFS_NAMENODE_PROVIDED_ENABLED_DEFAULT);
+
+ if (!providedEnabled) {
+ // disable mapping
+ blockProvider = null;
+ providedDescriptor = null;
+ providedStorageInfo = null;
+ return;
+ }
+
+ DatanodeStorage ds = new DatanodeStorage(
+ storageId, State.NORMAL, StorageType.PROVIDED);
+ providedDescriptor = new ProvidedDescriptor();
+ providedStorageInfo = providedDescriptor.createProvidedStorage(ds);
+
+ // load block reader into storage
+ Class<? extends BlockProvider> fmt = conf.getClass(
+ DFSConfigKeys.DFS_NAMENODE_BLOCK_PROVIDER_CLASS,
+ BlockFormatProvider.class, BlockProvider.class);
+
+ blockProvider = ReflectionUtils.newInstance(fmt, conf);
+ blockProvider.init(lock, bm, providedStorageInfo);
+ LOG.info("Loaded block provider class: " +
+ blockProvider.getClass() + " storage: " + providedStorageInfo);
+ }
+
+ /**
+ * @param dn datanode descriptor
+ * @param s data node storage
+ * @return the {@link DatanodeStorageInfo} for the specified datanode.
+ * If {@code s} corresponds to a provided storage, the storage info
+ * representing provided storage is returned.
+ * @throws IOException
+ */
+ DatanodeStorageInfo getStorage(DatanodeDescriptor dn, DatanodeStorage s)
+ throws IOException {
+ if (providedEnabled && storageId.equals(s.getStorageID())) {
+ if (StorageType.PROVIDED.equals(s.getStorageType())) {
+ // poll service, initiate
+ blockProvider.start();
+ dn.injectStorage(providedStorageInfo);
+ return providedDescriptor.getProvidedStorage(dn, s);
+ }
+ LOG.warn("Reserved storage {} reported as non-provided from {}", s, dn);
+ }
+ return dn.getStorageInfo(s.getStorageID());
+ }
+
+ public LocatedBlockBuilder newLocatedBlocks(int maxValue) {
+ if (!providedEnabled) {
+ return new LocatedBlockBuilder(maxValue);
+ }
+ return new ProvidedBlocksBuilder(maxValue);
+ }
+
+ /**
+ * Builder used for creating {@link LocatedBlocks} when a block is provided.
+ */
+ class ProvidedBlocksBuilder extends LocatedBlockBuilder {
+
+ private ShadowDatanodeInfoWithStorage pending;
+
+ ProvidedBlocksBuilder(int maxBlocks) {
+ super(maxBlocks);
+ pending = new ShadowDatanodeInfoWithStorage(
+ providedDescriptor, storageId);
+ }
+
+ @Override
+ LocatedBlock newLocatedBlock(ExtendedBlock eb,
+ DatanodeStorageInfo[] storages, long pos, boolean isCorrupt) {
+
+ DatanodeInfoWithStorage[] locs =
+ new DatanodeInfoWithStorage[storages.length];
+ String[] sids = new String[storages.length];
+ StorageType[] types = new StorageType[storages.length];
+ for (int i = 0; i < storages.length; ++i) {
+ sids[i] = storages[i].getStorageID();
+ types[i] = storages[i].getStorageType();
+ if (StorageType.PROVIDED.equals(storages[i].getStorageType())) {
+ locs[i] = pending;
+ } else {
+ locs[i] = new DatanodeInfoWithStorage(
+ storages[i].getDatanodeDescriptor(), sids[i], types[i]);
+ }
+ }
+ return new LocatedBlock(eb, locs, sids, types, pos, isCorrupt, null);
+ }
+
+ @Override
+ LocatedBlocks build(DatanodeDescriptor client) {
+ // TODO: to support multiple provided storages, need to pass/maintain map
+ // set all fields of pending DatanodeInfo
+ List<String> excludedUUids = new ArrayList<String>();
+ for (LocatedBlock b: blocks) {
+ DatanodeInfo[] infos = b.getLocations();
+ StorageType[] types = b.getStorageTypes();
+
+ for (int i = 0; i < types.length; i++) {
+ if (!StorageType.PROVIDED.equals(types[i])) {
+ excludedUUids.add(infos[i].getDatanodeUuid());
+ }
+ }
+ }
+
+ DatanodeDescriptor dn = providedDescriptor.choose(client, excludedUUids);
+ if (dn == null) {
+ dn = providedDescriptor.choose(client);
+ }
+
+ pending.replaceInternal(dn);
+ return new LocatedBlocks(
+ flen, isUC, blocks, last, lastComplete, feInfo, ecPolicy);
+ }
+
+ @Override
+ LocatedBlocks build() {
+ return build(providedDescriptor.chooseRandom());
+ }
+ }
+
+ /**
+ * An abstract {@link DatanodeInfoWithStorage} to represent provided storage.
+ */
+ static class ShadowDatanodeInfoWithStorage extends DatanodeInfoWithStorage {
+ private String shadowUuid;
+
+ ShadowDatanodeInfoWithStorage(DatanodeDescriptor d, String storageId) {
+ super(d, storageId, StorageType.PROVIDED);
+ }
+
+ @Override
+ public String getDatanodeUuid() {
+ return shadowUuid;
+ }
+
+ public void setDatanodeUuid(String uuid) {
+ shadowUuid = uuid;
+ }
+
+ void replaceInternal(DatanodeDescriptor dn) {
+ updateRegInfo(dn); // overwrite DatanodeID (except UUID)
+ setDatanodeUuid(dn.getDatanodeUuid());
+ setCapacity(dn.getCapacity());
+ setDfsUsed(dn.getDfsUsed());
+ setRemaining(dn.getRemaining());
+ setBlockPoolUsed(dn.getBlockPoolUsed());
+ setCacheCapacity(dn.getCacheCapacity());
+ setCacheUsed(dn.getCacheUsed());
+ setLastUpdate(dn.getLastUpdate());
+ setLastUpdateMonotonic(dn.getLastUpdateMonotonic());
+ setXceiverCount(dn.getXceiverCount());
+ setNetworkLocation(dn.getNetworkLocation());
+ adminState = dn.getAdminState();
+ setUpgradeDomain(dn.getUpgradeDomain());
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ return super.equals(obj);
+ }
+
+ @Override
+ public int hashCode() {
+ return super.hashCode();
+ }
+ }
+
+ /**
+ * An abstract DatanodeDescriptor to track datanodes with provided storages.
+ * NOTE: never resolved through registerDatanode, so not in the topology.
+ */
+ static class ProvidedDescriptor extends DatanodeDescriptor {
+
+ private final NavigableMap<String, DatanodeDescriptor> dns =
+ new ConcurrentSkipListMap<>();
+
+ ProvidedDescriptor() {
+ super(new DatanodeID(
+ null, // String ipAddr,
+ null, // String hostName,
+ UUID.randomUUID().toString(), // String datanodeUuid,
+ 0, // int xferPort,
+ 0, // int infoPort,
+ 0, // int infoSecurePort,
+ 0)); // int ipcPort
+ }
+
+ DatanodeStorageInfo getProvidedStorage(
+ DatanodeDescriptor dn, DatanodeStorage s) {
+ dns.put(dn.getDatanodeUuid(), dn);
+ // TODO: maintain separate RPC ident per dn
+ return storageMap.get(s.getStorageID());
+ }
+
+ DatanodeStorageInfo createProvidedStorage(DatanodeStorage ds) {
+ assert null == storageMap.get(ds.getStorageID());
+ DatanodeStorageInfo storage = new DatanodeStorageInfo(this, ds);
+ storage.setHeartbeatedSinceFailover(true);
+ storageMap.put(storage.getStorageID(), storage);
+ return storage;
+ }
+
+ DatanodeDescriptor choose(DatanodeDescriptor client) {
+ // exact match for now
+ DatanodeDescriptor dn = dns.get(client.getDatanodeUuid());
+ if (null == dn) {
+ dn = chooseRandom();
+ }
+ return dn;
+ }
+
+ DatanodeDescriptor choose(DatanodeDescriptor client,
+ List<String> excludedUUids) {
+ // exact match for now
+ DatanodeDescriptor dn = dns.get(client.getDatanodeUuid());
+
+ if (null == dn || excludedUUids.contains(client.getDatanodeUuid())) {
+ dn = null;
+ Set<String> exploredUUids = new HashSet<String>();
+
+ while(exploredUUids.size() < dns.size()) {
+ Map.Entry<String, DatanodeDescriptor> d =
+ dns.ceilingEntry(UUID.randomUUID().toString());
+ if (null == d) {
+ d = dns.firstEntry();
+ }
+ String uuid = d.getValue().getDatanodeUuid();
+ //this node has already been explored, and was not selected earlier
+ if (exploredUUids.contains(uuid)) {
+ continue;
+ }
+ exploredUUids.add(uuid);
+ //this node has been excluded
+ if (excludedUUids.contains(uuid)) {
+ continue;
+ }
+ return dns.get(uuid);
+ }
+ }
+
+ return dn;
+ }
+
+ DatanodeDescriptor chooseRandom(DatanodeStorageInfo[] excludedStorages) {
+ // TODO: Currently this is not uniformly random;
+ // skewed toward sparse sections of the ids
+ Set<DatanodeDescriptor> excludedNodes =
+ new HashSet<DatanodeDescriptor>();
+ if (excludedStorages != null) {
+ for (int i= 0; i < excludedStorages.length; i++) {
+ LOG.info("Excluded: " + excludedStorages[i].getDatanodeDescriptor());
+ excludedNodes.add(excludedStorages[i].getDatanodeDescriptor());
+ }
+ }
+ Set<DatanodeDescriptor> exploredNodes = new HashSet<DatanodeDescriptor>();
+
+ while(exploredNodes.size() < dns.size()) {
+ Map.Entry<String, DatanodeDescriptor> d =
+ dns.ceilingEntry(UUID.randomUUID().toString());
+ if (null == d) {
+ d = dns.firstEntry();
+ }
+ DatanodeDescriptor node = d.getValue();
+ //this node has already been explored, and was not selected earlier
+ if (exploredNodes.contains(node)) {
+ continue;
+ }
+ exploredNodes.add(node);
+ //this node has been excluded
+ if (excludedNodes.contains(node)) {
+ continue;
+ }
+ return node;
+ }
+ return null;
+ }
+
+ DatanodeDescriptor chooseRandom() {
+ return chooseRandom(null);
+ }
+
+ @Override
+ void addBlockToBeReplicated(Block block, DatanodeStorageInfo[] targets) {
+ // pick a random datanode, delegate to it
+ DatanodeDescriptor node = chooseRandom(targets);
+ if (node != null) {
+ node.addBlockToBeReplicated(block, targets);
+ } else {
+ LOG.error("Cannot find a source node to replicate block: "
+ + block + " from");
+ }
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ return (this == obj) || super.equals(obj);
+ }
+
+ @Override
+ public int hashCode() {
+ return super.hashCode();
+ }
+ }
+
+ /**
+ * Used to emulate block reports for provided blocks.
+ */
+ static class ProvidedBlockList extends BlockListAsLongs {
+
+ private final Iterator<Block> inner;
+
+ ProvidedBlockList(Iterator<Block> inner) {
+ this.inner = inner;
+ }
+
+ @Override
+ public Iterator<BlockReportReplica> iterator() {
+ return new Iterator<BlockReportReplica>() {
+ @Override
+ public BlockReportReplica next() {
+ return new BlockReportReplica(inner.next());
+ }
+ @Override
+ public boolean hasNext() {
+ return inner.hasNext();
+ }
+ @Override
+ public void remove() {
+ throw new UnsupportedOperationException();
+ }
+ };
+ }
+
+ @Override
+ public int getNumberOfBlocks() {
+ // VERIFY: only printed for debugging
+ return -1;
+ }
+
+ @Override
+ public ByteString getBlocksBuffer() {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ public long[] getBlockListAsLongs() {
+ // should only be used for backwards compat, DN.ver > NN.ver
+ throw new UnsupportedOperationException();
+ }
+ }
+}
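The ceiling-entry selection that `choose` and `chooseRandom` rely on can be isolated into a small, self-contained sketch (the class and method names below are hypothetical, not part of the patch). It draws a random UUID and takes the entry at the next key at or above it in the sorted map, wrapping to the first entry when the draw lands past the last key; as the TODO in `chooseRandom` notes, keys that follow large gaps in the UUID space are favored, so the pick is not uniformly random.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentSkipListMap;

// Hypothetical stand-alone sketch of the selection used by
// ProvidedDescriptor#chooseRandom: pick the entry whose key is the
// ceiling of a freshly drawn random UUID, wrapping around to the
// first entry when the draw is larger than every key in the map.
public class CeilingPickSketch {

  static <V> V pick(ConcurrentSkipListMap<String, V> map) {
    if (map.isEmpty()) {
      return null;
    }
    Map.Entry<String, V> e = map.ceilingEntry(UUID.randomUUID().toString());
    if (e == null) {
      e = map.firstEntry(); // wrapped past the largest key
    }
    return e.getValue();
  }

  public static void main(String[] args) {
    ConcurrentSkipListMap<String, String> dns = new ConcurrentSkipListMap<>();
    for (int i = 0; i < 3; i++) {
      dns.put(UUID.randomUUID().toString(), "dn-" + i);
    }
    System.out.println("chosen: " + pick(dns));
  }
}
```

Because each entry's selection probability is proportional to the gap in the key space below it, a map with few, clustered UUIDs skews the choice; this is why the patch leaves a TODO rather than claiming uniformity.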
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1cc1f214/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 169dfc2..0f1407a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -4622,14 +4622,30 @@
</property>
<property>
+ <name>dfs.namenode.provided.enabled</name>
+ <value>false</value>
+ <description>
+ Enables the Namenode to handle provided storages.
+ </description>
+ </property>
+
+ <property>
+ <name>dfs.namenode.block.provider.class</name>
+ <value>org.apache.hadoop.hdfs.server.blockmanagement.BlockFormatProvider</value>
+ <description>
+ The class that is used to load provided blocks in the Namenode.
+ </description>
+ </property>
+
+ <property>
<name>dfs.provider.class</name>
<value>org.apache.hadoop.hdfs.server.common.TextFileRegionProvider</value>
<description>
- The class that is used to load information about blocks stored in
- provided storages.
- org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TextFileRegionProvider
- is used as the default, which expects the blocks to be specified
- using a delimited text file.
+ The class that is used to load information about blocks stored in
+ provided storages.
+ org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TextFileRegionProvider
+ is used as the default, which expects the blocks to be specified
+ using a delimited text file.
</description>
</property>
@@ -4637,7 +4653,7 @@
<name>dfs.provided.df.class</name>
<value>org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.DefaultProvidedVolumeDF</value>
<description>
- The class that is used to measure usage statistics of provided stores.
+ The class that is used to measure usage statistics of provided stores.
</description>
</property>
@@ -4645,7 +4661,7 @@
<name>dfs.provided.storage.id</name>
<value>DS-PROVIDED</value>
<description>
- The storage ID used for provided stores.
+ The storage ID used for provided stores.
</description>
</property>
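Taken together, the keys documented above can be set in a client-side hdfs-site.xml to turn the feature on. The fragment below is a sketch: only dfs.namenode.provided.enabled changes from its default; the provider class and storage ID simply restate the defaults shown in this file.

```xml
<!-- Sketch: enable provided storage handling in the Namenode,
     keeping the documented default provider class and storage ID. -->
<property>
  <name>dfs.namenode.provided.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.block.provider.class</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.BlockFormatProvider</value>
</property>
<property>
  <name>dfs.provided.storage.id</name>
  <value>DS-PROVIDED</value>
</property>
```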
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1cc1f214/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
index ae256a5..55a7b3e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockStoragePolicy.java
@@ -84,6 +84,7 @@ public class TestBlockStoragePolicy {
static final byte ONESSD = HdfsConstants.ONESSD_STORAGE_POLICY_ID;
static final byte ALLSSD = HdfsConstants.ALLSSD_STORAGE_POLICY_ID;
static final byte LAZY_PERSIST = HdfsConstants.MEMORY_STORAGE_POLICY_ID;
+ static final byte PROVIDED = HdfsConstants.PROVIDED_STORAGE_POLICY_ID;
@Test (timeout=300000)
public void testConfigKeyEnabled() throws IOException {
@@ -143,6 +144,9 @@ public class TestBlockStoragePolicy {
expectedPolicyStrings.put(ALLSSD, "BlockStoragePolicy{ALL_SSD:" + ALLSSD +
", storageTypes=[SSD], creationFallbacks=[DISK], " +
"replicationFallbacks=[DISK]}");
+ expectedPolicyStrings.put(PROVIDED, "BlockStoragePolicy{PROVIDED:" + PROVIDED +
+ ", storageTypes=[PROVIDED, DISK], creationFallbacks=[PROVIDED, DISK], " +
+ "replicationFallbacks=[PROVIDED, DISK]}");
for(byte i = 1; i < 16; i++) {
final BlockStoragePolicy policy = POLICY_SUITE.getPolicy(i);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1cc1f214/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
index 286f4a4..81405eb 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestDatanodeManager.java
@@ -300,7 +300,7 @@ public class TestDatanodeManager {
*/
@Test
public void testSortLocatedBlocks() throws IOException, URISyntaxException {
- HelperFunction(null);
+ HelperFunction(null, 0);
}
/**
@@ -312,7 +312,7 @@ public class TestDatanodeManager {
*/
@Test
public void testgoodScript() throws IOException, URISyntaxException {
- HelperFunction("/" + Shell.appendScriptExtension("topology-script"));
+ HelperFunction("/" + Shell.appendScriptExtension("topology-script"), 0);
}
@@ -325,7 +325,21 @@ public class TestDatanodeManager {
*/
@Test
public void testBadScript() throws IOException, URISyntaxException {
- HelperFunction("/"+ Shell.appendScriptExtension("topology-broken-script"));
+ HelperFunction("/"+ Shell.appendScriptExtension("topology-broken-script"), 0);
+ }
+
+ /**
+ * Test with different sorting functions, but include datanodes
+ * with provided storage.
+ * @throws IOException
+ * @throws URISyntaxException
+ */
+ @Test
+ public void testWithProvidedTypes() throws IOException, URISyntaxException {
+ HelperFunction(null, 1);
+ HelperFunction(null, 3);
+ HelperFunction("/" + Shell.appendScriptExtension("topology-script"), 1);
+ HelperFunction("/" + Shell.appendScriptExtension("topology-script"), 2);
}
/**
@@ -333,11 +347,12 @@ public class TestDatanodeManager {
* we invoke this function with and without topology scripts
*
* @param scriptFileName - Script Name or null
+ * @param providedStorages - number of provided storages to add
*
* @throws URISyntaxException
* @throws IOException
*/
- public void HelperFunction(String scriptFileName)
+ public void HelperFunction(String scriptFileName, int providedStorages)
throws URISyntaxException, IOException {
// create the DatanodeManager which will be tested
Configuration conf = new Configuration();
@@ -352,17 +367,25 @@ public class TestDatanodeManager {
}
DatanodeManager dm = mockDatanodeManager(fsn, conf);
+ int totalDNs = 5 + providedStorages;
+
// register 5 datanodes, each with different storage ID and type
- DatanodeInfo[] locs = new DatanodeInfo[5];
- String[] storageIDs = new String[5];
- StorageType[] storageTypes = new StorageType[]{
- StorageType.ARCHIVE,
- StorageType.DEFAULT,
- StorageType.DISK,
- StorageType.RAM_DISK,
- StorageType.SSD
- };
- for (int i = 0; i < 5; i++) {
+ DatanodeInfo[] locs = new DatanodeInfo[totalDNs];
+ String[] storageIDs = new String[totalDNs];
+ List<StorageType> storageTypesList = new ArrayList<>(
+ Arrays.asList(StorageType.ARCHIVE,
+ StorageType.DEFAULT,
+ StorageType.DISK,
+ StorageType.RAM_DISK,
+ StorageType.SSD));
+
+ for (int i = 0; i < providedStorages; i++) {
+ storageTypesList.add(StorageType.PROVIDED);
+ }
+
+ StorageType[] storageTypes = storageTypesList.toArray(new StorageType[0]);
+
+ for (int i = 0; i < totalDNs; i++) {
// register new datanode
String uuid = "UUID-" + i;
String ip = "IP-" + i;
@@ -398,9 +421,9 @@ public class TestDatanodeManager {
DatanodeInfo[] sortedLocs = block.getLocations();
storageIDs = block.getStorageIDs();
storageTypes = block.getStorageTypes();
- assertThat(sortedLocs.length, is(5));
- assertThat(storageIDs.length, is(5));
- assertThat(storageTypes.length, is(5));
+ assertThat(sortedLocs.length, is(totalDNs));
+ assertThat(storageIDs.length, is(totalDNs));
+ assertThat(storageTypes.length, is(totalDNs));
for (int i = 0; i < sortedLocs.length; i++) {
assertThat(((DatanodeInfoWithStorage) sortedLocs[i]).getStorageID(),
is(storageIDs[i]));
@@ -414,6 +437,14 @@ public class TestDatanodeManager {
is(DatanodeInfo.AdminStates.DECOMMISSIONED));
assertThat(sortedLocs[sortedLocs.length - 2].getAdminState(),
is(DatanodeInfo.AdminStates.DECOMMISSIONED));
+ // check that the StorageType of datanodes immediately
+ // preceding the decommissioned datanodes is PROVIDED
+ for (int i = 0; i < providedStorages; i++) {
+ assertThat(
+ ((DatanodeInfoWithStorage)
+ sortedLocs[sortedLocs.length - 3 - i]).getStorageType(),
+ is(StorageType.PROVIDED));
+ }
}
/**
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1cc1f214/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
new file mode 100644
index 0000000..3b75806
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -0,0 +1,345 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.OutputStreamWriter;
+import java.io.Writer;
+import java.nio.ByteBuffer;
+import java.nio.channels.Channels;
+import java.nio.channels.ReadableByteChannel;
+import java.util.Random;
+import org.apache.hadoop.fs.BlockLocation;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockFormatProvider;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockProvider;
+import org.apache.hadoop.hdfs.server.common.BlockFormat;
+import org.apache.hadoop.hdfs.server.common.FileRegionProvider;
+import org.apache.hadoop.hdfs.server.common.TextFileRegionFormat;
+import org.apache.hadoop.hdfs.server.common.TextFileRegionProvider;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY;
+
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TestName;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.junit.Assert.*;
+
+public class TestNameNodeProvidedImplementation {
+
+ @Rule public TestName name = new TestName();
+ public static final Logger LOG =
+ LoggerFactory.getLogger(TestNameNodeProvidedImplementation.class);
+
+ final Random r = new Random();
+ final File fBASE = new File(MiniDFSCluster.getBaseDirectory());
+ final Path BASE = new Path(fBASE.toURI().toString());
+ final Path NAMEPATH = new Path(BASE, "providedDir");
+ final Path NNDIRPATH = new Path(BASE, "nnDir");
+ final Path BLOCKFILE = new Path(NNDIRPATH, "blocks.csv");
+ final String SINGLEUSER = "usr1";
+ final String SINGLEGROUP = "grp1";
+
+ Configuration conf;
+ MiniDFSCluster cluster;
+
+ @Before
+ public void setSeed() throws Exception {
+ if (fBASE.exists() && !FileUtil.fullyDelete(fBASE)) {
+ throw new IOException("Could not fully delete " + fBASE);
+ }
+ long seed = r.nextLong();
+ r.setSeed(seed);
+ System.out.println(name.getMethodName() + " seed: " + seed);
+ conf = new HdfsConfiguration();
+ conf.set(SingleUGIResolver.USER, SINGLEUSER);
+ conf.set(SingleUGIResolver.GROUP, SINGLEGROUP);
+
+ conf.set(DFSConfigKeys.DFS_PROVIDER_STORAGEUUID,
+ DFSConfigKeys.DFS_PROVIDER_STORAGEUUID_DEFAULT);
+ conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_PROVIDED_ENABLED, true);
+
+ conf.setClass(DFSConfigKeys.DFS_NAMENODE_BLOCK_PROVIDER_CLASS,
+ BlockFormatProvider.class, BlockProvider.class);
+ conf.setClass(DFSConfigKeys.DFS_PROVIDER_CLASS,
+ TextFileRegionProvider.class, FileRegionProvider.class);
+ conf.setClass(DFSConfigKeys.DFS_PROVIDER_BLK_FORMAT_CLASS,
+ TextFileRegionFormat.class, BlockFormat.class);
+
+ conf.set(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_WRITE_PATH,
+ BLOCKFILE.toString());
+ conf.set(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_READ_PATH,
+ BLOCKFILE.toString());
+ conf.set(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_DELIMITER, ",");
+
+ File imageDir = new File(NAMEPATH.toUri());
+ if (!imageDir.exists()) {
+ LOG.info("Creating directory: " + imageDir);
+ imageDir.mkdirs();
+ }
+
+ File nnDir = new File(NNDIRPATH.toUri());
+ if (!nnDir.exists()) {
+ nnDir.mkdirs();
+ }
+
+ // create 10 random files under BASE
+ for (int i=0; i < 10; i++) {
+ File newFile = new File(new Path(NAMEPATH, "file" + i).toUri());
+ if(!newFile.exists()) {
+ try {
+ LOG.info("Creating " + newFile.toString());
+ newFile.createNewFile();
+ Writer writer = new OutputStreamWriter(
+ new FileOutputStream(newFile.getAbsolutePath()), "utf-8");
+ for(int j=0; j < 10*i; j++) {
+ writer.write("0");
+ }
+ writer.flush();
+ writer.close();
+ } catch (IOException e) {
+ e.printStackTrace();
+ }
+ }
+ }
+ }
+
+ @After
+ public void shutdown() throws Exception {
+ try {
+ if (cluster != null) {
+ cluster.shutdown(true, true);
+ }
+ } finally {
+ cluster = null;
+ }
+ }
+
+ void createImage(TreeWalk t, Path out,
+ Class<? extends BlockResolver> blockIdsClass) throws Exception {
+ ImageWriter.Options opts = ImageWriter.defaults();
+ opts.setConf(conf);
+ opts.output(out.toString())
+ .blocks(TextFileRegionFormat.class)
+ .blockIds(blockIdsClass);
+ try (ImageWriter w = new ImageWriter(opts)) {
+ for (TreePath e : t) {
+ w.accept(e);
+ }
+ }
+ }
+
+ void startCluster(Path nspath, int numDatanodes,
+ StorageType[] storageTypes,
+ StorageType[][] storageTypesPerDatanode)
+ throws IOException {
+ conf.set(DFS_NAMENODE_NAME_DIR_KEY, nspath.toString());
+
+ if (storageTypesPerDatanode != null) {
+ cluster = new MiniDFSCluster.Builder(conf)
+ .format(false)
+ .manageNameDfsDirs(false)
+ .numDataNodes(numDatanodes)
+ .storageTypes(storageTypesPerDatanode)
+ .build();
+ } else if (storageTypes != null) {
+ cluster = new MiniDFSCluster.Builder(conf)
+ .format(false)
+ .manageNameDfsDirs(false)
+ .numDataNodes(numDatanodes)
+ .storagesPerDatanode(storageTypes.length)
+ .storageTypes(storageTypes)
+ .build();
+ } else {
+ cluster = new MiniDFSCluster.Builder(conf)
+ .format(false)
+ .manageNameDfsDirs(false)
+ .numDataNodes(numDatanodes)
+ .build();
+ }
+ cluster.waitActive();
+ }
+
+ @Test(timeout = 20000)
+ public void testLoadImage() throws Exception {
+ final long seed = r.nextLong();
+ LOG.info("NAMEPATH: " + NAMEPATH);
+ createImage(new RandomTreeWalk(seed), NNDIRPATH, FixedBlockResolver.class);
+ startCluster(NNDIRPATH, 0, new StorageType[] {StorageType.PROVIDED}, null);
+
+ FileSystem fs = cluster.getFileSystem();
+ for (TreePath e : new RandomTreeWalk(seed)) {
+ FileStatus rs = e.getFileStatus();
+ Path hp = new Path(rs.getPath().toUri().getPath());
+ assertTrue(fs.exists(hp));
+ FileStatus hs = fs.getFileStatus(hp);
+ assertEquals(rs.getPath().toUri().getPath(),
+ hs.getPath().toUri().getPath());
+ assertEquals(rs.getPermission(), hs.getPermission());
+ assertEquals(rs.getLen(), hs.getLen());
+ assertEquals(SINGLEUSER, hs.getOwner());
+ assertEquals(SINGLEGROUP, hs.getGroup());
+ assertEquals(rs.getAccessTime(), hs.getAccessTime());
+ assertEquals(rs.getModificationTime(), hs.getModificationTime());
+ }
+ }
+
+ @Test(timeout=20000)
+ public void testBlockLoad() throws Exception {
+ conf.setClass(ImageWriter.Options.UGI_CLASS,
+ SingleUGIResolver.class, UGIResolver.class);
+ createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
+ FixedBlockResolver.class);
+ startCluster(NNDIRPATH, 1, new StorageType[] {StorageType.PROVIDED}, null);
+ }
+
+ @Test(timeout=500000)
+ public void testDefaultReplication() throws Exception {
+ int targetReplication = 2;
+ conf.setInt(FixedBlockMultiReplicaResolver.REPLICATION, targetReplication);
+ createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
+ FixedBlockMultiReplicaResolver.class);
+ // make the last Datanode with only DISK
+ startCluster(NNDIRPATH, 3, null,
+ new StorageType[][] {
+ {StorageType.PROVIDED},
+ {StorageType.PROVIDED},
+ {StorageType.DISK}}
+ );
+ // wait for the replication to finish
+ Thread.sleep(50000);
+
+ FileSystem fs = cluster.getFileSystem();
+ int count = 0;
+ for (TreePath e : new FSTreeWalk(NAMEPATH, conf)) {
+ FileStatus rs = e.getFileStatus();
+ Path hp = removePrefix(NAMEPATH, rs.getPath());
+ LOG.info("hp " + hp.toUri().getPath());
+ //skip HDFS specific files, which may have been created later on.
+ if (hp.toString().contains("in_use.lock")
+ || hp.toString().contains("current")) {
+ continue;
+ }
+ e.accept(count++);
+ assertTrue(fs.exists(hp));
+ FileStatus hs = fs.getFileStatus(hp);
+
+ if (rs.isFile()) {
+ BlockLocation[] bl = fs.getFileBlockLocations(
+ hs.getPath(), 0, hs.getLen());
+ int i = 0;
+ for(; i < bl.length; i++) {
+ int currentRep = bl[i].getHosts().length;
+ assertEquals(targetReplication, currentRep);
+ }
+ }
+ }
+ }
+
+
+ static Path removePrefix(Path base, Path walk) {
+ Path wpath = new Path(walk.toUri().getPath());
+ Path bpath = new Path(base.toUri().getPath());
+ Path ret = new Path("/");
+ while (!(bpath.equals(wpath) || "".equals(wpath.getName()))) {
+ ret = "".equals(ret.getName())
+ ? new Path("/", wpath.getName())
+ : new Path(new Path("/", wpath.getName()),
+ new Path(ret.toString().substring(1)));
+ wpath = wpath.getParent();
+ }
+ if (!bpath.equals(wpath)) {
+ throw new IllegalArgumentException(base + " not a prefix of " + walk);
+ }
+ return ret;
+ }
+
+ @Test(timeout=30000)
+ public void testBlockRead() throws Exception {
+ conf.setClass(ImageWriter.Options.UGI_CLASS,
+ FsUGIResolver.class, UGIResolver.class);
+ createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
+ FixedBlockResolver.class);
+ startCluster(NNDIRPATH, 3, new StorageType[] {StorageType.PROVIDED}, null);
+ FileSystem fs = cluster.getFileSystem();
+ Thread.sleep(2000);
+ int count = 0;
+ // read NN metadata, verify contents match
+ for (TreePath e : new FSTreeWalk(NAMEPATH, conf)) {
+ FileStatus rs = e.getFileStatus();
+ Path hp = removePrefix(NAMEPATH, rs.getPath());
+ LOG.info("hp " + hp.toUri().getPath());
+ //skip HDFS specific files, which may have been created later on.
+ if(hp.toString().contains("in_use.lock")
+ || hp.toString().contains("current")) {
+ continue;
+ }
+ e.accept(count++);
+ assertTrue(fs.exists(hp));
+ FileStatus hs = fs.getFileStatus(hp);
+ assertEquals(hp.toUri().getPath(), hs.getPath().toUri().getPath());
+ assertEquals(rs.getPermission(), hs.getPermission());
+ assertEquals(rs.getOwner(), hs.getOwner());
+ assertEquals(rs.getGroup(), hs.getGroup());
+
+ if (rs.isFile()) {
+ assertEquals(rs.getLen(), hs.getLen());
+ try (ReadableByteChannel i = Channels.newChannel(
+ new FileInputStream(new File(rs.getPath().toUri())))) {
+ try (ReadableByteChannel j = Channels.newChannel(
+ fs.open(hs.getPath()))) {
+ ByteBuffer ib = ByteBuffer.allocate(4096);
+ ByteBuffer jb = ByteBuffer.allocate(4096);
+ while (true) {
+ int il = i.read(ib);
+ int jl = j.read(jb);
+ if (il < 0 || jl < 0) {
+ assertEquals(il, jl);
+ break;
+ }
+ ib.flip();
+ jb.flip();
+ int cmp = Math.min(ib.remaining(), jb.remaining());
+ for (int k = 0; k < cmp; ++k) {
+ assertEquals(ib.get(), jb.get());
+ }
+ ib.compact();
+ jb.compact();
+ }
+
+ }
+ }
+ }
+ }
+ }
+}
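The verification loop in testBlockRead above compares the local file and its HDFS copy through byte channels, flipping and compacting buffers as it goes. The same idea can be sketched more simply with plain InputStreams; this is a hypothetical helper for illustration, not part of the patch:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

// Hypothetical helper, not part of this patch: byte-by-byte comparison of
// two streams, mirroring the channel loop in testBlockRead above.
class StreamCompare {
  static boolean sameContent(InputStream a, InputStream b) {
    try {
      int x;
      int y;
      do {
        x = a.read();
        y = b.read();
        if (x != y) {
          return false; // mismatch, or one stream ended before the other
        }
      } while (x != -1); // loop exits when both streams are exhausted together
      return true;
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  public static void main(String[] args) {
    InputStream i = new ByteArrayInputStream("hadoop".getBytes());
    InputStream j = new ByteArrayInputStream("hadoop".getBytes());
    System.out.println(sameContent(i, j)); // prints "true"
  }
}
```

The channel/buffer version in the test avoids per-byte call overhead; the sketch trades that for clarity.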
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[12/50] [abbrv] hadoop git commit: YARN-6507. Add support in
NodeManager to isolate FPGA devices with CGroups. (Zhankun Tang via wangda)
Posted by vi...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7225ec0c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/TestFpgaResourceHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/TestFpgaResourceHandler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/TestFpgaResourceHandler.java
new file mode 100644
index 0000000..d3d55fa
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/TestFpgaResourceHandler.java
@@ -0,0 +1,458 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.fpga;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.yarn.api.records.*;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.server.nodemanager.Context;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ResourceMappings;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperation;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsHandler;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerException;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceSet;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.FpgaDiscoverer;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin;
+import org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService;
+import org.apache.hadoop.yarn.util.resource.TestResourceUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+
+import java.io.IOException;
+import java.util.*;
+import java.util.concurrent.ConcurrentHashMap;
+
+import static org.mockito.Mockito.*;
+
+
+public class TestFpgaResourceHandler {
+ private Context mockContext;
+ private FpgaResourceHandlerImpl fpgaResourceHandler;
+ private Configuration configuration;
+ private CGroupsHandler mockCGroupsHandler;
+ private PrivilegedOperationExecutor mockPrivilegedExecutor;
+ private NMStateStoreService mockNMStateStore;
+ private ConcurrentHashMap<ContainerId, Container> runningContainersMap;
+ private IntelFpgaOpenclPlugin mockVendorPlugin;
+ private static final String vendorType = "IntelOpenCL";
+
+ @Before
+ public void setup() {
+ TestResourceUtils.addNewTypesToResources(ResourceInformation.FPGA_URI);
+ configuration = new YarnConfiguration();
+
+ mockCGroupsHandler = mock(CGroupsHandler.class);
+ mockPrivilegedExecutor = mock(PrivilegedOperationExecutor.class);
+ mockNMStateStore = mock(NMStateStoreService.class);
+ mockContext = mock(Context.class);
+ // Assumed devices parsed from output
+ List<FpgaResourceAllocator.FpgaDevice> list = new ArrayList<>();
+ list.add(new FpgaResourceAllocator.FpgaDevice(vendorType, 247, 0, null));
+ list.add(new FpgaResourceAllocator.FpgaDevice(vendorType, 247, 1, null));
+ list.add(new FpgaResourceAllocator.FpgaDevice(vendorType, 247, 2, null));
+ list.add(new FpgaResourceAllocator.FpgaDevice(vendorType, 247, 3, null));
+ list.add(new FpgaResourceAllocator.FpgaDevice(vendorType, 247, 4, null));
+ mockVendorPlugin = mockPlugin(vendorType, list);
+ FpgaDiscoverer.getInstance().setConf(configuration);
+ when(mockContext.getNMStateStore()).thenReturn(mockNMStateStore);
+ runningContainersMap = new ConcurrentHashMap<>();
+ when(mockContext.getContainers()).thenReturn(runningContainersMap);
+
+ fpgaResourceHandler = new FpgaResourceHandlerImpl(mockContext,
+ mockCGroupsHandler, mockPrivilegedExecutor, mockVendorPlugin);
+ }
+
+ @Test
+ public void testBootstrap() throws ResourceHandlerException {
+ // Case 1. auto
+ String allowed = "auto";
+ configuration.set(YarnConfiguration.NM_FPGA_ALLOWED_DEVICES, allowed);
+ fpgaResourceHandler.bootstrap(configuration);
+ verify(mockVendorPlugin, times(1)).initPlugin(configuration);
+ verify(mockCGroupsHandler, times(1)).initializeCGroupController(
+ CGroupsHandler.CGroupController.DEVICES);
+ Assert.assertEquals(5, fpgaResourceHandler.getFpgaAllocator().getAvailableFpgaCount());
+ Assert.assertEquals(5, fpgaResourceHandler.getFpgaAllocator().getAllowedFpga().size());
+ // Case 2. subset of devices
+ fpgaResourceHandler = new FpgaResourceHandlerImpl(mockContext,
+ mockCGroupsHandler, mockPrivilegedExecutor, mockVendorPlugin);
+ allowed = "0,1,2";
+ configuration.set(YarnConfiguration.NM_FPGA_ALLOWED_DEVICES, allowed);
+ fpgaResourceHandler.bootstrap(configuration);
+ Assert.assertEquals(3, fpgaResourceHandler.getFpgaAllocator().getAllowedFpga().size());
+ List<FpgaResourceAllocator.FpgaDevice> allowedDevices = fpgaResourceHandler.getFpgaAllocator().getAllowedFpga();
+ for (String s : allowed.split(",")) {
+ boolean check = false;
+ for (FpgaResourceAllocator.FpgaDevice device : allowedDevices) {
+ if (device.getMinor().toString().equals(s)) {
+ check = true;
+ }
+ }
+ Assert.assertTrue("Minor: " + s + " not found in allowed devices", check);
+ }
+ Assert.assertEquals(3, fpgaResourceHandler.getFpgaAllocator().getAvailableFpgaCount());
+
+ // Case 3. User configuration contains invalid minor device number
+ fpgaResourceHandler = new FpgaResourceHandlerImpl(mockContext,
+ mockCGroupsHandler, mockPrivilegedExecutor, mockVendorPlugin);
+ allowed = "0,1,7";
+ configuration.set(YarnConfiguration.NM_FPGA_ALLOWED_DEVICES, allowed);
+ fpgaResourceHandler.bootstrap(configuration);
+ Assert.assertEquals(2, fpgaResourceHandler.getFpgaAllocator().getAvailableFpgaCount());
+ Assert.assertEquals(2, fpgaResourceHandler.getFpgaAllocator().getAllowedFpga().size());
+ }
+
+ @Test
+ public void testBootstrapWithInvalidUserConfiguration() throws ResourceHandlerException {
+ // User configuration contains invalid minor device number
+ String allowed = "0,1,7";
+ configuration.set(YarnConfiguration.NM_FPGA_ALLOWED_DEVICES, allowed);
+ fpgaResourceHandler.bootstrap(configuration);
+ Assert.assertEquals(2, fpgaResourceHandler.getFpgaAllocator().getAllowedFpga().size());
+ Assert.assertEquals(2, fpgaResourceHandler.getFpgaAllocator().getAvailableFpgaCount());
+
+ String[] invalidAllowedStrings = {"a,1,2,", "a,1,2", "0,1,2,#", "a", "1,"};
+ for (String s : invalidAllowedStrings) {
+ boolean invalidConfiguration = false;
+ configuration.set(YarnConfiguration.NM_FPGA_ALLOWED_DEVICES, s);
+ try {
+ fpgaResourceHandler.bootstrap(configuration);
+ } catch (ResourceHandlerException e) {
+ invalidConfiguration = true;
+ }
+ Assert.assertTrue(invalidConfiguration);
+ }
+
+ String[] allowedStrings = {"1,2", "1"};
+ for (String s : allowedStrings) {
+ boolean invalidConfiguration = false;
+ configuration.set(YarnConfiguration.NM_FPGA_ALLOWED_DEVICES, s);
+ try {
+ fpgaResourceHandler.bootstrap(configuration);
+ } catch (ResourceHandlerException e) {
+ invalidConfiguration = true;
+ }
+ Assert.assertFalse(invalidConfiguration);
+ }
+ }
+
+ @Test
+ public void testBootStrapWithEmptyUserConfiguration() throws ResourceHandlerException {
+ // User configuration is empty
+ String allowed = "";
+ boolean invalidConfiguration = false;
+ configuration.set(YarnConfiguration.NM_FPGA_ALLOWED_DEVICES, allowed);
+ try {
+ fpgaResourceHandler.bootstrap(configuration);
+ } catch (ResourceHandlerException e) {
+ invalidConfiguration = true;
+ }
+ Assert.assertTrue(invalidConfiguration);
+ }
+
+ @Test
+ public void testAllocationWithPreference() throws ResourceHandlerException, PrivilegedOperationException {
+ configuration.set(YarnConfiguration.NM_FPGA_ALLOWED_DEVICES, "0,1,2");
+ fpgaResourceHandler.bootstrap(configuration);
+ // Case 1. The id-0 container requests 1 FPGA of IntelOpenCL type with GEMM IP
+ fpgaResourceHandler.preStart(mockContainer(0, 1, "GEMM"));
+ Assert.assertEquals(1, fpgaResourceHandler.getFpgaAllocator().getUsedFpgaCount());
+ verifyDeniedDevices(getContainerId(0), Arrays.asList(1, 2));
+ List<FpgaResourceAllocator.FpgaDevice> list = fpgaResourceHandler.getFpgaAllocator()
+ .getUsedFpga().get(getContainerId(0).toString());
+ for (FpgaResourceAllocator.FpgaDevice device : list) {
+ Assert.assertEquals("IP should be updated to GEMM", "GEMM", device.getIPID());
+ }
+ // Case 2. The id-1 container requests 3 FPGAs of IntelOpenCL type with GZIP IP; this should fail
+ boolean flag = false;
+ try {
+ fpgaResourceHandler.preStart(mockContainer(1, 3, "GZIP"));
+ } catch (ResourceHandlerException e) {
+ flag = true;
+ }
+ Assert.assertTrue(flag);
+ // Case 3. Release the id-0 container
+ fpgaResourceHandler.postComplete(getContainerId(0));
+ Assert.assertEquals(0, fpgaResourceHandler.getFpgaAllocator().getUsedFpgaCount());
+ Assert.assertEquals(3, fpgaResourceHandler.getFpgaAllocator().getAvailableFpgaCount());
+ // Now we have enough devices, re-allocate for the id-1 container
+ fpgaResourceHandler.preStart(mockContainer(1, 3, "GEMM"));
+ // Id-1 container should have 0 denied devices
+ verifyDeniedDevices(getContainerId(1), new ArrayList<>());
+ Assert.assertEquals(3, fpgaResourceHandler.getFpgaAllocator().getUsedFpgaCount());
+ Assert.assertEquals(0, fpgaResourceHandler.getFpgaAllocator().getAvailableFpgaCount());
+ // Release container id-1
+ fpgaResourceHandler.postComplete(getContainerId(1));
+ Assert.assertEquals(0, fpgaResourceHandler.getFpgaAllocator().getUsedFpgaCount());
+ Assert.assertEquals(3, fpgaResourceHandler.getFpgaAllocator().getAvailableFpgaCount());
+ // Case 4. Now all 3 devices should have IPID GEMM
+ // Try container id-2 and id-3
+ fpgaResourceHandler.preStart(mockContainer(2, 1, "GZIP"));
+ fpgaResourceHandler.postComplete(getContainerId(2));
+ fpgaResourceHandler.preStart(mockContainer(3, 2, "GEMM"));
+
+ // IPID should be GEMM for id-3 container
+ list = fpgaResourceHandler.getFpgaAllocator()
+ .getUsedFpga().get(getContainerId(3).toString());
+ for (FpgaResourceAllocator.FpgaDevice device : list) {
+ Assert.assertEquals("IPID should be GEMM", "GEMM", device.getIPID());
+ }
+ Assert.assertEquals(2, fpgaResourceHandler.getFpgaAllocator().getUsedFpgaCount());
+ Assert.assertEquals(1, fpgaResourceHandler.getFpgaAllocator().getAvailableFpgaCount());
+ fpgaResourceHandler.postComplete(getContainerId(3));
+ Assert.assertEquals(0, fpgaResourceHandler.getFpgaAllocator().getUsedFpgaCount());
+ Assert.assertEquals(3, fpgaResourceHandler.getFpgaAllocator().getAvailableFpgaCount());
+
+ // Case 5. id-4 request 0 FPGA device
+ fpgaResourceHandler.preStart(mockContainer(4, 0, ""));
+ // Deny all devices for id-4
+ verifyDeniedDevices(getContainerId(4), Arrays.asList(0, 1, 2));
+ Assert.assertEquals(0, fpgaResourceHandler.getFpgaAllocator().getUsedFpgaCount());
+ Assert.assertEquals(3, fpgaResourceHandler.getFpgaAllocator().getAvailableFpgaCount());
+
+ // Case 6. id-5 requests an invalid (negative) number of FPGA devices
+ boolean failedPreStart = false;
+ try {
+ fpgaResourceHandler.preStart(mockContainer(5, -2, ""));
+ } catch (ResourceHandlerException e) {
+ failedPreStart = true;
+ }
+ Assert.assertTrue(failedPreStart);
+ }
+
+ @Test
+ public void testsAllocationWithExistingIPIDDevices() throws ResourceHandlerException, PrivilegedOperationException {
+ configuration.set(YarnConfiguration.NM_FPGA_ALLOWED_DEVICES, "0,1,2");
+ fpgaResourceHandler.bootstrap(configuration);
+ // The id-0 container requests 3 FPGAs of IntelOpenCL type with GEMM IP
+ fpgaResourceHandler.preStart(mockContainer(0, 3, "GEMM"));
+ Assert.assertEquals(3, fpgaResourceHandler.getFpgaAllocator().getUsedFpgaCount());
+ List<FpgaResourceAllocator.FpgaDevice> list = fpgaResourceHandler.getFpgaAllocator()
+ .getUsedFpga().get(getContainerId(0).toString());
+ fpgaResourceHandler.postComplete(getContainerId(0));
+ for (FpgaResourceAllocator.FpgaDevice device : list) {
+ Assert.assertEquals("IP should be updated to GEMM", "GEMM", device.getIPID());
+ }
+
+ // Case 1. id-1 and id-2 containers request preStart; no new plugin.configureIP calls expected
+ fpgaResourceHandler.preStart(mockContainer(1, 1, "GEMM"));
+ fpgaResourceHandler.preStart(mockContainer(2, 1, "GEMM"));
+ // still 3 invocations in total: id-1 and id-2 reuse devices already configured with GEMM
+ verify(mockVendorPlugin, times(3)).configureIP(anyString(), anyString());
+ fpgaResourceHandler.postComplete(getContainerId(1));
+ fpgaResourceHandler.postComplete(getContainerId(2));
+
+ // Case 2. id-1 container requests preStart again with GZIP; 1 more plugin.configureIP call expected
+ fpgaResourceHandler.preStart(mockContainer(1, 1, "GZIP"));
+ // 4 invocations in total now
+ verify(mockVendorPlugin, times(4)).configureIP(anyString(), anyString());
+ }
+
+ @Test
+ public void testAllocationWithZeroDevices() throws ResourceHandlerException, PrivilegedOperationException {
+ configuration.set(YarnConfiguration.NM_FPGA_ALLOWED_DEVICES, "0,1,2");
+ fpgaResourceHandler.bootstrap(configuration);
+ // The id-0 container requests 0 FPGAs
+ fpgaResourceHandler.preStart(mockContainer(0, 0, null));
+ verifyDeniedDevices(getContainerId(0), Arrays.asList(0, 1, 2));
+ verify(mockVendorPlugin, times(0)).downloadIP(anyString(), anyString(), anyMap());
+ verify(mockVendorPlugin, times(0)).configureIP(anyString(), anyString());
+ }
+
+ @Test
+ public void testStateStore() throws ResourceHandlerException, IOException {
+ // Case 1. store 3 devices
+ configuration.set(YarnConfiguration.NM_FPGA_ALLOWED_DEVICES, "0,1,2");
+ fpgaResourceHandler.bootstrap(configuration);
+ Container container0 = mockContainer(0, 3, "GEMM");
+ fpgaResourceHandler.preStart(container0);
+ List<FpgaResourceAllocator.FpgaDevice> assigned =
+ fpgaResourceHandler.getFpgaAllocator().getUsedFpga().get(getContainerId(0).toString());
+ verify(mockNMStateStore).storeAssignedResources(container0,
+ ResourceInformation.FPGA_URI,
+ new ArrayList<>(assigned));
+ fpgaResourceHandler.postComplete(getContainerId(0));
+ // Case 2. request 0 devices; no store API call expected
+ Container container1 = mockContainer(1, 0, "");
+ fpgaResourceHandler.preStart(container1);
+ verify(mockNMStateStore, never()).storeAssignedResources(
+ eq(container1), eq(ResourceInformation.FPGA_URI), anyList());
+ }
+
+ @Test
+ public void testReacquireContainer() throws ResourceHandlerException {
+
+ Container c0 = mockContainer(0, 2, "GEMM");
+ List<FpgaResourceAllocator.FpgaDevice> assigned = new ArrayList<>();
+ assigned.add(new FpgaResourceAllocator.FpgaDevice(vendorType, 247, 0, null));
+ assigned.add(new FpgaResourceAllocator.FpgaDevice(vendorType, 247, 1, null));
+ // Mock we've stored the c0 states
+ mockStateStoreForContainer(c0, assigned);
+ // NM start
+ configuration.set(YarnConfiguration.NM_FPGA_ALLOWED_DEVICES, "0,1,2");
+ fpgaResourceHandler.bootstrap(configuration);
+ Assert.assertEquals(0, fpgaResourceHandler.getFpgaAllocator().getUsedFpgaCount());
+ Assert.assertEquals(3, fpgaResourceHandler.getFpgaAllocator().getAvailableFpgaCount());
+ // Case 1. try recover state for id-0 container
+ fpgaResourceHandler.reacquireContainer(getContainerId(0));
+ // minor number matches
+ List<FpgaResourceAllocator.FpgaDevice> used = fpgaResourceHandler.getFpgaAllocator().
+ getUsedFpga().get(getContainerId(0).toString());
+ int count = 0;
+ for (FpgaResourceAllocator.FpgaDevice device : used) {
+ if (device.getMinor().equals(0)){
+ count++;
+ }
+ if (device.getMinor().equals(1)) {
+ count++;
+ }
+ }
+ Assert.assertEquals("Unexpected used minor number in allocator", 2, count);
+ List<FpgaResourceAllocator.FpgaDevice> available = fpgaResourceHandler.getFpgaAllocator().
+ getAvailableFpga().get(vendorType);
+ count = 0;
+ for (FpgaResourceAllocator.FpgaDevice device : available) {
+ if (device.getMinor().equals(2)) {
+ count++;
+ }
+ }
+ Assert.assertEquals("Unexpected available minor number in allocator", 1, count);
+
+
+ // Case 2. Recover a disallowed device with minor number 5
+ Container c1 = mockContainer(1, 1, "GEMM");
+ assigned = new ArrayList<>();
+ assigned.add(new FpgaResourceAllocator.FpgaDevice(vendorType, 247, 5, null));
+ // Mock we've stored the c1 states
+ mockStateStoreForContainer(c1, assigned);
+ boolean flag = false;
+ try {
+ fpgaResourceHandler.reacquireContainer(getContainerId(1));
+ } catch (ResourceHandlerException e) {
+ flag = true;
+ }
+ Assert.assertTrue(flag);
+ Assert.assertEquals(2, fpgaResourceHandler.getFpgaAllocator().getUsedFpgaCount());
+ Assert.assertEquals(1, fpgaResourceHandler.getFpgaAllocator().getAvailableFpgaCount());
+
+ // Case 3. recover an already used device held by another container
+ Container c2 = mockContainer(2, 1, "GEMM");
+ assigned = new ArrayList<>();
+ assigned.add(new FpgaResourceAllocator.FpgaDevice(vendorType, 247, 1, null));
+ // Mock we've stored the c2 states
+ mockStateStoreForContainer(c2, assigned);
+ flag = false;
+ try {
+ fpgaResourceHandler.reacquireContainer(getContainerId(2));
+ } catch (ResourceHandlerException e) {
+ flag = true;
+ }
+ Assert.assertTrue(flag);
+ Assert.assertEquals(2, fpgaResourceHandler.getFpgaAllocator().getUsedFpgaCount());
+ Assert.assertEquals(1, fpgaResourceHandler.getFpgaAllocator().getAvailableFpgaCount());
+
+ // Case 4. recover a normal container c3 with remaining minor device number 2
+ Container c3 = mockContainer(3, 1, "GEMM");
+ assigned = new ArrayList<>();
+ assigned.add(new FpgaResourceAllocator.FpgaDevice(vendorType, 247, 2, null));
+ // Mock we've stored the c3 states
+ mockStateStoreForContainer(c3, assigned);
+ fpgaResourceHandler.reacquireContainer(getContainerId(3));
+ Assert.assertEquals(3, fpgaResourceHandler.getFpgaAllocator().getUsedFpgaCount());
+ Assert.assertEquals(0, fpgaResourceHandler.getFpgaAllocator().getAvailableFpgaCount());
+ }
+
+ private void verifyDeniedDevices(ContainerId containerId,
+ List<Integer> deniedDevices)
+ throws ResourceHandlerException, PrivilegedOperationException {
+ verify(mockCGroupsHandler, atLeastOnce()).createCGroup(
+ CGroupsHandler.CGroupController.DEVICES, containerId.toString());
+
+ if (null != deniedDevices && !deniedDevices.isEmpty()) {
+ verify(mockPrivilegedExecutor, times(1)).executePrivilegedOperation(
+ new PrivilegedOperation(PrivilegedOperation.OperationType.FPGA, Arrays
+ .asList(FpgaResourceHandlerImpl.CONTAINER_ID_CLI_OPTION,
+ containerId.toString(),
+ FpgaResourceHandlerImpl.EXCLUDED_FPGAS_CLI_OPTION,
+ StringUtils.join(",", deniedDevices))), true);
+ } else if (deniedDevices != null && deniedDevices.isEmpty()) {
+ verify(mockPrivilegedExecutor, times(1)).executePrivilegedOperation(
+ new PrivilegedOperation(PrivilegedOperation.OperationType.FPGA, Arrays
+ .asList(FpgaResourceHandlerImpl.CONTAINER_ID_CLI_OPTION,
+ containerId.toString())), true);
+ }
+ }
+
+ private static IntelFpgaOpenclPlugin mockPlugin(String type, List<FpgaResourceAllocator.FpgaDevice> list) {
+ IntelFpgaOpenclPlugin plugin = mock(IntelFpgaOpenclPlugin.class);
+ when(plugin.initPlugin(Mockito.anyObject())).thenReturn(true);
+ when(plugin.getFpgaType()).thenReturn(type);
+ when(plugin.downloadIP(Mockito.anyString(), Mockito.anyString(), Mockito.anyMap())).thenReturn("/tmp");
+ when(plugin.configureIP(Mockito.anyString(), Mockito.anyObject())).thenReturn(true);
+ when(plugin.discover(Mockito.anyInt())).thenReturn(list);
+ return plugin;
+ }
+
+
+ private static Container mockContainer(int id, int numFpga, String IPID) {
+ Container c = mock(Container.class);
+
+ Resource res = Resource.newInstance(1024, 1);
+ ResourceMappings resMapping = new ResourceMappings();
+ res.setResourceValue(ResourceInformation.FPGA_URI, numFpga);
+ when(c.getResource()).thenReturn(res);
+ when(c.getResourceMappings()).thenReturn(resMapping);
+
+ when(c.getContainerId()).thenReturn(getContainerId(id));
+
+ ContainerLaunchContext clc = mock(ContainerLaunchContext.class);
+ Map<String, String> envs = new HashMap<>();
+ if (numFpga > 0) {
+ envs.put("REQUESTED_FPGA_IP_ID", IPID);
+ }
+ when(c.getLaunchContext()).thenReturn(clc);
+ when(clc.getEnvironment()).thenReturn(envs);
+ when(c.getWorkDir()).thenReturn("/tmp");
+ ResourceSet resourceSet = new ResourceSet();
+ when(c.getResourceSet()).thenReturn(resourceSet);
+
+ return c;
+ }
+
+ private void mockStateStoreForContainer(Container container,
+ List<FpgaResourceAllocator.FpgaDevice> assigned) {
+ ResourceMappings rmap = new ResourceMappings();
+ ResourceMappings.AssignedResources ar =
+ new ResourceMappings.AssignedResources();
+ ar.updateAssignedResources(new ArrayList<>(assigned));
+ rmap.addAssignedResources(ResourceInformation.FPGA_URI, ar);
+ when(container.getResourceMappings()).thenReturn(rmap);
+ runningContainersMap.put(container.getContainerId(), container);
+ }
+
+ private static ContainerId getContainerId(int id) {
+ return ContainerId.newContainerId(ApplicationAttemptId
+ .newInstance(ApplicationId.newInstance(1234L, 1), 1), id);
+ }
+}
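The bootstrap, preStart, and postComplete cases above all revolve around the allocator's bookkeeping: a pool of allowed minor device numbers from which containers acquire and release devices. A minimal sketch of that bookkeeping (a hypothetical class, not YARN's FpgaResourceAllocator) looks like:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch, not YARN's FpgaResourceAllocator: a free list of
// minor device numbers plus a per-container map of assigned devices.
class SimpleFpgaAllocator {
  private final Deque<Integer> available = new ArrayDeque<>();
  private final Map<String, List<Integer>> used = new HashMap<>();

  SimpleFpgaAllocator(List<Integer> allowedMinors) {
    available.addAll(allowedMinors);
  }

  // Grant `count` devices to a container, failing without side effects
  // when not enough remain (cf. the 3-FPGA request that fails in Case 2).
  List<Integer> assign(String containerId, int count) {
    if (count < 0 || count > available.size()) {
      throw new IllegalStateException("cannot assign " + count + " devices");
    }
    List<Integer> granted = new ArrayList<>();
    for (int i = 0; i < count; i++) {
      granted.add(available.poll());
    }
    used.put(containerId, granted);
    return granted;
  }

  // Return a container's devices to the free pool (cf. postComplete).
  void release(String containerId) {
    List<Integer> granted = used.remove(containerId);
    if (granted != null) {
      available.addAll(granted);
    }
  }

  int availableCount() { return available.size(); }

  int usedCount() {
    int n = 0;
    for (List<Integer> l : used.values()) {
      n += l.size();
    }
    return n;
  }

  public static void main(String[] args) {
    SimpleFpgaAllocator alloc = new SimpleFpgaAllocator(Arrays.asList(0, 1, 2));
    alloc.assign("container_0", 2);
    System.out.println(alloc.usedCount() + "/" + alloc.availableCount()); // prints "2/1"
    alloc.release("container_0");
    System.out.println(alloc.availableCount()); // prints "3"
  }
}
```

The real handler adds IPID tracking, cgroups denial of unassigned devices, and state-store recovery on top of this core acquire/release cycle.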
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7225ec0c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/TestFpgaDiscoverer.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/TestFpgaDiscoverer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/TestFpgaDiscoverer.java
new file mode 100644
index 0000000..87fb4e9
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/TestFpgaDiscoverer.java
@@ -0,0 +1,187 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga;
+
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.fpga.FpgaResourceAllocator;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+
+import static org.mockito.Matchers.anyInt;
+import static org.mockito.Matchers.anyString;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+public class TestFpgaDiscoverer {
+
+ private String getTestParentFolder() {
+ File f = new File("target/temp/" + TestFpgaDiscoverer.class.getName());
+ return f.getAbsolutePath();
+ }
+
+ private void touchFile(File f) throws IOException {
+ new FileOutputStream(f).close();
+ }
+
+ @Before
+ public void before() throws IOException {
+ String folder = getTestParentFolder();
+ File f = new File(folder);
+ FileUtils.deleteDirectory(f);
+ f.mkdirs();
+ }
+
+ @Test
+ public void testLinuxFpgaResourceDiscoverPluginConfig() throws YarnException, IOException {
+ Configuration conf = new Configuration(false);
+ FpgaDiscoverer discoverer = FpgaDiscoverer.getInstance();
+
+ IntelFpgaOpenclPlugin openclPlugin = new IntelFpgaOpenclPlugin();
+ // because FpgaDiscoverer is a singleton, set the plugin first so that
+ // FpgaDiscoverer.getInstance().diagnose() works inside openclPlugin.initPlugin()
+ discoverer.setResourceHanderPlugin(openclPlugin);
+ openclPlugin.initPlugin(conf);
+ openclPlugin.setShell(mockPuginShell());
+
+ discoverer.initialize(conf);
+ // Case 1. No configuration set for binary
+ Assert.assertEquals("With no configuration set, the plain binary name should be returned",
+ "aocl", openclPlugin.getPathToExecutable());
+
+ // Case 2. With correct configuration and file exists
+ File fakeBinary = new File(getTestParentFolder() + "/aocl");
+ conf.set(YarnConfiguration.NM_FPGA_PATH_TO_EXEC, getTestParentFolder() + "/aocl");
+ touchFile(fakeBinary);
+ discoverer.initialize(conf);
+ Assert.assertEquals("Correct configuration should return user setting",
+ getTestParentFolder() + "/aocl", openclPlugin.getPathToExecutable());
+
+ // Case 3. With correct configuration but the file doesn't exist. Fall back to the default
+ fakeBinary.delete();
+ discoverer.initialize(conf);
+ Assert.assertEquals("Correct configuration but missing file should fall back to the plain binary name",
+ "aocl", openclPlugin.getPathToExecutable());
+
+ }
+
+ @Test
+ public void testDiscoverPluginParser() throws YarnException {
+ String output = "------------------------- acl0 -------------------------\n" +
+ "Vendor: Nallatech ltd\n" +
+ "Phys Dev Name Status Information\n" +
+ "aclnalla_pcie0Passed nalla_pcie (aclnalla_pcie0)\n" +
+ " PCIe dev_id = 2494, bus:slot.func = 02:00.00, Gen3 x8\n" +
+ " FPGA temperature = 53.1 degrees C.\n" +
+ " Total Card Power Usage = 31.7 Watts.\n" +
+ " Device Power Usage = 0.0 Watts.\n" +
+ "DIAGNOSTIC_PASSED" +
+ "---------------------------------------------------------\n";
+ output = output +
+ "------------------------- acl1 -------------------------\n" +
+ "Vendor: Nallatech ltd\n" +
+ "Phys Dev Name Status Information\n" +
+ "aclnalla_pcie1Passed nalla_pcie (aclnalla_pcie1)\n" +
+ " PCIe dev_id = 2495, bus:slot.func = 03:00.00, Gen3 x8\n" +
+ " FPGA temperature = 43.1 degrees C.\n" +
+ " Total Card Power Usage = 11.7 Watts.\n" +
+ " Device Power Usage = 0.0 Watts.\n" +
+ "DIAGNOSTIC_PASSED" +
+ "---------------------------------------------------------\n";
+ output = output +
+ "------------------------- acl2 -------------------------\n" +
+ "Vendor: Intel(R) Corporation\n" +
+ "\n" +
+ "Phys Dev Name Status Information\n" +
+ "\n" +
+ "acla10_ref0 Passed Arria 10 Reference Platform (acla10_ref0)\n" +
+ " PCIe dev_id = 2494, bus:slot.func = 09:00.00, Gen2 x8\n" +
+ " FPGA temperature = 50.5781 degrees C.\n" +
+ "\n" +
+ "DIAGNOSTIC_PASSED\n" +
+ "---------------------------------------------------------\n";
+ Configuration conf = new Configuration(false);
+ IntelFpgaOpenclPlugin openclPlugin = new IntelFpgaOpenclPlugin();
+ FpgaDiscoverer.getInstance().setResourceHanderPlugin(openclPlugin);
+
+ openclPlugin.initPlugin(conf);
+ openclPlugin.setShell(mockPuginShell());
+
+ FpgaDiscoverer.getInstance().initialize(conf);
+
+ List<FpgaResourceAllocator.FpgaDevice> list = new LinkedList<>();
+
+ // Case 1. core parsing
+ openclPlugin.parseDiagnoseInfo(output, list);
+ Assert.assertEquals(3, list.size());
+ Assert.assertEquals("IntelOpenCL", list.get(0).getType());
+ Assert.assertEquals("247", list.get(0).getMajor().toString());
+ Assert.assertEquals("0", list.get(0).getMinor().toString());
+ Assert.assertEquals("acl0", list.get(0).getAliasDevName());
+ Assert.assertEquals("aclnalla_pcie0", list.get(0).getDevName());
+ Assert.assertEquals("02:00.00", list.get(0).getBusNum());
+ Assert.assertEquals("53.1 degrees C", list.get(0).getTemperature());
+ Assert.assertEquals("31.7 Watts", list.get(0).getCardPowerUsage());
+
+ Assert.assertEquals("IntelOpenCL", list.get(1).getType());
+ Assert.assertEquals("247", list.get(1).getMajor().toString());
+ Assert.assertEquals("1", list.get(1).getMinor().toString());
+ Assert.assertEquals("acl1", list.get(1).getAliasDevName());
+ Assert.assertEquals("aclnalla_pcie1", list.get(1).getDevName());
+ Assert.assertEquals("03:00.00", list.get(1).getBusNum());
+ Assert.assertEquals("43.1 degrees C", list.get(1).getTemperature());
+ Assert.assertEquals("11.7 Watts", list.get(1).getCardPowerUsage());
+
+ Assert.assertEquals("IntelOpenCL", list.get(2).getType());
+ Assert.assertEquals("246", list.get(2).getMajor().toString());
+ Assert.assertEquals("0", list.get(2).getMinor().toString());
+ Assert.assertEquals("acl2", list.get(2).getAliasDevName());
+ Assert.assertEquals("acla10_ref0", list.get(2).getDevName());
+ Assert.assertEquals("09:00.00", list.get(2).getBusNum());
+ Assert.assertEquals("50.5781 degrees C", list.get(2).getTemperature());
+ Assert.assertEquals("", list.get(2).getCardPowerUsage());
+
+ // Case 2. check alias map
+ Map<String, String> aliasMap = openclPlugin.getAliasMap();
+ Assert.assertEquals("acl0", aliasMap.get("247:0"));
+ Assert.assertEquals("acl1", aliasMap.get("247:1"));
+ Assert.assertEquals("acl2", aliasMap.get("246:0"));
+ }
+
+ private IntelFpgaOpenclPlugin.InnerShellExecutor mockPuginShell() {
+ IntelFpgaOpenclPlugin.InnerShellExecutor shell = mock(IntelFpgaOpenclPlugin.InnerShellExecutor.class);
+ when(shell.runDiagnose(anyString(), anyInt())).thenReturn("");
+ when(shell.getMajorAndMinorNumber("aclnalla_pcie0")).thenReturn("247:0");
+ when(shell.getMajorAndMinorNumber("aclnalla_pcie1")).thenReturn("247:1");
+ when(shell.getMajorAndMinorNumber("acla10_ref0")).thenReturn("246:0");
+ return shell;
+ }
+}
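The parsing exercised in testDiscoverPluginParser splits `aocl diagnose` output into per-device sections delimited by dashed headers that carry the alias device name. Extracting just those alias names can be sketched as follows (a hypothetical parser for illustration, not IntelFpgaOpenclPlugin's actual code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch, not IntelFpgaOpenclPlugin's parser: pull the alias
// device names (acl0, acl1, ...) out of the dashed section headers that
// delimit each device in `aocl diagnose` output.
class DiagnoseParser {
  private static final Pattern HEADER = Pattern.compile("-+ (acl\\d+) -+");

  static List<String> aliases(String diagnoseOutput) {
    List<String> result = new ArrayList<>();
    Matcher m = HEADER.matcher(diagnoseOutput);
    while (m.find()) {
      result.add(m.group(1)); // all-dash separator lines do not match
    }
    return result;
  }

  public static void main(String[] args) {
    String output = "------------------------- acl0 -------------------------\n"
        + "Vendor: Nallatech ltd\n"
        + "------------------------- acl1 -------------------------\n"
        + "Vendor: Nallatech ltd\n";
    System.out.println(aliases(output)); // prints "[acl0, acl1]"
  }
}
```

The real plugin goes further, extracting the physical device name, bus number, temperature, and power readings from each section body.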
[17/50] [abbrv] hadoop git commit: MAPREDUCE-6994. Uploader tool for
Distributed Cache Deploy code changes (miklos.szegedi@cloudera.com via
rkanter)
Posted by vi...@apache.org.
MAPREDUCE-6994. Uploader tool for Distributed Cache Deploy code changes (miklos.szegedi@cloudera.com via rkanter)
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3b78607a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3b78607a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3b78607a
Branch: refs/heads/HDFS-9806
Commit: 3b78607a02f3a81ad730975ecdfa35967413271d
Parents: 21d3627
Author: Robert Kanter <rk...@apache.org>
Authored: Fri Dec 1 12:11:43 2017 -0800
Committer: Robert Kanter <rk...@apache.org>
Committed: Fri Dec 1 12:12:15 2017 -0800
----------------------------------------------------------------------
hadoop-mapreduce-project/bin/mapred | 4 +
.../hadoop-mapreduce-client-uploader/pom.xml | 67 ++++
.../hadoop/mapred/uploader/DefaultJars.java | 46 +++
.../mapred/uploader/FrameworkUploader.java | 384 +++++++++++++++++++
.../mapred/uploader/UploaderException.java | 36 ++
.../hadoop/mapred/uploader/package-info.java | 28 ++
.../mapred/uploader/TestFrameworkUploader.java | 315 +++++++++++++++
.../hadoop-mapreduce-client/pom.xml | 1 +
8 files changed, 881 insertions(+)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b78607a/hadoop-mapreduce-project/bin/mapred
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/bin/mapred b/hadoop-mapreduce-project/bin/mapred
index f66f563..ce9ce21 100755
--- a/hadoop-mapreduce-project/bin/mapred
+++ b/hadoop-mapreduce-project/bin/mapred
@@ -32,6 +32,7 @@ function hadoop_usage
hadoop_add_subcommand "pipes" client "run a Pipes job"
hadoop_add_subcommand "queue" client "get information regarding JobQueues"
hadoop_add_subcommand "sampler" client "sampler"
+ hadoop_add_subcommand "frameworkuploader" admin "mapreduce framework upload"
hadoop_add_subcommand "version" client "print the version"
hadoop_generate_usage "${HADOOP_SHELL_EXECNAME}" true
}
@@ -92,6 +93,9 @@ function mapredcmd_case
sampler)
HADOOP_CLASSNAME=org.apache.hadoop.mapred.lib.InputSampler
;;
+ frameworkuploader)
+ HADOOP_CLASSNAME=org.apache.hadoop.mapred.uploader.FrameworkUploader
+ ;;
version)
HADOOP_CLASSNAME=org.apache.hadoop.util.VersionInfo
;;
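The new subcommand wires FrameworkUploader into the mapred script; the tool then prints suggested values for the framework path and classpath. A hedged sketch of the resulting mapred-site.xml fragment (the host name, port, and the mr-framework alias are illustrative values taken from the tool's usage string, not defaults):

```xml
<!-- Illustrative values only: substitute your NameNode and upload target -->
<property>
  <name>mapreduce.application.framework.path</name>
  <value>hdfs://namenode:8020/tmp/upload.tar.gz#mr-framework</value>
</property>
<property>
  <name>mapreduce.application.classpath</name>
  <value>$PWD/mr-framework/*</value>
</property>
```

The fragment mirrors the two "Suggested ..." lines that FrameworkUploader#run() prints after a successful upload.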
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b78607a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/pom.xml
new file mode 100644
index 0000000..a721404
--- /dev/null
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/pom.xml
@@ -0,0 +1,67 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <parent>
+ <artifactId>hadoop-mapreduce-client</artifactId>
+ <groupId>org.apache.hadoop</groupId>
+ <version>3.1.0-SNAPSHOT</version>
+ </parent>
+ <modelVersion>4.0.0</modelVersion>
+ <artifactId>hadoop-mapreduce-client-uploader</artifactId>
+ <version>3.1.0-SNAPSHOT</version>
+ <name>Apache Hadoop MapReduce Uploader</name>
+
+ <dependencies>
+ <dependency>
+ <groupId>commons-cli</groupId>
+ <artifactId>commons-cli</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>org.apache.commons</groupId>
+ <artifactId>commons-compress</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-common</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-hdfs-client</artifactId>
+ </dependency>
+ </dependencies>
+ <properties>
+ <!-- Needed for generating FindBugs warnings using parent pom -->
+ <mr.basedir>${project.parent.basedir}/../</mr.basedir>
+ </properties>
+
+ <build>
+ <plugins>
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-jar-plugin</artifactId>
+ <configuration>
+ <archive>
+ <manifest>
+ <mainClass>org.apache.hadoop.mapred.uploader.FrameworkUploader</mainClass>
+ </manifest>
+ </archive>
+ </configuration>
+ </plugin>
+ </plugins>
+ </build>
+
+</project>
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b78607a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/DefaultJars.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/DefaultJars.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/DefaultJars.java
new file mode 100644
index 0000000..49ee64f
--- /dev/null
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/DefaultJars.java
@@ -0,0 +1,46 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.mapred.uploader;
+
+/**
+ * Default white list and black list implementations.
+ */
+final class DefaultJars {
+ static final String DEFAULT_EXCLUDED_MR_JARS =
+ ".*hadoop-yarn-server-applicationhistoryservice.*\\.jar," +
+ ".*hadoop-yarn-server-nodemanager.*\\.jar," +
+ ".*hadoop-yarn-server-resourcemanager.*\\.jar," +
+ ".*hadoop-yarn-server-router.*\\.jar," +
+ ".*hadoop-yarn-server-sharedcachemanager.*\\.jar," +
+ ".*hadoop-yarn-server-timeline-pluginstorage.*\\.jar," +
+ ".*hadoop-yarn-server-timelineservice.*\\.jar," +
+ ".*hadoop-yarn-server-timelineservice-hbase.*\\.jar,";
+
+ static final String DEFAULT_MR_JARS =
+ "$HADOOP_HOME/share/hadoop/common/.*\\.jar," +
+ "$HADOOP_HOME/share/hadoop/common/lib/.*\\.jar," +
+ "$HADOOP_HOME/share/hadoop/hdfs/.*\\.jar," +
+ "$HADOOP_HOME/share/hadoop/hdfs/lib/.*\\.jar," +
+ "$HADOOP_HOME/share/hadoop/mapreduce/.*\\.jar," +
+ "$HADOOP_HOME/share/hadoop/mapreduce/lib/.*\\.jar," +
+ "$HADOOP_HOME/share/hadoop/yarn/.*\\.jar," +
+ "$HADOOP_HOME/share/hadoop/yarn/lib/.*\\.jar,";
+
+ private DefaultJars() {}
+}
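The defaults above are consumed as comma-separated regex lists: parseLists() anchors each entry with ^...$, and a jar is kept only if it matches some whitelist pattern and no blacklist pattern. A minimal, self-contained sketch of that accept decision (the class name JarFilter is illustrative, not part of the patch):

```java
import java.util.List;
import java.util.regex.Pattern;

/** Sketch of the include/exclude decision FrameworkUploader applies per jar path. */
public class JarFilter {
    static boolean accept(String path, List<Pattern> whitelist, List<Pattern> blacklist) {
        // A jar must match at least one whitelist pattern...
        boolean found = whitelist.stream().anyMatch(p -> p.matcher(path).matches());
        // ...and no blacklist pattern.
        boolean excluded = blacklist.stream().anyMatch(p -> p.matcher(path).matches());
        return found && !excluded;
    }

    public static void main(String[] args) {
        // Patterns are anchored the same way parseLists() compiles them.
        List<Pattern> white = List.of(
            Pattern.compile("^.*a\\.jar$"), Pattern.compile("^.*b\\.jar$"));
        List<Pattern> black = List.of(Pattern.compile("^.*b\\.jar$"));
        System.out.println(accept("/lib/a.jar", white, black)); // true
        System.out.println(accept("/lib/b.jar", white, black)); // false
    }
}
```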
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b78607a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java
new file mode 100644
index 0000000..d1cd740
--- /dev/null
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java
@@ -0,0 +1,384 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.mapred.uploader;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.compress.archivers.ArchiveEntry;
+import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.protocol.SystemErasureCodingPolicies;
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.util.GenericOptionsParser;
+import org.apache.hadoop.util.Shell;
+import org.apache.hadoop.util.StringUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+import java.util.zip.GZIPOutputStream;
+
+/**
+ * Upload a MapReduce framework tarball to HDFS.
+ * Usage:
+ * sudo -u mapred mapred frameworkuploader -fs hdfs://`hostname`:8020 -target
+ * /tmp/upload.tar.gz#mr-framework
+*/
+public class FrameworkUploader implements Runnable {
+ private static final Pattern VAR_SUBBER =
+ Pattern.compile(Shell.getEnvironmentVariableRegex());
+ private static final Logger LOG =
+ LoggerFactory.getLogger(FrameworkUploader.class);
+
+ @VisibleForTesting
+ String input = null;
+ @VisibleForTesting
+ String whitelist = null;
+ @VisibleForTesting
+ String blacklist = null;
+ @VisibleForTesting
+ String target = null;
+ @VisibleForTesting
+ short replication = 10;
+
+ @VisibleForTesting
+ Set<String> filteredInputFiles = new HashSet<>();
+ @VisibleForTesting
+ List<Pattern> whitelistedFiles = new LinkedList<>();
+ @VisibleForTesting
+ List<Pattern> blacklistedFiles = new LinkedList<>();
+
+ @VisibleForTesting
+ OutputStream targetStream = null;
+ private Path targetPath = null;
+ private String alias = null;
+
+ private void printHelp(Options options) {
+ HelpFormatter formatter = new HelpFormatter();
+ formatter.printHelp("mapred frameworkuploader", options);
+ }
+
+ public void run() {
+ try {
+ collectPackages();
+ buildPackage();
+ LOG.info("Uploaded " + target);
+ System.out.println("Suggested mapreduce.application.framework.path " +
+ target);
+ LOG.info(
+ "Suggested mapreduce.application.classpath $PWD/" + alias + "/*");
+ System.out.println("Suggested classpath $PWD/" + alias + "/*");
+ } catch (UploaderException|IOException e) {
+ LOG.error("Error in execution " + e.getMessage());
+ e.printStackTrace();
+ }
+ }
+
+ @VisibleForTesting
+ void collectPackages() throws UploaderException {
+ parseLists();
+ String[] list = StringUtils.split(input, File.pathSeparatorChar);
+ for (String item : list) {
+ LOG.info("Original source " + item);
+ String expanded = expandEnvironmentVariables(item, System.getenv());
+ LOG.info("Expanded source " + expanded);
+ if (expanded.endsWith("*")) {
+ File path = new File(expanded.substring(0, expanded.length() - 1));
+ if (path.isDirectory()) {
+ File[] files = path.listFiles();
+ if (files != null) {
+ for (File jar : files) {
+ if (!jar.isDirectory()) {
+ addJar(jar);
+ } else {
+ LOG.info("Ignored " + jar + " because it is a directory");
+ }
+ }
+ } else {
+ LOG.warn("Could not list directory " + path);
+ }
+ } else {
+ LOG.warn("Ignored " + expanded + ". It is not a directory");
+ }
+ } else if (expanded.endsWith(".jar")) {
+ File jarFile = new File(expanded);
+ addJar(jarFile);
+ } else if (!expanded.isEmpty()) {
+ LOG.warn("Ignored " + expanded + ". Only jars are supported");
+ }
+ }
+ }
+
+ private void beginUpload() throws IOException, UploaderException {
+ if (targetStream == null) {
+ validateTargetPath();
+ int lastIndex = target.indexOf('#');
+ targetPath =
+ new Path(
+ target.substring(
+ 0, lastIndex == -1 ? target.length() : lastIndex));
+ alias = lastIndex != -1 ?
+ target.substring(lastIndex + 1) :
+ targetPath.getName();
+ LOG.info("Target " + targetPath);
+ FileSystem fileSystem = targetPath.getFileSystem(new Configuration());
+ targetStream = fileSystem.create(targetPath, true);
+ }
+ }
+
+ @VisibleForTesting
+ void buildPackage() throws IOException, UploaderException {
+ beginUpload();
+ LOG.info("Compressing tarball");
+ try (TarArchiveOutputStream out = new TarArchiveOutputStream(
+ new GZIPOutputStream(targetStream))) {
+ for (String fullPath : filteredInputFiles) {
+ LOG.info("Adding " + fullPath);
+ File file = new File(fullPath);
+ try (FileInputStream inputStream = new FileInputStream(file)) {
+ ArchiveEntry entry = out.createArchiveEntry(file, file.getName());
+ out.putArchiveEntry(entry);
+ IOUtils.copyBytes(inputStream, out, 1024 * 1024);
+ out.closeArchiveEntry();
+ }
+ }
+ } finally {
+ if (targetStream != null) {
+ targetStream.close();
+ }
+ }
+
+ if (targetPath == null) {
+ return;
+ }
+
+ // Set file attributes
+ FileSystem fileSystem = targetPath.getFileSystem(new Configuration());
+ if (fileSystem instanceof DistributedFileSystem) {
+ LOG.info("Disabling Erasure Coding for path: " + targetPath);
+ DistributedFileSystem dfs = (DistributedFileSystem) fileSystem;
+ dfs.setErasureCodingPolicy(targetPath,
+ SystemErasureCodingPolicies.getReplicationPolicy().getName());
+ }
+
+ if (replication > 0) {
+ LOG.info("Set replication to " +
+ replication + " for path: " + targetPath);
+ fileSystem.setReplication(targetPath, replication);
+ }
+ }
+
+ private void parseLists() throws UploaderException {
+ Map<String, String> env = System.getenv();
+ for(Map.Entry<String, String> item : env.entrySet()) {
+ LOG.info("Environment " + item.getKey() + " " + item.getValue());
+ }
+ String[] whiteListItems = StringUtils.split(whitelist);
+ for (String pattern : whiteListItems) {
+ String expandedPattern =
+ expandEnvironmentVariables(pattern, env);
+ Pattern compiledPattern =
+ Pattern.compile("^" + expandedPattern + "$");
+ LOG.info("Whitelisted " + compiledPattern.toString());
+ whitelistedFiles.add(compiledPattern);
+ }
+ String[] blacklistItems = StringUtils.split(blacklist);
+ for (String pattern : blacklistItems) {
+ String expandedPattern =
+ expandEnvironmentVariables(pattern, env);
+ Pattern compiledPattern =
+ Pattern.compile("^" + expandedPattern + "$");
+ LOG.info("Blacklisted " + compiledPattern.toString());
+ blacklistedFiles.add(compiledPattern);
+ }
+ }
+
+ @VisibleForTesting
+ String expandEnvironmentVariables(String innerInput, Map<String, String> env)
+ throws UploaderException {
+ boolean found;
+ do {
+ found = false;
+ Matcher matcher = VAR_SUBBER.matcher(innerInput);
+ StringBuffer stringBuffer = new StringBuffer();
+ while (matcher.find()) {
+ found = true;
+ String var = matcher.group(1);
+ // look up the variable in the environment map passed in
+ String replace = env.get(var);
+ // fail fast: the variable is not defined anywhere
+ if (replace == null) {
+ throw new UploaderException("Environment variable does not exist " +
+ var);
+ }
+ matcher.appendReplacement(
+ stringBuffer, Matcher.quoteReplacement(replace));
+ }
+ matcher.appendTail(stringBuffer);
+ innerInput = stringBuffer.toString();
+ } while (found);
+ return innerInput;
+ }
+
+ private void addJar(File jar) throws UploaderException{
+ boolean found = false;
+ if (!jar.getName().endsWith(".jar")) {
+ LOG.info("Ignored non-jar " + jar.getAbsolutePath());
+ return;
+ }
+ for (Pattern pattern : whitelistedFiles) {
+ Matcher matcher = pattern.matcher(jar.getAbsolutePath());
+ if (matcher.matches()) {
+ LOG.info("Whitelisted " + jar.getAbsolutePath());
+ found = true;
+ break;
+ }
+ }
+ boolean excluded = false;
+ for (Pattern pattern : blacklistedFiles) {
+ Matcher matcher = pattern.matcher(jar.getAbsolutePath());
+ if (matcher.matches()) {
+ LOG.info("Blacklisted " + jar.getAbsolutePath());
+ excluded = true;
+ break;
+ }
+ }
+ if (found && !excluded) {
+ LOG.info("Whitelisted " + jar.getAbsolutePath());
+ if (!filteredInputFiles.add(jar.getAbsolutePath())) {
+ throw new UploaderException("Duplicate jar " + jar.getAbsolutePath());
+ }
+ }
+ if (!found) {
+ LOG.info("Ignored " + jar.getAbsolutePath() + " because it is missing " +
+ "from the whitelist");
+ } else if (excluded) {
+ LOG.info("Ignored " + jar.getAbsolutePath() + " because it is on " +
+ "the blacklist");
+ }
+ }
+
+ private void validateTargetPath() throws UploaderException {
+ if (!target.startsWith("hdfs:/") &&
+ !target.startsWith("file:/")) {
+ throw new UploaderException("Target path is not hdfs or local " + target);
+ }
+ }
+
+ @VisibleForTesting
+ boolean parseArguments(String[] args) throws IOException {
+ Options opts = new Options();
+ opts.addOption(OptionBuilder.create("h"));
+ opts.addOption(OptionBuilder.create("help"));
+ opts.addOption(OptionBuilder
+ .withDescription("Input class path")
+ .hasArg().create("input"));
+ opts.addOption(OptionBuilder
+ .withDescription(
+ "Regex specifying the full path of jars to include in the" +
+ " framework tarball. Default is a hardcoded set of jars" +
+ " considered necessary to include")
+ .hasArg().create("whitelist"));
+ opts.addOption(OptionBuilder
+ .withDescription(
+ "Regex specifying the full path of jars to exclude from the" +
+ " framework tarball. Default is a hardcoded set of jars" +
+ " considered unnecessary to include")
+ .hasArg().create("blacklist"));
+ opts.addOption(OptionBuilder
+ .withDescription(
+ "Target file system to upload to." +
+ " Example: hdfs://foo.com:8020")
+ .hasArg().create("fs"));
+ opts.addOption(OptionBuilder
+ .withDescription(
+ "Target file to upload to with a reference name." +
+ " Example: /usr/mr-framework.tar.gz#mr-framework")
+ .hasArg().create("target"));
+ opts.addOption(OptionBuilder
+ .withDescription(
+ "Desired replication count")
+ .hasArg().create("replication"));
+ GenericOptionsParser parser = new GenericOptionsParser(opts, args);
+ if (parser.getCommandLine().hasOption("help") ||
+ parser.getCommandLine().hasOption("h")) {
+ printHelp(opts);
+ return false;
+ }
+ input = parser.getCommandLine().getOptionValue(
+ "input", System.getProperty("java.class.path"));
+ whitelist = parser.getCommandLine().getOptionValue(
+ "whitelist", DefaultJars.DEFAULT_MR_JARS);
+ blacklist = parser.getCommandLine().getOptionValue(
+ "blacklist", DefaultJars.DEFAULT_EXCLUDED_MR_JARS);
+ replication = Short.parseShort(parser.getCommandLine().getOptionValue(
+ "replication", "10"));
+ String fs = parser.getCommandLine()
+ .getOptionValue("fs", null);
+ if (fs == null) {
+ LOG.error("Target file system not specified");
+ printHelp(opts);
+ return false;
+ }
+ String path = parser.getCommandLine().getOptionValue("target",
+ "mr-framework.tar.gz#mr-framework");
+ if (path == null) {
+ LOG.error("Target directory not specified");
+ printHelp(opts);
+ return false;
+ }
+ StringBuilder absolutePath = new StringBuilder(fs);
+ absolutePath = absolutePath.append(path.startsWith("/") ? "" : "/");
+ absolutePath.append(path);
+ target = absolutePath.toString();
+
+ if (parser.getRemainingArgs().length > 0) {
+ LOG.warn("Unexpected parameters");
+ printHelp(opts);
+ return false;
+ }
+ return true;
+ }
+
+ /**
+ * Tool entry point.
+ * @param args arguments
+ * @throws IOException thrown on configuration errors
+ */
+ public static void main(String[] args) throws IOException {
+ FrameworkUploader uploader = new FrameworkUploader();
+ if(uploader.parseArguments(args)) {
+ uploader.run();
+ }
+ }
+}
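The heart of expandEnvironmentVariables() is a fixed-point loop: substitutions repeat until no variable reference remains, so nested values like LIBS=$HADOOP_HOME/share resolve fully. A minimal sketch in isolation (the class name EnvExpander and the simplified regex standing in for Shell.getEnvironmentVariableRegex() are illustrative assumptions):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Sketch of the fixed-point substitution in expandEnvironmentVariables(). */
public class EnvExpander {
    // Simplified stand-in for Shell.getEnvironmentVariableRegex(): $VAR or ${VAR}.
    private static final Pattern VAR = Pattern.compile("\\$(\\w+)|\\$\\{(\\w+)\\}");

    static String expand(String input, Map<String, String> env) {
        boolean found;
        do {
            found = false;                      // repeat until a pass makes no change,
            Matcher m = VAR.matcher(input);     // so nested references resolve fully
            StringBuffer sb = new StringBuffer();
            while (m.find()) {
                found = true;
                String var = m.group(1) != null ? m.group(1) : m.group(2);
                String replace = env.get(var);
                if (replace == null) {          // mirrors the UploaderException path
                    throw new IllegalArgumentException(
                        "Environment variable does not exist " + var);
                }
                m.appendReplacement(sb, Matcher.quoteReplacement(replace));
            }
            m.appendTail(sb);
            input = sb.toString();
        } while (found);
        return input;
    }

    public static void main(String[] args) {
        Map<String, String> env = new HashMap<>();
        env.put("HADOOP_HOME", "/opt/hadoop");
        env.put("LIBS", "$HADOOP_HOME/share");
        // Two passes are needed: $LIBS expands to a string containing $HADOOP_HOME.
        System.out.println(expand("$LIBS/hadoop/common", env));
    }
}
```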
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b78607a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/UploaderException.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/UploaderException.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/UploaderException.java
new file mode 100644
index 0000000..73f6454
--- /dev/null
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/UploaderException.java
@@ -0,0 +1,36 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.mapred.uploader;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * Framework uploader exception type.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Stable
+class UploaderException extends Exception {
+
+ private static final long serialVersionUID = 1L;
+
+ UploaderException(String message) {
+ super(message);
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b78607a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/package-info.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/package-info.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/package-info.java
new file mode 100644
index 0000000..4475e8e
--- /dev/null
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/package-info.java
@@ -0,0 +1,28 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * Package org.apache.hadoop.mapred.uploader contains classes related to the
+ * MapReduce framework upload tool.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+package org.apache.hadoop.mapred.uploader;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b78607a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/test/java/org/apache/hadoop/mapred/uploader/TestFrameworkUploader.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/test/java/org/apache/hadoop/mapred/uploader/TestFrameworkUploader.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/test/java/org/apache/hadoop/mapred/uploader/TestFrameworkUploader.java
new file mode 100644
index 0000000..9d03165
--- /dev/null
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/test/java/org/apache/hadoop/mapred/uploader/TestFrameworkUploader.java
@@ -0,0 +1,315 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.mapred.uploader;
+
+import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
+import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
+import org.apache.commons.io.FileUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.PrintStream;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Random;
+import java.util.Set;
+import java.util.zip.GZIPInputStream;
+
+/**
+ * Unit test class for FrameworkUploader.
+ */
+public class TestFrameworkUploader {
+ private static String testDir;
+
+ @Before
+ public void setUp() {
+ String testRootDir =
+ new File(System.getProperty("test.build.data", "/tmp"))
+ .getAbsolutePath()
+ .replace(' ', '+');
+ Random random = new Random(System.currentTimeMillis());
+ testDir = testRootDir + File.separatorChar +
+ Long.toString(random.nextLong());
+ }
+
+ /**
+ * Test requesting command line help.
+ * @throws IOException test failure
+ */
+ @Test
+ public void testHelp() throws IOException {
+ String[] args = new String[]{"-help"};
+ FrameworkUploader uploader = new FrameworkUploader();
+ boolean success = uploader.parseArguments(args);
+ Assert.assertFalse("Expected to print help", success);
+ Assert.assertEquals("Expected ignore run", null,
+ uploader.input);
+ Assert.assertEquals("Expected ignore run", null,
+ uploader.whitelist);
+ Assert.assertEquals("Expected ignore run", null,
+ uploader.target);
+ }
+
+ /**
+ * Test invalid argument parsing.
+ * @throws IOException test failure
+ */
+ @Test
+ public void testWrongArgument() throws IOException {
+ String[] args = new String[]{"-unexpected"};
+ FrameworkUploader uploader = new FrameworkUploader();
+ boolean success = uploader.parseArguments(args);
+ Assert.assertFalse("Expected to print help", success);
+ }
+
+ /**
+ * Test normal argument passing.
+ * @throws IOException test failure
+ */
+ @Test
+ public void testArguments() throws IOException {
+ String[] args =
+ new String[]{
+ "-input", "A",
+ "-whitelist", "B",
+ "-blacklist", "C",
+ "-fs", "hdfs://C:8020",
+ "-target", "D",
+ "-replication", "100"};
+ FrameworkUploader uploader = new FrameworkUploader();
+ boolean success = uploader.parseArguments(args);
+ Assert.assertTrue("Expected parse success", success);
+ Assert.assertEquals("Input mismatch", "A",
+ uploader.input);
+ Assert.assertEquals("Whitelist mismatch", "B",
+ uploader.whitelist);
+ Assert.assertEquals("Blacklist mismatch", "C",
+ uploader.blacklist);
+ Assert.assertEquals("Target mismatch", "hdfs://C:8020/D",
+ uploader.target);
+ Assert.assertEquals("Replication mismatch", 100,
+ uploader.replication);
+ }
+
+ /**
+ * Test whether we can filter a class path properly.
+ * @throws IOException test failure
+ */
+ @Test
+ public void testCollectPackages() throws IOException, UploaderException {
+ File parent = new File(testDir);
+ try {
+ parent.deleteOnExit();
+ Assert.assertTrue("Directory creation failed", parent.mkdirs());
+ File dirA = new File(parent, "A");
+ Assert.assertTrue(dirA.mkdirs());
+ File dirB = new File(parent, "B");
+ Assert.assertTrue(dirB.mkdirs());
+ File jarA = new File(dirA, "a.jar");
+ Assert.assertTrue(jarA.createNewFile());
+ File jarB = new File(dirA, "b.jar");
+ Assert.assertTrue(jarB.createNewFile());
+ File jarC = new File(dirA, "c.jar");
+ Assert.assertTrue(jarC.createNewFile());
+ File txtD = new File(dirA, "d.txt");
+ Assert.assertTrue(txtD.createNewFile());
+ File jarD = new File(dirB, "d.jar");
+ Assert.assertTrue(jarD.createNewFile());
+ File txtE = new File(dirB, "e.txt");
+ Assert.assertTrue(txtE.createNewFile());
+
+ FrameworkUploader uploader = new FrameworkUploader();
+ uploader.whitelist = ".*a\\.jar,.*b\\.jar,.*d\\.jar";
+ uploader.blacklist = ".*b\\.jar";
+ uploader.input = dirA.getAbsolutePath() + File.separatorChar + "*" +
+ File.pathSeparatorChar +
+ dirB.getAbsolutePath() + File.separatorChar + "*";
+ uploader.collectPackages();
+ Assert.assertEquals("Whitelist count error", 3,
+ uploader.whitelistedFiles.size());
+ Assert.assertEquals("Blacklist count error", 1,
+ uploader.blacklistedFiles.size());
+
+ Assert.assertTrue("File not collected",
+ uploader.filteredInputFiles.contains(jarA.getAbsolutePath()));
+ Assert.assertFalse("File collected",
+ uploader.filteredInputFiles.contains(jarB.getAbsolutePath()));
+ Assert.assertTrue("File not collected",
+ uploader.filteredInputFiles.contains(jarD.getAbsolutePath()));
+ Assert.assertEquals("Too many whitelists", 2,
+ uploader.filteredInputFiles.size());
+ } finally {
+ FileUtils.deleteDirectory(parent);
+ }
+ }
+
+ /**
+ * Test building a tarball from source jars.
+ */
+ @Test
+ public void testBuildTarBall() throws IOException, UploaderException {
+ File parent = new File(testDir);
+ try {
+ parent.deleteOnExit();
+ FrameworkUploader uploader = prepareTree(parent);
+
+ File gzipFile = new File("upload.tar.gz");
+ gzipFile.deleteOnExit();
+ Assert.assertTrue("Creating output", gzipFile.createNewFile());
+ uploader.targetStream = new FileOutputStream(gzipFile);
+
+ uploader.buildPackage();
+
+ TarArchiveInputStream result = null;
+ try {
+ result =
+ new TarArchiveInputStream(
+ new GZIPInputStream(new FileInputStream(gzipFile)));
+ Set<String> fileNames = new HashSet<>();
+ Set<Long> sizes = new HashSet<>();
+ TarArchiveEntry entry1 = result.getNextTarEntry();
+ fileNames.add(entry1.getName());
+ sizes.add(entry1.getSize());
+ TarArchiveEntry entry2 = result.getNextTarEntry();
+ fileNames.add(entry2.getName());
+ sizes.add(entry2.getSize());
+ Assert.assertTrue(
+ "File name error", fileNames.contains("a.jar"));
+ Assert.assertTrue(
+ "File size error", sizes.contains((long) 13));
+ Assert.assertTrue(
+ "File name error", fileNames.contains("b.jar"));
+ Assert.assertTrue(
+ "File size error", sizes.contains((long) 14));
+ } finally {
+ if (result != null) {
+ result.close();
+ }
+ }
+ } finally {
+ FileUtils.deleteDirectory(parent);
+ }
+ }
+
+ /**
+ * Test upload to HDFS.
+ */
+ @Test
+ public void testUpload() throws IOException, UploaderException {
+ final String fileName = "/upload.tar.gz";
+ File parent = new File(testDir);
+ try {
+ parent.deleteOnExit();
+
+ FrameworkUploader uploader = prepareTree(parent);
+
+ uploader.target = "file://" + parent.getAbsolutePath() + fileName;
+
+ uploader.buildPackage();
+ try (TarArchiveInputStream archiveInputStream = new TarArchiveInputStream(
+ new GZIPInputStream(
+ new FileInputStream(
+ parent.getAbsolutePath() + fileName)))) {
+ Set<String> fileNames = new HashSet<>();
+ Set<Long> sizes = new HashSet<>();
+ TarArchiveEntry entry1 = archiveInputStream.getNextTarEntry();
+ fileNames.add(entry1.getName());
+ sizes.add(entry1.getSize());
+ TarArchiveEntry entry2 = archiveInputStream.getNextTarEntry();
+ fileNames.add(entry2.getName());
+ sizes.add(entry2.getSize());
+ Assert.assertTrue(
+ "File name error", fileNames.contains("a.jar"));
+ Assert.assertTrue(
+ "File size error", sizes.contains((long) 13));
+ Assert.assertTrue(
+ "File name error", fileNames.contains("b.jar"));
+ Assert.assertTrue(
+ "File size error", sizes.contains((long) 14));
+ }
+ } finally {
+ FileUtils.deleteDirectory(parent);
+ }
+ }
+
+ /**
+ * Prepare a mock directory tree to compress and upload.
+ */
+ private FrameworkUploader prepareTree(File parent)
+ throws FileNotFoundException {
+ Assert.assertTrue(parent.mkdirs());
+ File dirA = new File(parent, "A");
+ Assert.assertTrue(dirA.mkdirs());
+ File jarA = new File(parent, "a.jar");
+ PrintStream printStream = new PrintStream(new FileOutputStream(jarA));
+ printStream.println("Hello World!");
+ printStream.close();
+ File jarB = new File(dirA, "b.jar");
+ printStream = new PrintStream(new FileOutputStream(jarB));
+ printStream.println("Hello Galaxy!");
+ printStream.close();
+
+ FrameworkUploader uploader = new FrameworkUploader();
+ uploader.filteredInputFiles.add(jarA.getAbsolutePath());
+ uploader.filteredInputFiles.add(jarB.getAbsolutePath());
+
+ return uploader;
+ }
+
+ /**
+ * Test regex pattern matching and environment variable replacement.
+ */
+ @Test
+ public void testEnvironmentReplacement() throws UploaderException {
+ String input = "C/$A/B,$B,D";
+ Map<String, String> map = new HashMap<>();
+ map.put("A", "X");
+ map.put("B", "Y");
+ map.put("C", "Z");
+ FrameworkUploader uploader = new FrameworkUploader();
+ String output = uploader.expandEnvironmentVariables(input, map);
+ Assert.assertEquals("Environment not expanded", "C/X/B,Y,D", output);
+
+ }
+
+ /**
+ * Test regex pattern matching and environment variable replacement.
+ */
+ @Test
+ public void testRecursiveEnvironmentReplacement()
+ throws UploaderException {
+ String input = "C/$A/B,$B,D";
+ Map<String, String> map = new HashMap<>();
+ map.put("A", "X");
+ map.put("B", "$C");
+ map.put("C", "Y");
+ FrameworkUploader uploader = new FrameworkUploader();
+ String output = uploader.expandEnvironmentVariables(input, map);
+ Assert.assertEquals("Environment not expanded", "C/X/B,Y,D", output);
+
+ }
+
+}
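The two environment-replacement tests above pin down the observable behavior of expandEnvironmentVariables: $NAME tokens are substituted from the map, and substitution is applied repeatedly so that B=$C, C=Y resolves $B to Y. A minimal sketch of that fixed-point expansion follows; it is an assumption-laden reimplementation, not FrameworkUploader's actual code, and it does not guard against cyclic definitions (e.g. A=$B, B=$A would loop).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EnvExpander {
  private static final Pattern VAR = Pattern.compile("\\$(\\w+)");

  // Substitute $NAME tokens from env, repeating until no further
  // replacement changes the string (fixed point), so nested
  // references like B=$C are resolved transitively.
  public static String expand(String input, Map<String, String> env) {
    String previous;
    String current = input;
    do {
      previous = current;
      Matcher m = VAR.matcher(current);
      StringBuffer sb = new StringBuffer();
      while (m.find()) {
        // Unknown variables are left as-is (an assumption here).
        String value = env.getOrDefault(m.group(1), m.group(0));
        m.appendReplacement(sb, Matcher.quoteReplacement(value));
      }
      m.appendTail(sb);
      current = sb.toString();
    } while (!current.equals(previous));
    return current;
  }

  public static void main(String[] args) {
    Map<String, String> env = new HashMap<>();
    env.put("A", "X");
    env.put("B", "$C");
    env.put("C", "Y");
    // Bare "C" has no $ prefix, so it is not replaced.
    System.out.println(expand("C/$A/B,$B,D", env));  // C/X/B,Y,D
  }
}
```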
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3b78607a/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
index 274a821..a8350cb 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
@@ -326,5 +326,6 @@
<module>hadoop-mapreduce-client-hs</module>
<module>hadoop-mapreduce-client-hs-plugins</module>
<module>hadoop-mapreduce-client-nativetask</module>
+ <module>hadoop-mapreduce-client-uploader</module>
</modules>
</project>
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[06/50] [abbrv] hadoop git commit: HDFS-12877. Add open(PathHandle) with default buffersize
Posted by vi...@apache.org.
HDFS-12877. Add open(PathHandle) with default buffersize
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0780fdb1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0780fdb1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0780fdb1
Branch: refs/heads/HDFS-9806
Commit: 0780fdb1ebdddd19744fbbca7fb05f8fe4bf4d28
Parents: a409425
Author: Chris Douglas <cd...@apache.org>
Authored: Thu Nov 30 15:13:16 2017 -0800
Committer: Chris Douglas <cd...@apache.org>
Committed: Thu Nov 30 15:13:16 2017 -0800
----------------------------------------------------------------------
.../main/java/org/apache/hadoop/fs/FileSystem.java | 15 +++++++++++++++
.../org/apache/hadoop/fs/TestFilterFileSystem.java | 1 +
.../java/org/apache/hadoop/fs/TestHarFileSystem.java | 1 +
3 files changed, 17 insertions(+)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0780fdb1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index be0ec87..a364921 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -957,6 +957,21 @@ public abstract class FileSystem extends Configured implements Closeable {
* resource directly and verify that the resource referenced
* satisfies constraints specified at its construction.
* @param fd PathHandle object returned by the FS authority.
+ * @throws IOException IO failure
+ * @throws UnsupportedOperationException If {@link #open(PathHandle, int)}
+ * not overridden by subclass
+ */
+ public FSDataInputStream open(PathHandle fd) throws IOException {
+ return open(fd, getConf().getInt(IO_FILE_BUFFER_SIZE_KEY,
+ IO_FILE_BUFFER_SIZE_DEFAULT));
+ }
+
+ /**
+ * Open an FSDataInputStream matching the PathHandle instance. The
+ * implementation may encode metadata in PathHandle to address the
+ * resource directly and verify that the resource referenced
+ * satisfies constraints specified at its construction.
+ * @param fd PathHandle object returned by the FS authority.
* @param bufferSize the size of the buffer to use
* @throws IOException IO failure
* @throws UnsupportedOperationException If not overridden by subclass
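The hunk above follows a common convenience-overload pattern: the new one-argument open(PathHandle) reads the configured buffer size (falling back to a compile-time default) and delegates to the existing two-argument overload. The sketch below shows the pattern with simplified stand-in types; it is not Hadoop's real FileSystem/Configuration API, and open here returns the buffer size only so the delegation is observable.

```java
import java.util.HashMap;
import java.util.Map;

public class DefaultBufferDemo {
  static final String IO_FILE_BUFFER_SIZE_KEY = "io.file.buffer.size";
  static final int IO_FILE_BUFFER_SIZE_DEFAULT = 4096;

  // Stand-in for a configuration object with getInt(key, default).
  static class Conf {
    private final Map<String, Integer> values = new HashMap<>();
    void setInt(String key, int value) { values.put(key, value); }
    int getInt(String key, int dflt) { return values.getOrDefault(key, dflt); }
  }

  static class Handle { }

  static class Fs {
    private final Conf conf;
    Fs(Conf conf) { this.conf = conf; }

    // New convenience overload: callers that do not care about the
    // buffer size get the configured (or default) value automatically.
    int open(Handle fd) {
      return open(fd, conf.getInt(IO_FILE_BUFFER_SIZE_KEY,
          IO_FILE_BUFFER_SIZE_DEFAULT));
    }

    // Pre-existing overload; returns the size it was given so the
    // delegation can be checked.
    int open(Handle fd, int bufferSize) {
      return bufferSize;
    }
  }

  public static void main(String[] args) {
    Conf conf = new Conf();
    Fs fs = new Fs(conf);
    System.out.println(fs.open(new Handle()));   // 4096 (default)
    conf.setInt(IO_FILE_BUFFER_SIZE_KEY, 8192);
    System.out.println(fs.open(new Handle()));   // 8192 (configured)
  }
}
```

Subclasses that override only the two-argument form get the one-argument form for free, which is why the javadoc notes UnsupportedOperationException is thrown only if the parameterized overload is not overridden.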
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0780fdb1/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
index 4cbb8ab..0e9a612 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFilterFileSystem.java
@@ -79,6 +79,7 @@ public class TestFilterFileSystem {
public boolean mkdirs(Path f);
public FSDataInputStream open(Path f);
+ public FSDataInputStream open(PathHandle f);
public FSDataOutputStream create(Path f);
public FSDataOutputStream create(Path f, boolean overwrite);
public FSDataOutputStream create(Path f, Progressable progress);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0780fdb1/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
index a1aa4de..1b69693 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestHarFileSystem.java
@@ -80,6 +80,7 @@ public class TestHarFileSystem {
public boolean mkdirs(Path f);
public FSDataInputStream open(Path f);
+ public FSDataInputStream open(PathHandle f);
public FSDataOutputStream create(Path f);
public FSDataOutputStream create(Path f, boolean overwrite);
public FSDataOutputStream create(Path f, Progressable progress);
[09/50] [abbrv] hadoop git commit: YARN-7546. Layout changes in Queue UI to show queue details on right pane. Contributed by Vasudevan Skm.
Posted by vi...@apache.org.
YARN-7546. Layout changes in Queue UI to show queue details on right pane. Contributed by Vasudevan Skm.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4653aa3e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4653aa3e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4653aa3e
Branch: refs/heads/HDFS-9806
Commit: 4653aa3eb31fb23fa19136173685464d71f86613
Parents: 60fd0d7
Author: Sunil G <su...@apache.org>
Authored: Fri Dec 1 13:26:01 2017 +0530
Committer: Sunil G <su...@apache.org>
Committed: Fri Dec 1 13:26:01 2017 +0530
----------------------------------------------------------------------
.../main/webapp/app/components/tree-selector.js | 2 +-
.../main/webapp/app/controllers/yarn-queue.js | 6 +-
.../webapp/app/controllers/yarn-queue/apps.js | 6 +-
.../app/models/yarn-queue/capacity-queue.js | 11 ++-
.../src/main/webapp/app/styles/app.scss | 58 +++++++++++++-
.../src/main/webapp/app/styles/compose-box.scss | 39 ++++++++++
.../src/main/webapp/app/styles/layout.scss | 4 +
.../src/main/webapp/app/styles/variables.scss | 3 +-
.../yarn-queue/capacity-queue-info.hbs | 51 +++---------
.../components/yarn-queue/capacity-queue.hbs | 81 +++++++++++---------
.../components/yarn-queue/fair-queue.hbs | 66 ++++++++--------
.../components/yarn-queue/fifo-queue.hbs | 43 ++++++-----
.../main/webapp/app/templates/yarn-queue.hbs | 73 ++++++++++++------
.../webapp/app/templates/yarn-queue/apps.hbs | 15 +++-
.../webapp/app/templates/yarn-queue/info.hbs | 17 ++--
.../main/webapp/app/templates/yarn-queues.hbs | 5 +-
16 files changed, 300 insertions(+), 180 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4653aa3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
index 7a9d53b..4645a48 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js
@@ -146,7 +146,7 @@ export default Ember.Component.extend({
}.bind(this))
.on("dblclick", function (d) {
- document.location.href = "#/yarn-queue/" + d.queueData.get("name") + "/info";
+ document.location.href = "#/yarn-queue/" + d.queueData.get("name") + "/apps";
});
nodeEnter.append("circle")
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4653aa3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-queue.js
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-queue.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-queue.js
index 3a72b60..e9f945b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-queue.js
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-queue.js
@@ -33,15 +33,11 @@ export default Ember.Controller.extend({
text: "Queues",
routeName: 'yarn-queues',
model: 'root'
- }, {
- text: `Queue [ ${queueName} ]`,
- routeName: 'yarn-queue.info',
- model: queueName
}];
if (path && path === "yarn-queue.apps") {
crumbs.push({
- text: "Applications",
+ text: `Queue [ ${queueName} ]`,
routeName: 'yarn-queue.apps',
model: queueName
});
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4653aa3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-queue/apps.js
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-queue/apps.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-queue/apps.js
index 905d96d..c10624e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-queue/apps.js
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-queue/apps.js
@@ -21,8 +21,10 @@ import TableDefinition from 'em-table/utils/table-definition';
import AppTableController from '../app-table-columns';
export default AppTableController.extend({
- // Your custom instance of table definition
- tableDefinition: TableDefinition.create(),
+ tableDefinition: TableDefinition.create({
+ enableFaceting: true,
+ rowCount: 25
+ }),
// Search text alias, any change in controller's searchText would affect the table's searchText, and vice-versa.
_selectedObserver: Ember.on("init", Ember.observer("model.selected", function () {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4653aa3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-queue/capacity-queue.js
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-queue/capacity-queue.js b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-queue/capacity-queue.js
index 1d162e9..f892c2b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-queue/capacity-queue.js
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-queue/capacity-queue.js
@@ -51,15 +51,18 @@ export default DS.Model.extend({
var floatToFixed = Converter.floatToFixed;
return [
{
- label: "Absolute Capacity",
- value: this.get("name") === "root" ? 100 : floatToFixed(this.get("absCapacity"))
- },
- {
label: "Absolute Used",
+ style: "primary",
value: this.get("name") === "root" ? floatToFixed(this.get("usedCapacity")) : floatToFixed(this.get("absUsedCapacity"))
},
{
+ label: "Absolute Capacity",
+ style: "primary",
+ value: this.get("name") === "root" ? 100 : floatToFixed(this.get("absCapacity"))
+ },
+ {
label: "Absolute Max Capacity",
+ style: "secondary",
value: this.get("name") === "root" ? 100 : floatToFixed(this.get("absMaxCapacity"))
}
];
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4653aa3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.scss
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.scss b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.scss
index 471e346..87ee9a9 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.scss
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.scss
@@ -1,6 +1,7 @@
@import 'variables.scss';
@import 'layout.scss';
@import 'yarn-app.scss';
+@import './compose-box.scss';
/**
* Licensed to the Apache Software Foundation (ASF) under one
@@ -191,7 +192,7 @@ table.dataTable thead .sorting_desc_disabled {
.breadcrumb {
padding-bottom: 3px;
- background-color: none;
+ background: none;
}
.navbar-default .navbar-nav > li > a {
@@ -744,4 +745,57 @@ div.service-action-mask img {
background: none;
border: none;
box-shadow: none;
-}
\ No newline at end of file
+}
+
+.queue-page-breadcrumb,
+#tree-selector-container {
+ width: calc(100% - #{$compose-box-width});
+}
+
+#tree-selector-container {
+ overflow: scroll;
+}
+
+.flex {
+ display: flex;
+}
+
+.yarn-label {
+ border-radius: 3px;
+ margin-bottom: 5px;
+ border: 1px solid $yarn-panel-bg;
+ font-size: 12px;
+ > span {
+ padding: 5px;
+ }
+ &.primary {
+ display: inline-grid;
+ .label-key {
+ color: $yarn-panel-bg;
+ background: #666;
+ }
+ .label-value {
+ color: $yarn-panel-bg;
+ background: $yarn-success-border;
+ }
+ }
+ &.secondary {
+ display: inline-table;
+ .label-key {
+ color: $yarn-panel-bg;
+ background: #999;
+ }
+
+ .label-value {
+ color: $yarn-panel-bg;
+ background: yellowgreen;
+ }
+ }
+}
+
+.yarn-queues-container {
+ padding: 15px;
+ h3 {
+ margin-top: 0;
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4653aa3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/compose-box.scss
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/compose-box.scss b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/compose-box.scss
new file mode 100644
index 0000000..0bfadb0
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/compose-box.scss
@@ -0,0 +1,39 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+@import 'variables.scss';
+
+.yarn-compose-box {
+ position: fixed;
+ bottom: 0;
+ top: 0px;
+ right: 0px;
+ background-color: $yarn-panel-bg;
+ border: 1px solid $yarn-border-color;
+ border-radius: 3px;
+ box-shadow: 0 1px 1px rgba(0, 0, 0, 0.05);
+ max-width: $compose-box-width;
+ overflow: scroll;
+
+ .panel-heading {
+ background-color: $yarn-header-bg !important;
+ border-color: $yarn-border-color;
+ border-radius: 3px;
+ }
+}
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4653aa3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/layout.scss
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/layout.scss b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/layout.scss
index d31f145..587df66 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/layout.scss
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/layout.scss
@@ -40,3 +40,7 @@
.tail-2 {
margin-right: 10px
}
+
+.top-1 {
+ margin-top: 5px;
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4653aa3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/variables.scss
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/variables.scss b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/variables.scss
index 8226770..e25b482 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/variables.scss
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/variables.scss
@@ -37,8 +37,7 @@ $yarn-warn-border: $color-yellow-secondary;
$yarn-warn-bg: $color-yellow-primary;
$yarn-gray-icon: $color-gray-40;
-
$yarn-muted-text: $color-gray-60;
-
+$compose-box-width: 400px;
//shadows
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4653aa3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/capacity-queue-info.hbs
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/capacity-queue-info.hbs b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/capacity-queue-info.hbs
index 7d44e69..a7260bc 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/capacity-queue-info.hbs
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/capacity-queue-info.hbs
@@ -16,60 +16,29 @@
* limitations under the License.
}}
-<div class="row">
-
- <div class="col-lg-6 container-fluid">
+<div>
+ <div class="col-lg-6">
<div class="panel panel-default">
<div class="panel-heading">
- Queue Capacities: {{model.selected}}
+ Running Apps: {{model.selected}}
</div>
- <div class="container-fluid" id="capacity-bar-chart">
- <br/>
- {{bar-chart data=model.selectedQueue.capacitiesBarChartData
- title=""
- parentId="capacity-bar-chart"
- textWidth=170
- ratio=0.55
+ <div id="numapplications-donut-chart">
+ {{donut-chart data=model.selectedQueue.numOfApplicationsDonutChartData
+ showLabels=true
+ parentId="numapplications-donut-chart"
+ ratio=0.6
maxHeight=350}}
</div>
</div>
</div>
- <div class="col-lg-6 container-fluid">
- <div class="panel panel-default">
- <div class="panel-heading">
- Queue Information: {{model.selected}}
- </div>
- {{yarn-queue.capacity-queue-conf-table queue=model.selectedQueue}}
- </div>
- </div>
-
-</div>
-
-<div class="row">
-
- <div class="col-lg-6 container-fluid">
- <div class="panel panel-default">
- <div class="panel-heading">
- Running Apps: {{model.selected}}
- </div>
- <div class="container-fluid" id="numapplications-donut-chart">
- {{donut-chart data=model.selectedQueue.numOfApplicationsDonutChartData
- showLabels=true
- parentId="numapplications-donut-chart"
- ratio=0.6
- maxHeight=350}}
- </div>
- </div>
- </div>
-
{{#if model.selectedQueue.hasUserUsages}}
- <div class="col-lg-6 container-fluid">
+ <div class="col-lg-6">
<div class="panel panel-default">
<div class="panel-heading">
User Usages: {{model.selected}}
</div>
- <div class="container-fluid" id="userusage-donut-chart">
+ <div id="userusage-donut-chart">
{{donut-chart data=model.selectedQueue.userUsagesDonutChartData
showLabels=true
parentId="userusage-donut-chart"
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4653aa3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/capacity-queue.hbs
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/capacity-queue.hbs b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/capacity-queue.hbs
index 8b63b66..bb9a87e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/capacity-queue.hbs
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/capacity-queue.hbs
@@ -19,45 +19,56 @@
{{queue-navigator model=model.queues selected=model.selected
used="usedCapacity" max="absMaxCapacity"}}
-<div class="row">
- <div class="col-lg-4 container-fluid">
- <div class="panel panel-default">
- <div class="panel-heading">
- Queue Information: {{model.selected}}
+<div class="yarn-compose-box yarn-queues-container">
+ <div>
+ <h3>
+ <a href="#/yarn-queue/{{model.selected}}/apps">
+ {{model.selected}}
+ </a>
+ </h3>
+ {{#if model.selectedQueue.state}}
+ <div>
+ {{em-table-simple-status-cell content=model.selectedQueue.state}}
</div>
- {{yarn-queue.capacity-queue-conf-table queue=model.selectedQueue}}
+ {{/if}}
+ <div class="top-1">
+ {{#each model.selectedQueue.capacitiesBarChartData as |item|}}
+ <span class="yarn-label {{item.style}}">
+ <span class="label-key"> {{lower item.label}}</span>
+ <span class="label-value">{{item.value}}%</span>
+ </span>
+ {{/each}}
</div>
- </div>
-
- <div class="col-lg-4 container-fluid">
- <div class="panel panel-default">
- <div class="panel-heading">
- Queue Capacities: {{model.selected}}
- </div>
- <div class="container-fluid" id="capacity-bar-chart">
- <br/>
- {{bar-chart data=model.selectedQueue.capacitiesBarChartData
- title=""
- parentId="capacity-bar-chart"
- textWidth=175
- ratio=0.55
- maxHeight=350}}
- </div>
+ <div class="top-1">
+ <span class="yarn-label secondary">
+ <span class="label-key">configured capacity</span>
+ <span class="label-value">{{model.selectedQueue.capacity}}%</span>
+ </span>
+ <span class="yarn-label secondary">
+ <span class="label-key">configured max capacity</span>
+ <span class="label-value">{{model.selectedQueue.maxCapacity}}%</span>
+ </span>
</div>
+ {{#if model.selectedQueue.isLeafQueue}}
+ <div class="top-1">
+ <span class="yarn-label secondary">
+ <span class="label-key">user limit</span>
+ <span class="label-value">{{model.selectedQueue.userLimit}}%</span>
+ </span>
+ <span class="yarn-label secondary">
+ <span class="label-key">user limit factor</span>
+ <span class="label-value">{{model.selectedQueue.userLimitFactor}}</span>
+ </span>
+ </div>
+ {{/if}}
</div>
- <div class="col-lg-4 container-fluid">
- <div class="panel panel-default">
- <div class="panel-heading">
- Running Apps: {{model.selected}}
- </div>
- <div class="container-fluid" id="numapplications-donut-chart">
- {{donut-chart data=model.selectedQueue.numOfApplicationsDonutChartData
- showLabels=true
- parentId="numapplications-donut-chart"
- ratio=0.6
- maxHeight=350}}
- </div>
- </div>
+ <h5> Running Apps </h5>
+ <div id="numapplications-donut-chart">
+ {{donut-chart data=model.selectedQueue.numOfApplicationsDonutChartData
+ showLabels=true
+ parentId="numapplications-donut-chart"
+ ratio=0.6
+ maxHeight=350}}
</div>
</div>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4653aa3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/fair-queue.hbs
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/fair-queue.hbs b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/fair-queue.hbs
index 6d0e994..dcc80c1 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/fair-queue.hbs
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/fair-queue.hbs
@@ -19,44 +19,46 @@
{{queue-navigator model=model.queues selected=model.selected
used="usedResources.memory" max="clusterResources.memory"}}
-<div class="row">
- <div class="col-lg-4 container-fluid">
- <div class="panel panel-default">
- <div class="panel-heading">
- Queue Information: {{model.selected}}
+<div class="yarn-compose-box">
+ <div class="panel-heading">
+ Queue Information: {{model.selected}}
+ </div>
+ <div class="panel-body">
+ <div class="container-fluid">
+ <div class="panel panel-default">
+ {{yarn-queue.fair-queue-conf-table queue=model.selectedQueue}}
</div>
- {{yarn-queue.fair-queue-conf-table queue=model.selectedQueue}}
</div>
- </div>
- <div class="col-lg-4 container-fluid">
- <div class="panel panel-default">
- <div class="panel-heading">
- Queue Capacities: {{model.selected}}
- </div>
- <div class="container-fluid" id="capacity-bar-chart">
- <br/>
- {{bar-chart data=model.selectedQueue.capacitiesBarChartData
- title=""
- parentId="capacity-bar-chart"
- textWidth=175
- ratio=0.55
- maxHeight=350}}
+ <div class="container-fluid">
+ <div class="panel panel-default">
+ <div class="panel-heading">
+ Queue Capacities: {{model.selected}}
+ </div>
+ <div class="container-fluid" id="capacity-bar-chart">
+ <br/>
+ {{bar-chart data=model.selectedQueue.capacitiesBarChartData
+ title=""
+ parentId="capacity-bar-chart"
+ textWidth=175
+ ratio=0.55
+ maxHeight=350}}
+ </div>
</div>
</div>
- </div>
- <div class="col-lg-4 container-fluid">
- <div class="panel panel-default">
- <div class="panel-heading">
- Running Apps: {{model.selected}}
- </div>
- <div class="container-fluid" id="numapplications-donut-chart">
- {{donut-chart data=model.selectedQueue.numOfApplicationsDonutChartData
- showLabels=true
- parentId="numapplications-donut-chart"
- ratio=0.6
- maxHeight=350}}
+ <div class="container-fluid">
+ <div class="panel panel-default">
+ <div class="panel-heading">
+ Running Apps: {{model.selected}}
+ </div>
+ <div class="container-fluid" id="numapplications-donut-chart">
+ {{donut-chart data=model.selectedQueue.numOfApplicationsDonutChartData
+ showLabels=true
+ parentId="numapplications-donut-chart"
+ ratio=0.6
+ maxHeight=350}}
+ </div>
</div>
</div>
</div>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4653aa3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/fifo-queue.hbs
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/fifo-queue.hbs b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/fifo-queue.hbs
index 90cbd27..98db5cb 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/fifo-queue.hbs
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/yarn-queue/fifo-queue.hbs
@@ -19,29 +19,30 @@
{{queue-navigator model=model.queues selected=model.selected
used="usedNodeCapacity" max="totalNodeCapacity"}}
-<div class="row">
- <div class="col-lg-6 container-fluid">
- <div class="panel panel-default">
- <div class="panel-heading">
- Queue Information: {{model.selected}}
- </div>
- {{yarn-queue.fifo-queue-conf-table queue=model.selectedQueue}}
- </div>
+<div class="yarn-compose-box">
+ <div class="panel-heading">
+ Queue Information: {{model.selected}}
</div>
-
- <div class="col-lg-6 container-fluid">
- <div class="panel panel-default">
- <div class="panel-heading">
- Queue Capacities: {{model.selected}}
+ <div class="panel-body">
+ <div class="container-fluid">
+ <div class="panel panel-default">
+ {{yarn-queue.fifo-queue-conf-table queue=model.selectedQueue}}
</div>
- <div class="container-fluid" id="capacity-bar-chart">
- <br/>
- {{bar-chart data=model.selectedQueue.capacitiesBarChartData
- title=""
- parentId="capacity-bar-chart"
- textWidth=175
- ratio=0.55
- maxHeight=350}}
+ </div>
+ <div class="container-fluid">
+ <div class="panel panel-default">
+ <div class="panel-heading">
+ Queue Capacities: {{model.selected}}
+ </div>
+ <div class="container-fluid" id="capacity-bar-chart">
+ <br/>
+ {{bar-chart data=model.selectedQueue.capacitiesBarChartData
+ title=""
+ parentId="capacity-bar-chart"
+ textWidth=175
+ ratio=0.55
+ maxHeight=350}}
+ </div>
</div>
</div>
</div>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4653aa3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
index ef2d285..87b596e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue.hbs
@@ -18,34 +18,61 @@
{{breadcrumb-bar breadcrumbs=breadcrumbs}}
-<div class="col-md-12 container-fluid">
- <div class="row">
-
- <div class="col-md-2 container-fluid">
- <div class="panel panel-default">
- <div class="panel-heading">
- <h4>Queue</h4>
+<div class="panel-group">
+ <div class="panel panel-default">
+ <div class="yarn-app-header">
+ <div class="flex">
+ <div class="top-1">
+ <h3>{{model.selected}}</h3>
+ {{#if model.selectedQueue.state}}
+ <div>
+ {{em-table-simple-status-cell content=model.selectedQueue.state}}
+ </div>
+ {{/if}}
+ <div class="top-1">
+ <span class="yarn-label secondary">
+ <span class="label-key">configured capacity</span>
+ <span class="label-value">{{model.selectedQueue.capacity}}%</span>
+ </span>
+ <span class="yarn-label secondary">
+ <span class="label-key">configured max capacity</span>
+ <span class="label-value">{{model.selectedQueue.maxCapacity}}%</span>
+ </span>
+ {{#if model.selectedQueue.isLeafQueue}}
+ <span class="yarn-label secondary">
+ <span class="label-key">user limit</span>
+ <span class="label-value">{{model.selectedQueue.userLimit}}%</span>
+ </span>
+ <span class="yarn-label secondary">
+ <span class="label-key">user limit factor</span>
+ <span class="label-value">{{model.selectedQueue.userLimitFactor}}</span>
+ </span>
+ {{/if}}
+ </div>
</div>
- <div class="panel-body">
- <ul class="nav nav-pills nav-stacked" id="stacked-menu">
- <ul class="nav nav-pills nav-stacked collapse in">
- {{#link-to 'yarn-queue.info' tagName="li"}}
- {{#link-to 'yarn-queue.info' model.selected}}Information
- {{/link-to}}
- {{/link-to}}
- {{#link-to 'yarn-queue.apps' tagName="li"}}
- {{#link-to 'yarn-queue.apps' model.selected}}Applications List
- {{/link-to}}
- {{/link-to}}
- </ul>
- </ul>
+ <div class="flex-right">
+ {{#each model.selectedQueue.capacitiesBarChartData as |item|}}
+ <span class="yarn-label primary">
+ <span class="label-key"> {{lower item.label}}</span>
+ <span class="label-value">{{item.value}}%</span>
+ </span>
+ {{/each}}
</div>
</div>
</div>
-
- <div class="col-md-10 container-fluid">
+ <div class="panel-heading">
+ <div class="clearfix">
+ <ul class="nav nav-pills">
+ <ul class="nav nav-pills collapse in">
+ {{#link-to 'yarn-queue.apps' tagName="li" class=(if (eq target.currentPath 'yarn-queue.apps') "active")}}
+ {{#link-to 'yarn-queue.apps' appId (query-params service=model.serviceName)}}Applications{{/link-to}}
+ {{/link-to}}
+ </ul>
+ </ul>
+ </div>
+ </div>
+ <div class="panel-body yarn-app-body">
{{outlet}}
</div>
-
</div>
</div>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4653aa3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue/apps.hbs
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue/apps.hbs b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue/apps.hbs
index 4a508c1..6417910 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue/apps.hbs
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue/apps.hbs
@@ -17,9 +17,20 @@
}}
<div class="row">
- <div class="col-lg-12 container-fluid">
+ <div class="col-lg-12">
+ <div class="row">
+ {{#if (eq model.queues.firstObject.type "capacity")}}
+ {{yarn-queue.capacity-queue-info model=model}}
+ {{else if (eq model.queues.firstObject.type "fair")}}
+ {{yarn-queue.fair-queue-info model=model}}
+ {{else}}
+ {{yarn-queue.fifo-queue-info model=model}}
+ {{/if}}
+ </div>
+ </div>
+ <div class="col-lg-12 yarn-applications-container">
{{#if model.apps}}
- {{em-table columns=columns rows=model.apps}}
+ {{em-table columns=columns rows=model.apps definition=tableDefinition}}
{{else}}
<h4 align = "center">Could not find any applications from this cluster</h4>
{{/if}}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4653aa3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue/info.hbs
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue/info.hbs b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue/info.hbs
index 2f138a7..b84425a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue/info.hbs
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queue/info.hbs
@@ -15,11 +15,12 @@
* See the License for the specific language governing permissions and
* limitations under the License.
}}
-
-{{#if (eq model.queues.firstObject.type "capacity")}}
- {{yarn-queue.capacity-queue-info model=model}}
-{{else if (eq model.queues.firstObject.type "fair")}}
- {{yarn-queue.fair-queue-info model=model}}
-{{else}}
- {{yarn-queue.fifo-queue-info model=model}}
-{{/if}}
+<div class="row">
+ {{#if (eq model.queues.firstObject.type "capacity")}}
+ {{yarn-queue.capacity-queue-info model=model}}
+ {{else if (eq model.queues.firstObject.type "fair")}}
+ {{yarn-queue.fair-queue-info model=model}}
+ {{else}}
+ {{yarn-queue.fifo-queue-info model=model}}
+ {{/if}}
+</div>
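The scheduler-type dispatch in info.hbs above (repeated in apps.hbs) picks a queue-info component by the first queue's type, falling back to FIFO. A minimal sketch of that selection logic (component names taken from the templates; the function name is illustrative):

```python
def queue_info_component(queue_type: str) -> str:
    # Mirrors the Handlebars {{#if (eq ...)}} chain: capacity and fair
    # schedulers get dedicated components; anything else falls back to FIFO.
    if queue_type == "capacity":
        return "yarn-queue.capacity-queue-info"
    elif queue_type == "fair":
        return "yarn-queue.fair-queue-info"
    return "yarn-queue.fifo-queue-info"
```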
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4653aa3e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs
index fccdb5b..b3165d5 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-queues.hbs
@@ -15,8 +15,9 @@
* See the License for the specific language governing permissions and
* limitations under the License.
}}
-
-{{breadcrumb-bar breadcrumbs=breadcrumbs}}
+<div class="queue-page-breadcrumb">
+ {{breadcrumb-bar breadcrumbs=breadcrumbs}}
+</div>
<div class="container-fluid">
{{#if (eq model.queues.firstObject.type "capacity")}}
{{yarn-queue.capacity-queue model=model}}
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[22/50] [abbrv] hadoop git commit: HDFS-10706. [READ] Add tool
generating FSImage from external store
Posted by vi...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSingleUGIResolver.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSingleUGIResolver.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSingleUGIResolver.java
new file mode 100644
index 0000000..9aef106
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSingleUGIResolver.java
@@ -0,0 +1,148 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.io.IOException;
+import java.util.Map;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TestName;
+import static org.junit.Assert.*;
+
+/**
+ * Validate resolver assigning all paths to a single owner/group.
+ */
+public class TestSingleUGIResolver {
+
+ @Rule public TestName name = new TestName();
+
+ private static final int TESTUID = 10101;
+ private static final int TESTGID = 10102;
+ private static final String TESTUSER = "tenaqvyybdhragqvatbf";
+ private static final String TESTGROUP = "tnyybcvatlnxf";
+
+ private SingleUGIResolver ugi = new SingleUGIResolver();
+
+ @Before
+ public void setup() {
+ Configuration conf = new Configuration(false);
+ conf.setInt(SingleUGIResolver.UID, TESTUID);
+ conf.setInt(SingleUGIResolver.GID, TESTGID);
+ conf.set(SingleUGIResolver.USER, TESTUSER);
+ conf.set(SingleUGIResolver.GROUP, TESTGROUP);
+ ugi.setConf(conf);
+ System.out.println(name.getMethodName());
+ }
+
+ @Test
+ public void testRewrite() {
+ FsPermission p1 = new FsPermission((short)0755);
+ match(ugi.resolve(file("dingo", "dingo", p1)), p1);
+ match(ugi.resolve(file(TESTUSER, "dingo", p1)), p1);
+ match(ugi.resolve(file("dingo", TESTGROUP, p1)), p1);
+ match(ugi.resolve(file(TESTUSER, TESTGROUP, p1)), p1);
+
+ FsPermission p2 = new FsPermission((short)0x8000);
+ match(ugi.resolve(file("dingo", "dingo", p2)), p2);
+ match(ugi.resolve(file(TESTUSER, "dingo", p2)), p2);
+ match(ugi.resolve(file("dingo", TESTGROUP, p2)), p2);
+ match(ugi.resolve(file(TESTUSER, TESTGROUP, p2)), p2);
+
+ Map<Integer, String> ids = ugi.ugiMap();
+ assertEquals(2, ids.size());
+ assertEquals(TESTUSER, ids.get(10101));
+ assertEquals(TESTGROUP, ids.get(10102));
+ }
+
+ @Test
+ public void testDefault() {
+ String user;
+ try {
+ user = UserGroupInformation.getCurrentUser().getShortUserName();
+ } catch (IOException e) {
+ user = "hadoop";
+ }
+ Configuration conf = new Configuration(false);
+ ugi.setConf(conf);
+ Map<Integer, String> ids = ugi.ugiMap();
+ assertEquals(2, ids.size());
+ assertEquals(user, ids.get(0));
+ assertEquals(user, ids.get(1));
+ }
+
+ @Test(expected=IllegalArgumentException.class)
+ public void testInvalidUid() {
+ Configuration conf = ugi.getConf();
+ conf.setInt(SingleUGIResolver.UID, (1 << 24) + 1);
+ ugi.setConf(conf);
+ ugi.resolve(file(TESTUSER, TESTGROUP, new FsPermission((short)0777)));
+ }
+
+ @Test(expected=IllegalArgumentException.class)
+ public void testInvalidGid() {
+ Configuration conf = ugi.getConf();
+ conf.setInt(SingleUGIResolver.GID, (1 << 24) + 1);
+ ugi.setConf(conf);
+ ugi.resolve(file(TESTUSER, TESTGROUP, new FsPermission((short)0777)));
+ }
+
+ @Test(expected=IllegalStateException.class)
+ public void testDuplicateIds() {
+ Configuration conf = new Configuration(false);
+ conf.setInt(SingleUGIResolver.UID, 4344);
+ conf.setInt(SingleUGIResolver.GID, 4344);
+ conf.set(SingleUGIResolver.USER, TESTUSER);
+ conf.set(SingleUGIResolver.GROUP, TESTGROUP);
+ ugi.setConf(conf);
+ ugi.ugiMap();
+ }
+
+ static void match(long encoded, FsPermission p) {
+ assertEquals(p, new FsPermission((short)(encoded & 0xFFFF)));
+ long uid = (encoded >>> UGIResolver.USER_STRID_OFFSET);
+ uid &= UGIResolver.USER_GROUP_STRID_MASK;
+ assertEquals(TESTUID, uid);
+ long gid = (encoded >>> UGIResolver.GROUP_STRID_OFFSET);
+ gid &= UGIResolver.USER_GROUP_STRID_MASK;
+ assertEquals(TESTGID, gid);
+ }
+
+ static FileStatus file(String user, String group, FsPermission perm) {
+ Path p = new Path("foo://bar:4344/baz/dingo");
+ return new FileStatus(
+ 4344 * (1 << 20), /* long length, */
+ false, /* boolean isdir, */
+ 1, /* int block_replication, */
+ 256 * (1 << 20), /* long blocksize, */
+ 0L, /* long modification_time, */
+ 0L, /* long access_time, */
+ perm, /* FsPermission permission, */
+ user, /* String owner, */
+ group, /* String group, */
+ p); /* Path path */
+ }
+
+}
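The `match` helper in the test above decodes a packed 64-bit value: the low 16 bits hold the FsPermission, and the user and group string IDs sit at fixed bit offsets, each masked to 24 bits (consistent with the `(1 << 24) + 1` invalid-ID tests). A sketch of that packing scheme in Python; the offset values 40 and 16 are assumptions for illustration, not taken from the source:

```python
# Hypothetical layout: bits 0-15 permission, bits 16-39 group ID,
# bits 40-63 user ID. Offsets are illustrative assumptions; the real
# constants live in UGIResolver, which is not shown in this diff.
USER_STRID_OFFSET = 40        # assumed
GROUP_STRID_OFFSET = 16       # assumed
USER_GROUP_STRID_MASK = (1 << 24) - 1

def pack(perm: int, uid: int, gid: int) -> int:
    # IDs larger than 24 bits are rejected, matching testInvalidUid/Gid.
    if uid > USER_GROUP_STRID_MASK or gid > USER_GROUP_STRID_MASK:
        raise ValueError("IDs must fit in 24 bits")
    return (uid << USER_STRID_OFFSET) | (gid << GROUP_STRID_OFFSET) | (perm & 0xFFFF)

def unpack(encoded: int):
    # Inverse of pack(); this is the decoding match() performs.
    perm = encoded & 0xFFFF
    uid = (encoded >> USER_STRID_OFFSET) & USER_GROUP_STRID_MASK
    gid = (encoded >> GROUP_STRID_OFFSET) & USER_GROUP_STRID_MASK
    return perm, uid, gid
```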
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/test/resources/log4j.properties
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/resources/log4j.properties b/hadoop-tools/hadoop-fs2img/src/test/resources/log4j.properties
new file mode 100644
index 0000000..2ebf29e
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/test/resources/log4j.properties
@@ -0,0 +1,24 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# log4j configuration used during build and unit tests
+
+log4j.rootLogger=INFO,stdout
+log4j.threshold=ALL
+log4j.appender.stdout=org.apache.log4j.ConsoleAppender
+log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
+log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p [%t]: %c{2} (%F:%M(%L)) - %m%n
+
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-tools-dist/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-tools-dist/pom.xml b/hadoop-tools/hadoop-tools-dist/pom.xml
index 28faa9f..4b90361 100644
--- a/hadoop-tools/hadoop-tools-dist/pom.xml
+++ b/hadoop-tools/hadoop-tools-dist/pom.xml
@@ -128,6 +128,12 @@
<scope>compile</scope>
<version>${project.version}</version>
</dependency>
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-fs2img</artifactId>
+ <scope>compile</scope>
+ <version>${project.version}</version>
+ </dependency>
</dependencies>
<build>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-tools/pom.xml b/hadoop-tools/pom.xml
index 6f95f11..c030045 100644
--- a/hadoop-tools/pom.xml
+++ b/hadoop-tools/pom.xml
@@ -48,6 +48,7 @@
<module>hadoop-kafka</module>
<module>hadoop-azure-datalake</module>
<module>hadoop-aliyun</module>
+ <module>hadoop-fs2img</module>
</modules>
<build>
[13/50] [abbrv] hadoop git commit: YARN-6507. Add support in
NodeManager to isolate FPGA devices with CGroups. (Zhankun Tang via wangda)
Posted by vi...@apache.org.
YARN-6507. Add support in NodeManager to isolate FPGA devices with CGroups. (Zhankun Tang via wangda)
Change-Id: Ic9afd841805f1035423915a0b0add5f3ba96cf9d
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7225ec0c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7225ec0c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7225ec0c
Branch: refs/heads/HDFS-9806
Commit: 7225ec0ceb49ae8f5588484297a20f07ec047420
Parents: 5304698
Author: Wangda Tan <wa...@apache.org>
Authored: Fri Dec 1 10:50:49 2017 -0800
Committer: Wangda Tan <wa...@apache.org>
Committed: Fri Dec 1 10:50:49 2017 -0800
----------------------------------------------------------------------
.../yarn/api/records/ResourceInformation.java | 5 +-
.../hadoop/yarn/conf/YarnConfiguration.java | 25 +-
.../src/main/resources/yarn-default.xml | 42 +-
.../linux/privileged/PrivilegedOperation.java | 1 +
.../resources/fpga/FpgaResourceAllocator.java | 413 +++++++++++++++++
.../resources/fpga/FpgaResourceHandlerImpl.java | 220 +++++++++
.../resourceplugin/ResourcePluginManager.java | 8 +-
.../fpga/AbstractFpgaVendorPlugin.java | 90 ++++
.../resourceplugin/fpga/FpgaDiscoverer.java | 139 ++++++
.../fpga/FpgaNodeResourceUpdateHandler.java | 71 +++
.../resourceplugin/fpga/FpgaResourcePlugin.java | 105 +++++
.../fpga/IntelFpgaOpenclPlugin.java | 396 ++++++++++++++++
.../resources/fpga/TestFpgaResourceHandler.java | 458 +++++++++++++++++++
.../resourceplugin/fpga/TestFpgaDiscoverer.java | 187 ++++++++
14 files changed, 2155 insertions(+), 5 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7225ec0c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
index 67592cc..a8198d8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceInformation.java
@@ -42,6 +42,7 @@ public class ResourceInformation implements Comparable<ResourceInformation> {
public static final String MEMORY_URI = "memory-mb";
public static final String VCORES_URI = "vcores";
public static final String GPU_URI = "yarn.io/gpu";
+ public static final String FPGA_URI = "yarn.io/fpga";
public static final ResourceInformation MEMORY_MB =
ResourceInformation.newInstance(MEMORY_URI, "Mi");
@@ -49,9 +50,11 @@ public class ResourceInformation implements Comparable<ResourceInformation> {
ResourceInformation.newInstance(VCORES_URI);
public static final ResourceInformation GPUS =
ResourceInformation.newInstance(GPU_URI);
+ public static final ResourceInformation FPGAS =
+ ResourceInformation.newInstance(FPGA_URI);
public static final Map<String, ResourceInformation> MANDATORY_RESOURCES =
- ImmutableMap.of(MEMORY_URI, MEMORY_MB, VCORES_URI, VCORES, GPU_URI, GPUS);
+ ImmutableMap.of(MEMORY_URI, MEMORY_MB, VCORES_URI, VCORES, GPU_URI, GPUS, FPGA_URI, FPGAS);
/**
* Get the name for the resource.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7225ec0c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index c1024ea..831abf5 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -1514,13 +1514,36 @@ public class YarnConfiguration extends Configuration {
public static final String DEFAULT_NVIDIA_DOCKER_PLUGIN_V1_ENDPOINT =
"http://localhost:3476/v1.0/docker/cli";
+ /**
+ * Prefix for FPGA configurations. Work in progress: This configuration
+ * parameter may be changed/removed in the future.
+ */
+ @Private
+ public static final String NM_FPGA_RESOURCE_PREFIX =
+ NM_RESOURCE_PLUGINS + ".fpga.";
+
+ @Private
+ public static final String NM_FPGA_ALLOWED_DEVICES =
+ NM_FPGA_RESOURCE_PREFIX + "allowed-fpga-devices";
+
+ @Private
+ public static final String NM_FPGA_PATH_TO_EXEC =
+ NM_FPGA_RESOURCE_PREFIX + "path-to-discovery-executables";
+
+ @Private
+ public static final String NM_FPGA_VENDOR_PLUGIN =
+ NM_FPGA_RESOURCE_PREFIX + "vendor-plugin.class";
+
+ @Private
+ public static final String DEFAULT_NM_FPGA_VENDOR_PLUGIN =
+ "org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin";
/** NM Webapp address.**/
public static final String NM_WEBAPP_ADDRESS = NM_PREFIX + "webapp.address";
public static final int DEFAULT_NM_WEBAPP_PORT = 8042;
public static final String DEFAULT_NM_WEBAPP_ADDRESS = "0.0.0.0:" +
DEFAULT_NM_WEBAPP_PORT;
-
+
/** NM Webapp https address.**/
public static final String NM_WEBAPP_HTTPS_ADDRESS = NM_PREFIX
+ "webapp.https.address";
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7225ec0c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index dd9c6bd..2550c42 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -3512,7 +3512,8 @@
<property>
<description>
Enable additional discovery/isolation of resources on the NodeManager,
- split by comma. By default, this is empty. Acceptable values: { "yarn-io/gpu" }.
+ split by comma. By default, this is empty.
+ Acceptable values: { "yarn.io/gpu", "yarn.io/fpga" }.
</description>
<name>yarn.nodemanager.resource-plugins</name>
<value></value>
@@ -3559,6 +3560,43 @@
<value>http://localhost:3476/v1.0/docker/cli</value>
</property>
->>>>>>> theirs
+ <property>
+ <description>
+ Specify one vendor plugin to handle FPGA device discovery, IP download and configuration.
+ Only IntelFpgaOpenclPlugin is supported by default.
+ For now, a NodeManager may be configured with only one vendor's FPGA plugin, since an
+ end user typically installs a single vendor's cards in one host; this also simplifies the design.
+ </description>
+ <name>yarn.nodemanager.resource-plugins.fpga.vendor-plugin.class</name>
+ <value>org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin</value>
+ </property>
+
+ <property>
+ <description>
+ When yarn.nodemanager.resource-plugins.fpga.allowed-fpga-devices=auto is specified,
+ the YARN NodeManager needs to run an FPGA discovery binary (currently only supported
+ by IntelFpgaOpenclPlugin) to get FPGA information.
+ When the value is empty (the default), the YARN NodeManager will try to locate the
+ discovery executable from the vendor plugin's preferred location.
+ <name>yarn.nodemanager.resource-plugins.fpga.path-to-discovery-executables</name>
+ <value></value>
+ </property>
+
+ <property>
+ <description>
+ Specify the FPGA devices that can be managed by the YARN NodeManager, split by commas.
+ The number of FPGA devices will be reported to the RM to make scheduling decisions.
+ Set to auto (default) to let YARN automatically discover FPGA resources from the
+ system.
+
+ Manually specify FPGA devices if the admin only wants a subset of the FPGA devices
+ managed by YARN. At present, since only one major number can be configured in c-e.cfg,
+ FPGA devices are identified by their minor device numbers. A common approach to get the
+ minor device number of an FPGA is to run "aocl diagnose" and check uevent with the device name.
+ </description>
+ <name>yarn.nodemanager.resource-plugins.fpga.allowed-fpga-devices</name>
+ <value>0,1</value>
+ </property>
</configuration>
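Taken together, the three new properties above let an admin enable FPGA support on a NodeManager. An illustrative yarn-site.xml fragment (a sketch, not from this commit: the property names and the Intel plugin class come from the diff above, while the `yarn.io/fpga` plugin value follows the resource URI defined in ResourceInformation, and `auto` device discovery is one of the documented options):

```xml
<!-- Illustrative yarn-site.xml fragment for enabling FPGA isolation. -->
<property>
  <name>yarn.nodemanager.resource-plugins</name>
  <value>yarn.io/fpga</value>
</property>
<property>
  <name>yarn.nodemanager.resource-plugins.fpga.vendor-plugin.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin</value>
</property>
<property>
  <!-- "auto" asks the NodeManager to run the vendor discovery binary. -->
  <name>yarn.nodemanager.resource-plugins.fpga.allowed-fpga-devices</name>
  <value>auto</value>
</property>
```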
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7225ec0c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/PrivilegedOperation.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/PrivilegedOperation.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/PrivilegedOperation.java
index db0b225..ad8c22f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/PrivilegedOperation.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/privileged/PrivilegedOperation.java
@@ -52,6 +52,7 @@ public class PrivilegedOperation {
ADD_PID_TO_CGROUP(""), //no CLI switch supported yet.
RUN_DOCKER_CMD("--run-docker"),
GPU("--module-gpu"),
+ FPGA("--module-fpga"),
LIST_AS_USER(""); //no CLI switch supported yet.
private final String option;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7225ec0c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/FpgaResourceAllocator.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/FpgaResourceAllocator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/FpgaResourceAllocator.java
new file mode 100644
index 0000000..62dd3c4
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/FpgaResourceAllocator.java
@@ -0,0 +1,413 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.fpga;
+
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.ImmutableList;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.server.nodemanager.Context;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerException;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.FpgaDiscoverer;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.util.*;
+
+import static org.apache.hadoop.yarn.api.records.ResourceInformation.FPGA_URI;
+
+
+/**
+ * This FPGA resource allocator is intended to be shared by different FPGA vendors' plugins.
+ * A "type" parameter is taken into consideration during allocation.
+ */
+public class FpgaResourceAllocator {
+
+ static final Log LOG = LogFactory.getLog(FpgaResourceAllocator.class);
+
+ private List<FpgaDevice> allowedFpgas = new LinkedList<>();
+
+ // key is the FPGA resource type, an ID supported by the vendor plugin
+ private LinkedHashMap<String, List<FpgaDevice>> availableFpga = new LinkedHashMap<>();
+
+ // key is the requestor, i.e. the container ID
+ private LinkedHashMap<String, List<FpgaDevice>> usedFpgaByRequestor = new LinkedHashMap<>();
+
+ private Context nmContext;
+
+ @VisibleForTesting
+ public HashMap<String, List<FpgaDevice>> getAvailableFpga() {
+ return availableFpga;
+ }
+
+ @VisibleForTesting
+ public List<FpgaDevice> getAllowedFpga() {
+ return allowedFpgas;
+ }
+
+ public FpgaResourceAllocator(Context ctx) {
+ this.nmContext = ctx;
+ }
+
+ @VisibleForTesting
+ public int getAvailableFpgaCount() {
+ int count = 0;
+ for (List<FpgaDevice> l : availableFpga.values()) {
+ count += l.size();
+ }
+ return count;
+ }
+
+ @VisibleForTesting
+ public HashMap<String, List<FpgaDevice>> getUsedFpga() {
+ return usedFpgaByRequestor;
+ }
+
+ @VisibleForTesting
+ public int getUsedFpgaCount() {
+ int count = 0;
+ for (List<FpgaDevice> l : usedFpgaByRequestor.values()) {
+ count += l.size();
+ }
+ return count;
+ }
+
+ public static class FpgaAllocation {
+
+ private List<FpgaDevice> allowed = Collections.emptyList();
+
+ private List<FpgaDevice> denied = Collections.emptyList();
+
+ FpgaAllocation(List<FpgaDevice> allowed, List<FpgaDevice> denied) {
+ if (allowed != null) {
+ this.allowed = ImmutableList.copyOf(allowed);
+ }
+ if (denied != null) {
+ this.denied = ImmutableList.copyOf(denied);
+ }
+ }
+
+ public List<FpgaDevice> getAllowed() {
+ return allowed;
+ }
+
+ public List<FpgaDevice> getDenied() {
+ return denied;
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder();
+ sb.append("\nFpgaAllocation\n\tAllowed:\n");
+ for (FpgaDevice device : allowed) {
+ sb.append("\t\t");
+ sb.append(device + "\n");
+ }
+ sb.append("\tDenied\n");
+ for (FpgaDevice device : denied) {
+ sb.append("\t\t");
+ sb.append(device + "\n");
+ }
+ return sb.toString();
+ }
+ }
+
+ public static class FpgaDevice implements Comparable<FpgaDevice>, Serializable {
+
+ private static final long serialVersionUID = 1L;
+
+ private String type;
+ private Integer major;
+ private Integer minor;
+ // IP file identifier, e.g. matrix multiplication
+ private String IPID;
+ // the device name under /dev
+ private String devName;
+ // the alias device name. Intel uses acl numbers acl0 to acl31
+ private String aliasDevName;
+ // lspci output's bus number: 02:00.00 (bus:slot.func)
+ private String busNum;
+ private String temperature;
+ private String cardPowerUsage;
+
+ public String getType() {
+ return type;
+ }
+
+ public Integer getMajor() {
+ return major;
+ }
+
+ public Integer getMinor() {
+ return minor;
+ }
+
+ public String getIPID() {
+ return IPID;
+ }
+
+ public void setIPID(String IPID) {
+ this.IPID = IPID;
+ }
+
+ public String getDevName() {
+ return devName;
+ }
+
+ public void setDevName(String devName) {
+ this.devName = devName;
+ }
+
+ public String getAliasDevName() {
+ return aliasDevName;
+ }
+
+ public void setAliasDevName(String aliasDevName) {
+ this.aliasDevName = aliasDevName;
+ }
+
+ public String getBusNum() {
+ return busNum;
+ }
+
+ public void setBusNum(String busNum) {
+ this.busNum = busNum;
+ }
+
+ public String getTemperature() {
+ return temperature;
+ }
+
+ public String getCardPowerUsage() {
+ return cardPowerUsage;
+ }
+
+ public FpgaDevice(String type, Integer major, Integer minor, String IPID) {
+ this.type = type;
+ this.major = major;
+ this.minor = minor;
+ this.IPID = IPID;
+ }
+
+ public FpgaDevice(String type, Integer major,
+ Integer minor, String IPID, String devName,
+ String aliasDevName, String busNum, String temperature, String cardPowerUsage) {
+ this.type = type;
+ this.major = major;
+ this.minor = minor;
+ this.IPID = IPID;
+ this.devName = devName;
+ this.aliasDevName = aliasDevName;
+ this.busNum = busNum;
+ this.temperature = temperature;
+ this.cardPowerUsage = cardPowerUsage;
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ if (this == obj) {
+ return true;
+ }
+ if (obj == null) {
+ return false;
+ }
+ if (!(obj instanceof FpgaDevice)) {
+ return false;
+ }
+ FpgaDevice other = (FpgaDevice) obj;
+ if (other.getType().equals(this.type) &&
+ other.getMajor().equals(this.major) &&
+ other.getMinor().equals(this.minor)) {
+ return true;
+ }
+ return false;
+ }
+
+ @Override
+ public int hashCode() {
+ final int prime = 31;
+ int result = 1;
+ result = prime * result + ((type == null) ? 0 : type.hashCode());
+ result = prime * result + ((major == null) ? 0 : major.hashCode());
+ result = prime * result + ((minor == null) ? 0 : minor.hashCode());
+ return result;
+ }
+
+ @Override
+ public int compareTo(FpgaDevice o) {
+ // keep the ordering consistent with equals(): type, then major, then minor
+ int c = this.type.compareTo(o.type);
+ if (c == 0) {
+ c = this.major.compareTo(o.major);
+ }
+ if (c == 0) {
+ c = this.minor.compareTo(o.minor);
+ }
+ return c;
+ }
+
+ @Override
+ public String toString() {
+ return "FPGA Device:(Type: " + this.type + ", Major: " +
+ this.major + ", Minor: " + this.minor + ", IPID: " + this.IPID + ")";
+ }
+ }
+
+ public synchronized void addFpga(String type, List<FpgaDevice> list) {
+ availableFpga.putIfAbsent(type, new LinkedList<>());
+ for (FpgaDevice device : list) {
+ if (!allowedFpgas.contains(device)) {
+ allowedFpgas.add(device);
+ availableFpga.get(type).add(device);
+ }
+ }
+ LOG.info("Add a list of FPGA Devices: " + list);
+ }
+
+ public synchronized void updateFpga(String requestor,
+ FpgaDevice device, String newIPID) {
+ List<FpgaDevice> usedFpgas = usedFpgaByRequestor.get(requestor);
+ int index = findMatchedFpga(usedFpgas, device);
+ if (-1 != index) {
+ usedFpgas.get(index).setIPID(newIPID);
+ LOG.info("Updated IPID to " + newIPID +
+ " for this allocated device: " + device);
+ } else {
+ LOG.warn("Failed to update FPGA IPID: no record for this " +
+ "allocated device: " + device);
+ }
+ }
+
+ private synchronized int findMatchedFpga(List<FpgaDevice> devices, FpgaDevice item) {
+ int i = 0;
+ for (; i < devices.size(); i++) {
+ if (devices.get(i) == item) {
+ return i;
+ }
+ }
+ return -1;
+ }
+
+ /**
+ * Assign an {@link FpgaAllocation}, preferring devices with the requested IPID; any available device fills the remainder
+ * @param type vendor plugin supported FPGA device type
+ * @param count requested FPGA slot count
+ * @param container the requesting container
+ * @param IPIDPreference allocate slots with this IPID first
+ * @return an instance holding the lists of allowed and denied {@link FpgaDevice}s
+ * @throws ResourceHandlerException when allocation fails or the state store cannot be written
+ * */
+ public synchronized FpgaAllocation assignFpga(String type, long count,
+ Container container, String IPIDPreference) throws ResourceHandlerException {
+ List<FpgaDevice> currentAvailableFpga = availableFpga.get(type);
+ String requestor = container.getContainerId().toString();
+ if (null == currentAvailableFpga) {
+ throw new ResourceHandlerException("No such type of FPGA resource available: " + type);
+ }
+ if (count < 0 || count > currentAvailableFpga.size()) {
+ throw new ResourceHandlerException("Invalid or unsatisfiable FPGA request, requested:" +
+ count + ", available:" + getAvailableFpgaCount());
+ }
+ if (count > 0) {
+ // Allocate devices with matching IP first, then any device is ok
+ List<FpgaDevice> assignedFpgas = new LinkedList<>();
+ int matchIPCount = 0;
+ // iterate with an Iterator so removal doesn't skip the next element,
+ // and stop once the requested count of matching devices is reached
+ Iterator<FpgaDevice> deviceIterator = currentAvailableFpga.iterator();
+ while (deviceIterator.hasNext() && matchIPCount < count) {
+ FpgaDevice device = deviceIterator.next();
+ if (null != device.getIPID() &&
+ device.getIPID().equalsIgnoreCase(IPIDPreference)) {
+ assignedFpgas.add(device);
+ deviceIterator.remove();
+ matchIPCount++;
+ }
+ }
+ int remaining = (int) count - matchIPCount;
+ while (remaining > 0) {
+ assignedFpgas.add(currentAvailableFpga.remove(0));
+ remaining--;
+ }
+
+ // Record in state store if we allocated anything
+ if (!assignedFpgas.isEmpty()) {
+ try {
+ nmContext.getNMStateStore().storeAssignedResources(container,
+ FPGA_URI, new LinkedList<>(assignedFpgas));
+ } catch (IOException e) {
+ // failed, give the allocation back
+ currentAvailableFpga.addAll(assignedFpgas);
+ throw new ResourceHandlerException(e);
+ }
+
+ // update state store success, update internal used FPGAs
+ usedFpgaByRequestor.putIfAbsent(requestor, new LinkedList<>());
+ usedFpgaByRequestor.get(requestor).addAll(assignedFpgas);
+ }
+
+ return new FpgaAllocation(assignedFpgas, currentAvailableFpga);
+ }
+ return new FpgaAllocation(null, allowedFpgas);
+ }
+
+ public synchronized void recoverAssignedFpgas(ContainerId containerId) throws ResourceHandlerException {
+ Container c = nmContext.getContainers().get(containerId);
+ if (null == c) {
+ throw new ResourceHandlerException(
+ "This shouldn't happen, cannot find container with id="
+ + containerId);
+ }
+
+ for (Serializable fpgaDevice :
+ c.getResourceMappings().getAssignedResources(FPGA_URI)) {
+ if (!(fpgaDevice instanceof FpgaDevice)) {
+ throw new ResourceHandlerException(
+ "Trying to recover allocated FPGA devices, however it"
+ + " is not FpgaDevice type, this shouldn't happen");
+ }
+
+ // Make sure it is in allowed FPGA device.
+ if (!allowedFpgas.contains(fpgaDevice)) {
+ throw new ResourceHandlerException("Try to recover FpgaDevice = " + fpgaDevice
+ + " however it is not in allowed device list:" + StringUtils
+ .join(";", allowedFpgas));
+ }
+
+ // Make sure it is not occupied by anybody else
+ Iterator<Map.Entry<String, List<FpgaDevice>>> iterator =
+ getUsedFpga().entrySet().iterator();
+ while (iterator.hasNext()) {
+ if (iterator.next().getValue().contains(fpgaDevice)) {
+ throw new ResourceHandlerException("Try to recover FpgaDevice = " + fpgaDevice
+ + " however it is already assigned to others");
+ }
+ }
+ getUsedFpga().putIfAbsent(containerId.toString(), new LinkedList<>());
+ getUsedFpga().get(containerId.toString()).add((FpgaDevice) fpgaDevice);
+ // remove them from available list
+ getAvailableFpga().get(((FpgaDevice) fpgaDevice).getType()).remove(fpgaDevice);
+ }
+ }
+
+ public synchronized void cleanupAssignFpgas(String requestor) {
+ List<FpgaDevice> usedFpgas = usedFpgaByRequestor.get(requestor);
+ if (usedFpgas != null) {
+ for (FpgaDevice device : usedFpgas) {
+ // Add back to availableFpga
+ availableFpga.get(device.getType()).add(device);
+ }
+ usedFpgaByRequestor.remove(requestor);
+ }
+ }
+
+}
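The IPID-preferred allocation above reduces to: take devices with a matching IP identifier first (via an iterator, so removal doesn't skip elements), then fill the remainder from the head of the free list. A minimal standalone sketch — `Device`, `assign` and the sample values are hypothetical stand-ins, not the YARN classes:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class IpidPreferredAllocation {

    // Hypothetical stand-in for FpgaResourceAllocator.FpgaDevice.
    public static final class Device {
        public final int minor;
        public final String ipid; // null when no IP has been flashed yet

        public Device(int minor, String ipid) {
            this.minor = minor;
            this.ipid = ipid;
        }
    }

    // Pick "count" devices from the free list, preferring those whose IPID
    // matches the request; assigned devices are removed from the free list.
    public static List<Device> assign(List<Device> available, int count,
                                      String ipidPreference) {
        List<Device> assigned = new ArrayList<>();
        Iterator<Device> it = available.iterator();
        while (it.hasNext() && assigned.size() < count) {
            Device d = it.next();
            if (d.ipid != null && d.ipid.equalsIgnoreCase(ipidPreference)) {
                assigned.add(d);
                it.remove(); // Iterator.remove avoids skipping the next element
            }
        }
        // Fill the remainder with any free device, as assignFpga does.
        while (assigned.size() < count && !available.isEmpty()) {
            assigned.add(available.remove(0));
        }
        return assigned;
    }

    public static void main(String[] args) {
        List<Device> free = new ArrayList<>(List.of(
            new Device(0, null), new Device(1, "matrix_mul"), new Device(2, "gzip")));
        List<Device> got = assign(free, 2, "matrix_mul");
        System.out.println(got.get(0).minor + "," + got.get(1).minor); // prints "1,0"
    }
}
```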
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7225ec0c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/FpgaResourceHandlerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/FpgaResourceHandlerImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/FpgaResourceHandlerImpl.java
new file mode 100644
index 0000000..bf3d9b0
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/FpgaResourceHandlerImpl.java
@@ -0,0 +1,220 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.fpga;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.server.nodemanager.Context;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperation;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsHandler;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandler;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerException;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.AbstractFpgaVendorPlugin;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.FpgaDiscoverer;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.hadoop.yarn.api.records.ResourceInformation.FPGA_URI;
+
+@InterfaceStability.Unstable
+@InterfaceAudience.Private
+public class FpgaResourceHandlerImpl implements ResourceHandler {
+
+ static final Log LOG = LogFactory.getLog(FpgaResourceHandlerImpl.class);
+
+ private final String REQUEST_FPGA_IP_ID_KEY = "REQUESTED_FPGA_IP_ID";
+
+ private AbstractFpgaVendorPlugin vendorPlugin;
+
+ private FpgaResourceAllocator allocator;
+
+ private CGroupsHandler cGroupsHandler;
+
+ public static final String EXCLUDED_FPGAS_CLI_OPTION = "--excluded_fpgas";
+ public static final String CONTAINER_ID_CLI_OPTION = "--container_id";
+ private PrivilegedOperationExecutor privilegedOperationExecutor;
+
+ @VisibleForTesting
+ public FpgaResourceHandlerImpl(Context nmContext,
+ CGroupsHandler cGroupsHandler,
+ PrivilegedOperationExecutor privilegedOperationExecutor,
+ AbstractFpgaVendorPlugin plugin) {
+ this.allocator = new FpgaResourceAllocator(nmContext);
+ this.vendorPlugin = plugin;
+ FpgaDiscoverer.getInstance().setResourceHanderPlugin(vendorPlugin);
+ this.cGroupsHandler = cGroupsHandler;
+ this.privilegedOperationExecutor = privilegedOperationExecutor;
+ }
+
+ @VisibleForTesting
+ public FpgaResourceAllocator getFpgaAllocator() {
+ return allocator;
+ }
+
+ public String getRequestedIPID(Container container) {
+ String r = container.getLaunchContext().getEnvironment().
+ get(REQUEST_FPGA_IP_ID_KEY);
+ return r == null ? "" : r;
+ }
+
+ @Override
+ public List<PrivilegedOperation> bootstrap(Configuration configuration) throws ResourceHandlerException {
+ // The plugin should be initialized by FpgaDiscoverer already
+ if (!vendorPlugin.initPlugin(configuration)) {
+ throw new ResourceHandlerException("FPGA plugin initialization failed", null);
+ }
+ LOG.info("FPGA Plugin bootstrap success.");
+ // Get available device minor numbers from the toolchain or static configuration
+ List<FpgaResourceAllocator.FpgaDevice> fpgaDeviceList = FpgaDiscoverer.getInstance().discover();
+ allocator.addFpga(vendorPlugin.getFpgaType(), fpgaDeviceList);
+ this.cGroupsHandler.initializeCGroupController(CGroupsHandler.CGroupController.DEVICES);
+ return null;
+ }
+
+ @Override
+ public List<PrivilegedOperation> preStart(Container container) throws ResourceHandlerException {
+ // 1. Get requested FPGA type and count, choose corresponding FPGA plugin(s)
+ // 2. Use allocator.assignFpga(type, count) to get FPGAAllocation
+ // 3. If required, download to ensure IP file exists and configure IP file for all devices
+ List<PrivilegedOperation> ret = new ArrayList<>();
+ String containerIdStr = container.getContainerId().toString();
+ Resource requestedResource = container.getResource();
+
+ // Create device cgroups for the container
+ cGroupsHandler.createCGroup(CGroupsHandler.CGroupController.DEVICES,
+ containerIdStr);
+
+ long deviceCount = requestedResource.getResourceValue(FPGA_URI);
+ LOG.info(containerIdStr + " requested " + deviceCount + " Intel FPGA(s)");
+ String ipFilePath = null;
+ try {
+
+ // allocate even when 0 FPGAs are requested, because we need to deny all device numbers for this container
+ FpgaResourceAllocator.FpgaAllocation allocation = allocator.assignFpga(
+ vendorPlugin.getFpgaType(), deviceCount,
+ container, getRequestedIPID(container));
+ LOG.info("FpgaAllocation:" + allocation);
+
+ PrivilegedOperation privilegedOperation = new PrivilegedOperation(PrivilegedOperation.OperationType.FPGA,
+ Arrays.asList(CONTAINER_ID_CLI_OPTION, containerIdStr));
+ if (!allocation.getDenied().isEmpty()) {
+ List<Integer> denied = new ArrayList<>();
+ allocation.getDenied().forEach(device -> denied.add(device.getMinor()));
+ privilegedOperation.appendArgs(Arrays.asList(EXCLUDED_FPGAS_CLI_OPTION,
+ StringUtils.join(",", denied)));
+ }
+ privilegedOperationExecutor.executePrivilegedOperation(privilegedOperation, true);
+
+ if (deviceCount > 0) {
+ /**
+ * We only support flashing one IP for all devices for now. If the user doesn't
+ * set this environment variable, we assume the application can find the IP
+ * file by itself.
+ * Note that downloading and reprogramming the IP in advance in YARN is not
+ * strictly necessary, because the OpenCL application may find the IP file and
+ * reprogram the device on the fly. But doing it in YARN gives containers the
+ * quickest reprogramming path.
+ *
+ * For instance, REQUESTED_FPGA_IP_ID = "matrix_mul" will have all devices
+ * programmed with the matrix multiplication IP.
+ *
+ * In the future, we may support a "matrix_mul:1,gzip:2" format to support
+ * different IPs for different devices.
+ * */
+ ipFilePath = vendorPlugin.downloadIP(getRequestedIPID(container), container.getWorkDir(),
+ container.getResourceSet().getLocalizedResources());
+ if (ipFilePath.isEmpty()) {
+ LOG.warn("FPGA plugin failed to download IP, continuing anyway; please check the value of " +
+ "environment variable " + REQUEST_FPGA_IP_ID_KEY + " if you want YARN to handle it");
+ } else {
+ LOG.info("IP file path:" + ipFilePath);
+ List<FpgaResourceAllocator.FpgaDevice> allowed = allocation.getAllowed();
+ String majorMinorNumber;
+ for (int i = 0; i < allowed.size(); i++) {
+ majorMinorNumber = allowed.get(i).getMajor() + ":" + allowed.get(i).getMinor();
+ String currentIPID = allowed.get(i).getIPID();
+ if (null != currentIPID &&
+ currentIPID.equalsIgnoreCase(getRequestedIPID(container))) {
+ LOG.info("IP already in device \"" + allowed.get(i).getAliasDevName() + "," +
+ majorMinorNumber + "\", skip reprogramming");
+ continue;
+ }
+ if (vendorPlugin.configureIP(ipFilePath, majorMinorNumber)) {
+ // tell the allocator that we updated the IP of a device
+ allocator.updateFpga(containerIdStr, allowed.get(i),
+ getRequestedIPID(container));
+ //TODO: update the node constraint label
+ }
+ }
+ }
+ }
+ } catch (ResourceHandlerException re) {
+ allocator.cleanupAssignFpgas(containerIdStr);
+ cGroupsHandler.deleteCGroup(CGroupsHandler.CGroupController.DEVICES,
+ containerIdStr);
+ throw re;
+ } catch (PrivilegedOperationException e) {
+ allocator.cleanupAssignFpgas(containerIdStr);
+ cGroupsHandler.deleteCGroup(CGroupsHandler.CGroupController.DEVICES, containerIdStr);
+ LOG.warn("Could not update cgroup for container", e);
+ throw new ResourceHandlerException(e);
+ }
+ //isolation operation
+ ret.add(new PrivilegedOperation(
+ PrivilegedOperation.OperationType.ADD_PID_TO_CGROUP,
+ PrivilegedOperation.CGROUP_ARG_PREFIX
+ + cGroupsHandler.getPathForCGroupTasks(
+ CGroupsHandler.CGroupController.DEVICES, containerIdStr)));
+ return ret;
+ }
+
+ @Override
+ public List<PrivilegedOperation> reacquireContainer(ContainerId containerId) throws ResourceHandlerException {
+ allocator.recoverAssignedFpgas(containerId);
+ return null;
+ }
+
+ @Override
+ public List<PrivilegedOperation> postComplete(ContainerId containerId) throws ResourceHandlerException {
+ allocator.cleanupAssignFpgas(containerId.toString());
+ cGroupsHandler.deleteCGroup(CGroupsHandler.CGroupController.DEVICES,
+ containerId.toString());
+ return null;
+ }
+
+ @Override
+ public List<PrivilegedOperation> teardown() throws ResourceHandlerException {
+ return null;
+ }
+}
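The privileged-operation arguments built in preStart amount to the container id plus a comma-joined deny list of minor numbers. A sketch of just that argument assembly — `FpgaCliArgs` and `buildArgs` are illustrative names; the real code goes through `PrivilegedOperation`:

```java
import java.util.List;
import java.util.stream.Collectors;

public class FpgaCliArgs {

    public static final String EXCLUDED_FPGAS_CLI_OPTION = "--excluded_fpgas";
    public static final String CONTAINER_ID_CLI_OPTION = "--container_id";

    // Always pass the container id; append the deny list only when some
    // devices must be hidden from the container (mirrors preStart above).
    public static List<String> buildArgs(String containerId, List<Integer> deniedMinors) {
        if (deniedMinors.isEmpty()) {
            return List.of(CONTAINER_ID_CLI_OPTION, containerId);
        }
        String denied = deniedMinors.stream()
            .map(String::valueOf)
            .collect(Collectors.joining(","));
        return List.of(CONTAINER_ID_CLI_OPTION, containerId,
            EXCLUDED_FPGAS_CLI_OPTION, denied);
    }

    public static void main(String[] args) {
        System.out.println(buildArgs("container_1", List.of(1, 2, 3)));
        // prints [--container_id, container_1, --excluded_fpgas, 1,2,3]
    }
}
```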
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7225ec0c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/ResourcePluginManager.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/ResourcePluginManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/ResourcePluginManager.java
index 73d6038..12d679b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/ResourcePluginManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/ResourcePluginManager.java
@@ -24,6 +24,7 @@ import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.server.nodemanager.Context;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.FpgaResourcePlugin;
import org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuResourcePlugin;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -33,6 +34,7 @@ import java.util.HashMap;
import java.util.Map;
import java.util.Set;
+import static org.apache.hadoop.yarn.api.records.ResourceInformation.FPGA_URI;
import static org.apache.hadoop.yarn.api.records.ResourceInformation.GPU_URI;
/**
@@ -42,7 +44,7 @@ public class ResourcePluginManager {
private static final Logger LOG =
LoggerFactory.getLogger(ResourcePluginManager.class);
private static final Set<String> SUPPORTED_RESOURCE_PLUGINS = ImmutableSet.of(
- GPU_URI);
+ GPU_URI, FPGA_URI);
private Map<String, ResourcePlugin> configuredPlugins = Collections.EMPTY_MAP;
@@ -77,6 +79,10 @@ public class ResourcePluginManager {
plugin = new GpuResourcePlugin();
}
+ if (resourceName.equals(FPGA_URI)) {
+ plugin = new FpgaResourcePlugin();
+ }
+
if (plugin == null) {
throw new YarnException(
"This shouldn't happen, plugin=" + resourceName
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7225ec0c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java
new file mode 100644
index 0000000..60ea57c
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/AbstractFpgaVendorPlugin.java
@@ -0,0 +1,90 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.fpga.FpgaResourceAllocator;
+
+import java.util.List;
+import java.util.Map;
+
+
+/**
+ * FPGA plugin interface for vendor to implement. Used by {@link FpgaDiscoverer} and
+ * {@link org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.fpga.FpgaResourceHandlerImpl}
+ * to discover devices/download IP/configure IP
+ * */
+
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+public interface AbstractFpgaVendorPlugin extends Configurable {
+
+ /**
+ * Check vendor's toolchain and required environment
+ * */
+ boolean initPlugin(Configuration conf);
+
+ /**
+ * Diagnose the devices using the vendor toolchain; device information need not be parsed
+ * */
+ boolean diagnose(int timeout);
+
+ /**
+ * Discover the vendor's FPGA devices with execution time constraint
+ * @param timeout The vendor plugin should return results within this time
+ * @return The result will be added to FPGAResourceAllocator for later scheduling
+ * */
+ List<FpgaResourceAllocator.FpgaDevice> discover(int timeout);
+
+ /**
+ * All vendor plugins share one {@link org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.fpga.FpgaResourceAllocator},
+ * which distinguishes FPGA devices by type, so each vendor plugin must report its type.
+ * */
+ String getFpgaType();
+
+ /**
+ * The vendor plugin download required IP files to a required directory.
+ * It should check if the IP file has already been downloaded.
+ * @param id The identifier for the IP file. Comes from the application, e.g. matrix_multi_v1
+ * @param dstDir The plugin should download IP file to this directory
+ * @param localizedResources The container localized resource can be searched for IP file. Key is
+ * localized file path and value is soft link names
+ * @return The absolute path string of IP file
+ * */
+ String downloadIP(String id, String dstDir, Map<Path, List<String>> localizedResources);
+
+ /**
+ * The vendor plugin configure an IP file to a device
+ * @param ipPath The absolute path of the IP file
+ * @param majorMinorNumber The device in "major:minor" format
+ * @return configure device ok or not
+ * */
+ boolean configureIP(String ipPath, String majorMinorNumber);
+
+ @Override
+ void setConf(Configuration conf);
+
+ @Override
+ Configuration getConf();
+}
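The interface above boils down to a handful of toolchain-facing operations. A throwaway in-memory implementation against a simplified copy of the contract can clarify what each method is expected to honour — everything here (`VendorPlugin`, `InMemoryPlugin`, the device values) is a hypothetical test double, not IntelFpgaOpenclPlugin:

```java
import java.util.List;

public class DummyFpgaPlugin {

    // Simplified stand-in for FpgaResourceAllocator.FpgaDevice.
    public static final class Device {
        public final String type;
        public final int major;
        public final int minor;

        public Device(String type, int major, int minor) {
            this.type = type;
            this.major = major;
            this.minor = minor;
        }
    }

    // Trimmed-down copy of the vendor plugin contract.
    public interface VendorPlugin {
        boolean diagnose(int timeoutMs);
        List<Device> discover(int timeoutMs);
        String getFpgaType();
        boolean configureIP(String ipPath, String majorMinorNumber);
    }

    // In-memory double reporting two fixed devices; a real plugin would shell
    // out to the vendor toolchain within the given timeout.
    public static class InMemoryPlugin implements VendorPlugin {
        public boolean diagnose(int timeoutMs) { return true; }
        public List<Device> discover(int timeoutMs) {
            return List.of(new Device("IntelOpenCL", 246, 0),
                           new Device("IntelOpenCL", 246, 1));
        }
        public String getFpgaType() { return "IntelOpenCL"; }
        public boolean configureIP(String ipPath, String majorMinorNumber) { return true; }
    }

    public static void main(String[] args) {
        VendorPlugin p = new InMemoryPlugin();
        System.out.println(p.getFpgaType() + ": " + p.discover(10_000).size() + " device(s)");
        // prints IntelOpenCL: 2 device(s)
    }
}
```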
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7225ec0c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaDiscoverer.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaDiscoverer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaDiscoverer.java
new file mode 100644
index 0000000..8d32a18
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaDiscoverer.java
@@ -0,0 +1,139 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandlerException;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.fpga.FpgaResourceAllocator;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Iterator;
+import java.util.List;
+
+public class FpgaDiscoverer {
+
+ public static final Logger LOG = LoggerFactory.getLogger(
+ FpgaDiscoverer.class);
+
+ private static FpgaDiscoverer instance;
+
+ private Configuration conf = null;
+
+ private AbstractFpgaVendorPlugin plugin = null;
+
+ private List<FpgaResourceAllocator.FpgaDevice> currentFpgaInfo = null;
+
+ // shell command timeout
+ private static final int MAX_EXEC_TIMEOUT_MS = 10 * 1000;
+
+ static {
+ instance = new FpgaDiscoverer();
+ }
+
+ public static FpgaDiscoverer getInstance() {
+ return instance;
+ }
+
+ @VisibleForTesting
+ public synchronized static FpgaDiscoverer setInstance(FpgaDiscoverer newInstance) {
+ instance = newInstance;
+ return instance;
+ }
+
+ @VisibleForTesting
+ public synchronized void setConf(Configuration conf) {
+ this.conf = conf;
+ }
+
+ public List<FpgaResourceAllocator.FpgaDevice> getCurrentFpgaInfo() {
+ return currentFpgaInfo;
+ }
+
+ public synchronized void setResourceHanderPlugin(AbstractFpgaVendorPlugin plugin) {
+ this.plugin = plugin;
+ }
+
+ public synchronized boolean diagnose() {
+ return this.plugin.diagnose(MAX_EXEC_TIMEOUT_MS);
+ }
+
+ public synchronized void initialize(Configuration conf) throws YarnException {
+ this.conf = conf;
+ this.plugin.initPlugin(conf);
+ // Try to diagnose FPGA
+ LOG.info("Trying to diagnose FPGA information ...");
+ if (!diagnose()) {
+ LOG.warn("Failed to pass FPGA devices diagnose");
+ }
+ }
+
+ /**
+ * Get available device minor numbers from the toolchain or static configuration.
+ * */
+ public synchronized List<FpgaResourceAllocator.FpgaDevice> discover() throws ResourceHandlerException {
+ List<FpgaResourceAllocator.FpgaDevice> list;
+ String allowed = this.conf.get(YarnConfiguration.NM_FPGA_ALLOWED_DEVICES);
+ // whether static or auto discovery, we always need the vendor plugin to
+ // discover. For instance, IntelFpgaOpenclPlugin needs to set up a mapping
+ // of <major:minor> to <aliasDevName>
+ list = this.plugin.discover(MAX_EXEC_TIMEOUT_MS);
+ if (0 == list.size()) {
+ throw new ResourceHandlerException("No FPGA devices detected!");
+ }
+ currentFpgaInfo = list;
+ if (allowed.equalsIgnoreCase(
+ YarnConfiguration.AUTOMATICALLY_DISCOVER_GPU_DEVICES)) {
+ return list;
+ } else if (allowed.matches("(\\d+,)*\\d+")) {
+ String[] minors = allowed.split(",");
+ Iterator<FpgaResourceAllocator.FpgaDevice> iterator = list.iterator();
+ // remove the non-configured minor numbers
+ FpgaResourceAllocator.FpgaDevice t;
+ while (iterator.hasNext()) {
+ boolean valid = false;
+ t = iterator.next();
+ for (String minorNumber : minors) {
+ if (t.getMinor().toString().equals(minorNumber)) {
+ valid = true;
+ break;
+ }
+ }
+ if (!valid) {
+ iterator.remove();
+ }
+ }
+ // warn if the configured minors don't all match discovered devices
+ if (list.size() != minors.length) {
+ LOG.warn("Continuing despite mistakes in the user's configuration of " +
+ YarnConfiguration.NM_FPGA_ALLOWED_DEVICES +
+ "; configured: " + allowed + ", actually usable: " + list.toString());
+ }
+ } else {
+ throw new ResourceHandlerException("Invalid value configured for " +
+ YarnConfiguration.NM_FPGA_ALLOWED_DEVICES + ":\"" + allowed + "\"");
+ }
+ return list;
+ }
+
+}
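The allowed-devices filtering in discover() — "auto" keeps everything, a comma-separated minor list keeps only those minors, anything else is rejected — can be sketched standalone. `AllowedMinorFilter` and `filter` are illustrative names; unlike the original single-digit regex, this sketch also accepts multi-digit minors:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class AllowedMinorFilter {

    // "auto" keeps every discovered device; a "0,1,2"-style list keeps only
    // the listed minors; anything else fails validation, mirroring the
    // ResourceHandlerException thrown by FpgaDiscoverer.
    public static List<Integer> filter(List<Integer> discoveredMinors, String allowed) {
        if (allowed.equalsIgnoreCase("auto")) {
            return new ArrayList<>(discoveredMinors);
        }
        if (!allowed.matches("(\\d+,)*\\d+")) {
            throw new IllegalArgumentException(
                "Invalid allowed-devices value: \"" + allowed + "\"");
        }
        Set<Integer> wanted = Arrays.stream(allowed.split(","))
            .map(Integer::parseInt)
            .collect(Collectors.toSet());
        return discoveredMinors.stream()
            .filter(wanted::contains)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(filter(List.of(0, 1, 2, 3), "1,3")); // prints [1, 3]
    }
}
```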
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7225ec0c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaNodeResourceUpdateHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaNodeResourceUpdateHandler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaNodeResourceUpdateHandler.java
new file mode 100644
index 0000000..7511d8f
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaNodeResourceUpdateHandler.java
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga;
+
+
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.api.records.ResourceInformation;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.fpga.FpgaResourceAllocator;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.NodeResourceUpdaterPlugin;
+import org.apache.hadoop.yarn.util.resource.ResourceUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.hadoop.yarn.api.records.ResourceInformation.FPGA_URI;
+
+public class FpgaNodeResourceUpdateHandler extends NodeResourceUpdaterPlugin {
+ private static final Logger LOG = LoggerFactory.getLogger(
+ FpgaNodeResourceUpdateHandler.class);
+
+ @Override
+ public void updateConfiguredResource(Resource res) throws YarnException {
+ LOG.info("Initializing configured FPGA resources for the NodeManager.");
+ List<FpgaResourceAllocator.FpgaDevice> list = FpgaDiscoverer.getInstance().getCurrentFpgaInfo();
+ List<Integer> minors = new LinkedList<>();
+ for (FpgaResourceAllocator.FpgaDevice device : list) {
+ minors.add(device.getMinor());
+ }
+ if (minors.isEmpty()) {
+ LOG.info("Didn't find any usable FPGAs on the NodeManager.");
+ return;
+ }
+ long count = minors.size();
+
+ Map<String, ResourceInformation> configuredResourceTypes =
+ ResourceUtils.getResourceTypes();
+ if (!configuredResourceTypes.containsKey(FPGA_URI)) {
+ throw new YarnException("Wrong configuration: found " + count +
+ " usable FPGAs, but the " + FPGA_URI
+ + " resource type is not configured in"
+ + " resource-types.xml; please configure it to enable the FPGA feature, or"
+ + " remove " + FPGA_URI + " from "
+ + YarnConfiguration.NM_RESOURCE_PLUGINS);
+ }
+
+ res.setResourceValue(FPGA_URI, count);
+ }
+}
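The YarnException above fires unless yarn.io/fpga is declared as a resource type. As a hedged illustration (property name per YARN's resource-types mechanism; this fragment is not part of the patch), a minimal resource-types.xml entry could look like:

```xml
<configuration>
  <!-- Register the FPGA resource type so the NodeManager may report it -->
  <property>
    <name>yarn.resource-types</name>
    <value>yarn.io/fpga</value>
  </property>
</configuration>
```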
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7225ec0c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaResourcePlugin.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaResourcePlugin.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaResourcePlugin.java
new file mode 100644
index 0000000..44d093e
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/FpgaResourcePlugin.java
@@ -0,0 +1,105 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
+import org.apache.hadoop.yarn.server.nodemanager.Context;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsHandler;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.ResourceHandler;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.fpga.FpgaResourceHandlerImpl;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.DockerCommandPlugin;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.NodeResourceUpdaterPlugin;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.ResourcePlugin;
+import org.apache.hadoop.yarn.server.nodemanager.webapp.dao.NMResourceInfo;
+
+public class FpgaResourcePlugin implements ResourcePlugin {
+ private static final Log LOG = LogFactory.getLog(FpgaResourcePlugin.class);
+
+ private ResourceHandler fpgaResourceHandler = null;
+
+ private AbstractFpgaVendorPlugin vendorPlugin = null;
+ private FpgaNodeResourceUpdateHandler fpgaNodeResourceUpdateHandler = null;
+
+ private AbstractFpgaVendorPlugin createFpgaVendorPlugin(Configuration conf) {
+ String vendorPluginClass = conf.get(YarnConfiguration.NM_FPGA_VENDOR_PLUGIN,
+ YarnConfiguration.DEFAULT_NM_FPGA_VENDOR_PLUGIN);
+ LOG.info("Using FPGA vendor plugin: " + vendorPluginClass);
+ try {
+ Class<?> schedulerClazz = Class.forName(vendorPluginClass);
+ if (AbstractFpgaVendorPlugin.class.isAssignableFrom(schedulerClazz)) {
+ return (AbstractFpgaVendorPlugin) ReflectionUtils.newInstance(schedulerClazz,
+ conf);
+ } else {
+ throw new YarnRuntimeException("Class: " + vendorPluginClass
+ + " not instance of " + AbstractFpgaVendorPlugin.class.getCanonicalName());
+ }
+ } catch (ClassNotFoundException e) {
+ throw new YarnRuntimeException("Could not instantiate FPGA vendor plugin: "
+ + vendorPluginClass, e);
+ }
+ }
+
+ @Override
+ public void initialize(Context context) throws YarnException {
+ // Get vendor plugin from configuration
+ this.vendorPlugin = createFpgaVendorPlugin(context.getConf());
+ FpgaDiscoverer.getInstance().setResourceHanderPlugin(vendorPlugin);
+ FpgaDiscoverer.getInstance().initialize(context.getConf());
+ fpgaNodeResourceUpdateHandler = new FpgaNodeResourceUpdateHandler();
+ }
+
+ @Override
+ public ResourceHandler createResourceHandler(
+ Context nmContext, CGroupsHandler cGroupsHandler,
+ PrivilegedOperationExecutor privilegedOperationExecutor) {
+ if (fpgaResourceHandler == null) {
+ fpgaResourceHandler = new FpgaResourceHandlerImpl(nmContext,
+ cGroupsHandler, privilegedOperationExecutor, vendorPlugin);
+ }
+ return fpgaResourceHandler;
+ }
+
+ @Override
+ public NodeResourceUpdaterPlugin getNodeResourceHandlerInstance() {
+ return fpgaNodeResourceUpdateHandler;
+ }
+
+ @Override
+ public void cleanup() throws YarnException {
+
+ }
+
+ @Override
+ public DockerCommandPlugin getDockerCommandPluginInstance() {
+ return null;
+ }
+
+ @Override
+ public NMResourceInfo getNMResourceInfo() throws YarnException {
+ return null;
+ }
+}
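The reflective loading in createFpgaVendorPlugin() above follows a common pattern: resolve the class by name, verify it implements the expected interface, then instantiate it. A minimal standalone sketch (all names below are hypothetical, and plain no-arg instantiation stands in for ReflectionUtils.newInstance, which in the patch also injects the Configuration):

```java
// Illustrative sketch of reflective, type-checked plugin loading.
public class PluginLoader {
  public interface VendorPlugin { String name(); }

  public static class DemoPlugin implements VendorPlugin {
    public String name() { return "demo"; }
  }

  public static VendorPlugin load(String className) throws Exception {
    Class<?> clazz = Class.forName(className);
    // Reject classes that don't implement the expected interface,
    // mirroring the isAssignableFrom check in the patch.
    if (!VendorPlugin.class.isAssignableFrom(clazz)) {
      throw new IllegalArgumentException(className
          + " is not a " + VendorPlugin.class.getCanonicalName());
    }
    return (VendorPlugin) clazz.getDeclaredConstructor().newInstance();
  }

  public static void main(String[] args) throws Exception {
    System.out.println(load(DemoPlugin.class.getName()).name());
  }
}
```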
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7225ec0c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/IntelFpgaOpenclPlugin.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/IntelFpgaOpenclPlugin.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/IntelFpgaOpenclPlugin.java
new file mode 100644
index 0000000..f2e82b8
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/fpga/IntelFpgaOpenclPlugin.java
@@ -0,0 +1,396 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.util.Shell;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.fpga.FpgaResourceAllocator;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.*;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+/**
+ * Intel FPGA for OpenCL plugin.
+ * The key points are:
+ * 1. It uses Intel's toolchain "aocl" to discover devices and reprogram the IP to the device
+ * before container launch, to achieve the quickest reprogramming path
+ * 2. It avoids reprogramming by maintaining a mapping of device to FPGA IP ID
+ * 3. It assumes the IP file is distributed to the container directory
+ */
+public class IntelFpgaOpenclPlugin implements AbstractFpgaVendorPlugin {
+ public static final Logger LOG = LoggerFactory.getLogger(
+ IntelFpgaOpenclPlugin.class);
+
+ private boolean initialized = false;
+ private Configuration conf;
+ private InnerShellExecutor shell;
+
+ protected static final String DEFAULT_BINARY_NAME = "aocl";
+
+ protected static final String ALTERAOCLSDKROOT_NAME = "ALTERAOCLSDKROOT";
+
+ private String pathToExecutable = null;
+
+ // a mapping of major:minor number to acl0-31
+ private Map<String, String> aliasMap;
+
+ public IntelFpgaOpenclPlugin() {
+ this.shell = new InnerShellExecutor();
+ }
+
+ public String getDefaultBinaryName() {
+ return DEFAULT_BINARY_NAME;
+ }
+
+ public String getDefaultPathToExecutable() {
+ return System.getenv(ALTERAOCLSDKROOT_NAME);
+ }
+
+ public static String getDefaultPathEnvName() {
+ return ALTERAOCLSDKROOT_NAME;
+ }
+
+ @VisibleForTesting
+ public String getPathToExecutable() {
+ return pathToExecutable;
+ }
+
+ public void setPathToExecutable(String pathToExecutable) {
+ this.pathToExecutable = pathToExecutable;
+ }
+
+ @VisibleForTesting
+ public void setShell(InnerShellExecutor shell) {
+ this.shell = shell;
+ }
+
+ public Map<String, String> getAliasMap() {
+ return aliasMap;
+ }
+
+ /**
+ * Check the Intel FPGA for OpenCL toolchain
+ * */
+ @Override
+ public boolean initPlugin(Configuration conf) {
+ this.aliasMap = new HashMap<>();
+ if (this.initialized) {
+ return true;
+ }
+ // Find the proper toolchain, mainly aocl
+ String pluginDefaultBinaryName = getDefaultBinaryName();
+ String pathToExecutable = conf.get(YarnConfiguration.NM_FPGA_PATH_TO_EXEC,
+ "");
+ if (pathToExecutable.isEmpty()) {
+ pathToExecutable = pluginDefaultBinaryName;
+ }
+ // Validate file existence
+ File binaryPath = new File(pathToExecutable);
+ if (!binaryPath.exists()) {
+ // When the binary doesn't exist, fall back to the default path
+ LOG.warn("Failed to find FPGA discoverer executable configured in " +
+ YarnConfiguration.NM_FPGA_PATH_TO_EXEC +
+ ", please check! Trying the default path");
+ pathToExecutable = pluginDefaultBinaryName;
+ // Try to find in plugin's preferred path
+ String pluginDefaultPreferredPath = getDefaultPathToExecutable();
+ if (null == pluginDefaultPreferredPath) {
+ LOG.warn("Failed to find FPGA discoverer executable from the system environment " +
+ getDefaultPathEnvName() +
+ ", please check your environment!");
+ } else {
+ binaryPath = new File(pluginDefaultPreferredPath + "/bin", pluginDefaultBinaryName);
+ if (binaryPath.exists()) {
+ pathToExecutable = pluginDefaultPreferredPath;
+ } else {
+ pathToExecutable = pluginDefaultBinaryName;
+ LOG.warn("Failed to find FPGA discoverer executable in " +
+ pluginDefaultPreferredPath + ", file doesn't exist! Using default binary " + pathToExecutable);
+ }
+ }
+ }
+ setPathToExecutable(pathToExecutable);
+ if (!diagnose(10*1000)) {
+ LOG.warn("Intel FPGA for OpenCL diagnose failed!");
+ this.initialized = false;
+ } else {
+ this.initialized = true;
+ }
+ return this.initialized;
+ }
+
+ @Override
+ public List<FpgaResourceAllocator.FpgaDevice> discover(int timeout) {
+ List<FpgaResourceAllocator.FpgaDevice> list = new LinkedList<>();
+ String output;
+ output = getDiagnoseInfo(timeout);
+ if (null == output) {
+ return list;
+ }
+ parseDiagnoseInfo(output, list);
+ return list;
+ }
+
+ public static class InnerShellExecutor {
+
+ // ls /dev/<devName>
+ // return a string in format <major:minor>
+ public String getMajorAndMinorNumber(String devName) {
+ String output = null;
+ Shell.ShellCommandExecutor shexec = new Shell.ShellCommandExecutor(
+ new String[]{"stat", "-c", "%t:%T", "/dev/" + devName});
+ try {
+ LOG.debug("Get FPGA major-minor numbers from /dev/" + devName);
+ shexec.execute();
+ String[] strs = shexec.getOutput().trim().split(":");
+ LOG.debug("stat output:" + shexec.getOutput());
+ output = Integer.parseInt(strs[0], 16) + ":" + Integer.parseInt(strs[1], 16);
+ } catch (IOException e) {
+ String msg =
+ "Failed to get major-minor number from reading /dev/" + devName;
+ LOG.warn(msg);
+ LOG.debug("Command output:" + shexec.getOutput() + ", exit code:" +
+ shexec.getExitCode());
+ }
+ return output;
+ }
+
+ public String runDiagnose(String binary, int timeout) {
+ String output = null;
+ Shell.ShellCommandExecutor shexec = new Shell.ShellCommandExecutor(
+ new String[]{binary, "diagnose"});
+ try {
+ shexec.execute();
+ } catch (IOException e) {
+ // "aocl diagnose" exits with code 1 even on success;
+ // we ignore the failure because we only want the output
+ String msg =
+ "Failed to execute " + binary + " diagnose, exception message:" + e
+ .getMessage() + ", continue ...";
+ LOG.warn(msg);
+ LOG.debug(shexec.getOutput());
+ }
+ return shexec.getOutput();
+ }
+
+ }
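The getMajorAndMinorNumber() helper above relies on "stat -c %t:%T" printing the device's major and minor numbers in hexadecimal, which are then converted to the decimal "major:minor" form used as the alias-map key. A minimal sketch of just that conversion (class name and sample values are illustrative):

```java
// Illustrative sketch: convert stat's hex "major:minor" output to decimal.
public class MajorMinor {
  public static String hexToDecimal(String statOutput) {
    String[] parts = statOutput.trim().split(":");
    // %t and %T are hex, so parse with radix 16
    return Integer.parseInt(parts[0], 16) + ":" + Integer.parseInt(parts[1], 16);
  }

  public static void main(String[] args) {
    // e.g. a device whose stat output is "f6:1" (hex) -> "246:1" (decimal)
    System.out.println(hexToDecimal("f6:1"));
  }
}
```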
+
+ /**
+ * One real sample output of Intel FPGA SDK 17.0's "aocl diagnose" is as below:
+ * "
+ * aocl diagnose: Running diagnose from /home/fpga/intelFPGA_pro/17.0/hld/board/nalla_pcie/linux64/libexec
+ *
+ * ------------------------- acl0 -------------------------
+ * Vendor: Nallatech ltd
+ *
+ * Phys Dev Name Status Information
+ *
+ * aclnalla_pcie0Passed nalla_pcie (aclnalla_pcie0)
+ * PCIe dev_id = 2494, bus:slot.func = 02:00.00, Gen3 x8
+ * FPGA temperature = 54.4 degrees C.
+ * Total Card Power Usage = 31.7 Watts.
+ * Device Power Usage = 0.0 Watts.
+ *
+ * DIAGNOSTIC_PASSED
+ * ---------------------------------------------------------
+ * "
+ *
+ * However, per Intel's guide, the output (presumably from an outdated or prior SDK version) is as below:
+ *
+ * "
+ * aocl diagnose: Running diagnostic from ALTERAOCLSDKROOT/board/<board_name>/
+ * <platform>/libexec
+ * Verified that the kernel mode driver is installed on the host machine.
+ * Using board package from vendor: <board_vendor_name>
+ * Querying information for all supported devices that are installed on the host
+ * machine ...
+ *
+ * device_name Status Information
+ *
+ * acl0 Passed <descriptive_board_name>
+ * PCIe dev_id = <device_ID>, bus:slot.func = 02:00.00,
+ * at Gen 2 with 8 lanes.
+ * FPGA temperature=43.0 degrees C.
+ * acl1 Passed <descriptive_board_name>
+ * PCIe dev_id = <device_ID>, bus:slot.func = 03:00.00,
+ * at Gen 2 with 8 lanes.
+ * FPGA temperature = 35.0 degrees C.
+ *
+ * Found 2 active device(s) installed on the host machine, to perform a full
+ * diagnostic on a specific device, please run aocl diagnose <device_name>
+ *
+ * DIAGNOSTIC_PASSED
+ * "
+ * But this method only supports the first output format
+ * */
+ public void parseDiagnoseInfo(String output, List<FpgaResourceAllocator.FpgaDevice> list) {
+ if (output.contains("DIAGNOSTIC_PASSED")) {
+ Matcher headerStartMatcher = Pattern.compile("acl[0-9]+").matcher(output);
+ Matcher headerEndMatcher = Pattern.compile("(?i)DIAGNOSTIC_PASSED").matcher(output);
+ int sectionStartIndex;
+ int sectionEndIndex;
+ String aliasName;
+ while (headerStartMatcher.find()) {
+ sectionStartIndex = headerStartMatcher.end();
+ String section = null;
+ aliasName = headerStartMatcher.group();
+ while (headerEndMatcher.find(sectionStartIndex)) {
+ sectionEndIndex = headerEndMatcher.start();
+ section = output.substring(sectionStartIndex, sectionEndIndex);
+ break;
+ }
+ if (null == section) {
+ LOG.warn("Unsupported diagnose output");
+ return;
+ }
+ // devName, \(.*\)
+ // busNum, bus:slot.func\s=\s.*,
+ // FPGA temperature\s=\s.*
+ // Total\sCard\sPower\sUsage\s=\s.*
+ String[] fieldRegexes = new String[]{"\\(.*\\)\n", "(?i)bus:slot.func\\s=\\s.*,",
+ "(?i)FPGA temperature\\s=\\s.*", "(?i)Total\\sCard\\sPower\\sUsage\\s=\\s.*"};
+ String[] fields = new String[4];
+ String tempFieldValue;
+ for (int i = 0; i < fieldRegexes.length; i++) {
+ Matcher fieldMatcher = Pattern.compile(fieldRegexes[i]).matcher(section);
+ if (!fieldMatcher.find()) {
+ LOG.warn("Couldn't find " + fieldRegexes[i] + " pattern");
+ fields[i] = "";
+ continue;
+ }
+ tempFieldValue = fieldMatcher.group().trim();
+ if (i == 0) {
+ // special case for Device name
+ fields[i] = tempFieldValue.substring(1, tempFieldValue.length() - 1);
+ } else {
+ String ss = tempFieldValue.split("=")[1].trim();
+ fields[i] = ss.substring(0, ss.length() - 1);
+ }
+ }
+ String majorMinorNumber = this.shell.getMajorAndMinorNumber(fields[0]);
+ if (null != majorMinorNumber) {
+ String[] mmn = majorMinorNumber.split(":");
+ this.aliasMap.put(majorMinorNumber, aliasName);
+ list.add(new FpgaResourceAllocator.FpgaDevice(getFpgaType(),
+ Integer.parseInt(mmn[0]),
+ Integer.parseInt(mmn[1]), null,
+ fields[0], aliasName, fields[1], fields[2], fields[3]));
+ }
+ }// end while
+ }// end if
+ }
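The section handling in parseDiagnoseInfo() above can be sketched in isolation: find an "aclN" alias, take the text up to DIAGNOSTIC_PASSED as that device's section, then pull one field out of it with a regex. The sketch below (names and the abbreviated sample are illustrative, modeled on the SDK 17.0 output in the Javadoc) extracts only the parenthesized device name:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch of regex-based section parsing of "aocl diagnose" output.
public class DiagnoseParser {
  public static String parseDeviceName(String output) {
    Matcher start = Pattern.compile("acl[0-9]+").matcher(output);
    Matcher end = Pattern.compile("(?i)DIAGNOSTIC_PASSED").matcher(output);
    if (start.find() && end.find(start.end())) {
      // the device section lies between the alias header and the footer
      String section = output.substring(start.end(), end.start());
      Matcher name = Pattern.compile("\\(.*\\)\n").matcher(section);
      if (name.find()) {
        String v = name.group().trim();
        return v.substring(1, v.length() - 1); // strip the parentheses
      }
    }
    return null;
  }

  public static void main(String[] args) {
    String sample = "------ acl0 ------\n"
        + "nalla_pcie (aclnalla_pcie0)\n"
        + "DIAGNOSTIC_PASSED\n";
    System.out.println(parseDeviceName(sample));
  }
}
```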
+
+ public String getDiagnoseInfo(int timeout) {
+ return this.shell.runDiagnose(this.pathToExecutable, timeout);
+ }
+
+ @Override
+ public boolean diagnose(int timeout) {
+ String output = getDiagnoseInfo(timeout);
+ if (null != output && output.contains("DIAGNOSTIC_PASSED")) {
+ return true;
+ }
+ return false;
+ }
+
+ /**
+ * This is actually the OpenCL platform type
+ * */
+ @Override
+ public String getFpgaType() {
+ return "IntelOpenCL";
+ }
+
+ @Override
+ public String downloadIP(String id, String dstDir, Map<Path, List<String>> localizedResources) {
+ // Assume .aocx IP file is distributed by DS to local dir
+ String r = "";
+ Path path;
+ LOG.info("Got environment: " + id + ", searching for the IP file in localized resources");
+ if (null == id || id.isEmpty()) {
+ LOG.warn("IP_ID environment is empty, skip downloading");
+ return r;
+ }
+ if (localizedResources != null) {
+ for (Map.Entry<Path, List<String>> resourceEntry :
+ localizedResources.entrySet()) {
+ path = resourceEntry.getKey();
+ LOG.debug("Check:" + path.toUri().toString());
+ if (path.getName().toLowerCase().contains(id.toLowerCase()) && path.getName().endsWith(".aocx")) {
+ r = path.toUri().toString();
+ LOG.debug("Found: " + r);
+ break;
+ }
+ }
+ } else {
+ LOG.warn("Localized resource is null!");
+ }
+ return r;
+ }
+
+ /**
+ * Program one device.
+ * It's OK if the offline "aocl program" fails, because the application will always invoke the API to program;
+ * we do offline reprogramming to make the application's programming process faster
+ * @param ipPath the absolute path to the aocx IP file
+ * @param majorMinorNumber major:minor string
+ * @return True or False
+ * */
+ @Override
+ public boolean configureIP(String ipPath, String majorMinorNumber) {
+ // perform offline program the IP to get a quickest reprogramming sequence
+ // we need a mapping of "major:minor" to "acl0" to issue command "aocl program <acl0> <ipPath>"
+ Shell.ShellCommandExecutor shexec;
+ String aclName;
+ aclName = this.aliasMap.get(majorMinorNumber);
+ shexec = new Shell.ShellCommandExecutor(
+ new String[]{this.pathToExecutable, "program", aclName, ipPath});
+ try {
+ shexec.execute();
+ if (0 == shexec.getExitCode()) {
+ LOG.debug(shexec.getOutput());
+ LOG.info("Intel aocl program " + ipPath + " to " + aclName + " successfully");
+ } else {
+ return false;
+ }
+ } catch (IOException e) {
+ LOG.error("Intel aocl program " + ipPath + " to " + aclName + " failed!");
+ e.printStackTrace();
+ return false;
+ }
+ return true;
+ }
+
+ @Override
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ }
+
+ @Override
+ public Configuration getConf() {
+ return this.conf;
+ }
+}
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[30/50] [abbrv] hadoop git commit: HDFS-12605. [READ]
TestNameNodeProvidedImplementation#testProvidedDatanodeFailures fails after
rebase
Posted by vi...@apache.org.
HDFS-12605. [READ] TestNameNodeProvidedImplementation#testProvidedDatanodeFailures fails after rebase
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2cf4faad
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2cf4faad
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2cf4faad
Branch: refs/heads/HDFS-9806
Commit: 2cf4faadb85c5a5af7a3c76901d593ed357c2785
Parents: 50c8b91
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Wed Oct 18 13:53:11 2017 -0700
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:58 2017 -0800
----------------------------------------------------------------------
.../hdfs/server/blockmanagement/DatanodeDescriptor.java | 12 ++++++++++++
.../namenode/TestNameNodeProvidedImplementation.java | 6 +++---
2 files changed, 15 insertions(+), 3 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2cf4faad/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index 28a3d1a..e3d6582 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -489,6 +489,18 @@ public class DatanodeDescriptor extends DatanodeInfo {
synchronized (storageMap) {
DatanodeStorageInfo storage = storageMap.get(s.getStorageID());
if (null == storage) {
+ LOG.info("Adding new storage ID {} for DN {}", s.getStorageID(),
+ getXferAddr());
+ DFSTopologyNodeImpl parent = null;
+ if (getParent() instanceof DFSTopologyNodeImpl) {
+ parent = (DFSTopologyNodeImpl) getParent();
+ }
+ StorageType type = s.getStorageType();
+ if (!hasStorageType(type) && parent != null) {
+ // we are about to add a type this node currently does not have,
+ // inform the parent that a new type is added to this datanode
+ parent.childAddStorage(getName(), type);
+ }
storageMap.put(s.getStorageID(), s);
} else {
assert storage == s : "found " + storage + " expected " + s;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2cf4faad/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index 3f937c4..d622b9e 100644
--- a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -481,13 +481,13 @@ public class TestNameNodeProvidedImplementation {
assertEquals(providedDatanode2.getDatanodeUuid(),
dnInfos[0].getDatanodeUuid());
- //stop the 2nd provided datanode
- cluster.stopDataNode(1);
+ // stop the 2nd provided datanode
+ MiniDFSCluster.DataNodeProperties providedDNProperties2 =
+ cluster.stopDataNode(0);
// make NameNode detect that datanode is down
BlockManagerTestUtil.noticeDeadDatanode(
cluster.getNameNode(),
providedDatanode2.getDatanodeId().getXferAddr());
-
getAndCheckBlockLocations(client, filename, 0);
//restart the provided datanode
[10/50] [abbrv] hadoop git commit: YARN-7487. Ensure volume to
include GPU base libraries after created by plugin. Contributed by Wangda
Tan.
Posted by vi...@apache.org.
YARN-7487. Ensure volume to include GPU base libraries after created by plugin. Contributed by Wangda Tan.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/556aea3f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/556aea3f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/556aea3f
Branch: refs/heads/HDFS-9806
Commit: 556aea3f367bdbd4e4db601bea0ca9bf2adde063
Parents: 4653aa3
Author: Sunil G <su...@apache.org>
Authored: Fri Dec 1 13:36:28 2017 +0530
Committer: Sunil G <su...@apache.org>
Committed: Fri Dec 1 13:36:28 2017 +0530
----------------------------------------------------------------------
.../runtime/DockerLinuxContainerRuntime.java | 63 ++++++-
.../runtime/docker/DockerVolumeCommand.java | 29 +++-
.../gpu/NvidiaDockerV1CommandPlugin.java | 2 +-
.../container-executor/impl/utils/docker-util.c | 106 +++++++-----
.../test/utils/test_docker_util.cc | 5 +-
.../runtime/TestDockerContainerRuntime.java | 170 ++++++++++++++++---
6 files changed, 304 insertions(+), 71 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556aea3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
index e61dc23..20359ea 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/DockerLinuxContainerRuntime.java
@@ -337,7 +337,7 @@ public class DockerLinuxContainerRuntime implements LinuxContainerRuntime {
return false;
}
- private void runDockerVolumeCommand(DockerVolumeCommand dockerVolumeCommand,
+ private String runDockerVolumeCommand(DockerVolumeCommand dockerVolumeCommand,
Container container) throws ContainerExecutionException {
try {
String commandFile = dockerClient.writeCommandToTempFile(
@@ -351,6 +351,7 @@ public class DockerLinuxContainerRuntime implements LinuxContainerRuntime {
LOG.info("ContainerId=" + container.getContainerId()
+ ", docker volume output for " + dockerVolumeCommand + ": "
+ output);
+ return output;
} catch (ContainerExecutionException e) {
LOG.error("Error when writing command to temp file, command="
+ dockerVolumeCommand,
@@ -378,15 +379,73 @@ public class DockerLinuxContainerRuntime implements LinuxContainerRuntime {
plugin.getDockerCommandPluginInstance();
if (dockerCommandPlugin != null) {
DockerVolumeCommand dockerVolumeCommand =
- dockerCommandPlugin.getCreateDockerVolumeCommand(ctx.getContainer());
+ dockerCommandPlugin.getCreateDockerVolumeCommand(
+ ctx.getContainer());
if (dockerVolumeCommand != null) {
runDockerVolumeCommand(dockerVolumeCommand, container);
+
+ // After volume created, run inspect to make sure volume properly
+ // created.
+ if (dockerVolumeCommand.getSubCommand().equals(
+ DockerVolumeCommand.VOLUME_CREATE_SUB_COMMAND)) {
+ checkDockerVolumeCreated(dockerVolumeCommand, container);
+ }
}
}
}
}
}
+ private void checkDockerVolumeCreated(
+ DockerVolumeCommand dockerVolumeCreationCommand, Container container)
+ throws ContainerExecutionException {
+ DockerVolumeCommand dockerVolumeInspectCommand = new DockerVolumeCommand(
+ DockerVolumeCommand.VOLUME_LS_SUB_COMMAND);
+ dockerVolumeInspectCommand.setFormat("{{.Name}},{{.Driver}}");
+ String output = runDockerVolumeCommand(dockerVolumeInspectCommand,
+ container);
+
+ // Parse output line by line and check if it matches
+ String volumeName = dockerVolumeCreationCommand.getVolumeName();
+ String driverName = dockerVolumeCreationCommand.getDriverName();
+ if (driverName == null) {
+ driverName = "local";
+ }
+
+ for (String line : output.split("\n")) {
+ line = line.trim();
+ String[] arr = line.split(",");
+ String v = arr[0].trim();
+ String d = null;
+ if (arr.length > 1) {
+ d = arr[1].trim();
+ }
+ if (d != null && volumeName.equals(v) && driverName.equals(d)) {
+ // Good we found it.
+ LOG.info(
+ "Docker volume-name=" + volumeName + " driver-name=" + driverName
+ + " already exists for container=" + container
+ .getContainerId() + ", continue...");
+ return;
+ }
+ }
+
+ // Couldn't find the volume
+ String message =
+ " Couldn't find volume=" + volumeName + " driver=" + driverName
+ + " for container=" + container.getContainerId()
+ + ", please check error message in log to understand "
+ + "why this happens.";
+ LOG.error(message);
+
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("All docker volumes in the system, command="
+ + dockerVolumeInspectCommand.toString());
+ }
+
+ throw new ContainerExecutionException(message);
+ }
+
private void validateContainerNetworkType(String network)
throws ContainerExecutionException {
if (allowedNetworks.contains(network)) {
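The matching logic that `checkDockerVolumeCreated()` applies to the `docker volume ls --format {{.Name}},{{.Driver}}` output can be sketched as a small standalone parser. The class and method names below are illustrative only, not part of Hadoop; the parsing rules (trim each line, split on commas, require both a name and a driver field) mirror the patch.

```java
// Standalone sketch of the volume-existence check: scan "name,driver" lines
// and look for an exact (volumeName, driverName) pair.
public class VolumeLsParser {
    public static boolean volumeExists(String output, String volumeName,
            String driverName) {
        for (String line : output.split("\n")) {
            String[] arr = line.trim().split(",");
            // A usable line has at least "name,driver"; whitespace is trimmed
            // around both fields, matching the patch's behavior.
            if (arr.length > 1 && volumeName.equals(arr[0].trim())
                    && driverName.equals(arr[1].trim())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Mirrors cases exercised by the new unit test in this commit.
        System.out.println(volumeExists("volume1,local\n", "volume1", "local"));
        System.out.println(volumeExists("volume1", "volume1", "local"));
        System.out.println(volumeExists(" volume1, local \n", "volume1", "local"));
    }
}
```

Note that a line with only a volume name (no driver column) never matches, which is why the test treats bare `"volume1"` output as a failure case.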
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556aea3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerVolumeCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerVolumeCommand.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerVolumeCommand.java
index a477c93..aac7685 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerVolumeCommand.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerVolumeCommand.java
@@ -27,23 +27,50 @@ import java.util.regex.Pattern;
*/
public class DockerVolumeCommand extends DockerCommand {
public static final String VOLUME_COMMAND = "volume";
- public static final String VOLUME_CREATE_COMMAND = "create";
+ public static final String VOLUME_CREATE_SUB_COMMAND = "create";
+ public static final String VOLUME_LS_SUB_COMMAND = "ls";
+
// Regex pattern for volume name
public static final Pattern VOLUME_NAME_PATTERN = Pattern.compile(
"[a-zA-Z0-9][a-zA-Z0-9_.-]*");
+ private String volumeName;
+ private String driverName;
+ private String subCommand;
+
public DockerVolumeCommand(String subCommand) {
super(VOLUME_COMMAND);
+ this.subCommand = subCommand;
super.addCommandArguments("sub-command", subCommand);
}
public DockerVolumeCommand setVolumeName(String volumeName) {
super.addCommandArguments("volume", volumeName);
+ this.volumeName = volumeName;
return this;
}
public DockerVolumeCommand setDriverName(String driverName) {
super.addCommandArguments("driver", driverName);
+ this.driverName = driverName;
+ return this;
+ }
+
+ public String getVolumeName() {
+ return volumeName;
+ }
+
+ public String getDriverName() {
+ return driverName;
+ }
+
+ public String getSubCommand() {
+ return subCommand;
+ }
+
+ public DockerVolumeCommand setFormat(String format) {
+ super.addCommandArguments("format", format);
return this;
}
+
}
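The `DockerVolumeCommand` change above keeps the fluent setter style but additionally records each value in a field so the runtime can read it back (e.g. to build the follow-up `ls` check). A simplified stand-in, not the Hadoop class itself, looks like this:

```java
// Simplified stand-in for DockerVolumeCommand's fluent interface: each setter
// stores the value for later inspection and returns `this` for chaining.
public class VolumeCommandSketch {
    private final String subCommand;
    private String volumeName;
    private String driverName;
    private String format;

    public VolumeCommandSketch(String subCommand) {
        this.subCommand = subCommand;
    }

    public VolumeCommandSketch setVolumeName(String v) { volumeName = v; return this; }
    public VolumeCommandSketch setDriverName(String d) { driverName = d; return this; }
    public VolumeCommandSketch setFormat(String f) { format = f; return this; }

    public String getSubCommand() { return subCommand; }
    public String getVolumeName() { return volumeName; }
    public String getDriverName() { return driverName; }
    public String getFormat() { return format; }

    public static void main(String[] args) {
        // Building the inspect command the way checkDockerVolumeCreated() does.
        VolumeCommandSketch ls = new VolumeCommandSketch("ls")
            .setFormat("{{.Name}},{{.Driver}}");
        System.out.println(ls.getSubCommand() + " --format=" + ls.getFormat());
    }
}
```

Exposing getters is what lets the runtime compare the created volume's name and driver against the parsed `ls` output.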
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556aea3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/NvidiaDockerV1CommandPlugin.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/NvidiaDockerV1CommandPlugin.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/NvidiaDockerV1CommandPlugin.java
index 73d7048..c2e315a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/NvidiaDockerV1CommandPlugin.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/gpu/NvidiaDockerV1CommandPlugin.java
@@ -301,7 +301,7 @@ public class NvidiaDockerV1CommandPlugin implements DockerCommandPlugin {
if (newVolumeName != null) {
DockerVolumeCommand command = new DockerVolumeCommand(
- DockerVolumeCommand.VOLUME_CREATE_COMMAND);
+ DockerVolumeCommand.VOLUME_CREATE_SUB_COMMAND);
command.setDriverName(volumeDriver);
command.setVolumeName(newVolumeName);
return command;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556aea3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
index e88eeac..a0138d1 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
@@ -299,29 +299,19 @@ static int value_permitted(const struct configuration* executor_cfg,
int get_docker_volume_command(const char *command_file, const struct configuration *conf, char *out,
const size_t outlen) {
int ret = 0;
- char *driver = NULL, *volume_name = NULL, *sub_command = NULL;
+ char *driver = NULL, *volume_name = NULL, *sub_command = NULL, *format = NULL;
struct configuration command_config = {0, NULL};
ret = read_and_verify_command_file(command_file, DOCKER_VOLUME_COMMAND, &command_config);
if (ret != 0) {
return ret;
}
sub_command = get_configuration_value("sub-command", DOCKER_COMMAND_FILE_SECTION, &command_config);
- if (sub_command == NULL || 0 != strcmp(sub_command, "create")) {
- fprintf(ERRORFILE, "\"create\" is the only acceptable sub-command of volume.\n");
- ret = INVALID_DOCKER_VOLUME_COMMAND;
- goto cleanup;
- }
-
- volume_name = get_configuration_value("volume", DOCKER_COMMAND_FILE_SECTION, &command_config);
- if (volume_name == NULL || validate_volume_name(volume_name) != 0) {
- fprintf(ERRORFILE, "%s is not a valid volume name.\n", volume_name);
- ret = INVALID_DOCKER_VOLUME_NAME;
- goto cleanup;
- }
- driver = get_configuration_value("driver", DOCKER_COMMAND_FILE_SECTION, &command_config);
- if (driver == NULL) {
- ret = INVALID_DOCKER_VOLUME_DRIVER;
+ if ((sub_command == NULL) || ((0 != strcmp(sub_command, "create")) &&
+ (0 != strcmp(sub_command, "ls")))) {
+ fprintf(ERRORFILE, "\"create/ls\" are the only acceptable sub-command of volume, input sub_command=\"%s\"\n",
+ sub_command);
+ ret = INVALID_DOCKER_VOLUME_COMMAND;
goto cleanup;
}
@@ -338,42 +328,76 @@ int get_docker_volume_command(const char *command_file, const struct configurati
goto cleanup;
}
- ret = add_to_buffer(out, outlen, " create");
- if (ret != 0) {
- goto cleanup;
- }
+ if (0 == strcmp(sub_command, "create")) {
+ volume_name = get_configuration_value("volume", DOCKER_COMMAND_FILE_SECTION, &command_config);
+ if (volume_name == NULL || validate_volume_name(volume_name) != 0) {
+ fprintf(ERRORFILE, "%s is not a valid volume name.\n", volume_name);
+ ret = INVALID_DOCKER_VOLUME_NAME;
+ goto cleanup;
+ }
- ret = add_to_buffer(out, outlen, " --name=");
- if (ret != 0) {
- goto cleanup;
- }
+ driver = get_configuration_value("driver", DOCKER_COMMAND_FILE_SECTION, &command_config);
+ if (driver == NULL) {
+ ret = INVALID_DOCKER_VOLUME_DRIVER;
+ goto cleanup;
+ }
- ret = add_to_buffer(out, outlen, volume_name);
- if (ret != 0) {
- goto cleanup;
- }
+ ret = add_to_buffer(out, outlen, " create");
+ if (ret != 0) {
+ goto cleanup;
+ }
- if (!value_permitted(conf, "docker.allowed.volume-drivers", driver)) {
- fprintf(ERRORFILE, "%s is not permitted docker.allowed.volume-drivers\n",
- driver);
- ret = INVALID_DOCKER_VOLUME_DRIVER;
- goto cleanup;
- }
+ ret = add_to_buffer(out, outlen, " --name=");
+ if (ret != 0) {
+ goto cleanup;
+ }
- ret = add_to_buffer(out, outlen, " --driver=");
- if (ret != 0) {
- goto cleanup;
- }
+ ret = add_to_buffer(out, outlen, volume_name);
+ if (ret != 0) {
+ goto cleanup;
+ }
- ret = add_to_buffer(out, outlen, driver);
- if (ret != 0) {
- goto cleanup;
+ if (!value_permitted(conf, "docker.allowed.volume-drivers", driver)) {
+ fprintf(ERRORFILE, "%s is not permitted docker.allowed.volume-drivers\n",
+ driver);
+ ret = INVALID_DOCKER_VOLUME_DRIVER;
+ goto cleanup;
+ }
+
+ ret = add_to_buffer(out, outlen, " --driver=");
+ if (ret != 0) {
+ goto cleanup;
+ }
+
+ ret = add_to_buffer(out, outlen, driver);
+ if (ret != 0) {
+ goto cleanup;
+ }
+ } else if (0 == strcmp(sub_command, "ls")) {
+ format = get_configuration_value("format", DOCKER_COMMAND_FILE_SECTION, &command_config);
+
+ ret = add_to_buffer(out, outlen, " ls");
+ if (ret != 0) {
+ goto cleanup;
+ }
+
+ if (format) {
+ ret = add_to_buffer(out, outlen, " --format=");
+ if (ret != 0) {
+ goto cleanup;
+ }
+ ret = add_to_buffer(out, outlen, format);
+ if (ret != 0) {
+ goto cleanup;
+ }
+ }
}
cleanup:
free(driver);
free(volume_name);
free(sub_command);
+ free(format);
// clean up out buffer
if (ret != 0) {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556aea3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_docker_util.cc
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_docker_util.cc b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_docker_util.cc
index 96b5d40..0c1c4bf 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_docker_util.cc
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_docker_util.cc
@@ -1132,12 +1132,15 @@ namespace ContainerExecutor {
file_cmd_vec.push_back(std::make_pair<std::string, std::string>(
"[docker-command-execution]\n docker-command=volume\n sub-command=create\n volume=volume1 \n driver=driver1",
"volume create --name=volume1 --driver=driver1"));
+ file_cmd_vec.push_back(std::make_pair<std::string, std::string>(
+ "[docker-command-execution]\n docker-command=volume\n format={{.Name}},{{.Driver}}\n sub-command=ls",
+ "volume ls --format={{.Name}},{{.Driver}}"));
std::vector<std::pair<std::string, int> > bad_file_cmd_vec;
// Wrong subcommand
bad_file_cmd_vec.push_back(std::make_pair<std::string, int>(
- "[docker-command-execution]\n docker-command=volume\n sub-command=ls\n volume=volume1 \n driver=driver1",
+ "[docker-command-execution]\n docker-command=volume\n sub-command=inspect\n volume=volume1 \n driver=driver1",
static_cast<int>(INVALID_DOCKER_VOLUME_COMMAND)));
// Volume not specified
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556aea3f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java
index 6135493..4d32427 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java
@@ -1301,7 +1301,7 @@ public class TestDockerContainerRuntime {
//single invocation expected
//due to type erasure + mocking, this verification requires a suppress
// warning annotation on the entire method
- verify(mockExecutor, times(1))
+ verify(mockExecutor, times(2))
.executePrivilegedOperation(anyList(), opCaptor.capture(), any(
File.class), anyMap(), anyBoolean(), anyBoolean());
@@ -1309,7 +1309,9 @@ public class TestDockerContainerRuntime {
// hence, reset mock here
Mockito.reset(mockExecutor);
- PrivilegedOperation op = opCaptor.getValue();
+ List<PrivilegedOperation> allCaptures = opCaptor.getAllValues();
+
+ PrivilegedOperation op = allCaptures.get(0);
Assert.assertEquals(PrivilegedOperation.OperationType
.RUN_DOCKER_CMD, op.getOperationType());
@@ -1317,14 +1319,151 @@ public class TestDockerContainerRuntime {
FileInputStream fileInputStream = new FileInputStream(commandFile);
String fileContent = new String(IOUtils.toByteArray(fileInputStream));
Assert.assertEquals("[docker-command-execution]\n"
- + " docker-command=volume\n" + " sub-command=create\n"
- + " volume=volume1\n", fileContent);
+ + " docker-command=volume\n" + " driver=local\n"
+ + " sub-command=create\n" + " volume=volume1\n", fileContent);
+ fileInputStream.close();
+
+ op = allCaptures.get(1);
+ Assert.assertEquals(PrivilegedOperation.OperationType
+ .RUN_DOCKER_CMD, op.getOperationType());
+
+ commandFile = new File(StringUtils.join(",", op.getArguments()));
+ fileInputStream = new FileInputStream(commandFile);
+ fileContent = new String(IOUtils.toByteArray(fileInputStream));
+ Assert.assertEquals("[docker-command-execution]\n"
+ + " docker-command=volume\n" + " format={{.Name}},{{.Driver}}\n"
+ + " sub-command=ls\n", fileContent);
+ fileInputStream.close();
+ }
+
+ private static class MockDockerCommandPlugin implements DockerCommandPlugin {
+ private final String volume;
+ private final String driver;
+
+ public MockDockerCommandPlugin(String volume, String driver) {
+ this.volume = volume;
+ this.driver = driver;
+ }
+
+ @Override
+ public void updateDockerRunCommand(DockerRunCommand dockerRunCommand,
+ Container container) throws ContainerExecutionException {
+ dockerRunCommand.setVolumeDriver("driver-1");
+ dockerRunCommand.addReadOnlyMountLocation("/source/path",
+ "/destination/path", true);
+ }
+
+ @Override
+ public DockerVolumeCommand getCreateDockerVolumeCommand(Container container)
+ throws ContainerExecutionException {
+ return new DockerVolumeCommand("create").setVolumeName(volume)
+ .setDriverName(driver);
+ }
+
+ @Override
+ public DockerVolumeCommand getCleanupDockerVolumesCommand(
+ Container container) throws ContainerExecutionException {
+ return null;
+ }
+ }
+
+ private void testDockerCommandPluginWithVolumesOutput(
+ String dockerVolumeListOutput, boolean expectFail)
+ throws PrivilegedOperationException, ContainerExecutionException,
+ IOException {
+ mockExecutor = Mockito
+ .mock(PrivilegedOperationExecutor.class);
+
+ DockerLinuxContainerRuntime runtime = new DockerLinuxContainerRuntime(
+ mockExecutor, mockCGroupsHandler);
+ when(mockExecutor
+ .executePrivilegedOperation(anyList(), any(PrivilegedOperation.class),
+ any(File.class), anyMap(), anyBoolean(), anyBoolean())).thenReturn(
+ null);
+ when(mockExecutor
+ .executePrivilegedOperation(anyList(), any(PrivilegedOperation.class),
+ any(File.class), anyMap(), anyBoolean(), anyBoolean())).thenReturn(
+ dockerVolumeListOutput);
+
+ Context nmContext = mock(Context.class);
+ ResourcePluginManager rpm = mock(ResourcePluginManager.class);
+ Map<String, ResourcePlugin> pluginsMap = new HashMap<>();
+ ResourcePlugin plugin1 = mock(ResourcePlugin.class);
+
+ // Create the docker command plugin logic, which will set volume driver
+ DockerCommandPlugin dockerCommandPlugin = new MockDockerCommandPlugin(
+ "volume1", "local");
+
+ when(plugin1.getDockerCommandPluginInstance()).thenReturn(
+ dockerCommandPlugin);
+ ResourcePlugin plugin2 = mock(ResourcePlugin.class);
+ pluginsMap.put("plugin1", plugin1);
+ pluginsMap.put("plugin2", plugin2);
+
+ when(rpm.getNameToPlugins()).thenReturn(pluginsMap);
+
+ when(nmContext.getResourcePluginManager()).thenReturn(rpm);
+
+ runtime.initialize(conf, nmContext);
+
+ ContainerRuntimeContext containerRuntimeContext = builder.build();
+
+ try {
+ runtime.prepareContainer(containerRuntimeContext);
+
+ checkVolumeCreateCommand();
+
+ runtime.launchContainer(containerRuntimeContext);
+ } catch (ContainerExecutionException e) {
+ if (expectFail) {
+ // Expected
+ return;
+ } else{
+ Assert.fail("Should successfully prepareContainers" + e);
+ }
+ }
+ if (expectFail) {
+ Assert.fail(
+ "Should fail because output is illegal");
+ }
+ }
+
+ @Test
+ public void testDockerCommandPluginCheckVolumeAfterCreation()
+ throws Exception {
+ // For following tests, we expect to have volume1,local in output
+
+ // Failure cases
+ testDockerCommandPluginWithVolumesOutput("", true);
+ testDockerCommandPluginWithVolumesOutput("volume1", true);
+ testDockerCommandPluginWithVolumesOutput("local", true);
+ testDockerCommandPluginWithVolumesOutput("volume2,local", true);
+ testDockerCommandPluginWithVolumesOutput("volum1,something", true);
+ testDockerCommandPluginWithVolumesOutput("volum1,something\nvolum2,local",
+ true);
+
+ // Success case
+ testDockerCommandPluginWithVolumesOutput("volume1,local\n", false);
+ testDockerCommandPluginWithVolumesOutput(
+ "volume_xyz,nvidia\nvolume1,local\n\n", false);
+ testDockerCommandPluginWithVolumesOutput(" volume1, local \n", false);
+ testDockerCommandPluginWithVolumesOutput(
+ "volume_xyz,\tnvidia\n volume1,\tlocal\n\n", false);
}
+
@Test
public void testDockerCommandPlugin() throws Exception {
DockerLinuxContainerRuntime runtime =
new DockerLinuxContainerRuntime(mockExecutor, mockCGroupsHandler);
+ when(mockExecutor
+ .executePrivilegedOperation(anyList(), any(PrivilegedOperation.class),
+ any(File.class), anyMap(), anyBoolean(), anyBoolean())).thenReturn(
+ null);
+ when(mockExecutor
+ .executePrivilegedOperation(anyList(), any(PrivilegedOperation.class),
+ any(File.class), anyMap(), anyBoolean(), anyBoolean())).thenReturn(
+ "volume1,local");
Context nmContext = mock(Context.class);
ResourcePluginManager rpm = mock(ResourcePluginManager.class);
@@ -1332,27 +1471,8 @@ public class TestDockerContainerRuntime {
ResourcePlugin plugin1 = mock(ResourcePlugin.class);
// Create the docker command plugin logic, which will set volume driver
- DockerCommandPlugin dockerCommandPlugin = new DockerCommandPlugin() {
- @Override
- public void updateDockerRunCommand(DockerRunCommand dockerRunCommand,
- Container container) throws ContainerExecutionException {
- dockerRunCommand.setVolumeDriver("driver-1");
- dockerRunCommand.addReadOnlyMountLocation("/source/path",
- "/destination/path", true);
- }
-
- @Override
- public DockerVolumeCommand getCreateDockerVolumeCommand(Container container)
- throws ContainerExecutionException {
- return new DockerVolumeCommand("create").setVolumeName("volume1");
- }
-
- @Override
- public DockerVolumeCommand getCleanupDockerVolumesCommand(Container container)
- throws ContainerExecutionException {
- return null;
- }
- };
+ DockerCommandPlugin dockerCommandPlugin = new MockDockerCommandPlugin(
+ "volume1", "local");
when(plugin1.getDockerCommandPluginInstance()).thenReturn(
dockerCommandPlugin);
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[39/50] [abbrv] hadoop git commit: HDFS-12779. [READ] Allow cluster id to be specified to the Image generation tool
Posted by vi...@apache.org.
HDFS-12779. [READ] Allow cluster id to be specified to the Image generation tool
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ec6f48fe
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ec6f48fe
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ec6f48fe
Branch: refs/heads/HDFS-9806
Commit: ec6f48fe6c09819b96cdb218f3255da51a067656
Parents: 5baee3d
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Thu Nov 9 14:09:14 2017 -0800
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:59 2017 -0800
----------------------------------------------------------------------
.../hdfs/server/protocol/NamespaceInfo.java | 4 ++++
.../hdfs/server/namenode/FileSystemImage.java | 4 ++++
.../hdfs/server/namenode/ImageWriter.java | 11 ++++++++-
.../TestNameNodeProvidedImplementation.java | 24 +++++++++++++++++++-
4 files changed, 41 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec6f48fe/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamespaceInfo.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamespaceInfo.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamespaceInfo.java
index 66ce9ee..433d9b7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamespaceInfo.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/NamespaceInfo.java
@@ -160,6 +160,10 @@ public class NamespaceInfo extends StorageInfo {
return state;
}
+ public void setClusterID(String clusterID) {
+ this.clusterID = clusterID;
+ }
+
@Override
public String toString(){
return super.toString() + ";bpid=" + blockPoolID;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec6f48fe/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
index 2e57c9f..b66c830 100644
--- a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
@@ -68,6 +68,7 @@ public class FileSystemImage implements Tool {
options.addOption("b", "blockclass", true, "Block output class");
options.addOption("i", "blockidclass", true, "Block resolver class");
options.addOption("c", "cachedirs", true, "Max active dirents");
+ options.addOption("cid", "clusterID", true, "Cluster ID");
options.addOption("h", "help", false, "Print usage");
return options;
}
@@ -112,6 +113,9 @@ public class FileSystemImage implements Tool {
case "c":
opts.cache(Integer.parseInt(o.getValue()));
break;
+ case "cid":
+ opts.clusterID(o.getValue());
+ break;
default:
throw new UnsupportedOperationException("Internal error");
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec6f48fe/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
index 390bb39..9bd8852 100644
--- a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
@@ -126,13 +126,16 @@ public class ImageWriter implements Closeable {
throw new IllegalStateException("Incompatible layout " +
info.getLayoutVersion() + " (expected " + LAYOUT_VERSION);
}
+ // set the cluster id, if given
+ if (opts.clusterID.length() > 0) {
+ info.setClusterID(opts.clusterID);
+ }
stor.format(info);
blockPoolID = info.getBlockPoolID();
}
outdir = new Path(tmp, "current");
out = outfs.create(new Path(outdir, "fsimage_0000000000000000000"));
} else {
- // XXX necessary? writing a NNStorage now...
outdir = null;
outfs = null;
out = opts.outStream;
@@ -517,6 +520,7 @@ public class ImageWriter implements Closeable {
private UGIResolver ugis;
private Class<? extends UGIResolver> ugisClass;
private BlockAliasMap<FileRegion> blocks;
+ private String clusterID;
@SuppressWarnings("rawtypes")
private Class<? extends BlockAliasMap> aliasMap;
@@ -543,6 +547,7 @@ public class ImageWriter implements Closeable {
NullBlockAliasMap.class, BlockAliasMap.class);
blockIdsClass = conf.getClass(BLOCK_RESOLVER_CLASS,
FixedBlockResolver.class, BlockResolver.class);
+ clusterID = "";
}
@Override
@@ -601,6 +606,10 @@ public class ImageWriter implements Closeable {
return this;
}
+ public Options clusterID(String clusterID) {
+ this.clusterID = clusterID;
+ return this;
+ }
}
}
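The `ImageWriter.Options` change defaults `clusterID` to the empty string and only overrides the storage-generated ID when a value was explicitly supplied (`opts.clusterID.length() > 0`). That conditional-override pattern can be sketched as follows; the class name and `effectiveClusterID` helper are illustrative, not part of the actual `ImageWriter`:

```java
// Sketch of the clusterID option: empty string means "keep the generated ID",
// a non-empty value set via the fluent builder wins.
public class ImageOptionsSketch {
    private String clusterID = "";  // default: no override

    public ImageOptionsSketch clusterID(String clusterID) {
        this.clusterID = clusterID;
        return this;
    }

    /** The ID to format with: the override if given, else the generated one. */
    public String effectiveClusterID(String generatedID) {
        return clusterID.length() > 0 ? clusterID : generatedID;
    }

    public static void main(String[] args) {
        ImageOptionsSketch opts = new ImageOptionsSketch();
        System.out.println(opts.effectiveClusterID("CID-generated"));
        System.out.println(opts.clusterID("PROVIDED-CLUSTER")
            .effectiveClusterID("CID-generated"));
    }
}
```

This matches the test below, which passes `"PROVIDED-CLUSTER"` through `createImage` and asserts the NameNode reports it, while the no-argument overload passes `""` and leaves the generated ID in place.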
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ec6f48fe/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index 1f6aebb..22f00aa 100644
--- a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -155,11 +155,18 @@ public class TestNameNodeProvidedImplementation {
void createImage(TreeWalk t, Path out,
Class<? extends BlockResolver> blockIdsClass) throws Exception {
+ createImage(t, out, blockIdsClass, "");
+ }
+
+ void createImage(TreeWalk t, Path out,
+ Class<? extends BlockResolver> blockIdsClass, String clusterID)
+ throws Exception {
ImageWriter.Options opts = ImageWriter.defaults();
opts.setConf(conf);
opts.output(out.toString())
.blocks(TextFileRegionAliasMap.class)
- .blockIds(blockIdsClass);
+ .blockIds(blockIdsClass)
+ .clusterID(clusterID);
try (ImageWriter w = new ImageWriter(opts)) {
for (TreePath e : t) {
w.accept(e);
@@ -562,4 +569,19 @@ public class TestNameNodeProvidedImplementation {
dnInfos[0].getDatanodeUuid());
}
}
+
+ @Test
+ public void testSetClusterID() throws Exception {
+ String clusterID = "PROVIDED-CLUSTER";
+ createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
+ FixedBlockResolver.class, clusterID);
+ // 2 Datanodes, 1 PROVIDED and other DISK
+ startCluster(NNDIRPATH, 2, null,
+ new StorageType[][] {
+ {StorageType.PROVIDED},
+ {StorageType.DISK}},
+ false);
+ NameNode nn = cluster.getNameNode();
+ assertEquals(clusterID, nn.getNamesystem().getClusterId());
+ }
}
---------------------------------------------------------------------
[29/50] [abbrv] hadoop git commit: HDFS-12584. [READ] Fix errors in image generation tool from latest rebase
Posted by vi...@apache.org.
HDFS-12584. [READ] Fix errors in image generation tool from latest rebase
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/50c8b91c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/50c8b91c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/50c8b91c
Branch: refs/heads/HDFS-9806
Commit: 50c8b91cf4ca4506e176c24331c234c7b04b5af4
Parents: 7eabf01
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Tue Oct 3 14:44:17 2017 -0700
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:58 2017 -0800
----------------------------------------------------------------------
hadoop-tools/hadoop-fs2img/pom.xml | 4 +--
.../hdfs/server/namenode/RandomTreeWalk.java | 28 +++++++++-----------
2 files changed, 14 insertions(+), 18 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/50c8b91c/hadoop-tools/hadoop-fs2img/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/pom.xml b/hadoop-tools/hadoop-fs2img/pom.xml
index 36096b7..e1411f8 100644
--- a/hadoop-tools/hadoop-fs2img/pom.xml
+++ b/hadoop-tools/hadoop-fs2img/pom.xml
@@ -17,12 +17,12 @@
<parent>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-project</artifactId>
- <version>3.0.0-alpha3-SNAPSHOT</version>
+ <version>3.1.0-SNAPSHOT</version>
<relativePath>../../hadoop-project</relativePath>
</parent>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-fs2img</artifactId>
- <version>3.0.0-alpha3-SNAPSHOT</version>
+ <version>3.1.0-SNAPSHOT</version>
<description>fs2img</description>
<name>fs2img</name>
<packaging>jar</packaging>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/50c8b91c/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
index c82c489..d002e4a 100644
--- a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
@@ -113,22 +113,18 @@ public class RandomTreeWalk extends TreeWalk {
final long len = isDir ? 0 : r.nextInt(Integer.MAX_VALUE);
final int nblocks = 0 == len ? 0 : (((int)((len - 1) / blocksize)) + 1);
BlockLocation[] blocks = genBlocks(r, nblocks, blocksize, len);
- try {
- return new LocatedFileStatus(new FileStatus(
- len, /* long length, */
- isDir, /* boolean isdir, */
- 1, /* int block_replication, */
- blocksize, /* long blocksize, */
- 0L, /* long modification_time, */
- 0L, /* long access_time, */
- null, /* FsPermission permission, */
- "hadoop", /* String owner, */
- "hadoop", /* String group, */
- name), /* Path path */
- blocks);
- } catch (IOException e) {
- throw new RuntimeException(e);
- }
+ return new LocatedFileStatus(new FileStatus(
+ len, /* long length, */
+ isDir, /* boolean isdir, */
+ 1, /* int block_replication, */
+ blocksize, /* long blocksize, */
+ 0L, /* long modification_time, */
+ 0L, /* long access_time, */
+ null, /* FsPermission permission, */
+ "hadoop", /* String owner, */
+ "hadoop", /* String group, */
+ name), /* Path path */
+ blocks);
}
BlockLocation[] genBlocks(Random r, int nblocks, int blocksize, long len) {
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[37/50] [abbrv] hadoop git commit: HDFS-11673. [READ] Handle failures of Datanode with PROVIDED storage
Posted by vi...@apache.org.
HDFS-11673. [READ] Handle failures of Datanode with PROVIDED storage
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cf2ef643
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cf2ef643
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cf2ef643
Branch: refs/heads/HDFS-9806
Commit: cf2ef64392476d1bb91d6c5b8d1fa490fed93487
Parents: aa5b154
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Thu Jun 1 16:01:31 2017 -0700
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:58 2017 -0800
----------------------------------------------------------------------
.../hdfs/server/blockmanagement/BlockInfo.java | 12 +++-
.../server/blockmanagement/BlockManager.java | 5 +-
.../server/blockmanagement/BlockProvider.java | 18 +++--
.../blockmanagement/ProvidedStorageMap.java | 54 +++++++++++++--
.../blockmanagement/TestProvidedStorageMap.java | 10 ++-
.../TestNameNodeProvidedImplementation.java | 72 +++++++++++++++++++-
6 files changed, 150 insertions(+), 21 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/cf2ef643/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
index e9d235c..eb09b7b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfo.java
@@ -24,6 +24,7 @@ import java.util.NoSuchElementException;
import com.google.common.base.Preconditions;
import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.protocol.BlockType;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.BlockUCState;
@@ -188,8 +189,15 @@ public abstract class BlockInfo extends Block
int len = getCapacity();
for(int idx = 0; idx < len; idx++) {
DatanodeStorageInfo cur = getStorageInfo(idx);
- if(cur != null && cur.getDatanodeDescriptor() == dn) {
- return cur;
+ if(cur != null) {
+ if (cur.getStorageType() == StorageType.PROVIDED) {
+ //if block resides on provided storage, only match the storage ids
+ if (dn.getStorageInfo(cur.getStorageID()) != null) {
+ return cur;
+ }
+ } else if (cur.getDatanodeDescriptor() == dn) {
+ return cur;
+ }
}
}
return null;
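The hunk above changes how `BlockInfo#findStorageInfo` matches a storage to a datanode: PROVIDED replicas are shared across datanodes, so they match by storage ID, while local (e.g. DISK) storage still matches only its owning datanode. A minimal standalone sketch of that rule, using hypothetical stand-in classes rather than the real HDFS types:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-ins (not the real HDFS classes) illustrating the new matching
// rule in BlockInfo#findStorageInfo.
public class ProvidedMatchSketch {
  enum StorageType { DISK, PROVIDED }

  static class Storage {
    final String storageId;
    final StorageType type;
    final Datanode owner;
    Storage(String id, StorageType t, Datanode owner) {
      this.storageId = id; this.type = t; this.owner = owner;
    }
  }

  static class Datanode {
    final Map<String, Storage> storages = new HashMap<>();
    void attach(Storage s) { storages.put(s.storageId, s); }
  }

  // Mirrors the patched loop body: a PROVIDED storage matches any
  // datanode that knows its storage ID; local storage must belong to
  // exactly this datanode.
  static boolean matches(Storage cur, Datanode dn) {
    if (cur.type == StorageType.PROVIDED) {
      return dn.storages.containsKey(cur.storageId);
    }
    return cur.owner == dn;
  }

  public static void main(String[] args) {
    Datanode dn1 = new Datanode();
    Datanode dn2 = new Datanode();
    Storage provided = new Storage("PROVIDED-1", StorageType.PROVIDED, dn1);
    dn1.attach(provided);
    dn2.attach(provided);  // same provided storage injected into both DNs
    Storage disk = new Storage("DISK-1", StorageType.DISK, dn1);

    System.out.println(matches(provided, dn2)); // ID match across DNs
    System.out.println(matches(disk, dn2));     // owner mismatch
  }
}
```

The identity comparison for local storage is unchanged from the pre-patch code; only the PROVIDED branch is new.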
http://git-wip-us.apache.org/repos/asf/hadoop/blob/cf2ef643/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index df5d23a..38dcad2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -1504,6 +1504,7 @@ public class BlockManager implements BlockStatsMXBean {
/** Remove the blocks associated to the given datanode. */
void removeBlocksAssociatedTo(final DatanodeDescriptor node) {
+ providedStorageMap.removeDatanode(node);
for (DatanodeStorageInfo storage : node.getStorageInfos()) {
final Iterator<BlockInfo> it = storage.getBlockIterator();
//add the BlockInfos to a new collection as the
@@ -2452,7 +2453,7 @@ public class BlockManager implements BlockStatsMXBean {
// !#! Register DN with provided storage, not with storage owned by DN
// !#! DN should still have a ref to the DNStorageInfo
DatanodeStorageInfo storageInfo =
- providedStorageMap.getStorage(node, storage);
+ providedStorageMap.getStorage(node, storage, context);
if (storageInfo == null) {
// We handle this for backwards compatibility.
@@ -2579,7 +2580,7 @@ public class BlockManager implements BlockStatsMXBean {
}
}
- private Collection<Block> processReport(
+ Collection<Block> processReport(
final DatanodeStorageInfo storageInfo,
final BlockListAsLongs report,
BlockReportContext context) throws IOException {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/cf2ef643/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java
index d8bed16..2214868 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hdfs.server.blockmanagement;
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.server.blockmanagement.ProvidedStorageMap.ProvidedBlockList;
+import org.apache.hadoop.hdfs.server.protocol.BlockReportContext;
import org.apache.hadoop.hdfs.util.RwLock;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -52,14 +53,23 @@ public abstract class BlockProvider implements Iterable<Block> {
* start the processing of block report for provided blocks.
* @throws IOException
*/
- void start() throws IOException {
+ void start(BlockReportContext context) throws IOException {
assert lock.hasWriteLock() : "Not holding write lock";
if (hasDNs) {
return;
}
- LOG.info("Calling process first blk report from storage: " + storage);
- // first pass; periodic refresh should call bm.processReport
- bm.processFirstBlockReport(storage, new ProvidedBlockList(iterator()));
+ if (storage.getBlockReportCount() == 0) {
+ LOG.info("Calling process first blk report from storage: " + storage);
+ // first pass; periodic refresh should call bm.processReport
+ bm.processFirstBlockReport(storage, new ProvidedBlockList(iterator()));
+ } else {
+ bm.processReport(storage, new ProvidedBlockList(iterator()), context);
+ }
hasDNs = true;
}
+
+ void stop() {
+ assert lock.hasWriteLock() : "Not holding write lock";
+ hasDNs = false;
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/cf2ef643/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
index 0faf16d..5717e0c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
@@ -40,6 +40,7 @@ import org.apache.hadoop.hdfs.protocol.DatanodeInfoWithStorage;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.server.protocol.BlockReportContext;
import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State;
import org.apache.hadoop.hdfs.util.RwLock;
@@ -103,17 +104,18 @@ public class ProvidedStorageMap {
/**
* @param dn datanode descriptor
* @param s data node storage
+ * @param context the block report context
* @return the {@link DatanodeStorageInfo} for the specified datanode.
* If {@code s} corresponds to a provided storage, the storage info
* representing provided storage is returned.
* @throws IOException
*/
- DatanodeStorageInfo getStorage(DatanodeDescriptor dn, DatanodeStorage s)
- throws IOException {
+ DatanodeStorageInfo getStorage(DatanodeDescriptor dn, DatanodeStorage s,
+ BlockReportContext context) throws IOException {
if (providedEnabled && storageId.equals(s.getStorageID())) {
if (StorageType.PROVIDED.equals(s.getStorageType())) {
// poll service, initiate
- blockProvider.start();
+ blockProvider.start(context);
dn.injectStorage(providedStorageInfo);
return providedDescriptor.getProvidedStorage(dn, s);
}
@@ -134,6 +136,15 @@ public class ProvidedStorageMap {
return new ProvidedBlocksBuilder(maxValue);
}
+ public void removeDatanode(DatanodeDescriptor dnToRemove) {
+ if (providedDescriptor != null) {
+ int remainingDatanodes = providedDescriptor.remove(dnToRemove);
+ if (remainingDatanodes == 0) {
+ blockProvider.stop();
+ }
+ }
+ }
+
/**
* Builder used for creating {@link LocatedBlocks} when a block is provided.
*/
@@ -282,7 +293,7 @@ public class ProvidedStorageMap {
DatanodeStorageInfo createProvidedStorage(DatanodeStorage ds) {
assert null == storageMap.get(ds.getStorageID());
- DatanodeStorageInfo storage = new DatanodeStorageInfo(this, ds);
+ DatanodeStorageInfo storage = new ProvidedDatanodeStorageInfo(this, ds);
storage.setHeartbeatedSinceFailover(true);
storageMap.put(storage.getStorageID(), storage);
return storage;
@@ -381,6 +392,22 @@ public class ProvidedStorageMap {
}
}
+ int remove(DatanodeDescriptor dnToRemove) {
+ // this operation happens under the FSNamesystem lock;
+ // no additional synchronization required.
+ if (dnToRemove != null) {
+ DatanodeDescriptor storedDN = dns.get(dnToRemove.getDatanodeUuid());
+ if (storedDN != null) {
+ dns.remove(dnToRemove.getDatanodeUuid());
+ }
+ }
+ return dns.size();
+ }
+
+ int activeProvidedDatanodes() {
+ return dns.size();
+ }
+
@Override
public boolean equals(Object obj) {
return (this == obj) || super.equals(obj);
@@ -393,6 +420,25 @@ public class ProvidedStorageMap {
}
/**
+ * The DatanodeStorageInfo used for the provided storage.
+ */
+ static class ProvidedDatanodeStorageInfo extends DatanodeStorageInfo {
+
+ ProvidedDatanodeStorageInfo(ProvidedDescriptor dn, DatanodeStorage ds) {
+ super(dn, ds);
+ }
+
+ @Override
+ boolean removeBlock(BlockInfo b) {
+ ProvidedDescriptor dn = (ProvidedDescriptor) getDatanodeDescriptor();
+ if (dn.activeProvidedDatanodes() == 0) {
+ return super.removeBlock(b);
+ } else {
+ return false;
+ }
+ }
+ }
+ /**
* Used to emulate block reports for provided blocks.
*/
static class ProvidedBlockList extends BlockListAsLongs {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/cf2ef643/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
index 50e2fed..2296c82 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
@@ -119,9 +119,9 @@ public class TestProvidedStorageMap {
when(nameSystemLock.hasWriteLock()).thenReturn(true);
DatanodeStorageInfo dns1Provided = providedMap.getStorage(dn1,
- dn1ProvidedStorage);
+ dn1ProvidedStorage, null);
DatanodeStorageInfo dns1Disk = providedMap.getStorage(dn1,
- dn1DiskStorage);
+ dn1DiskStorage, null);
assertTrue("The provided storages should be equal",
dns1Provided == providedMapStorage);
@@ -131,7 +131,7 @@ public class TestProvidedStorageMap {
DatanodeStorageInfo dnsDisk = new DatanodeStorageInfo(dn1, dn1DiskStorage);
dn1.injectStorage(dnsDisk);
assertTrue("Disk storage must match the injected storage info",
- dnsDisk == providedMap.getStorage(dn1, dn1DiskStorage));
+ dnsDisk == providedMap.getStorage(dn1, dn1DiskStorage, null));
//create a 2nd datanode
DatanodeDescriptor dn2 = createDatanodeDescriptor(5010);
@@ -142,12 +142,10 @@ public class TestProvidedStorageMap {
StorageType.PROVIDED);
DatanodeStorageInfo dns2Provided = providedMap.getStorage(
- dn2, dn2ProvidedStorage);
+ dn2, dn2ProvidedStorage, null);
assertTrue("The provided storages should be equal",
dns2Provided == providedMapStorage);
assertTrue("The DatanodeDescriptor should contain the provided storage",
dn2.getStorageInfo(providedStorageID) == providedMapStorage);
-
-
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/cf2ef643/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index e171557..60b306f 100644
--- a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -45,11 +45,14 @@ import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockFormatProvider;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerTestUtil;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockProvider;
import org.apache.hadoop.hdfs.server.common.BlockFormat;
import org.apache.hadoop.hdfs.server.common.FileRegionProvider;
import org.apache.hadoop.hdfs.server.common.TextFileRegionFormat;
import org.apache.hadoop.hdfs.server.common.TextFileRegionProvider;
+import org.apache.hadoop.hdfs.server.datanode.DataNode;
+
import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY;
import org.junit.After;
@@ -406,9 +409,9 @@ public class TestNameNodeProvidedImplementation {
createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
FixedBlockResolver.class);
startCluster(NNDIRPATH, 2, null,
- new StorageType[][] {
- {StorageType.PROVIDED},
- {StorageType.DISK}},
+ new StorageType[][]{
+ {StorageType.PROVIDED},
+ {StorageType.DISK}},
false);
String filename = "/" + filePrefix + (numFiles - 1) + fileSuffix;
@@ -433,4 +436,67 @@ public class TestNameNodeProvidedImplementation {
assertEquals(cluster.getDataNodes().get(0).getDatanodeUuid(),
infos[0].getDatanodeUuid());
}
+
+ @Test
+ public void testProvidedDatanodeFailures() throws Exception {
+ createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
+ FixedBlockResolver.class);
+ startCluster(NNDIRPATH, 3, null,
+ new StorageType[][] {
+ {StorageType.PROVIDED},
+ {StorageType.PROVIDED},
+ {StorageType.DISK}},
+ false);
+
+ DataNode providedDatanode1 = cluster.getDataNodes().get(0);
+ DataNode providedDatanode2 = cluster.getDataNodes().get(1);
+
+ DFSClient client = new DFSClient(new InetSocketAddress("localhost",
+ cluster.getNameNodePort()), cluster.getConfiguration(0));
+
+ if (numFiles >= 1) {
+ String filename = "/" + filePrefix + (numFiles - 1) + fileSuffix;
+
+ DatanodeInfo[] dnInfos = getAndCheckBlockLocations(client, filename, 1);
+ //the location should be one of the provided DNs available
+ assertTrue(
+ dnInfos[0].getDatanodeUuid().equals(
+ providedDatanode1.getDatanodeUuid())
+ || dnInfos[0].getDatanodeUuid().equals(
+ providedDatanode2.getDatanodeUuid()));
+
+ //stop the 1st provided datanode
+ MiniDFSCluster.DataNodeProperties providedDNProperties1 =
+ cluster.stopDataNode(0);
+
+ //make NameNode detect that datanode is down
+ BlockManagerTestUtil.noticeDeadDatanode(
+ cluster.getNameNode(),
+ providedDatanode1.getDatanodeId().getXferAddr());
+
+ //should find the block on the 2nd provided datanode
+ dnInfos = getAndCheckBlockLocations(client, filename, 1);
+ assertEquals(providedDatanode2.getDatanodeUuid(),
+ dnInfos[0].getDatanodeUuid());
+
+ //stop the 2nd provided datanode
+ cluster.stopDataNode(1);
+ // make NameNode detect that datanode is down
+ BlockManagerTestUtil.noticeDeadDatanode(
+ cluster.getNameNode(),
+ providedDatanode2.getDatanodeId().getXferAddr());
+
+ getAndCheckBlockLocations(client, filename, 0);
+
+ //restart the provided datanode
+ cluster.restartDataNode(providedDNProperties1, true);
+ cluster.waitActive();
+
+ //should find the block on the 1st provided datanode now
+ dnInfos = getAndCheckBlockLocations(client, filename, 1);
+ //not comparing UUIDs as the datanode can now have a different one.
+ assertEquals(providedDatanode1.getDatanodeId().getXferAddr(),
+ dnInfos[0].getXferAddr());
+ }
+ }
}
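The failure-handling rule this commit adds (see `ProvidedStorageMap#removeDatanode` and `ProvidedDatanodeStorageInfo#removeBlock` above, and the test exercising it) is: blocks on PROVIDED storage survive individual datanode failures and are only dropped once the last datanode with PROVIDED storage is gone. A hedged sketch with hypothetical names, independent of the HDFS classes:

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch (hypothetical names) of the commit's semantics:
// provided blocks persist while any provided DN is alive, and are
// purged only when the last one is removed.
public class ProvidedFailoverSketch {
  private final Set<String> activeProvidedDatanodes = new HashSet<>();
  private final Set<String> providedBlocks = new HashSet<>();

  void register(String dnUuid) { activeProvidedDatanodes.add(dnUuid); }
  void addBlock(String blockId) { providedBlocks.add(blockId); }

  // Mirrors removeDatanode + the removeBlock override: removing a DN
  // only clears provided blocks once no provided DN remains.
  void removeDatanode(String dnUuid) {
    activeProvidedDatanodes.remove(dnUuid);
    if (activeProvidedDatanodes.isEmpty()) {
      providedBlocks.clear();
    }
  }

  int blockCount() { return providedBlocks.size(); }

  public static void main(String[] args) {
    ProvidedFailoverSketch map = new ProvidedFailoverSketch();
    map.register("dn1");
    map.register("dn2");
    map.addBlock("blk-1");
    map.removeDatanode("dn1");
    System.out.println(map.blockCount()); // 1: one provided DN still alive
    map.removeDatanode("dn2");
    System.out.println(map.blockCount()); // 0: last provided DN removed
  }
}
```

This matches the flow in `testProvidedDatanodeFailures` above: the block stays locatable after the first datanode stops and only disappears after the second one does.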
[50/50] [abbrv] hadoop git commit: HDFS-12665. [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb)
Posted by vi...@apache.org.
HDFS-12665. [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb)
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/36957f0d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/36957f0d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/36957f0d
Branch: refs/heads/HDFS-9806
Commit: 36957f0d20a1caeb389dbc11806891108942c9ea
Parents: 8da735e
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Thu Nov 30 10:37:28 2017 -0800
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:18:48 2017 -0800
----------------------------------------------------------------------
.../hdfs/protocol/ProvidedStorageLocation.java | 85 +++++
.../hadoop/hdfs/protocolPB/PBHelperClient.java | 32 ++
.../src/main/proto/hdfs.proto | 14 +
hadoop-hdfs-project/hadoop-hdfs/pom.xml | 7 +-
.../org/apache/hadoop/hdfs/DFSConfigKeys.java | 9 +
.../hdfs/protocolPB/AliasMapProtocolPB.java | 35 ++
.../AliasMapProtocolServerSideTranslatorPB.java | 120 +++++++
...yAliasMapProtocolClientSideTranslatorPB.java | 159 +++++++++
.../apache/hadoop/hdfs/protocolPB/PBHelper.java | 28 ++
.../hdfs/server/aliasmap/InMemoryAliasMap.java | 213 ++++++++++++
.../aliasmap/InMemoryAliasMapProtocol.java | 92 +++++
.../aliasmap/InMemoryLevelDBAliasMapServer.java | 141 ++++++++
.../hadoop/hdfs/server/common/FileRegion.java | 89 ++---
.../common/blockaliasmap/BlockAliasMap.java | 19 +-
.../impl/InMemoryLevelDBAliasMapClient.java | 156 +++++++++
.../impl/TextFileRegionAliasMap.java | 40 ++-
.../datanode/FinalizedProvidedReplica.java | 11 +
.../hdfs/server/datanode/ReplicaBuilder.java | 7 +-
.../fsdataset/impl/ProvidedVolumeImpl.java | 38 +--
.../hadoop/hdfs/server/namenode/NameNode.java | 21 ++
.../src/main/proto/AliasMapProtocol.proto | 60 ++++
.../src/main/resources/hdfs-default.xml | 34 ++
.../server/aliasmap/ITestInMemoryAliasMap.java | 126 +++++++
.../server/aliasmap/TestInMemoryAliasMap.java | 45 +++
.../blockmanagement/TestProvidedStorageMap.java | 1 -
.../impl/TestInMemoryLevelDBAliasMapClient.java | 341 +++++++++++++++++++
.../impl/TestLevelDbMockAliasMapClient.java | 116 +++++++
.../fsdataset/impl/TestProvidedImpl.java | 9 +-
hadoop-project/pom.xml | 8 +-
hadoop-tools/hadoop-fs2img/pom.xml | 6 +
.../hdfs/server/namenode/NullBlockAliasMap.java | 9 +-
.../TestNameNodeProvidedImplementation.java | 65 +++-
32 files changed, 2016 insertions(+), 120 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ProvidedStorageLocation.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ProvidedStorageLocation.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ProvidedStorageLocation.java
new file mode 100644
index 0000000..eee58ba
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ProvidedStorageLocation.java
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.protocol;
+
+import org.apache.hadoop.fs.Path;
+
+import javax.annotation.Nonnull;
+import java.util.Arrays;
+
+/**
+ * ProvidedStorageLocation is a location in an external storage system
+ * containing the data for a block (~Replica).
+ */
+public class ProvidedStorageLocation {
+ private final Path path;
+ private final long offset;
+ private final long length;
+ private final byte[] nonce;
+
+ public ProvidedStorageLocation(Path path, long offset, long length,
+ byte[] nonce) {
+ this.path = path;
+ this.offset = offset;
+ this.length = length;
+ this.nonce = Arrays.copyOf(nonce, nonce.length);
+ }
+
+ public @Nonnull Path getPath() {
+ return path;
+ }
+
+ public long getOffset() {
+ return offset;
+ }
+
+ public long getLength() {
+ return length;
+ }
+
+ public @Nonnull byte[] getNonce() {
+ // create a copy of the nonce and return it.
+ return Arrays.copyOf(nonce, nonce.length);
+ }
+
+ @Override
+ public boolean equals(Object o) {
+ if (this == o) {
+ return true;
+ }
+ if (o == null || getClass() != o.getClass()) {
+ return false;
+ }
+
+ ProvidedStorageLocation that = (ProvidedStorageLocation) o;
+
+ if ((offset != that.offset) || (length != that.length)
+ || !path.equals(that.path)) {
+ return false;
+ }
+ return Arrays.equals(nonce, that.nonce);
+ }
+
+ @Override
+ public int hashCode() {
+ int result = path.hashCode();
+ result = 31 * result + (int) (offset ^ (offset >>> 32));
+ result = 31 * result + (int) (length ^ (length >>> 32));
+ result = 31 * result + Arrays.hashCode(nonce);
+ return result;
+ }
+}
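The value semantics of `ProvidedStorageLocation` above hinge on one detail: the nonce byte array is defensively copied both in the constructor and in `getNonce()`, so callers can never mutate the stored state. A runnable sketch of the same pattern, substituting a plain `String` for Hadoop's `Path` so it compiles without HDFS on the classpath:

```java
import java.util.Arrays;

// Standalone sketch of ProvidedStorageLocation's value semantics.
// A String stands in for org.apache.hadoop.fs.Path (a simplification).
public class StorageLocationSketch {
  private final String path;
  private final long offset;
  private final long length;
  private final byte[] nonce;

  public StorageLocationSketch(String path, long offset, long length,
      byte[] nonce) {
    this.path = path;
    this.offset = offset;
    this.length = length;
    this.nonce = Arrays.copyOf(nonce, nonce.length); // caller can't mutate
  }

  public byte[] getNonce() {
    return Arrays.copyOf(nonce, nonce.length);       // reader can't mutate
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof StorageLocationSketch)) return false;
    StorageLocationSketch that = (StorageLocationSketch) o;
    return offset == that.offset && length == that.length
        && path.equals(that.path) && Arrays.equals(nonce, that.nonce);
  }

  @Override
  public int hashCode() {
    int result = path.hashCode();
    result = 31 * result + Long.hashCode(offset);
    result = 31 * result + Long.hashCode(length);
    result = 31 * result + Arrays.hashCode(nonce);
    return result;
  }

  public static void main(String[] args) {
    byte[] nonce = {1, 2, 3};
    StorageLocationSketch a =
        new StorageLocationSketch("/data/blk1", 0L, 128L, nonce);
    nonce[0] = 9; // later mutation does not leak into the instance
    StorageLocationSketch b =
        new StorageLocationSketch("/data/blk1", 0L, 128L, new byte[]{1, 2, 3});
    System.out.println(a.equals(b)); // defensive copy preserved {1,2,3}
  }
}
```

Without the defensive copies, a caller holding the original array could silently change the nonce after construction, breaking `equals`/`hashCode` consistency for any map keyed on these objects.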
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
index 460112e..74fe34c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
@@ -97,6 +97,7 @@ import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
import org.apache.hadoop.hdfs.protocol.LocatedStripedBlock;
import org.apache.hadoop.hdfs.protocol.ReplicatedBlockStats;
import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
+import org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation;
import org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo;
import org.apache.hadoop.hdfs.protocol.RollingUpgradeStatus;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;
@@ -3242,4 +3243,35 @@ public class PBHelperClient {
}
return ret;
}
+
+ public static ProvidedStorageLocation convert(
+ HdfsProtos.ProvidedStorageLocationProto providedStorageLocationProto) {
+ if (providedStorageLocationProto == null) {
+ return null;
+ }
+ String path = providedStorageLocationProto.getPath();
+ long length = providedStorageLocationProto.getLength();
+ long offset = providedStorageLocationProto.getOffset();
+ ByteString nonce = providedStorageLocationProto.getNonce();
+
+ if (path == null || length == -1 || offset == -1 || nonce == null) {
+ return null;
+ } else {
+ return new ProvidedStorageLocation(new Path(path), offset, length,
+ nonce.toByteArray());
+ }
+ }
+
+ public static HdfsProtos.ProvidedStorageLocationProto convert(
+ ProvidedStorageLocation providedStorageLocation) {
+ String path = providedStorageLocation.getPath().toString();
+ return HdfsProtos.ProvidedStorageLocationProto.newBuilder()
+ .setPath(path)
+ .setLength(providedStorageLocation.getLength())
+ .setOffset(providedStorageLocation.getOffset())
+ .setNonce(ByteString.copyFrom(providedStorageLocation.getNonce()))
+ .build();
+ }
+
+
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
index 06578ca..e841975 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
@@ -45,6 +45,20 @@ message ExtendedBlockProto {
// here for historical reasons
}
+
+/**
+* ProvidedStorageLocation will contain the exact location in the provided
+ storage. The path, offset and length will result in ranged read. The nonce
+ is there to verify that you receive what you expect.
+*/
+
+message ProvidedStorageLocationProto {
+ required string path = 1;
+ required int64 offset = 2;
+ required int64 length = 3;
+ required bytes nonce = 4;
+}
+
/**
* Identifies a Datanode
*/
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
index 65eea31..b647923 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
@@ -191,7 +191,6 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
<dependency>
<groupId>org.fusesource.leveldbjni</groupId>
<artifactId>leveldbjni-all</artifactId>
- <version>1.8</version>
</dependency>
<!-- 'mvn dependency:analyze' fails to detect use of this dependency -->
<dependency>
@@ -208,6 +207,11 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
<artifactId>curator-test</artifactId>
<scope>test</scope>
</dependency>
+ <dependency>
+ <groupId>org.assertj</groupId>
+ <artifactId>assertj-core</artifactId>
+ <scope>test</scope>
+ </dependency>
</dependencies>
<build>
@@ -341,6 +345,7 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
<include>fsimage.proto</include>
<include>FederationProtocol.proto</include>
<include>RouterProtocol.proto</include>
+ <include>AliasMapProtocol.proto</include>
</includes>
</source>
</configuration>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index fbdc859..00976f9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -95,6 +95,14 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
HdfsClientConfigKeys.DeprecatedKeys.DFS_NAMENODE_BACKUP_HTTP_ADDRESS_KEY;
public static final String DFS_NAMENODE_BACKUP_HTTP_ADDRESS_DEFAULT = "0.0.0.0:50105";
public static final String DFS_NAMENODE_BACKUP_SERVICE_RPC_ADDRESS_KEY = "dfs.namenode.backup.dnrpc-address";
+ public static final String DFS_PROVIDED_ALIASMAP_INMEMORY_RPC_ADDRESS = "dfs.provided.aliasmap.inmemory.dnrpc-address";
+ public static final String DFS_PROVIDED_ALIASMAP_INMEMORY_RPC_ADDRESS_DEFAULT = "0.0.0.0:50200";
+ public static final String DFS_PROVIDED_ALIASMAP_INMEMORY_LEVELDB_DIR = "dfs.provided.aliasmap.inmemory.leveldb.dir";
+ public static final String DFS_PROVIDED_ALIASMAP_INMEMORY_BATCH_SIZE = "dfs.provided.aliasmap.inmemory.batch-size";
+ public static final int DFS_PROVIDED_ALIASMAP_INMEMORY_BATCH_SIZE_DEFAULT = 500;
+ public static final String DFS_PROVIDED_ALIASMAP_INMEMORY_ENABLED = "dfs.provided.aliasmap.inmemory.enabled";
+ public static final boolean DFS_PROVIDED_ALIASMAP_INMEMORY_ENABLED_DEFAULT = false;
+
public static final String DFS_DATANODE_BALANCE_BANDWIDTHPERSEC_KEY =
HdfsClientConfigKeys.DeprecatedKeys.DFS_DATANODE_BALANCE_BANDWIDTHPERSEC_KEY;
public static final long DFS_DATANODE_BALANCE_BANDWIDTHPERSEC_DEFAULT =
@@ -1633,4 +1641,5 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
@Deprecated
public static final long DFS_CLIENT_KEY_PROVIDER_CACHE_EXPIRY_DEFAULT =
HdfsClientConfigKeys.DFS_CLIENT_KEY_PROVIDER_CACHE_EXPIRY_DEFAULT;
+
}
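The new `DFSConfigKeys` entries above follow Hadoop's usual key/default pairing (the in-memory alias map ships disabled with a batch size of 500). A hedged sketch of how such keys are typically consumed; `Conf` here is a tiny stand-in for Hadoop's `Configuration`, while the key strings are copied verbatim from the diff:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for org.apache.hadoop.conf.Configuration, illustrating how
// the new alias-map keys and their defaults are read.
public class AliasMapConfSketch {
  static class Conf {
    private final Map<String, String> values = new HashMap<>();
    void set(String key, String value) { values.put(key, value); }
    boolean getBoolean(String key, boolean dflt) {
      String v = values.get(key);
      return v == null ? dflt : Boolean.parseBoolean(v);
    }
    int getInt(String key, int dflt) {
      String v = values.get(key);
      return v == null ? dflt : Integer.parseInt(v);
    }
  }

  // Key names copied from the DFSConfigKeys hunk above.
  static final String ENABLED = "dfs.provided.aliasmap.inmemory.enabled";
  static final String BATCH_SIZE = "dfs.provided.aliasmap.inmemory.batch-size";

  public static void main(String[] args) {
    Conf conf = new Conf();
    // Defaults mirror the diff: feature off, batch size 500.
    System.out.println(conf.getBoolean(ENABLED, false)); // disabled by default
    conf.set(ENABLED, "true");
    System.out.println(conf.getBoolean(ENABLED, false)); // now enabled
    System.out.println(conf.getInt(BATCH_SIZE, 500));    // default batch size
  }
}
```

In the real NameNode, `DFS_PROVIDED_ALIASMAP_INMEMORY_ENABLED` gates whether the `InMemoryLevelDBAliasMapServer` listed in the change summary is started at all.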
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolPB.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolPB.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolPB.java
new file mode 100644
index 0000000..98b3ee1
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolPB.java
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.protocolPB;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.hdfs.protocol.proto.AliasMapProtocolProtos;
+import org.apache.hadoop.ipc.ProtocolInfo;
+
+/**
+ * Protocol between the Namenode and the Datanode to read the AliasMap
+ * used for Provided storage.
+ * TODO add Kerberos support
+ */
+@ProtocolInfo(
+ protocolName =
+ "org.apache.hadoop.hdfs.server.aliasmap.AliasMapProtocol",
+ protocolVersion = 1)
+@InterfaceAudience.Private
+public interface AliasMapProtocolPB extends
+ AliasMapProtocolProtos.AliasMapProtocolService.BlockingInterface {
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolServerSideTranslatorPB.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolServerSideTranslatorPB.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolServerSideTranslatorPB.java
new file mode 100644
index 0000000..808c43b
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/AliasMapProtocolServerSideTranslatorPB.java
@@ -0,0 +1,120 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.protocolPB;
+
+import com.google.protobuf.RpcController;
+import com.google.protobuf.ServiceException;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation;
+import org.apache.hadoop.hdfs.protocol.proto.AliasMapProtocolProtos.KeyValueProto;
+import org.apache.hadoop.hdfs.protocol.proto.AliasMapProtocolProtos.ReadResponseProto;
+import org.apache.hadoop.hdfs.protocol.proto.AliasMapProtocolProtos.WriteRequestProto;
+import org.apache.hadoop.hdfs.protocol.proto.AliasMapProtocolProtos.WriteResponseProto;
+import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto;
+import org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMapProtocol;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Optional;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.hdfs.protocol.proto.AliasMapProtocolProtos.*;
+import static org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap.*;
+
+/**
+ * AliasMapProtocolServerSideTranslatorPB is responsible for translating RPC
+ * calls and forwarding them to the internal InMemoryAliasMap.
+ */
+public class AliasMapProtocolServerSideTranslatorPB
+ implements AliasMapProtocolPB {
+
+ private final InMemoryAliasMapProtocol aliasMap;
+
+ public AliasMapProtocolServerSideTranslatorPB(
+ InMemoryAliasMapProtocol aliasMap) {
+ this.aliasMap = aliasMap;
+ }
+
+ private static final WriteResponseProto VOID_WRITE_RESPONSE =
+ WriteResponseProto.newBuilder().build();
+
+ @Override
+ public WriteResponseProto write(RpcController controller,
+ WriteRequestProto request) throws ServiceException {
+ try {
+ FileRegion toWrite =
+ PBHelper.convert(request.getKeyValuePair());
+
+ aliasMap.write(toWrite.getBlock(), toWrite.getProvidedStorageLocation());
+ return VOID_WRITE_RESPONSE;
+ } catch (IOException e) {
+ throw new ServiceException(e);
+ }
+ }
+
+ @Override
+ public ReadResponseProto read(RpcController controller,
+ ReadRequestProto request) throws ServiceException {
+ try {
+ Block toRead = PBHelperClient.convert(request.getKey());
+
+ Optional<ProvidedStorageLocation> optionalResult =
+ aliasMap.read(toRead);
+
+ ReadResponseProto.Builder builder = ReadResponseProto.newBuilder();
+ if (optionalResult.isPresent()) {
+ ProvidedStorageLocation providedStorageLocation = optionalResult.get();
+ builder.setValue(PBHelperClient.convert(providedStorageLocation));
+ }
+
+ return builder.build();
+ } catch (IOException e) {
+ throw new ServiceException(e);
+ }
+ }
+
+ @Override
+ public ListResponseProto list(RpcController controller,
+ ListRequestProto request) throws ServiceException {
+ try {
+ BlockProto marker = request.getMarker();
+ IterationResult iterationResult;
+ if (marker.isInitialized()) {
+ iterationResult =
+ aliasMap.list(Optional.of(PBHelperClient.convert(marker)));
+ } else {
+ iterationResult = aliasMap.list(Optional.empty());
+ }
+ ListResponseProto.Builder responseBuilder =
+ ListResponseProto.newBuilder();
+ List<FileRegion> fileRegions = iterationResult.getFileRegions();
+
+ List<KeyValueProto> keyValueProtos = fileRegions.stream()
+ .map(PBHelper::convert).collect(Collectors.toList());
+ responseBuilder.addAllFileRegions(keyValueProtos);
+ Optional<Block> nextMarker = iterationResult.getNextBlock();
+      nextMarker.ifPresent(
+          m -> responseBuilder.setNextMarker(PBHelperClient.convert(m)));
+
+ return responseBuilder.build();
+
+ } catch (IOException e) {
+ throw new ServiceException(e);
+ }
+ }
+}
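Every RPC method in the translator above follows the same shape: convert the proto request, delegate to the internal InMemoryAliasMapProtocol, and wrap any IOException in a ServiceException for the RPC layer. A minimal, self-contained sketch of that wrapping pattern (`ServiceFault` and `IoCall` are hypothetical stand-ins, not protobuf or Hadoop types):

```java
import java.io.IOException;

public class RpcTranslation {
  /** Hypothetical stand-in for com.google.protobuf.ServiceException. */
  public static class ServiceFault extends Exception {
    public ServiceFault(Throwable cause) { super(cause); }
  }

  /** An IO-throwing call, akin to an InMemoryAliasMapProtocol method. */
  @FunctionalInterface
  public interface IoCall<T> {
    T call() throws IOException;
  }

  /** Delegate to the implementation, translating IOException for RPC. */
  public static <T> T translate(IoCall<T> body) throws ServiceFault {
    try {
      return body.call();
    } catch (IOException e) {
      throw new ServiceFault(e);
    }
  }
}
```

The server-side translator applies this per method (write, read, list) rather than through a shared helper, but the invariant is the same: the RPC boundary never sees a raw IOException.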
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InMemoryAliasMapProtocolClientSideTranslatorPB.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InMemoryAliasMapProtocolClientSideTranslatorPB.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InMemoryAliasMapProtocolClientSideTranslatorPB.java
new file mode 100644
index 0000000..a79360f
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/InMemoryAliasMapProtocolClientSideTranslatorPB.java
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.protocolPB;
+
+import com.google.protobuf.ServiceException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation;
+import org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap;
+import org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMapProtocol;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.apache.hadoop.ipc.ProtobufHelper;
+import org.apache.hadoop.ipc.ProtobufRpcEngine;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.net.NetUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nonnull;
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.List;
+import java.util.Optional;
+import java.util.stream.Collectors;
+
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_RPC_ADDRESS;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_RPC_ADDRESS_DEFAULT;
+import static org.apache.hadoop.hdfs.protocol.proto.AliasMapProtocolProtos.*;
+import static org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.*;
+
+/**
+ * This class is the client side translator to translate requests made to the
+ * {@link InMemoryAliasMapProtocol} interface to the RPC server implementing
+ * {@link AliasMapProtocolPB}.
+ */
+public class InMemoryAliasMapProtocolClientSideTranslatorPB
+ implements InMemoryAliasMapProtocol {
+
+ private static final Logger LOG =
+ LoggerFactory
+ .getLogger(InMemoryAliasMapProtocolClientSideTranslatorPB.class);
+
+ private AliasMapProtocolPB rpcProxy;
+
+ public InMemoryAliasMapProtocolClientSideTranslatorPB(Configuration conf) {
+ String addr = conf.getTrimmed(DFS_PROVIDED_ALIASMAP_INMEMORY_RPC_ADDRESS,
+ DFS_PROVIDED_ALIASMAP_INMEMORY_RPC_ADDRESS_DEFAULT);
+ InetSocketAddress aliasMapAddr = NetUtils.createSocketAddr(addr);
+
+ RPC.setProtocolEngine(conf, AliasMapProtocolPB.class,
+ ProtobufRpcEngine.class);
+ LOG.info("Connecting to address: " + addr);
+ try {
+ rpcProxy = RPC.getProxy(AliasMapProtocolPB.class,
+ RPC.getProtocolVersion(AliasMapProtocolPB.class), aliasMapAddr, null,
+ conf, NetUtils.getDefaultSocketFactory(conf), 0);
+    } catch (IOException e) {
+      LOG.error("Failed to create an RPC proxy to the alias map at "
+          + addr, e);
+      throw new RuntimeException(e);
+    }
+ }
+
+ @Override
+ public InMemoryAliasMap.IterationResult list(Optional<Block> marker)
+ throws IOException {
+ ListRequestProto.Builder builder = ListRequestProto.newBuilder();
+ if (marker.isPresent()) {
+ builder.setMarker(PBHelperClient.convert(marker.get()));
+ }
+ ListRequestProto request = builder.build();
+ try {
+ ListResponseProto response = rpcProxy.list(null, request);
+ List<KeyValueProto> fileRegionsList = response.getFileRegionsList();
+
+ List<FileRegion> fileRegions = fileRegionsList
+ .stream()
+ .map(kv -> new FileRegion(
+ PBHelperClient.convert(kv.getKey()),
+ PBHelperClient.convert(kv.getValue()),
+ null
+ ))
+ .collect(Collectors.toList());
+ BlockProto nextMarker = response.getNextMarker();
+
+ if (nextMarker.isInitialized()) {
+ return new InMemoryAliasMap.IterationResult(fileRegions,
+ Optional.of(PBHelperClient.convert(nextMarker)));
+ } else {
+ return new InMemoryAliasMap.IterationResult(fileRegions,
+ Optional.empty());
+ }
+
+ } catch (ServiceException e) {
+ throw ProtobufHelper.getRemoteException(e);
+ }
+ }
+
+ @Nonnull
+ @Override
+ public Optional<ProvidedStorageLocation> read(@Nonnull Block block)
+ throws IOException {
+
+ ReadRequestProto request =
+ ReadRequestProto
+ .newBuilder()
+ .setKey(PBHelperClient.convert(block))
+ .build();
+ try {
+ ReadResponseProto response = rpcProxy.read(null, request);
+
+ ProvidedStorageLocationProto providedStorageLocation =
+ response.getValue();
+ if (providedStorageLocation.isInitialized()) {
+ return Optional.of(PBHelperClient.convert(providedStorageLocation));
+ }
+ return Optional.empty();
+
+ } catch (ServiceException e) {
+ throw ProtobufHelper.getRemoteException(e);
+ }
+ }
+
+ @Override
+ public void write(@Nonnull Block block,
+ @Nonnull ProvidedStorageLocation providedStorageLocation)
+ throws IOException {
+ WriteRequestProto request =
+ WriteRequestProto
+ .newBuilder()
+ .setKeyValuePair(KeyValueProto.newBuilder()
+ .setKey(PBHelperClient.convert(block))
+ .setValue(PBHelperClient.convert(providedStorageLocation))
+ .build())
+ .build();
+
+ try {
+ rpcProxy.write(null, request);
+ } catch (ServiceException e) {
+ throw ProtobufHelper.getRemoteException(e);
+ }
+ }
+
+ public void stop() {
+ RPC.stopProxy(rpcProxy);
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
index 6539d32..2952a5b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
@@ -36,6 +36,8 @@ import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation;
+import org.apache.hadoop.hdfs.protocol.proto.AliasMapProtocolProtos.KeyValueProto;
import org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.BalancerBandwidthCommandProto;
import org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.BlockCommandProto;
import org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos.BlockECReconstructionCommandProto;
@@ -56,6 +58,7 @@ import org.apache.hadoop.hdfs.protocol.proto.ErasureCodingProtos.BlockECReconstr
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.ExtendedBlockProto;
+import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.ProvidedStorageLocationProto;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StorageUuidsProto;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfosProto;
import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.LocatedBlockProto;
@@ -80,6 +83,7 @@ import org.apache.hadoop.hdfs.protocol.proto.HdfsServerProtos.StorageInfoProto;
import org.apache.hadoop.hdfs.protocol.proto.JournalProtocolProtos.JournalInfoProto;
import org.apache.hadoop.hdfs.security.token.block.BlockKey;
import org.apache.hadoop.hdfs.security.token.block.ExportedBlockKeys;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NamenodeRole;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NodeType;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
@@ -1096,4 +1100,28 @@ public class PBHelper {
DatanodeProtocol.DNA_ERASURE_CODING_RECONSTRUCTION,
blkECReconstructionInfos);
}
+
+ public static KeyValueProto convert(FileRegion fileRegion) {
+ return KeyValueProto
+ .newBuilder()
+ .setKey(PBHelperClient.convert(fileRegion.getBlock()))
+ .setValue(PBHelperClient.convert(
+ fileRegion.getProvidedStorageLocation()))
+ .build();
+ }
+
+ public static FileRegion
+ convert(KeyValueProto keyValueProto) {
+ BlockProto blockProto =
+ keyValueProto.getKey();
+ ProvidedStorageLocationProto providedStorageLocationProto =
+ keyValueProto.getValue();
+
+ Block block =
+ PBHelperClient.convert(blockProto);
+ ProvidedStorageLocation providedStorageLocation =
+ PBHelperClient.convert(providedStorageLocationProto);
+
+ return new FileRegion(block, providedStorageLocation, null);
+ }
}
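PBHelper gains a symmetric pair of convert methods, FileRegion to KeyValueProto and back. The invariant such pairs must keep is that a round trip is lossless for every wire-visible field; note that FileRegion's third constructor argument is not carried on the wire, which is why the reverse conversion passes null for it. A self-contained sketch of that invariant with plain classes (`FileSpan` and `WirePair` are illustrative names, not Hadoop or protobuf types):

```java
import java.util.Objects;

public class ConvertPair {
  /** Stand-in for FileRegion's wire-visible fields. */
  public static class FileSpan {
    public final long blockId;
    public final String location;
    public FileSpan(long blockId, String location) {
      this.blockId = blockId;
      this.location = location;
    }
    @Override public boolean equals(Object o) {
      if (!(o instanceof FileSpan)) return false;
      FileSpan f = (FileSpan) o;
      return blockId == f.blockId && Objects.equals(location, f.location);
    }
    @Override public int hashCode() { return Objects.hash(blockId, location); }
  }

  /** Stand-in for KeyValueProto: a key plus a value. */
  public static class WirePair {
    public final long key;
    public final String value;
    public WirePair(long key, String value) {
      this.key = key;
      this.value = value;
    }
  }

  public static WirePair convert(FileSpan span) {
    return new WirePair(span.blockId, span.location);
  }

  public static FileSpan convert(WirePair pair) {
    return new FileSpan(pair.key, pair.value);
  }
}
```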
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java
new file mode 100644
index 0000000..be891e5
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMap.java
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.aliasmap;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.Lists;
+import com.google.protobuf.InvalidProtocolBufferException;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation;
+import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto;
+import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.ProvidedStorageLocationProto;
+import org.apache.hadoop.hdfs.protocolPB.PBHelperClient;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.fusesource.leveldbjni.JniDBFactory;
+import org.iq80.leveldb.DB;
+import org.iq80.leveldb.DBIterator;
+import org.iq80.leveldb.Options;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nonnull;
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.Optional;
+
+/**
+ * InMemoryAliasMap is an implementation of the InMemoryAliasMapProtocol for
+ * use with LevelDB.
+ */
+public class InMemoryAliasMap implements InMemoryAliasMapProtocol,
+ Configurable {
+
+ private static final Logger LOG = LoggerFactory
+ .getLogger(InMemoryAliasMap.class);
+
+ private final DB levelDb;
+ private Configuration conf;
+
+ @Override
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ }
+
+ @Override
+ public Configuration getConf() {
+ return this.conf;
+ }
+
+ @VisibleForTesting
+ static String createPathErrorMessage(String directory) {
+ return new StringBuilder()
+ .append("Configured directory '")
+ .append(directory)
+ .append("' doesn't exist")
+ .toString();
+ }
+
+ public static @Nonnull InMemoryAliasMap init(Configuration conf)
+ throws IOException {
+ Options options = new Options();
+ options.createIfMissing(true);
+ String directory =
+ conf.get(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_LEVELDB_DIR);
+ LOG.info("Attempting to load InMemoryAliasMap from \"{}\"", directory);
+ File path = new File(directory);
+ if (!path.exists()) {
+ String error = createPathErrorMessage(directory);
+ throw new IOException(error);
+ }
+ DB levelDb = JniDBFactory.factory.open(path, options);
+ InMemoryAliasMap aliasMap = new InMemoryAliasMap(levelDb);
+ aliasMap.setConf(conf);
+ return aliasMap;
+ }
+
+ @VisibleForTesting
+ InMemoryAliasMap(DB levelDb) {
+ this.levelDb = levelDb;
+ }
+
+ @Override
+ public IterationResult list(Optional<Block> marker) throws IOException {
+ return withIterator((DBIterator iterator) -> {
+ Integer batchSize =
+ conf.getInt(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_BATCH_SIZE,
+ DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_BATCH_SIZE_DEFAULT);
+ if (marker.isPresent()) {
+ iterator.seek(toProtoBufBytes(marker.get()));
+ } else {
+ iterator.seekToFirst();
+ }
+ int i = 0;
+ ArrayList<FileRegion> batch =
+ Lists.newArrayListWithExpectedSize(batchSize);
+ while (iterator.hasNext() && i < batchSize) {
+ Map.Entry<byte[], byte[]> entry = iterator.next();
+ Block block = fromBlockBytes(entry.getKey());
+ ProvidedStorageLocation providedStorageLocation =
+ fromProvidedStorageLocationBytes(entry.getValue());
+ batch.add(new FileRegion(block, providedStorageLocation, null));
+ ++i;
+ }
+ if (iterator.hasNext()) {
+ Block nextMarker = fromBlockBytes(iterator.next().getKey());
+ return new IterationResult(batch, Optional.of(nextMarker));
+ } else {
+ return new IterationResult(batch, Optional.empty());
+ }
+
+ });
+ }
+
+ public @Nonnull Optional<ProvidedStorageLocation> read(@Nonnull Block block)
+ throws IOException {
+
+ byte[] extendedBlockDbFormat = toProtoBufBytes(block);
+ byte[] providedStorageLocationDbFormat = levelDb.get(extendedBlockDbFormat);
+ if (providedStorageLocationDbFormat == null) {
+ return Optional.empty();
+ } else {
+ ProvidedStorageLocation providedStorageLocation =
+ fromProvidedStorageLocationBytes(providedStorageLocationDbFormat);
+ return Optional.of(providedStorageLocation);
+ }
+ }
+
+ public void write(@Nonnull Block block,
+ @Nonnull ProvidedStorageLocation providedStorageLocation)
+ throws IOException {
+ byte[] extendedBlockDbFormat = toProtoBufBytes(block);
+ byte[] providedStorageLocationDbFormat =
+ toProtoBufBytes(providedStorageLocation);
+ levelDb.put(extendedBlockDbFormat, providedStorageLocationDbFormat);
+ }
+
+ public void close() throws IOException {
+ levelDb.close();
+ }
+
+ @Nonnull
+ public static ProvidedStorageLocation fromProvidedStorageLocationBytes(
+ @Nonnull byte[] providedStorageLocationDbFormat)
+ throws InvalidProtocolBufferException {
+ ProvidedStorageLocationProto providedStorageLocationProto =
+ ProvidedStorageLocationProto
+ .parseFrom(providedStorageLocationDbFormat);
+ return PBHelperClient.convert(providedStorageLocationProto);
+ }
+
+ @Nonnull
+ public static Block fromBlockBytes(@Nonnull byte[] blockDbFormat)
+ throws InvalidProtocolBufferException {
+ BlockProto blockProto = BlockProto.parseFrom(blockDbFormat);
+ return PBHelperClient.convert(blockProto);
+ }
+
+ public static byte[] toProtoBufBytes(@Nonnull ProvidedStorageLocation
+ providedStorageLocation) throws IOException {
+ ProvidedStorageLocationProto providedStorageLocationProto =
+ PBHelperClient.convert(providedStorageLocation);
+ ByteArrayOutputStream providedStorageLocationOutputStream =
+ new ByteArrayOutputStream();
+ providedStorageLocationProto.writeTo(providedStorageLocationOutputStream);
+ return providedStorageLocationOutputStream.toByteArray();
+ }
+
+ public static byte[] toProtoBufBytes(@Nonnull Block block)
+ throws IOException {
+ BlockProto blockProto =
+ PBHelperClient.convert(block);
+ ByteArrayOutputStream blockOutputStream = new ByteArrayOutputStream();
+ blockProto.writeTo(blockOutputStream);
+ return blockOutputStream.toByteArray();
+ }
+
+ private IterationResult withIterator(
+ CheckedFunction<DBIterator, IterationResult> func) throws IOException {
+ try (DBIterator iterator = levelDb.iterator()) {
+ return func.apply(iterator);
+ }
+ }
+
+ /**
+ * CheckedFunction is akin to {@link java.util.function.Function} but
+ * specifies an IOException.
+ * @param <T> Argument type.
+ * @param <R> Return type.
+ */
+ @FunctionalInterface
+ public interface CheckedFunction<T, R> {
+ R apply(T t) throws IOException;
+ }
+}
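The list() method above implements marker-based batching over the LevelDB iterator: seek to the marker (or to the first key), collect up to batch-size entries, and if more remain, return the next key as the marker for a later call. A self-contained sketch of the same scheme, with a TreeMap standing in for the LevelDB iterator (`BatchedScan` and `Page` are illustrative names, not Hadoop APIs):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.SortedMap;
import java.util.TreeMap;

public class BatchedScan {
  /** One page of values plus the key to resume from, like IterationResult. */
  public static class Page {
    public final List<String> values;
    public final Optional<Long> nextMarker;
    public Page(List<String> values, Optional<Long> nextMarker) {
      this.values = values;
      this.nextMarker = nextMarker;
    }
  }

  /** Collect up to batchSize entries at or after marker; note the next key. */
  public static Page list(TreeMap<Long, String> db, Optional<Long> marker,
      int batchSize) {
    // tailMap mirrors DBIterator.seek(); an empty marker means seekToFirst.
    SortedMap<Long, String> view =
        marker.isPresent() ? db.tailMap(marker.get()) : db;
    List<String> batch = new ArrayList<>(batchSize);
    Optional<Long> next = Optional.empty();
    for (Map.Entry<Long, String> e : view.entrySet()) {
      if (batch.size() == batchSize) {
        next = Optional.of(e.getKey());  // first key beyond this batch
        break;
      }
      batch.add(e.getValue());
    }
    return new Page(batch, next);
  }
}
```

As in InMemoryAliasMap, the returned marker is inclusive: the next call re-seeks to it and starts the following batch there.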
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMapProtocol.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMapProtocol.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMapProtocol.java
new file mode 100644
index 0000000..fb6e8b3
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryAliasMapProtocol.java
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.aliasmap;
+
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+
+import javax.annotation.Nonnull;
+import java.io.IOException;
+import java.util.List;
+import java.util.Optional;
+
+/**
+ * Protocol used by clients to read/write data about aliases of
+ * provided blocks for an in-memory implementation of the
+ * {@link org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap}.
+ */
+public interface InMemoryAliasMapProtocol {
+
+ /**
+   * The result of a read from the in-memory aliasmap. It contains a
+   * list of FileRegions that are returned, along with the next block
+ * from which the read operation must continue.
+ */
+ class IterationResult {
+
+ private final List<FileRegion> batch;
+ private final Optional<Block> nextMarker;
+
+ public IterationResult(List<FileRegion> batch, Optional<Block> nextMarker) {
+ this.batch = batch;
+ this.nextMarker = nextMarker;
+ }
+
+ public List<FileRegion> getFileRegions() {
+ return batch;
+ }
+
+ public Optional<Block> getNextBlock() {
+ return nextMarker;
+ }
+ }
+
+ /**
+ * List the next batch of {@link FileRegion}s in the alias map starting from
+ * the given {@code marker}. To retrieve all {@link FileRegion}s stored in the
+ * alias map, multiple calls to this function might be required.
+   * @param marker the next block to get FileRegions from.
+ * @return the {@link IterationResult} with a set of
+ * FileRegions and the next marker.
+ * @throws IOException
+ */
+ InMemoryAliasMap.IterationResult list(Optional<Block> marker)
+ throws IOException;
+
+ /**
+ * Gets the {@link ProvidedStorageLocation} associated with the
+ * specified block.
+ * @param block the block to lookup
+ * @return the associated {@link ProvidedStorageLocation}.
+ * @throws IOException
+ */
+ @Nonnull
+ Optional<ProvidedStorageLocation> read(@Nonnull Block block)
+ throws IOException;
+
+ /**
+   * Stores the block and its associated {@link ProvidedStorageLocation}
+ * in the alias map.
+ * @param block
+ * @param providedStorageLocation
+ * @throws IOException
+ */
+ void write(@Nonnull Block block,
+ @Nonnull ProvidedStorageLocation providedStorageLocation)
+ throws IOException;
+}
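As the list() javadoc notes, retrieving every FileRegion can take several calls: the caller feeds each result's nextMarker back into the next call until the marker comes back empty. A self-contained sketch of that drain loop over a toy paged source (`DrainScan`, `Result`, and the integer page marker are illustrative stand-ins, not the real protocol):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

public class DrainScan {
  /** Mirrors IterationResult: one batch plus the marker to resume from. */
  public static class Result {
    public final List<String> batch;
    public final Optional<Integer> nextMarker;
    public Result(List<String> batch, Optional<Integer> nextMarker) {
      this.batch = batch;
      this.nextMarker = nextMarker;
    }
  }

  /** Toy paged source: returns the marker's page plus the next page index. */
  public static Result list(List<List<String>> pages,
      Optional<Integer> marker) {
    int i = marker.orElse(0);
    Optional<Integer> next = i + 1 < pages.size()
        ? Optional.of(i + 1) : Optional.empty();
    return new Result(pages.get(i), next);
  }

  /** Drain every entry by chaining nextMarker until it comes back empty. */
  public static List<String> drain(List<List<String>> pages) {
    List<String> all = new ArrayList<>();
    if (pages.isEmpty()) {
      return all;
    }
    Optional<Integer> marker = Optional.empty();
    do {
      Result r = list(pages, marker);
      all.addAll(r.batch);
      marker = r.nextMarker;
    } while (marker.isPresent());
    return all;
  }
}
```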
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryLevelDBAliasMapServer.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryLevelDBAliasMapServer.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryLevelDBAliasMapServer.java
new file mode 100644
index 0000000..91b1e83
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/aliasmap/InMemoryLevelDBAliasMapServer.java
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.aliasmap;
+
+import com.google.protobuf.BlockingService;
+import org.apache.hadoop.ipc.ProtobufRpcEngine;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation;
+import org.apache.hadoop.hdfs.protocolPB.AliasMapProtocolPB;
+import org.apache.hadoop.hdfs.protocolPB.AliasMapProtocolServerSideTranslatorPB;
+import org.apache.hadoop.ipc.RPC;
+import javax.annotation.Nonnull;
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.Optional;
+
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_RPC_ADDRESS_DEFAULT;
+import static org.apache.hadoop.hdfs.protocol.proto.AliasMapProtocolProtos.*;
+import static org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap.CheckedFunction;
+
+/**
+ * InMemoryLevelDBAliasMapServer is the entry point from the Namenode into
+ * the {@link InMemoryAliasMap}.
+ */
+public class InMemoryLevelDBAliasMapServer implements InMemoryAliasMapProtocol,
+ Configurable, Closeable {
+
+ private static final Logger LOG = LoggerFactory
+ .getLogger(InMemoryLevelDBAliasMapServer.class);
+ private final CheckedFunction<Configuration, InMemoryAliasMap> initFun;
+ private RPC.Server aliasMapServer;
+ private Configuration conf;
+ private InMemoryAliasMap aliasMap;
+
+ public InMemoryLevelDBAliasMapServer(
+ CheckedFunction<Configuration, InMemoryAliasMap> initFun) {
+ this.initFun = initFun;
+
+ }
+
+ public void start() throws IOException {
+ if (UserGroupInformation.isSecurityEnabled()) {
+ throw new UnsupportedOperationException("Unable to start "
+ + "InMemoryLevelDBAliasMapServer as security is enabled");
+ }
+ RPC.setProtocolEngine(getConf(), AliasMapProtocolPB.class,
+ ProtobufRpcEngine.class);
+ AliasMapProtocolServerSideTranslatorPB aliasMapProtocolXlator =
+ new AliasMapProtocolServerSideTranslatorPB(this);
+
+ BlockingService aliasMapProtocolService =
+ AliasMapProtocolService
+ .newReflectiveBlockingService(aliasMapProtocolXlator);
+
+ String rpcAddress =
+ conf.get(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_RPC_ADDRESS,
+ DFS_PROVIDED_ALIASMAP_INMEMORY_RPC_ADDRESS_DEFAULT);
+ String[] split = rpcAddress.split(":");
+ String bindHost = split[0];
+ Integer port = Integer.valueOf(split[1]);
+
+ aliasMapServer = new RPC.Builder(conf)
+ .setProtocol(AliasMapProtocolPB.class)
+ .setInstance(aliasMapProtocolService)
+ .setBindAddress(bindHost)
+ .setPort(port)
+ .setNumHandlers(1)
+ .setVerbose(true)
+ .build();
+
+ LOG.info("Starting InMemoryLevelDBAliasMapServer on ", rpcAddress);
+ aliasMapServer.start();
+ }
+
+ @Override
+ public InMemoryAliasMap.IterationResult list(Optional<Block> marker)
+ throws IOException {
+ return aliasMap.list(marker);
+ }
+
+ @Nonnull
+ @Override
+ public Optional<ProvidedStorageLocation> read(@Nonnull Block block)
+ throws IOException {
+ return aliasMap.read(block);
+ }
+
+ @Override
+ public void write(@Nonnull Block block,
+ @Nonnull ProvidedStorageLocation providedStorageLocation)
+ throws IOException {
+ aliasMap.write(block, providedStorageLocation);
+ }
+
+ @Override
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ try {
+ this.aliasMap = initFun.apply(conf);
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ @Override
+ public void close() {
+ LOG.info("Stopping InMemoryLevelDBAliasMapServer");
+ try {
+ aliasMap.close();
+ } catch (IOException e) {
+ LOG.error(e.getMessage(), e);
+ }
+ aliasMapServer.stop();
+ }
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
index c568b90..5d04640 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegion.java
@@ -17,9 +17,11 @@
*/
package org.apache.hadoop.hdfs.server.common;
+import org.apache.commons.lang3.tuple.Pair;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation;
/**
* This class is used to represent provided blocks that are file regions,
@@ -27,95 +29,70 @@ import org.apache.hadoop.hdfs.protocol.HdfsConstants;
*/
public class FileRegion implements BlockAlias {
- private final Path path;
- private final long offset;
- private final long length;
- private final long blockId;
+ private final Pair<Block, ProvidedStorageLocation> pair;
private final String bpid;
- private final long genStamp;
public FileRegion(long blockId, Path path, long offset,
long length, String bpid, long genStamp) {
- this.path = path;
- this.offset = offset;
- this.length = length;
- this.blockId = blockId;
- this.bpid = bpid;
- this.genStamp = genStamp;
+ this(new Block(blockId, length, genStamp),
+ new ProvidedStorageLocation(path, offset, length, new byte[0]), bpid);
}
public FileRegion(long blockId, Path path, long offset,
long length, String bpid) {
this(blockId, path, offset, length, bpid,
HdfsConstants.GRANDFATHER_GENERATION_STAMP);
-
}
public FileRegion(long blockId, Path path, long offset,
long length, long genStamp) {
this(blockId, path, offset, length, null, genStamp);
+ }
+ public FileRegion(Block block,
+ ProvidedStorageLocation providedStorageLocation) {
+ this.pair = Pair.of(block, providedStorageLocation);
+ this.bpid = null;
+ }
+
+ public FileRegion(Block block,
+ ProvidedStorageLocation providedStorageLocation, String bpid) {
+ this.pair = Pair.of(block, providedStorageLocation);
+ this.bpid = bpid;
}
public FileRegion(long blockId, Path path, long offset, long length) {
this(blockId, path, offset, length, null);
}
- @Override
public Block getBlock() {
- return new Block(blockId, length, genStamp);
+ return pair.getKey();
}
- @Override
- public boolean equals(Object other) {
- if (!(other instanceof FileRegion)) {
- return false;
- }
- FileRegion o = (FileRegion) other;
- return blockId == o.blockId
- && offset == o.offset
- && length == o.length
- && genStamp == o.genStamp
- && path.equals(o.path);
- }
-
- @Override
- public int hashCode() {
- return (int)(blockId & Integer.MIN_VALUE);
+ public ProvidedStorageLocation getProvidedStorageLocation() {
+ return pair.getValue();
}
- public Path getPath() {
- return path;
+ public String getBlockPoolId() {
+ return this.bpid;
}
- public long getOffset() {
- return offset;
- }
+ @Override
+ public boolean equals(Object o) {
+ if (this == o) {
+ return true;
+ }
+ if (o == null || getClass() != o.getClass()) {
+ return false;
+ }
- public long getLength() {
- return length;
- }
+ FileRegion that = (FileRegion) o;
- public long getGenerationStamp() {
- return genStamp;
+ return pair.equals(that.pair);
}
@Override
- public String toString() {
- StringBuilder sb = new StringBuilder();
- sb.append("{ block=\"").append(getBlock()).append("\"");
- sb.append(", path=\"").append(getPath()).append("\"");
- sb.append(", off=\"").append(getOffset()).append("\"");
- sb.append(", len=\"").append(getBlock().getNumBytes()).append("\"");
- sb.append(", genStamp=\"").append(getBlock()
- .getGenerationStamp()).append("\"");
- sb.append(", bpid=\"").append(bpid).append("\"");
- sb.append(" }");
- return sb.toString();
- }
-
- public String getBlockPoolId() {
- return this.bpid;
+ public int hashCode() {
+ return pair.hashCode();
}
-
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java
index d276fb5..e3b6cb5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java
@@ -19,6 +19,8 @@ package org.apache.hadoop.hdfs.server.common.blockaliasmap;
import java.io.Closeable;
import java.io.IOException;
+import java.util.Iterator;
+import java.util.Optional;
import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.server.common.BlockAlias;
@@ -29,6 +31,19 @@ import org.apache.hadoop.hdfs.server.common.BlockAlias;
public abstract class BlockAliasMap<T extends BlockAlias> {
/**
+ * ImmutableIterator is an Iterator that does not support the remove
+ * operation. This could inherit {@link java.util.Enumeration} but Iterator
+ * is supported by more APIs and Enumeration's javadoc even suggests using
+ * Iterator instead.
+ */
+ public abstract class ImmutableIterator implements Iterator<T> {
+ public void remove() {
+ throw new UnsupportedOperationException(
+ "Remove is not supported for provided storage");
+ }
+ }
+
+ /**
* An abstract class that is used to read {@link BlockAlias}es
* for provided blocks.
*/
@@ -45,7 +60,7 @@ public abstract class BlockAliasMap<T extends BlockAlias> {
* @return BlockAlias corresponding to the provided block.
* @throws IOException
*/
- public abstract U resolve(Block ident) throws IOException;
+ public abstract Optional<U> resolve(Block ident) throws IOException;
}
@@ -85,4 +100,6 @@ public abstract class BlockAliasMap<T extends BlockAlias> {
*/
public abstract void refresh() throws IOException;
+ public abstract void close() throws IOException;
+
}
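The `ImmutableIterator` added to `BlockAliasMap` above is an `Iterator` whose `remove()` always throws, so callers cannot mutate the provided-storage view they iterate over. A minimal, hypothetical standalone sketch of that pattern (class and method names here are illustrative, not part of the Hadoop API):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ImmutableIteratorDemo {

  /** Wraps a delegate iterator and rejects remove(), as ImmutableIterator does. */
  static <T> Iterator<T> immutable(final Iterator<T> delegate) {
    return new Iterator<T>() {
      @Override
      public boolean hasNext() {
        return delegate.hasNext();
      }

      @Override
      public T next() {
        return delegate.next();
      }

      @Override
      public void remove() {
        throw new UnsupportedOperationException(
            "Remove is not supported for provided storage");
      }
    };
  }

  /** Concatenates all elements; exercises hasNext()/next(). */
  static String consumeAll(List<String> items) {
    StringBuilder sb = new StringBuilder();
    Iterator<String> it = immutable(items.iterator());
    while (it.hasNext()) {
      sb.append(it.next());
    }
    return sb.toString();
  }

  /** Returns true iff remove() throws UnsupportedOperationException. */
  static boolean removeThrows() {
    Iterator<String> it = immutable(Arrays.asList("blk_1").iterator());
    try {
      it.remove();
      return false;
    } catch (UnsupportedOperationException e) {
      return true;
    }
  }

  public static void main(String[] args) {
    System.out.println(consumeAll(Arrays.asList("blk_1", "blk_2")));
    System.out.println(removeThrows());
  }
}
```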
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java
new file mode 100644
index 0000000..7b0b789
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/InMemoryLevelDBAliasMapClient.java
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.common.blockaliasmap.impl;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation;
+import org.apache.hadoop.hdfs.protocolPB.InMemoryAliasMapProtocolClientSideTranslatorPB;
+import org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.List;
+import java.util.NoSuchElementException;
+import java.util.Optional;
+
+/**
+ * InMemoryLevelDBAliasMapClient is the client for the InMemoryAliasMapServer.
+ * This is used by the Datanode and fs2img to store and retrieve FileRegions
+ * based on the given Block.
+ */
+public class InMemoryLevelDBAliasMapClient extends BlockAliasMap<FileRegion>
+ implements Configurable {
+
+ private Configuration conf;
+ private InMemoryAliasMapProtocolClientSideTranslatorPB aliasMap;
+
+ @Override
+ public void close() {
+ aliasMap.stop();
+ }
+
+ class LevelDbReader extends BlockAliasMap.Reader<FileRegion> {
+
+ @Override
+ public Optional<FileRegion> resolve(Block block) throws IOException {
+ Optional<ProvidedStorageLocation> read = aliasMap.read(block);
+ return read.map(psl -> new FileRegion(block, psl, null));
+ }
+
+ @Override
+ public void close() throws IOException {
+ }
+
+ private class LevelDbIterator
+ extends BlockAliasMap<FileRegion>.ImmutableIterator {
+
+ private Iterator<FileRegion> iterator;
+ private Optional<Block> nextMarker;
+
+ LevelDbIterator() {
+ batch(Optional.empty());
+ }
+
+ private void batch(Optional<Block> newNextMarker) {
+ try {
+ InMemoryAliasMap.IterationResult iterationResult =
+ aliasMap.list(newNextMarker);
+ List<FileRegion> fileRegions = iterationResult.getFileRegions();
+ this.iterator = fileRegions.iterator();
+ this.nextMarker = iterationResult.getNextBlock();
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ @Override
+ public boolean hasNext() {
+ return iterator.hasNext() || nextMarker.isPresent();
+ }
+
+ @Override
+ public FileRegion next() {
+ if (iterator.hasNext()) {
+ return iterator.next();
+ } else {
+ if (nextMarker.isPresent()) {
+ batch(nextMarker);
+ return next();
+ } else {
+ throw new NoSuchElementException();
+ }
+ }
+ }
+ }
+
+ @Override
+ public Iterator<FileRegion> iterator() {
+ return new LevelDbIterator();
+ }
+ }
+
+ class LevelDbWriter extends BlockAliasMap.Writer<FileRegion> {
+ @Override
+ public void store(FileRegion fileRegion) throws IOException {
+ aliasMap.write(fileRegion.getBlock(),
+ fileRegion.getProvidedStorageLocation());
+ }
+
+ @Override
+ public void close() throws IOException {
+ }
+ }
+
+ InMemoryLevelDBAliasMapClient() {
+ if (UserGroupInformation.isSecurityEnabled()) {
+ throw new UnsupportedOperationException("Unable to start "
+ + "InMemoryLevelDBAliasMapClient as security is enabled");
+ }
+ }
+
+
+ @Override
+ public Reader<FileRegion> getReader(Reader.Options opts) throws IOException {
+ return new LevelDbReader();
+ }
+
+ @Override
+ public Writer<FileRegion> getWriter(Writer.Options opts) throws IOException {
+ return new LevelDbWriter();
+ }
+
+ @Override
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ this.aliasMap = new InMemoryAliasMapProtocolClientSideTranslatorPB(conf);
+ }
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ @Override
+ public void refresh() throws IOException {
+ }
+
+}
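`LevelDbIterator` above pages through the server in batches: each `list()` call returns a batch of regions plus an optional marker for the next batch, and iteration refetches when the current batch is exhausted. A hypothetical sketch of that marker-based pagination over a plain in-memory sorted map (all names here are illustrative assumptions, not Hadoop APIs):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.Optional;
import java.util.SortedMap;
import java.util.TreeMap;

public class PagedListDemo {

  /** One page of results plus the marker for the next page, if any. */
  static final class Page {
    final List<Long> ids;
    final Optional<Long> nextMarker;
    Page(List<Long> ids, Optional<Long> nextMarker) {
      this.ids = ids;
      this.nextMarker = nextMarker;
    }
  }

  /** Returns up to batchSize keys at or after the marker, plus the next marker. */
  static Page list(NavigableMap<Long, String> store, Optional<Long> marker,
      int batchSize) {
    SortedMap<Long, String> view =
        marker.isPresent() ? store.tailMap(marker.get(), true) : store;
    List<Long> ids = new ArrayList<>();
    Long next = null;
    for (Long key : view.keySet()) {
      if (ids.size() == batchSize) {
        next = key; // first key of the following page
        break;
      }
      ids.add(key);
    }
    return new Page(ids, Optional.ofNullable(next));
  }

  /** Drains the whole store page by page, as LevelDbIterator's batch() does. */
  static List<Long> iterateAll(NavigableMap<Long, String> store, int batchSize) {
    List<Long> out = new ArrayList<>();
    Optional<Long> marker = Optional.empty();
    do {
      Page page = list(store, marker, batchSize);
      out.addAll(page.ids);
      marker = page.nextMarker;
    } while (marker.isPresent());
    return out;
  }

  public static void main(String[] args) {
    NavigableMap<Long, String> store = new TreeMap<>();
    for (long id = 1; id <= 5; id++) {
      store.put(id, "loc" + id);
    }
    System.out.println(iterateAll(store, 2)); // [1, 2, 3, 4, 5]
  }
}
```

The batch size in the real server comes from `dfs.provided.aliasmap.inmemory.batch-size`; here it is just a method parameter.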
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
index bd04d60..b86b280 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
@@ -32,6 +32,7 @@ import java.util.Map;
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.NoSuchElementException;
+import java.util.Optional;
import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;
@@ -40,6 +41,7 @@ import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation;
import org.apache.hadoop.hdfs.server.common.FileRegion;
import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
import org.apache.hadoop.io.MultipleIOException;
@@ -160,7 +162,7 @@ public class TextFileRegionAliasMap
file = new Path(tmpfile);
delim = conf.get(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_DELIMITER,
DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_DELIMITER_DEFAULT);
- LOG.info("TextFileRegionAliasMap: read path " + tmpfile.toString());
+ LOG.info("TextFileRegionAliasMap: read path {}", tmpfile);
}
@Override
@@ -190,7 +192,7 @@ public class TextFileRegionAliasMap
private Configuration conf;
private String codec = null;
private Path file =
- new Path(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_PATH_DEFAULT);;
+ new Path(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_PATH_DEFAULT);
private String delim =
DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_DELIMITER_DEFAULT;
@@ -252,7 +254,7 @@ public class TextFileRegionAliasMap
Options delimiter(String delim);
}
- static ReaderOptions defaults() {
+ public static ReaderOptions defaults() {
return new ReaderOptions();
}
@@ -278,14 +280,14 @@ public class TextFileRegionAliasMap
}
@Override
- public FileRegion resolve(Block ident) throws IOException {
+ public Optional<FileRegion> resolve(Block ident) throws IOException {
// consider layering index w/ composable format
Iterator<FileRegion> i = iterator();
try {
while (i.hasNext()) {
FileRegion f = i.next();
if (f.getBlock().equals(ident)) {
- return f;
+ return Optional.of(f);
}
}
} finally {
@@ -295,7 +297,7 @@ public class TextFileRegionAliasMap
r.close();
}
}
- return null;
+ return Optional.empty();
}
class FRIterator implements Iterator<FileRegion> {
@@ -342,8 +344,8 @@ public class TextFileRegionAliasMap
throw new IOException("Invalid line: " + line);
}
return new FileRegion(Long.parseLong(f[0]), new Path(f[1]),
- Long.parseLong(f[2]), Long.parseLong(f[3]), f[5],
- Long.parseLong(f[4]));
+ Long.parseLong(f[2]), Long.parseLong(f[3]), f[4],
+ Long.parseLong(f[5]));
}
public InputStream createStream() throws IOException {
@@ -390,7 +392,6 @@ public class TextFileRegionAliasMap
throw MultipleIOException.createIOException(ex);
}
}
-
}
/**
@@ -422,12 +423,16 @@ public class TextFileRegionAliasMap
@Override
public void store(FileRegion token) throws IOException {
- out.append(String.valueOf(token.getBlock().getBlockId())).append(delim);
- out.append(token.getPath().toString()).append(delim);
- out.append(Long.toString(token.getOffset())).append(delim);
- out.append(Long.toString(token.getLength())).append(delim);
- out.append(Long.toString(token.getGenerationStamp())).append(delim);
- out.append(token.getBlockPoolId()).append("\n");
+ final Block block = token.getBlock();
+ final ProvidedStorageLocation psl = token.getProvidedStorageLocation();
+
+ out.append(String.valueOf(block.getBlockId())).append(delim);
+ out.append(psl.getPath().toString()).append(delim);
+ out.append(Long.toString(psl.getOffset())).append(delim);
+ out.append(Long.toString(psl.getLength())).append(delim);
+ out.append(token.getBlockPoolId()).append(delim);
+ out.append(Long.toString(block.getGenerationStamp())).append(delim);
+ out.append("\n");
}
@Override
@@ -443,4 +448,9 @@ public class TextFileRegionAliasMap
"Refresh not supported by " + getClass());
}
+ @Override
+ public void close() throws IOException {
+ //nothing to do;
+ }
+
}
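The hunks above also fix the text alias map so writer and reader agree on field order: blockId, path, offset, length, bpid, genStamp. A hypothetical standalone sketch of that delimited round trip (names and values are illustrative only):

```java
public class TextRegionRecordDemo {

  // Field order used by both writer and reader after the fix:
  // 0=blockId, 1=path, 2=offset, 3=length, 4=bpid, 5=genStamp.
  static String serialize(long blockId, String path, long offset, long length,
      String bpid, long genStamp, String delim) {
    return blockId + delim + path + delim + offset + delim + length + delim
        + bpid + delim + genStamp;
  }

  /** bpid is field 4; before the fix the reader read it from field 5. */
  static String bpidOf(String line, String delim) {
    return line.split(delim)[4];
  }

  /** genStamp is field 5; before the fix the reader read it from field 4. */
  static long genStampOf(String line, String delim) {
    return Long.parseLong(line.split(delim)[5]);
  }

  public static void main(String[] args) {
    String line = serialize(7L, "/data/file0", 0L, 1024L, "bp-1", 1001L, ",");
    System.out.println(line);                  // 7,/data/file0,0,1024,bp-1,1001
    System.out.println(bpidOf(line, ","));     // bp-1
    System.out.println(genStampOf(line, ",")); // 1001
  }
}
```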
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
index bcc9a38..0fbfc15 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
@@ -22,6 +22,7 @@ import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
import org.apache.hadoop.hdfs.server.protocol.ReplicaRecoveryInfo;
@@ -38,6 +39,16 @@ public class FinalizedProvidedReplica extends ProvidedReplica {
remoteFS);
}
+ public FinalizedProvidedReplica(FileRegion fileRegion, FsVolumeSpi volume,
+ Configuration conf, FileSystem remoteFS) {
+ super(fileRegion.getBlock().getBlockId(),
+ fileRegion.getProvidedStorageLocation().getPath().toUri(),
+ fileRegion.getProvidedStorageLocation().getOffset(),
+ fileRegion.getBlock().getNumBytes(),
+ fileRegion.getBlock().getGenerationStamp(),
+ volume, conf, remoteFS);
+ }
+
public FinalizedProvidedReplica(long blockId, Path pathPrefix,
String pathSuffix, long fileOffset, long blockLen, long genStamp,
FsVolumeSpi volume, Configuration conf, FileSystem remoteFS) {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
index de68e2d..8748918 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
@@ -315,12 +315,7 @@ public class ReplicaBuilder {
offset, length, genStamp, volume, conf, remoteFS);
}
} else {
- info = new FinalizedProvidedReplica(fileRegion.getBlock().getBlockId(),
- fileRegion.getPath().toUri(),
- fileRegion.getOffset(),
- fileRegion.getBlock().getNumBytes(),
- fileRegion.getBlock().getGenerationStamp(),
- volume, conf, remoteFS);
+ info = new FinalizedProvidedReplica(fileRegion, volume, conf, remoteFS);
}
return info;
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
index ab59fa5..6bbfa91 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
@@ -148,7 +148,7 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
this.aliasMap = blockAliasMap;
}
- public void getVolumeMap(ReplicaMap volumeMap,
+ void fetchVolumeMap(ReplicaMap volumeMap,
RamDiskReplicaTracker ramDiskReplicaMap, FileSystem remoteFS)
throws IOException {
BlockAliasMap.Reader<FileRegion> reader = aliasMap.getReader(null);
@@ -157,21 +157,19 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
+ "; no blocks will be populated");
return;
}
- Iterator<FileRegion> iter = reader.iterator();
Path blockPrefixPath = new Path(providedVolume.getBaseURI());
- while (iter.hasNext()) {
- FileRegion region = iter.next();
+ for (FileRegion region : reader) {
if (region.getBlockPoolId() != null
&& region.getBlockPoolId().equals(bpid)
&& containsBlock(providedVolume.baseURI,
- region.getPath().toUri())) {
- String blockSuffix =
- getSuffix(blockPrefixPath, new Path(region.getPath().toUri()));
+ region.getProvidedStorageLocation().getPath().toUri())) {
+ String blockSuffix = getSuffix(blockPrefixPath,
+ new Path(region.getProvidedStorageLocation().getPath().toUri()));
ReplicaInfo newReplica = new ReplicaBuilder(ReplicaState.FINALIZED)
.setBlockId(region.getBlock().getBlockId())
.setPathPrefix(blockPrefixPath)
.setPathSuffix(blockSuffix)
- .setOffset(region.getOffset())
+ .setOffset(region.getProvidedStorageLocation().getOffset())
.setLength(region.getBlock().getNumBytes())
.setGenerationStamp(region.getBlock().getGenerationStamp())
.setFsVolume(providedVolume)
@@ -216,18 +214,12 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
*/
aliasMap.refresh();
BlockAliasMap.Reader<FileRegion> reader = aliasMap.getReader(null);
- if (reader == null) {
- LOG.warn("Got null reader from BlockAliasMap " + aliasMap
- + "; no blocks will be populated in scan report");
- return;
- }
- Iterator<FileRegion> iter = reader.iterator();
- while(iter.hasNext()) {
+ for (FileRegion region : reader) {
reportCompiler.throttle();
- FileRegion region = iter.next();
if (region.getBlockPoolId().equals(bpid)) {
report.add(new ScanInfo(region.getBlock().getBlockId(),
- providedVolume, region, region.getLength()));
+ providedVolume, region,
+ region.getProvidedStorageLocation().getLength()));
}
}
}
@@ -522,7 +514,7 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
throws IOException {
LOG.info("Creating volumemap for provided volume " + this);
for(ProvidedBlockPoolSlice s : bpSlices.values()) {
- s.getVolumeMap(volumeMap, ramDiskReplicaMap, remoteFS);
+ s.fetchVolumeMap(volumeMap, ramDiskReplicaMap, remoteFS);
}
}
@@ -539,7 +531,7 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
void getVolumeMap(String bpid, ReplicaMap volumeMap,
final RamDiskReplicaTracker ramDiskReplicaMap)
throws IOException {
- getProvidedBlockPoolSlice(bpid).getVolumeMap(volumeMap, ramDiskReplicaMap,
+ getProvidedBlockPoolSlice(bpid).fetchVolumeMap(volumeMap, ramDiskReplicaMap,
remoteFS);
}
@@ -601,7 +593,7 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
@Override
public LinkedList<ScanInfo> compileReport(String bpid,
LinkedList<ScanInfo> report, ReportCompiler reportCompiler)
- throws InterruptedException, IOException {
+ throws InterruptedException, IOException {
LOG.info("Compiling report for volume: " + this + " bpid " + bpid);
//get the report from the appropriate block pool.
if(bpSlices.containsKey(bpid)) {
@@ -690,6 +682,12 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
}
@VisibleForTesting
+ BlockAliasMap<FileRegion> getFileRegionProvider(String bpid) throws
+ IOException {
+ return getProvidedBlockPoolSlice(bpid).getBlockAliasMap();
+ }
+
+ @VisibleForTesting
void setFileRegionProvider(String bpid,
BlockAliasMap<FileRegion> blockAliasMap) throws IOException {
ProvidedBlockPoolSlice bp = bpSlices.get(bpid);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
index 32b873b..993716a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
@@ -45,6 +45,8 @@ import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap;
+import org.apache.hadoop.hdfs.server.aliasmap.InMemoryLevelDBAliasMapServer;
import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.NamenodeRole;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.RollingUpgradeStartupOption;
@@ -208,6 +210,8 @@ public class NameNode extends ReconfigurableBase implements
HdfsConfiguration.init();
}
+ private InMemoryLevelDBAliasMapServer levelDBAliasMapServer;
+
/**
* Categories of operations supported by the namenode.
*/
@@ -745,6 +749,20 @@ public class NameNode extends ReconfigurableBase implements
startCommonServices(conf);
startMetricsLogger(conf);
+ startAliasMapServerIfNecessary(conf);
+ }
+
+ private void startAliasMapServerIfNecessary(Configuration conf)
+ throws IOException {
+ if (conf.getBoolean(DFSConfigKeys.DFS_NAMENODE_PROVIDED_ENABLED,
+ DFSConfigKeys.DFS_NAMENODE_PROVIDED_ENABLED_DEFAULT)
+ && conf.getBoolean(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_ENABLED,
+ DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_ENABLED_DEFAULT)) {
+ levelDBAliasMapServer =
+ new InMemoryLevelDBAliasMapServer(InMemoryAliasMap::init);
+ levelDBAliasMapServer.setConf(conf);
+ levelDBAliasMapServer.start();
+ }
}
private void initReconfigurableBackoffKey() {
@@ -1027,6 +1045,9 @@ public class NameNode extends ReconfigurableBase implements
MBeans.unregister(nameNodeStatusBeanName);
nameNodeStatusBeanName = null;
}
+ if (levelDBAliasMapServer != null) {
+ levelDBAliasMapServer.close();
+ }
}
tracer.close();
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/AliasMapProtocol.proto
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/AliasMapProtocol.proto b/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/AliasMapProtocol.proto
new file mode 100644
index 0000000..08f10bb
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/AliasMapProtocol.proto
@@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+option java_package = "org.apache.hadoop.hdfs.protocol.proto";
+option java_outer_classname = "AliasMapProtocolProtos";
+option java_generic_services = true;
+option java_generate_equals_and_hash = true;
+package hadoop.hdfs;
+
+import "hdfs.proto";
+
+message KeyValueProto {
+ optional BlockProto key = 1;
+ optional ProvidedStorageLocationProto value = 2;
+}
+
+message WriteRequestProto {
+ required KeyValueProto keyValuePair = 1;
+}
+
+message WriteResponseProto {
+}
+
+message ReadRequestProto {
+ required BlockProto key = 1;
+}
+
+message ReadResponseProto {
+ optional ProvidedStorageLocationProto value = 1;
+}
+
+message ListRequestProto {
+ optional BlockProto marker = 1;
+}
+
+message ListResponseProto {
+ repeated KeyValueProto fileRegions = 1;
+ optional BlockProto nextMarker = 2;
+}
+
+service AliasMapProtocolService {
+ rpc write(WriteRequestProto) returns(WriteResponseProto);
+ rpc read(ReadRequestProto) returns(ReadResponseProto);
+ rpc list(ListRequestProto) returns(ListResponseProto);
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 655f9cb..ddc07ac 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -4653,6 +4653,40 @@
</property>
<property>
+ <name>dfs.provided.aliasmap.inmemory.batch-size</name>
+ <value>500</value>
+ <description>
+ The batch size when iterating over the database backing the aliasmap
+ </description>
+ </property>
+
+ <property>
+ <name>dfs.provided.aliasmap.inmemory.dnrpc-address</name>
+ <value>0.0.0.0:50200</value>
+ <description>
+ The address where the aliasmap server will be running
+ </description>
+ </property>
+
+ <property>
+ <name>dfs.provided.aliasmap.inmemory.leveldb.dir</name>
+ <value>/tmp</value>
+ <description>
+ The directory where the leveldb files will be kept
+ </description>
+ </property>
+
+ <property>
+ <name>dfs.provided.aliasmap.inmemory.enabled</name>
+ <value>false</value>
+ <description>
+ Whether the in-memory alias map is enabled. It is off by default;
+ tests that start the namenode twice with the same parameters will
+ fail if it is turned on.
+ </description>
+ </property>
+
+ <property>
<name>dfs.provided.aliasmap.text.delimiter</name>
<value>,</value>
<description>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/aliasmap/ITestInMemoryAliasMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/aliasmap/ITestInMemoryAliasMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/aliasmap/ITestInMemoryAliasMap.java
new file mode 100644
index 0000000..6f1ff3e
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/aliasmap/ITestInMemoryAliasMap.java
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.aliasmap;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.util.Arrays;
+import java.util.Optional;
+
+/**
+ * ITestInMemoryAliasMap is an integration test that writes to and reads
+ * from an AliasMap. It is an integration test because it cannot run in
+ * parallel like normal unit tests: concurrent runs would conflict over
+ * the same port.
+ */
+public class ITestInMemoryAliasMap {
+ private InMemoryAliasMap aliasMap;
+ private File tempDirectory;
+
+ @Before
+ public void setUp() throws Exception {
+ Configuration conf = new Configuration();
+ tempDirectory = Files.createTempDirectory("seagull").toFile();
+ conf.set(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_LEVELDB_DIR,
+ tempDirectory.getAbsolutePath());
+ aliasMap = InMemoryAliasMap.init(conf);
+ }
+
+ @After
+ public void tearDown() throws Exception {
+ aliasMap.close();
+ FileUtils.deleteDirectory(tempDirectory);
+ }
+
+ @Test
+ public void readNotFoundReturnsNothing() throws IOException {
+ Block block = new Block(42, 43, 44);
+
+ Optional<ProvidedStorageLocation> actualProvidedStorageLocationOpt
+ = aliasMap.read(block);
+
+ assertFalse(actualProvidedStorageLocationOpt.isPresent());
+ }
+
+ @Test
+ public void readWrite() throws Exception {
+ Block block = new Block(42, 43, 44);
+
+ Path path = new Path("eagle", "mouse");
+ long offset = 47;
+ long length = 48;
+ int nonceSize = 4;
+ byte[] nonce = new byte[nonceSize];
+ Arrays.fill(nonce, 0, (nonceSize - 1), Byte.parseByte("0011", 2));
+
+ ProvidedStorageLocation expectedProvidedStorageLocation =
+ new ProvidedStorageLocation(path, offset, length, nonce);
+
+ aliasMap.write(block, expectedProvidedStorageLocation);
+
+ Optional<ProvidedStorageLocation> actualProvidedStorageLocationOpt
+ = aliasMap.read(block);
+
+ assertTrue(actualProvidedStorageLocationOpt.isPresent());
+ assertEquals(expectedProvidedStorageLocation,
+ actualProvidedStorageLocationOpt.get());
+
+ }
+
+ @Test
+ public void list() throws IOException {
+ Block block1 = new Block(42, 43, 44);
+ Block block2 = new Block(43, 44, 45);
+ Block block3 = new Block(44, 45, 46);
+
+ Path path = new Path("eagle", "mouse");
+ int nonceSize = 4;
+ byte[] nonce = new byte[nonceSize];
+ Arrays.fill(nonce, 0, (nonceSize - 1), Byte.parseByte("0011", 2));
+ ProvidedStorageLocation expectedProvidedStorageLocation1 =
+ new ProvidedStorageLocation(path, 47, 48, nonce);
+ ProvidedStorageLocation expectedProvidedStorageLocation2 =
+ new ProvidedStorageLocation(path, 48, 49, nonce);
+ ProvidedStorageLocation expectedProvidedStorageLocation3 =
+ new ProvidedStorageLocation(path, 49, 50, nonce);
+
+ aliasMap.write(block1, expectedProvidedStorageLocation1);
+ aliasMap.write(block2, expectedProvidedStorageLocation2);
+ aliasMap.write(block3, expectedProvidedStorageLocation3);
+
+ InMemoryAliasMap.IterationResult list = aliasMap.list(Optional.empty());
+ // we should have 3 results
+ assertEquals(3, list.getFileRegions().size());
+ // no more results expected
+ assertFalse(list.getNextBlock().isPresent());
+ }
+}
+
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/aliasmap/TestInMemoryAliasMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/aliasmap/TestInMemoryAliasMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/aliasmap/TestInMemoryAliasMap.java
new file mode 100644
index 0000000..f699055
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/aliasmap/TestInMemoryAliasMap.java
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.aliasmap;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.junit.Test;
+
+import java.io.IOException;
+
+import static org.assertj.core.api.Assertions.assertThatExceptionOfType;
+
+/**
+ * TestInMemoryAliasMap tests the initialization of an AliasMap. Most of the
+ * rest of the tests are in ITestInMemoryAliasMap since the tests are not
+ * thread safe (there is competition for the port).
+ */
+public class TestInMemoryAliasMap {
+
+ @Test
+ public void testInit() {
+ String nonExistingDirectory = "non-existing-directory";
+ Configuration conf = new Configuration();
+ conf.set(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_LEVELDB_DIR,
+ nonExistingDirectory);
+
+ assertThatExceptionOfType(IOException.class)
+ .isThrownBy(() -> InMemoryAliasMap.init(conf)).withMessage(
+ InMemoryAliasMap.createPathErrorMessage(nonExistingDirectory));
+ }
+}
\ No newline at end of file
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[43/50] [abbrv] hadoop git commit: HDFS-12809. [READ] Fix the
randomized selection of locations in {{ProvidedBlocksBuilder}}.
Posted by vi...@apache.org.
HDFS-12809. [READ] Fix the randomized selection of locations in {{ProvidedBlocksBuilder}}.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6a3ab228
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6a3ab228
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6a3ab228
Branch: refs/heads/HDFS-9806
Commit: 6a3ab2282025b90c5e14898796b5a20725b54cfd
Parents: 1151f04
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Mon Nov 27 17:04:20 2017 -0800
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:59 2017 -0800
----------------------------------------------------------------------
.../blockmanagement/ProvidedStorageMap.java | 112 +++++++------------
.../TestNameNodeProvidedImplementation.java | 26 ++++-
2 files changed, 61 insertions(+), 77 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/6a3ab228/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
index 6fec977..c85eb2c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
@@ -19,11 +19,12 @@ package org.apache.hadoop.hdfs.server.blockmanagement;
import java.io.IOException;
import java.util.ArrayList;
+import java.util.Collections;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
-import java.util.Map;
import java.util.NavigableMap;
+import java.util.Random;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentSkipListMap;
@@ -229,11 +230,8 @@ public class ProvidedStorageMap {
sids.add(currInfo.getStorageID());
types.add(storageType);
if (StorageType.PROVIDED.equals(storageType)) {
- DatanodeDescriptor dn = chooseProvidedDatanode(excludedUUids);
- locs.add(
- new DatanodeInfoWithStorage(
- dn, currInfo.getStorageID(), currInfo.getStorageType()));
- excludedUUids.add(dn.getDatanodeUuid());
+ // Provided location will be added to the list of locations after
+ // examining all local locations.
isProvidedBlock = true;
} else {
locs.add(new DatanodeInfoWithStorage(
@@ -245,11 +243,17 @@ public class ProvidedStorageMap {
int numLocations = locs.size();
if (isProvidedBlock) {
+ // add the first datanode here
+ DatanodeDescriptor dn = chooseProvidedDatanode(excludedUUids);
+ locs.add(
+ new DatanodeInfoWithStorage(dn, storageId, StorageType.PROVIDED));
+ excludedUUids.add(dn.getDatanodeUuid());
+ numLocations++;
// add more replicas until we reach the defaultReplication
for (int count = numLocations + 1;
count <= defaultReplication && count <= providedDescriptor
.activeProvidedDatanodes(); count++) {
- DatanodeDescriptor dn = chooseProvidedDatanode(excludedUUids);
+ dn = chooseProvidedDatanode(excludedUUids);
locs.add(new DatanodeInfoWithStorage(
dn, storageId, StorageType.PROVIDED));
sids.add(storageId);
@@ -284,6 +288,9 @@ public class ProvidedStorageMap {
private final NavigableMap<String, DatanodeDescriptor> dns =
new ConcurrentSkipListMap<>();
+ // maintain a separate list of the datanodes with provided storage
+ // to efficiently choose Datanodes when required.
+ private final List<DatanodeDescriptor> dnR = new ArrayList<>();
public final static String NETWORK_LOCATION = "/REMOTE";
public final static String NAME = "PROVIDED";
@@ -300,8 +307,8 @@ public class ProvidedStorageMap {
DatanodeStorageInfo getProvidedStorage(
DatanodeDescriptor dn, DatanodeStorage s) {
- LOG.info("XXXXX adding Datanode " + dn.getDatanodeUuid());
dns.put(dn.getDatanodeUuid(), dn);
+ dnR.add(dn);
// TODO: maintain separate RPC ident per dn
return storageMap.get(s.getStorageID());
}
@@ -315,84 +322,42 @@ public class ProvidedStorageMap {
}
DatanodeDescriptor choose(DatanodeDescriptor client) {
- // exact match for now
- DatanodeDescriptor dn = client != null ?
- dns.get(client.getDatanodeUuid()) : null;
- if (null == dn) {
- dn = chooseRandom();
- }
- return dn;
+ return choose(client, Collections.<String>emptySet());
}
DatanodeDescriptor choose(DatanodeDescriptor client,
Set<String> excludedUUids) {
// exact match for now
- DatanodeDescriptor dn = client != null ?
- dns.get(client.getDatanodeUuid()) : null;
-
- if (null == dn || excludedUUids.contains(client.getDatanodeUuid())) {
- dn = null;
- Set<String> exploredUUids = new HashSet<String>();
-
- while(exploredUUids.size() < dns.size()) {
- Map.Entry<String, DatanodeDescriptor> d =
- dns.ceilingEntry(UUID.randomUUID().toString());
- if (null == d) {
- d = dns.firstEntry();
- }
- String uuid = d.getValue().getDatanodeUuid();
- //this node has already been explored, and was not selected earlier
- if (exploredUUids.contains(uuid)) {
- continue;
- }
- exploredUUids.add(uuid);
- //this node has been excluded
- if (excludedUUids.contains(uuid)) {
- continue;
- }
- return dns.get(uuid);
- }
- }
-
- return dn;
- }
-
- DatanodeDescriptor chooseRandom(DatanodeStorageInfo[] excludedStorages) {
- // TODO: Currently this is not uniformly random;
- // skewed toward sparse sections of the ids
- Set<DatanodeDescriptor> excludedNodes =
- new HashSet<DatanodeDescriptor>();
- if (excludedStorages != null) {
- for (int i= 0; i < excludedStorages.length; i++) {
- LOG.info("Excluded: " + excludedStorages[i].getDatanodeDescriptor());
- excludedNodes.add(excludedStorages[i].getDatanodeDescriptor());
+ if (client != null && !excludedUUids.contains(client.getDatanodeUuid())) {
+ DatanodeDescriptor dn = dns.get(client.getDatanodeUuid());
+ if (dn != null) {
+ return dn;
}
}
- Set<DatanodeDescriptor> exploredNodes = new HashSet<DatanodeDescriptor>();
- while(exploredNodes.size() < dns.size()) {
- Map.Entry<String, DatanodeDescriptor> d =
- dns.ceilingEntry(UUID.randomUUID().toString());
- if (null == d) {
- d = dns.firstEntry();
- }
- DatanodeDescriptor node = d.getValue();
- //this node has already been explored, and was not selected earlier
- if (exploredNodes.contains(node)) {
- continue;
+ Random r = new Random();
+ for (int i = dnR.size() - 1; i >= 0; --i) {
+ int pos = r.nextInt(i + 1);
+ DatanodeDescriptor node = dnR.get(pos);
+ String uuid = node.getDatanodeUuid();
+ if (!excludedUUids.contains(uuid)) {
+ return node;
}
- exploredNodes.add(node);
- //this node has been excluded
- if (excludedNodes.contains(node)) {
- continue;
- }
- return node;
+ Collections.swap(dnR, i, pos);
}
return null;
}
- DatanodeDescriptor chooseRandom() {
- return chooseRandom(null);
+ DatanodeDescriptor chooseRandom(DatanodeStorageInfo... excludedStorages) {
+ Set<String> excludedNodes = new HashSet<>();
+ if (excludedStorages != null) {
+ for (int i = 0; i < excludedStorages.length; i++) {
+ DatanodeDescriptor dn = excludedStorages[i].getDatanodeDescriptor();
+ String uuid = dn.getDatanodeUuid();
+ excludedNodes.add(uuid);
+ }
+ }
+ return choose(null, excludedNodes);
}
@Override
@@ -414,6 +379,7 @@ public class ProvidedStorageMap {
DatanodeDescriptor storedDN = dns.get(dnToRemove.getDatanodeUuid());
if (storedDN != null) {
dns.remove(dnToRemove.getDatanodeUuid());
+ dnR.remove(dnToRemove);
}
}
return dns.size();
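The reworked choose() above replaces the skewed ceilingEntry() probing with a partial Fisher-Yates walk over the datanode list: pick a random index inside a shrinking window, return it if it is not excluded, otherwise swap it to the end of the window so it is never inspected again. A minimal standalone sketch of that selection strategy (the class and names here are illustrative, not the Hadoop code itself):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

public class RandomChoose {
    /**
     * Pick a uniformly random element of {@code nodes} that is not in
     * {@code excluded}, or null if every element is excluded.
     * Note: the list is reordered in place by the swaps.
     */
    static <T> T choose(List<T> nodes, Set<T> excluded, Random r) {
        for (int i = nodes.size() - 1; i >= 0; --i) {
            int pos = r.nextInt(i + 1);       // random index in the window [0, i]
            T node = nodes.get(pos);
            if (!excluded.contains(node)) {
                return node;
            }
            // Park the excluded node at index i, outside all future windows.
            Collections.swap(nodes, i, pos);
        }
        return null; // every candidate was excluded
    }

    public static void main(String[] args) {
        List<String> dns = new ArrayList<>(Arrays.asList("dn1", "dn2", "dn3"));
        Set<String> excluded = new HashSet<>(Arrays.asList("dn1", "dn2"));
        // Only dn3 is eligible, so it is returned regardless of the seed.
        System.out.println(choose(dns, excluded, new Random())); // dn3
    }
}
```

Each iteration either returns a node or removes one excluded node from the window, so the loop inspects every candidate at most once — O(n) even when most nodes are excluded, unlike the old loop that could revisit already-explored UUIDs.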
http://git-wip-us.apache.org/repos/asf/hadoop/blob/6a3ab228/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index 9c82967..09e8f97 100644
--- a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -27,8 +27,11 @@ import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;
+import java.util.HashSet;
import java.util.Iterator;
import java.util.Random;
+import java.util.Set;
+
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
@@ -480,16 +483,31 @@ public class TestNameNodeProvidedImplementation {
// given the start and length in the above call,
// only one LocatedBlock in LocatedBlocks
assertEquals(expectedBlocks, locatedBlocks.getLocatedBlocks().size());
- LocatedBlock locatedBlock = locatedBlocks.getLocatedBlocks().get(0);
- assertEquals(expectedLocations, locatedBlock.getLocations().length);
- return locatedBlock.getLocations();
+ DatanodeInfo[] locations =
+ locatedBlocks.getLocatedBlocks().get(0).getLocations();
+ assertEquals(expectedLocations, locations.length);
+ checkUniqueness(locations);
+ return locations;
+ }
+
+ /**
+   * Verify that the given locations are all unique.
+   * @param locations the locations to check for uniqueness
+ */
+ private void checkUniqueness(DatanodeInfo[] locations) {
+ Set<String> set = new HashSet<>();
+ for (DatanodeInfo info: locations) {
+ assertFalse("All locations should be unique",
+ set.contains(info.getDatanodeUuid()));
+ set.add(info.getDatanodeUuid());
+ }
}
/**
* Tests setting replication of provided files.
* @throws Exception
*/
- @Test(timeout=30000)
+ @Test(timeout=50000)
public void testSetReplicationForProvidedFiles() throws Exception {
createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
FixedBlockResolver.class);
[11/50] [abbrv] hadoop git commit: HDFS-11576. Block recovery will
fail indefinitely if recovery time > heartbeat interval. Contributed by Lukas
Majercak
Posted by vi...@apache.org.
HDFS-11576. Block recovery will fail indefinitely if recovery time > heartbeat interval. Contributed by Lukas Majercak
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5304698d
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5304698d
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5304698d
Branch: refs/heads/HDFS-9806
Commit: 5304698dc8c5667c33e6ed9c4a827ef57172a723
Parents: 556aea3
Author: Chris Douglas <cd...@apache.org>
Authored: Fri Dec 1 10:29:30 2017 -0800
Committer: Chris Douglas <cd...@apache.org>
Committed: Fri Dec 1 10:29:30 2017 -0800
----------------------------------------------------------------------
.../apache/hadoop/test/GenericTestUtils.java | 10 +-
.../server/blockmanagement/BlockManager.java | 40 ++++++
.../blockmanagement/PendingRecoveryBlocks.java | 143 +++++++++++++++++++
.../hdfs/server/namenode/FSNamesystem.java | 40 +++---
.../TestPendingRecoveryBlocks.java | 87 +++++++++++
.../hdfs/server/datanode/TestBlockRecovery.java | 108 ++++++++++++++
.../namenode/ha/TestPipelinesFailover.java | 5 +-
7 files changed, 413 insertions(+), 20 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5304698d/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
index 0db6c73..cdde48c 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
@@ -641,10 +641,16 @@ public abstract class GenericTestUtils {
* conditions.
*/
public static class SleepAnswer implements Answer<Object> {
+ private final int minSleepTime;
private final int maxSleepTime;
private static Random r = new Random();
-
+
public SleepAnswer(int maxSleepTime) {
+ this(0, maxSleepTime);
+ }
+
+ public SleepAnswer(int minSleepTime, int maxSleepTime) {
+ this.minSleepTime = minSleepTime;
this.maxSleepTime = maxSleepTime;
}
@@ -652,7 +658,7 @@ public abstract class GenericTestUtils {
public Object answer(InvocationOnMock invocation) throws Throwable {
boolean interrupted = false;
try {
- Thread.sleep(r.nextInt(maxSleepTime));
+ Thread.sleep(r.nextInt(maxSleepTime) + minSleepTime);
} catch (InterruptedException ie) {
interrupted = true;
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5304698d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
index 4986027..1cdb159 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
@@ -164,6 +164,8 @@ public class BlockManager implements BlockStatsMXBean {
private static final String QUEUE_REASON_FUTURE_GENSTAMP =
"generation stamp is in the future";
+ private static final long BLOCK_RECOVERY_TIMEOUT_MULTIPLIER = 30;
+
private final Namesystem namesystem;
private final BlockManagerSafeMode bmSafeMode;
@@ -353,6 +355,9 @@ public class BlockManager implements BlockStatsMXBean {
@VisibleForTesting
final PendingReconstructionBlocks pendingReconstruction;
+ /** Stores information about block recovery attempts. */
+ private final PendingRecoveryBlocks pendingRecoveryBlocks;
+
/** The maximum number of replicas allowed for a block */
public final short maxReplication;
/**
@@ -549,6 +554,12 @@ public class BlockManager implements BlockStatsMXBean {
}
this.minReplicationToBeInMaintenance = (short)minMaintenanceR;
+ long heartbeatIntervalSecs = conf.getTimeDuration(
+ DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY,
+ DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_DEFAULT, TimeUnit.SECONDS);
+ long blockRecoveryTimeout = getBlockRecoveryTimeout(heartbeatIntervalSecs);
+ pendingRecoveryBlocks = new PendingRecoveryBlocks(blockRecoveryTimeout);
+
this.blockReportLeaseManager = new BlockReportLeaseManager(conf);
bmSafeMode = new BlockManagerSafeMode(this, namesystem, haEnabled, conf);
@@ -4736,6 +4747,25 @@ public class BlockManager implements BlockStatsMXBean {
}
}
+ /**
+ * Notification of a successful block recovery.
+ * @param block for which the recovery succeeded
+ */
+ public void successfulBlockRecovery(BlockInfo block) {
+ pendingRecoveryBlocks.remove(block);
+ }
+
+ /**
+ * Checks whether a recovery attempt has been made for the given block.
+ * If so, checks whether that attempt has timed out.
+ * @param b block for which recovery is being attempted
+ * @return true if no recovery attempt has been made or
+ * the previous attempt timed out
+ */
+ public boolean addBlockRecoveryAttempt(BlockInfo b) {
+ return pendingRecoveryBlocks.add(b);
+ }
+
@VisibleForTesting
public void flushBlockOps() throws IOException {
runBlockOp(new Callable<Void>(){
@@ -4863,4 +4893,14 @@ public class BlockManager implements BlockStatsMXBean {
}
return i;
}
+
+ private static long getBlockRecoveryTimeout(long heartbeatIntervalSecs) {
+ return TimeUnit.SECONDS.toMillis(heartbeatIntervalSecs *
+ BLOCK_RECOVERY_TIMEOUT_MULTIPLIER);
+ }
+
+ @VisibleForTesting
+ public void setBlockRecoveryTimeout(long blockRecoveryTimeout) {
+ pendingRecoveryBlocks.setRecoveryTimeoutInterval(blockRecoveryTimeout);
+ }
}
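For concreteness, here is the arithmetic that getBlockRecoveryTimeout() above performs. The 3-second heartbeat used in the example is HDFS's stock DFS_HEARTBEAT_INTERVAL_DEFAULT; treat it as an assumption of this sketch rather than something this patch sets:

```java
import java.util.concurrent.TimeUnit;

public class RecoveryTimeoutDemo {
    // Mirrors BLOCK_RECOVERY_TIMEOUT_MULTIPLIER in the patch above.
    static final long BLOCK_RECOVERY_TIMEOUT_MULTIPLIER = 30;

    static long getBlockRecoveryTimeout(long heartbeatIntervalSecs) {
        return TimeUnit.SECONDS.toMillis(
            heartbeatIntervalSecs * BLOCK_RECOVERY_TIMEOUT_MULTIPLIER);
    }

    public static void main(String[] args) {
        // With a 3s heartbeat, a recovery attempt may be re-issued only
        // after 30 heartbeats, i.e. 90 seconds.
        System.out.println(getBlockRecoveryTimeout(3)); // 90000
    }
}
```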
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5304698d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingRecoveryBlocks.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingRecoveryBlocks.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingRecoveryBlocks.java
new file mode 100644
index 0000000..3f5f27c
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingRecoveryBlocks.java
@@ -0,0 +1,143 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.blockmanagement;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.hdfs.util.LightWeightHashSet;
+import org.apache.hadoop.util.Time;
+import org.slf4j.Logger;
+
+import java.util.concurrent.TimeUnit;
+
+/**
+ * PendingRecoveryBlocks tracks recovery attempts for each block and their
+ * timeouts to ensure we do not have multiple recoveries at the same time
+ * and retry only after the timeout for a recovery has expired.
+ */
+class PendingRecoveryBlocks {
+ private static final Logger LOG = BlockManager.LOG;
+
+ /** List of recovery attempts per block and the time they expire. */
+ private final LightWeightHashSet<BlockRecoveryAttempt> recoveryTimeouts =
+ new LightWeightHashSet<>();
+
+ /** The timeout for issuing a block recovery again.
+ * (it should be larger than the time to recover a block)
+ */
+ private long recoveryTimeoutInterval;
+
+ PendingRecoveryBlocks(long timeout) {
+ this.recoveryTimeoutInterval = timeout;
+ }
+
+ /**
+ * Remove recovery attempt for the given block.
+ * @param block whose recovery attempt to remove.
+ */
+ synchronized void remove(BlockInfo block) {
+ recoveryTimeouts.remove(new BlockRecoveryAttempt(block));
+ }
+
+ /**
+ * Checks whether a recovery attempt has been made for the given block.
+ * If so, checks whether that attempt has timed out.
+ * @param block block for which recovery is being attempted
+ * @return true if no recovery attempt has been made or
+ * the previous attempt timed out
+ */
+ synchronized boolean add(BlockInfo block) {
+ boolean added = false;
+ long curTime = getTime();
+ BlockRecoveryAttempt recoveryAttempt =
+ recoveryTimeouts.getElement(new BlockRecoveryAttempt(block));
+
+ if (recoveryAttempt == null) {
+ BlockRecoveryAttempt newAttempt = new BlockRecoveryAttempt(
+ block, curTime + recoveryTimeoutInterval);
+ added = recoveryTimeouts.add(newAttempt);
+ } else if (recoveryAttempt.hasTimedOut(curTime)) {
+ // Previous attempt timed out, reset the timeout
+ recoveryAttempt.setTimeout(curTime + recoveryTimeoutInterval);
+ added = true;
+ } else {
+ long timeoutIn = TimeUnit.MILLISECONDS.toSeconds(
+ recoveryAttempt.timeoutAt - curTime);
+ LOG.info("Block recovery attempt for " + block + " rejected, as the " +
+ "previous attempt times out in " + timeoutIn + " seconds.");
+ }
+ return added;
+ }
+
+ /**
+ * Check whether the given block is under recovery.
+ * @param b block for which to check
+ * @return true if the given block is being recovered
+ */
+ synchronized boolean isUnderRecovery(BlockInfo b) {
+ BlockRecoveryAttempt recoveryAttempt =
+ recoveryTimeouts.getElement(new BlockRecoveryAttempt(b));
+ return recoveryAttempt != null;
+ }
+
+ long getTime() {
+ return Time.monotonicNow();
+ }
+
+ @VisibleForTesting
+ synchronized void setRecoveryTimeoutInterval(long recoveryTimeoutInterval) {
+ this.recoveryTimeoutInterval = recoveryTimeoutInterval;
+ }
+
+ /**
+ * Tracks timeout for block recovery attempt of a given block.
+ */
+ private static class BlockRecoveryAttempt {
+ private final BlockInfo blockInfo;
+ private long timeoutAt;
+
+ private BlockRecoveryAttempt(BlockInfo blockInfo) {
+ this(blockInfo, 0);
+ }
+
+ BlockRecoveryAttempt(BlockInfo blockInfo, long timeoutAt) {
+ this.blockInfo = blockInfo;
+ this.timeoutAt = timeoutAt;
+ }
+
+ boolean hasTimedOut(long currentTime) {
+ return currentTime > timeoutAt;
+ }
+
+ void setTimeout(long newTimeoutAt) {
+ this.timeoutAt = newTimeoutAt;
+ }
+
+ @Override
+ public int hashCode() {
+ return blockInfo.hashCode();
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ if (obj instanceof BlockRecoveryAttempt) {
+ return this.blockInfo.equals(((BlockRecoveryAttempt) obj).blockInfo);
+ }
+ return false;
+ }
+ }
+}
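The add()/remove() contract of PendingRecoveryBlocks can be reduced to a small standalone sketch: the first add() for a block wins, later add()s are rejected while that attempt is pending, and once the timeout expires the next add() resets the deadline and succeeds. The class and names below are illustrative (with an injected clock for determinism), not the Hadoop class:

```java
import java.util.HashMap;
import java.util.Map;

public class RecoveryGate {
    private final long timeoutMs;
    private final Map<Long, Long> expiryByBlockId = new HashMap<>();
    private long now = 0; // injected clock; Hadoop uses Time.monotonicNow()

    RecoveryGate(long timeoutMs) {
        this.timeoutMs = timeoutMs;
    }

    void tick(long ms) {
        now += ms;
    }

    /** @return true if no attempt is pending or the previous one timed out. */
    synchronized boolean add(long blockId) {
        Long expiresAt = expiryByBlockId.get(blockId);
        if (expiresAt == null || now > expiresAt) {
            expiryByBlockId.put(blockId, now + timeoutMs);
            return true;  // attempt admitted; deadline (re)armed
        }
        return false;     // an earlier attempt is still in flight
    }

    /** Called on successful recovery, so a new attempt is allowed at once. */
    synchronized void remove(long blockId) {
        expiryByBlockId.remove(blockId);
    }

    public static void main(String[] args) {
        RecoveryGate gate = new RecoveryGate(90_000); // e.g. 30 x 3s heartbeats
        System.out.println(gate.add(1L)); // true: first attempt admitted
        System.out.println(gate.add(1L)); // false: attempt still pending
    }
}
```

This matches the shape of the FSNamesystem change above: internalReleaseLease() only starts a new block recovery when the gate admits it, and commitBlockSynchronization() clears the entry on success.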
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5304698d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index d3d9cdc..6a890e2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -3318,25 +3318,30 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
+ "Removed empty last block and closed file " + src);
return true;
}
- // start recovery of the last block for this file
- long blockRecoveryId = nextGenerationStamp(
- blockManager.isLegacyBlock(lastBlock));
- lease = reassignLease(lease, src, recoveryLeaseHolder, pendingFile);
- if(copyOnTruncate) {
- lastBlock.setGenerationStamp(blockRecoveryId);
- } else if(truncateRecovery) {
- recoveryBlock.setGenerationStamp(blockRecoveryId);
- }
- uc.initializeBlockRecovery(lastBlock, blockRecoveryId, true);
- leaseManager.renewLease(lease);
- // Cannot close file right now, since the last block requires recovery.
- // This may potentially cause infinite loop in lease recovery
- // if there are no valid replicas on data-nodes.
- NameNode.stateChangeLog.warn(
- "DIR* NameSystem.internalReleaseLease: " +
+ // Start recovery of the last block for this file
+ // Only do so if there is no ongoing recovery for this block,
+ // or the previous recovery for this block timed out.
+ if (blockManager.addBlockRecoveryAttempt(lastBlock)) {
+ long blockRecoveryId = nextGenerationStamp(
+ blockManager.isLegacyBlock(lastBlock));
+ if(copyOnTruncate) {
+ lastBlock.setGenerationStamp(blockRecoveryId);
+ } else if(truncateRecovery) {
+ recoveryBlock.setGenerationStamp(blockRecoveryId);
+ }
+ uc.initializeBlockRecovery(lastBlock, blockRecoveryId, true);
+
+ // Cannot close file right now, since the last block requires recovery.
+ // This may potentially cause infinite loop in lease recovery
+ // if there are no valid replicas on data-nodes.
+ NameNode.stateChangeLog.warn(
+ "DIR* NameSystem.internalReleaseLease: " +
"File " + src + " has not been closed." +
- " Lease recovery is in progress. " +
+ " Lease recovery is in progress. " +
"RecoveryId = " + blockRecoveryId + " for block " + lastBlock);
+ }
+ lease = reassignLease(lease, src, recoveryLeaseHolder, pendingFile);
+ leaseManager.renewLease(lease);
break;
}
return false;
@@ -3604,6 +3609,7 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
// If this commit does not want to close the file, persist blocks
FSDirWriteFileOp.persistBlocks(dir, src, iFile, false);
}
+ blockManager.successfulBlockRecovery(storedBlock);
} finally {
writeUnlock("commitBlockSynchronization");
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5304698d/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingRecoveryBlocks.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingRecoveryBlocks.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingRecoveryBlocks.java
new file mode 100644
index 0000000..baad89f
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestPendingRecoveryBlocks.java
@@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.blockmanagement;
+
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * This class contains unit tests for PendingRecoveryBlocks.java functionality.
+ */
+public class TestPendingRecoveryBlocks {
+
+ private PendingRecoveryBlocks pendingRecoveryBlocks;
+ private final long recoveryTimeout = 1000L;
+
+ private final BlockInfo blk1 = getBlock(1);
+ private final BlockInfo blk2 = getBlock(2);
+ private final BlockInfo blk3 = getBlock(3);
+
+ @Before
+ public void setUp() {
+ pendingRecoveryBlocks =
+ Mockito.spy(new PendingRecoveryBlocks(recoveryTimeout));
+ }
+
+ BlockInfo getBlock(long blockId) {
+ return new BlockInfoContiguous(new Block(blockId), (short) 0);
+ }
+
+ @Test
+ public void testAddDifferentBlocks() {
+ assertTrue(pendingRecoveryBlocks.add(blk1));
+ assertTrue(pendingRecoveryBlocks.isUnderRecovery(blk1));
+ assertTrue(pendingRecoveryBlocks.add(blk2));
+ assertTrue(pendingRecoveryBlocks.isUnderRecovery(blk2));
+ assertTrue(pendingRecoveryBlocks.add(blk3));
+ assertTrue(pendingRecoveryBlocks.isUnderRecovery(blk3));
+ }
+
+ @Test
+ public void testAddAndRemoveBlocks() {
+ // Add blocks
+ assertTrue(pendingRecoveryBlocks.add(blk1));
+ assertTrue(pendingRecoveryBlocks.add(blk2));
+
+ // Remove blk1
+ pendingRecoveryBlocks.remove(blk1);
+
+ // Adding back blk1 should succeed
+ assertTrue(pendingRecoveryBlocks.add(blk1));
+ }
+
+ @Test
+ public void testAddBlockWithPreviousRecoveryTimedOut() {
+ // Add blk
+ Mockito.doReturn(0L).when(pendingRecoveryBlocks).getTime();
+ assertTrue(pendingRecoveryBlocks.add(blk1));
+
+ // Should fail, has not timed out yet
+ Mockito.doReturn(recoveryTimeout / 2).when(pendingRecoveryBlocks).getTime();
+ assertFalse(pendingRecoveryBlocks.add(blk1));
+
+ // Should succeed after timing out
+ Mockito.doReturn(recoveryTimeout * 2).when(pendingRecoveryBlocks).getTime();
+ assertTrue(pendingRecoveryBlocks.add(blk1));
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5304698d/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
index 311d5a6..208447d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
@@ -18,7 +18,10 @@
package org.apache.hadoop.hdfs.server.datanode;
+import org.apache.hadoop.hdfs.AppendTestUtil;
+import org.apache.hadoop.hdfs.server.namenode.NameNode;
import org.apache.hadoop.hdfs.server.protocol.SlowDiskReports;
+
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
import static org.mockito.Matchers.any;
@@ -43,6 +46,7 @@ import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
+import java.util.Random;
import java.util.concurrent.Semaphore;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
@@ -94,6 +98,7 @@ import org.apache.hadoop.hdfs.server.protocol.ReplicaRecoveryInfo;
import org.apache.hadoop.hdfs.server.protocol.StorageReport;
import org.apache.hadoop.hdfs.server.protocol.VolumeFailureSummary;
import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.test.GenericTestUtils.SleepAnswer;
import org.apache.hadoop.util.DataChecksum;
import org.apache.hadoop.util.Time;
import org.apache.log4j.Level;
@@ -1035,4 +1040,107 @@ public class TestBlockRecovery {
Assert.fail("Thread failure: " + failureReason);
}
}
+
+ /**
+ * Test for block recovery taking longer than the heartbeat interval.
+ */
+ @Test(timeout = 300000L)
+ public void testRecoverySlowerThanHeartbeat() throws Exception {
+ tearDown(); // Stop the Mocked DN started in startup()
+
+ SleepAnswer delayer = new SleepAnswer(3000, 6000);
+ testRecoveryWithDatanodeDelayed(delayer);
+ }
+
+ /**
+ * Test for block recovery timeout. All recovery attempts will be delayed
+ * and the first attempt will be lost to trigger recovery timeout and retry.
+ */
+ @Test(timeout = 300000L)
+ public void testRecoveryTimeout() throws Exception {
+ tearDown(); // Stop the Mocked DN started in startup()
+ final Random r = new Random();
+
+ // Make sure first commitBlockSynchronization call from the DN gets lost
+ // for the recovery timeout to expire and new recovery attempt
+ // to be started.
+ SleepAnswer delayer = new SleepAnswer(3000) {
+ private final AtomicBoolean callRealMethod = new AtomicBoolean();
+
+ @Override
+ public Object answer(InvocationOnMock invocation) throws Throwable {
+ boolean interrupted = false;
+ try {
+ Thread.sleep(r.nextInt(3000) + 6000);
+ } catch (InterruptedException ie) {
+ interrupted = true;
+ }
+ try {
+ if (callRealMethod.get()) {
+ return invocation.callRealMethod();
+ }
+ callRealMethod.set(true);
+ return null;
+ } finally {
+ if (interrupted) {
+ Thread.currentThread().interrupt();
+ }
+ }
+ }
+ };
+ testRecoveryWithDatanodeDelayed(delayer);
+ }
+
+ private void testRecoveryWithDatanodeDelayed(
+ GenericTestUtils.SleepAnswer recoveryDelayer) throws Exception {
+ Configuration configuration = new HdfsConfiguration();
+ configuration.setLong(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);
+ MiniDFSCluster cluster = null;
+
+ try {
+ cluster = new MiniDFSCluster.Builder(configuration)
+ .numDataNodes(2).build();
+ cluster.waitActive();
+ final FSNamesystem ns = cluster.getNamesystem();
+ final NameNode nn = cluster.getNameNode();
+ final DistributedFileSystem dfs = cluster.getFileSystem();
+ ns.getBlockManager().setBlockRecoveryTimeout(
+ TimeUnit.SECONDS.toMillis(10));
+
+ // Create a file and never close the output stream to trigger recovery
+ FSDataOutputStream out = dfs.create(new Path("/testSlowRecovery"),
+ (short) 2);
+ out.write(AppendTestUtil.randomBytes(0, 4096));
+ out.hsync();
+
+ List<DataNode> dataNodes = cluster.getDataNodes();
+ for (DataNode datanode : dataNodes) {
+ DatanodeProtocolClientSideTranslatorPB nnSpy =
+ InternalDataNodeTestUtils.spyOnBposToNN(datanode, nn);
+
+ Mockito.doAnswer(recoveryDelayer).when(nnSpy).
+ commitBlockSynchronization(
+ Mockito.any(ExtendedBlock.class), Mockito.anyInt(),
+ Mockito.anyLong(), Mockito.anyBoolean(),
+ Mockito.anyBoolean(), Mockito.anyObject(),
+ Mockito.anyObject());
+ }
+
+ // Make sure hard lease expires to trigger replica recovery
+ cluster.setLeasePeriod(100L, 100L);
+
+ // Wait for recovery to succeed
+ GenericTestUtils.waitFor(new Supplier<Boolean>() {
+ @Override
+ public Boolean get() {
+ return ns.getCompleteBlocksTotal() > 0;
+ }
+ }, 300, 300000);
+
+ } finally {
+ if (cluster != null) {
+ cluster.shutdown();
+ }
+ }
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5304698d/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
index dc7f47a..a565578 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java
@@ -25,6 +25,7 @@ import static org.junit.Assert.fail;
import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import java.util.Random;
+import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.commons.logging.Log;
@@ -278,12 +279,14 @@ public class TestPipelinesFailover {
// Disable permissions so that another user can recover the lease.
conf.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, false);
conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, BLOCK_SIZE);
-
+
FSDataOutputStream stm = null;
final MiniDFSCluster cluster = newMiniCluster(conf, 3);
try {
cluster.waitActive();
cluster.transitionToActive(0);
+ cluster.getNamesystem().getBlockManager().setBlockRecoveryTimeout(
+ TimeUnit.SECONDS.toMillis(1));
Thread.sleep(500);
LOG.info("Starting with NN 0 active");
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[38/50] [abbrv] hadoop git commit: HDFS-12091. [READ] Check that the
replicas served from a ProvidedVolumeImpl belong to the correct external
storage
Posted by vi...@apache.org.
HDFS-12091. [READ] Check that the replicas served from a ProvidedVolumeImpl belong to the correct external storage
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/6fdb52da
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/6fdb52da
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/6fdb52da
Branch: refs/heads/HDFS-9806
Commit: 6fdb52da6316e86a9d1198859f9e169f78f9cac4
Parents: cf2ef64
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Mon Aug 7 11:35:49 2017 -0700
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:58 2017 -0800
----------------------------------------------------------------------
.../hdfs/server/datanode/StorageLocation.java | 26 +++--
.../fsdataset/impl/ProvidedVolumeImpl.java | 67 ++++++++++--
.../fsdataset/impl/TestProvidedImpl.java | 105 ++++++++++++++++++-
3 files changed, 173 insertions(+), 25 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/6fdb52da/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
index fb7acfd..d72448d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
@@ -64,21 +64,25 @@ public class StorageLocation
this.storageType = storageType;
if (uri.getScheme() == null || uri.getScheme().equals("file")) {
// make sure all URIs that point to a file have the same scheme
- try {
- File uriFile = new File(uri.getPath());
- String uriStr = uriFile.toURI().normalize().toString();
- if (uriStr.endsWith("/")) {
- uriStr = uriStr.substring(0, uriStr.length() - 1);
- }
- uri = new URI(uriStr);
- } catch (URISyntaxException e) {
- throw new IllegalArgumentException(
- "URI: " + uri + " is not in the expected format");
- }
+ uri = normalizeFileURI(uri);
}
baseURI = uri;
}
+ public static URI normalizeFileURI(URI uri) {
+ try {
+ File uriFile = new File(uri.getPath());
+ String uriStr = uriFile.toURI().normalize().toString();
+ if (uriStr.endsWith("/")) {
+ uriStr = uriStr.substring(0, uriStr.length() - 1);
+ }
+ return new URI(uriStr);
+ } catch (URISyntaxException e) {
+ throw new IllegalArgumentException(
+ "URI: " + uri + " is not in the expected format");
+ }
+ }
+
public StorageType getStorageType() {
return this.storageType;
}
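The refactor above extracts the file-URI canonicalization into a reusable static normalizeFileURI. A self-contained sketch of the same logic (class name here is illustrative; the real method lives on StorageLocation):

```java
import java.io.File;
import java.net.URI;
import java.net.URISyntaxException;

// Sketch of the extracted normalization: round-trip the path through
// java.io.File to get a canonical file: URI, collapse "." and ".."
// segments, and strip any trailing slash so equal locations compare equal.
class FileUriNormalizer {
    static URI normalize(URI uri) {
        try {
            File uriFile = new File(uri.getPath());
            String uriStr = uriFile.toURI().normalize().toString();
            if (uriStr.endsWith("/")) {
                uriStr = uriStr.substring(0, uriStr.length() - 1);
            }
            return new URI(uriStr);
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException(
                "URI: " + uri + " is not in the expected format");
        }
    }
}
```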
http://git-wip-us.apache.org/repos/asf/hadoop/blob/6fdb52da/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
index 421b9cc..5cd28c7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;
import org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.ReportCompiler;
+import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
import org.apache.hadoop.hdfs.server.datanode.checker.VolumeCheckResult;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
import org.apache.hadoop.hdfs.server.datanode.FileIoProvider;
@@ -64,7 +65,7 @@ import org.apache.hadoop.util.Time;
public class ProvidedVolumeImpl extends FsVolumeImpl {
static class ProvidedBlockPoolSlice {
- private FsVolumeImpl providedVolume;
+ private ProvidedVolumeImpl providedVolume;
private FileRegionProvider provider;
private Configuration conf;
@@ -89,13 +90,20 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
return provider;
}
+ @VisibleForTesting
+ void setFileRegionProvider(FileRegionProvider newProvider) {
+ this.provider = newProvider;
+ }
+
public void getVolumeMap(ReplicaMap volumeMap,
RamDiskReplicaTracker ramDiskReplicaMap) throws IOException {
Iterator<FileRegion> iter = provider.iterator();
- while(iter.hasNext()) {
+ while (iter.hasNext()) {
FileRegion region = iter.next();
- if (region.getBlockPoolId() != null &&
- region.getBlockPoolId().equals(bpid)) {
+ if (region.getBlockPoolId() != null
+ && region.getBlockPoolId().equals(bpid)
+ && containsBlock(providedVolume.baseURI,
+ region.getPath().toUri())) {
ReplicaInfo newReplica = new ReplicaBuilder(ReplicaState.FINALIZED)
.setBlockId(region.getBlock().getBlockId())
.setURI(region.getPath().toUri())
@@ -103,17 +111,16 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
.setLength(region.getBlock().getNumBytes())
.setGenerationStamp(region.getBlock().getGenerationStamp())
.setFsVolume(providedVolume)
- .setConf(conf).build();
-
- ReplicaInfo oldReplica =
- volumeMap.get(bpid, newReplica.getBlockId());
+ .setConf(conf)
+ .build();
+ // check if the replica already exists
+ ReplicaInfo oldReplica = volumeMap.get(bpid, newReplica.getBlockId());
if (oldReplica == null) {
volumeMap.add(bpid, newReplica);
bpVolumeMap.add(bpid, newReplica);
} else {
- throw new IOException(
- "A block with id " + newReplica.getBlockId() +
- " already exists in the volumeMap");
+ throw new IOException("A block with id " + newReplica.getBlockId()
+ + " already exists in the volumeMap");
}
}
}
@@ -527,4 +534,42 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
throw new UnsupportedOperationException(
"ProvidedVolume does not yet support writes");
}
+
+ private static URI getAbsoluteURI(URI uri) {
+ if (!uri.isAbsolute()) {
+ // URI is not absolute implies it is for a local file
+ // normalize the URI
+ return StorageLocation.normalizeFileURI(uri);
+ } else {
+ return uri;
+ }
+ }
+ /**
+ * @param volumeURI URI of the volume
+ * @param blockURI URI of the block
+ * @return true if the {@code blockURI} can belong to the volume or both URIs
+ * are null.
+ */
+ @VisibleForTesting
+ public static boolean containsBlock(URI volumeURI, URI blockURI) {
+ if (volumeURI == null && blockURI == null){
+ return true;
+ }
+ if (volumeURI == null || blockURI == null) {
+ return false;
+ }
+ volumeURI = getAbsoluteURI(volumeURI);
+ blockURI = getAbsoluteURI(blockURI);
+ return !volumeURI.relativize(blockURI).equals(blockURI);
+ }
+
+ @VisibleForTesting
+ void setFileRegionProvider(String bpid, FileRegionProvider provider)
+ throws IOException {
+ ProvidedBlockPoolSlice bp = bpSlices.get(bpid);
+ if (bp == null) {
+ throw new IOException("block pool " + bpid + " is not found");
+ }
+ bp.setFileRegionProvider(provider);
+ }
}
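The containsBlock check above hinges on java.net.URI.relativize: when the block URI lies under the volume URI, relativize returns a shorter relative URI; otherwise it returns the block URI unchanged. A standalone illustration of just that test (omitting the getAbsoluteURI normalization step; class name is illustrative):

```java
import java.net.URI;

// Standalone version of the relativize-based containment test used by
// ProvidedVolumeImpl.containsBlock: relativize returns its argument
// unchanged when the argument is not under the base URI.
class UriContainment {
    static boolean contains(URI volume, URI block) {
        if (volume == null && block == null) {
            return true;
        }
        if (volume == null || block == null) {
            return false;
        }
        return !volume.relativize(block).equals(block);
    }
}
```

This is why the volume scheme and path prefix must match: an s3a block URI never relativizes against a different bucket, matching the assertions in testProvidedVolumeContainsBlock below.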
http://git-wip-us.apache.org/repos/asf/hadoop/blob/6fdb52da/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
index 4753235..8782e71 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
@@ -31,6 +31,8 @@ import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
+import java.net.URI;
+import java.net.URISyntaxException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;
@@ -174,15 +176,26 @@ public class TestProvidedImpl {
private Configuration conf;
private int minId;
private int numBlocks;
+ private Iterator<FileRegion> suppliedIterator;
TestFileRegionProvider() {
- minId = MIN_BLK_ID;
- numBlocks = NUM_PROVIDED_BLKS;
+ this(null, MIN_BLK_ID, NUM_PROVIDED_BLKS);
+ }
+
+ TestFileRegionProvider(Iterator<FileRegion> iterator, int minId,
+ int numBlocks) {
+ this.suppliedIterator = iterator;
+ this.minId = minId;
+ this.numBlocks = numBlocks;
}
@Override
public Iterator<FileRegion> iterator() {
- return new TestFileRegionIterator(providedBasePath, minId, numBlocks);
+ if (suppliedIterator == null) {
+ return new TestFileRegionIterator(providedBasePath, minId, numBlocks);
+ } else {
+ return suppliedIterator;
+ }
}
@Override
@@ -503,4 +516,90 @@ public class TestProvidedImpl {
}
}
}
+
+ private int getBlocksInProvidedVolumes(String basePath, int numBlocks,
+ int minBlockId) throws IOException {
+ TestFileRegionIterator fileRegionIterator =
+ new TestFileRegionIterator(basePath, minBlockId, numBlocks);
+ int totalBlocks = 0;
+ for (int i = 0; i < providedVolumes.size(); i++) {
+ ProvidedVolumeImpl vol = (ProvidedVolumeImpl) providedVolumes.get(i);
+ vol.setFileRegionProvider(BLOCK_POOL_IDS[CHOSEN_BP_ID],
+ new TestFileRegionProvider(fileRegionIterator, minBlockId,
+ numBlocks));
+ ReplicaMap volumeMap = new ReplicaMap(new AutoCloseableLock());
+ vol.getVolumeMap(BLOCK_POOL_IDS[CHOSEN_BP_ID], volumeMap, null);
+ totalBlocks += volumeMap.size(BLOCK_POOL_IDS[CHOSEN_BP_ID]);
+ }
+ return totalBlocks;
+ }
+
+ /**
+ * Tests if the FileRegions provided by the FileRegionProvider
+ * can belong to the ProvidedVolume.
+ * @throws IOException
+ */
+ @Test
+ public void testProvidedVolumeContents() throws IOException {
+ int expectedBlocks = 5;
+ int minId = 0;
+ //use a path which has the same prefix as providedBasePath
+ //all these blocks can belong to the provided volume
+ int blocksFound = getBlocksInProvidedVolumes(providedBasePath + "/test1/",
+ expectedBlocks, minId);
+ assertEquals(
+ "Number of blocks in provided volumes should be " + expectedBlocks,
+ expectedBlocks, blocksFound);
+ blocksFound = getBlocksInProvidedVolumes(
+ "file:/" + providedBasePath + "/test1/", expectedBlocks, minId);
+ assertEquals(
+ "Number of blocks in provided volumes should be " + expectedBlocks,
+ expectedBlocks, blocksFound);
+ //use a path that is entirely different from the providedBasePath
+ //none of these blocks can belong to the volume
+ blocksFound =
+ getBlocksInProvidedVolumes("randomtest1/", expectedBlocks, minId);
+ assertEquals("Number of blocks in provided volumes should be 0", 0,
+ blocksFound);
+ }
+
+ @Test
+ public void testProvidedVolumeContainsBlock() throws URISyntaxException {
+ assertEquals(true, ProvidedVolumeImpl.containsBlock(null, null));
+ assertEquals(false,
+ ProvidedVolumeImpl.containsBlock(new URI("file:/a"), null));
+ assertEquals(true,
+ ProvidedVolumeImpl.containsBlock(new URI("file:/a/b/c/"),
+ new URI("file:/a/b/c/d/e.file")));
+ assertEquals(true,
+ ProvidedVolumeImpl.containsBlock(new URI("/a/b/c/"),
+ new URI("file:/a/b/c/d/e.file")));
+ assertEquals(true,
+ ProvidedVolumeImpl.containsBlock(new URI("/a/b/c"),
+ new URI("file:/a/b/c/d/e.file")));
+ assertEquals(true,
+ ProvidedVolumeImpl.containsBlock(new URI("/a/b/c/"),
+ new URI("/a/b/c/d/e.file")));
+ assertEquals(true,
+ ProvidedVolumeImpl.containsBlock(new URI("file:/a/b/c/"),
+ new URI("/a/b/c/d/e.file")));
+ assertEquals(false,
+ ProvidedVolumeImpl.containsBlock(new URI("/a/b/e"),
+ new URI("file:/a/b/c/d/e.file")));
+ assertEquals(false,
+ ProvidedVolumeImpl.containsBlock(new URI("file:/a/b/e"),
+ new URI("file:/a/b/c/d/e.file")));
+ assertEquals(true,
+ ProvidedVolumeImpl.containsBlock(new URI("s3a:/bucket1/dir1/"),
+ new URI("s3a:/bucket1/dir1/temp.txt")));
+ assertEquals(false,
+ ProvidedVolumeImpl.containsBlock(new URI("s3a:/bucket2/dir1/"),
+ new URI("s3a:/bucket1/dir1/temp.txt")));
+ assertEquals(false,
+ ProvidedVolumeImpl.containsBlock(new URI("s3a:/bucket1/dir1/"),
+ new URI("s3a:/bucket1/temp.txt")));
+ assertEquals(false,
+ ProvidedVolumeImpl.containsBlock(new URI("/bucket1/dir1/"),
+ new URI("s3a:/bucket1/dir1/temp.txt")));
+ }
}
[28/50] [abbrv] hadoop git commit: HDFS-12289. [READ] HDFS-12091
breaks the tests for provided block reads
Posted by vi...@apache.org.
HDFS-12289. [READ] HDFS-12091 breaks the tests for provided block reads
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7eabf01e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7eabf01e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7eabf01e
Branch: refs/heads/HDFS-9806
Commit: 7eabf01ea9ea8f97519b49fd658e6dde63a3853f
Parents: 30f2de1
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Mon Aug 14 10:29:47 2017 -0700
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:58 2017 -0800
----------------------------------------------------------------------
.../org/apache/hadoop/hdfs/MiniDFSCluster.java | 30 +++++++++++++++++++-
.../TestNameNodeProvidedImplementation.java | 4 ++-
2 files changed, 32 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7eabf01e/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
index da91006..a58893b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
@@ -147,6 +147,9 @@ public class MiniDFSCluster implements AutoCloseable {
GenericTestUtils.SYSPROP_TEST_DATA_DIR;
/** Configuration option to set the data dir: {@value} */
public static final String HDFS_MINIDFS_BASEDIR = "hdfs.minidfs.basedir";
+ /** Configuration option to set the provided data dir: {@value} */
+ public static final String HDFS_MINIDFS_BASEDIR_PROVIDED =
+ "hdfs.minidfs.basedir.provided";
public static final String DFS_NAMENODE_SAFEMODE_EXTENSION_TESTING_KEY
= DFS_NAMENODE_SAFEMODE_EXTENSION_KEY + ".testing";
public static final String DFS_NAMENODE_DECOMMISSION_INTERVAL_TESTING_KEY
@@ -1397,7 +1400,12 @@ public class MiniDFSCluster implements AutoCloseable {
if ((storageTypes != null) && (j >= storageTypes.length)) {
break;
}
- File dir = getInstanceStorageDir(dnIndex, j);
+ File dir;
+ if (storageTypes != null && storageTypes[j] == StorageType.PROVIDED) {
+ dir = getProvidedStorageDir(dnIndex, j);
+ } else {
+ dir = getInstanceStorageDir(dnIndex, j);
+ }
dir.mkdirs();
if (!dir.isDirectory()) {
throw new IOException("Mkdirs failed to create directory for DataNode " + dir);
@@ -2847,6 +2855,26 @@ public class MiniDFSCluster implements AutoCloseable {
}
/**
+ * Get a storage directory for PROVIDED storages.
+ * The PROVIDED directory to return can be set by using the configuration
+ * parameter {@link #HDFS_MINIDFS_BASEDIR_PROVIDED}. If this parameter is
+ * not set, this function behaves exactly the same as
+ * {@link #getInstanceStorageDir(int, int)}. Currently, the two parameters
+ * are ignored as only one PROVIDED storage is supported in HDFS-9806.
+ *
+ * @param dnIndex datanode index (starts from 0)
+ * @param dirIndex directory index
+ * @return Storage directory
+ */
+ public File getProvidedStorageDir(int dnIndex, int dirIndex) {
+ String base = conf.get(HDFS_MINIDFS_BASEDIR_PROVIDED, null);
+ if (base == null) {
+ return getInstanceStorageDir(dnIndex, dirIndex);
+ }
+ return new File(base);
+ }
+
+ /**
* Get a storage directory for a datanode.
* <ol>
* <li><base directory>/data/data<2*dnIndex + 1></li>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7eabf01e/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index 60b306f..3f937c4 100644
--- a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -74,7 +74,7 @@ public class TestNameNodeProvidedImplementation {
final Random r = new Random();
final File fBASE = new File(MiniDFSCluster.getBaseDirectory());
final Path BASE = new Path(fBASE.toURI().toString());
- final Path NAMEPATH = new Path(BASE, "providedDir");;
+ final Path NAMEPATH = new Path(BASE, "providedDir");
final Path NNDIRPATH = new Path(BASE, "nnDir");
final Path BLOCKFILE = new Path(NNDIRPATH, "blocks.csv");
final String SINGLEUSER = "usr1";
@@ -116,6 +116,8 @@ public class TestNameNodeProvidedImplementation {
BLOCKFILE.toString());
conf.set(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_DELIMITER, ",");
+ conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR_PROVIDED,
+ new File(NAMEPATH.toUri()).toString());
File imageDir = new File(NAMEPATH.toUri());
if (!imageDir.exists()) {
LOG.info("Creating directory: " + imageDir);
[24/50] [abbrv] hadoop git commit: HDFS-10675. Datanode support to
read from external stores.
Posted by vi...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
index adec209..15e71f0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;
import org.apache.hadoop.hdfs.server.datanode.FileIoProvider;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
import org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.ReportCompiler;
import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
import org.apache.hadoop.hdfs.server.datanode.checker.Checkable;
@@ -241,10 +242,11 @@ public interface FsVolumeSpi
private final FsVolumeSpi volume;
+ private final FileRegion fileRegion;
/**
* Get the file's length in async block scan
*/
- private final long blockFileLength;
+ private final long blockLength;
private final static Pattern CONDENSED_PATH_REGEX =
Pattern.compile("(?<!^)(\\\\|/){2,}");
@@ -294,13 +296,30 @@ public interface FsVolumeSpi
*/
public ScanInfo(long blockId, File blockFile, File metaFile,
FsVolumeSpi vol) {
+ this(blockId, blockFile, metaFile, vol, null,
+ (blockFile != null) ? blockFile.length() : 0);
+ }
+
+ /**
+ * Create a ScanInfo object for a block. This constructor will examine
+ * the block data and meta-data files.
+ *
+ * @param blockId the block ID
+ * @param blockFile the path to the block data file
+ * @param metaFile the path to the block meta-data file
+ * @param vol the volume that contains the block
+ * @param fileRegion the file region (for provided blocks)
+ * @param length the length of the block data
+ */
+ public ScanInfo(long blockId, File blockFile, File metaFile,
+ FsVolumeSpi vol, FileRegion fileRegion, long length) {
this.blockId = blockId;
String condensedVolPath =
(vol == null || vol.getBaseURI() == null) ? null :
getCondensedPath(new File(vol.getBaseURI()).getAbsolutePath());
this.blockSuffix = blockFile == null ? null :
getSuffix(blockFile, condensedVolPath);
- this.blockFileLength = (blockFile != null) ? blockFile.length() : 0;
+ this.blockLength = length;
if (metaFile == null) {
this.metaSuffix = null;
} else if (blockFile == null) {
@@ -310,6 +329,7 @@ public interface FsVolumeSpi
condensedVolPath + blockSuffix);
}
this.volume = vol;
+ this.fileRegion = fileRegion;
}
/**
@@ -328,8 +348,8 @@ public interface FsVolumeSpi
*
* @return the length of the data block
*/
- public long getBlockFileLength() {
- return blockFileLength;
+ public long getBlockLength() {
+ return blockLength;
}
/**
@@ -399,6 +419,10 @@ public interface FsVolumeSpi
getMetaFile().getName()) :
HdfsConstants.GRANDFATHER_GENERATION_STAMP;
}
+
+ public FileRegion getFileRegion() {
+ return fileRegion;
+ }
}
/**
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/DefaultProvidedVolumeDF.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/DefaultProvidedVolumeDF.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/DefaultProvidedVolumeDF.java
new file mode 100644
index 0000000..24921c4
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/DefaultProvidedVolumeDF.java
@@ -0,0 +1,58 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.datanode.fsdataset.impl;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * The default usage statistics for a provided volume.
+ */
+public class DefaultProvidedVolumeDF
+ implements ProvidedVolumeDF, Configurable {
+
+ @Override
+ public void setConf(Configuration conf) {
+ }
+
+ @Override
+ public Configuration getConf() {
+ return null;
+ }
+
+ @Override
+ public long getCapacity() {
+ return Long.MAX_VALUE;
+ }
+
+ @Override
+ public long getSpaceUsed() {
+ return 0;
+ }
+
+ @Override
+ public long getBlockPoolUsed(String bpid) {
+ return 0;
+ }
+
+ @Override
+ public long getAvailable() {
+ return Long.MAX_VALUE;
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
index d4375cd..81056db 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
@@ -86,6 +86,7 @@ import org.apache.hadoop.hdfs.server.datanode.UnexpectedReplicaStateException;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi.ScanInfo;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaInputStreams;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams;
@@ -1742,6 +1743,10 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
Set<String> missingVolumesReported = new HashSet<>();
for (ReplicaInfo b : volumeMap.replicas(bpid)) {
+ //skip blocks in PROVIDED storage
+ if (b.getVolume().getStorageType() == StorageType.PROVIDED) {
+ continue;
+ }
String volStorageID = b.getVolume().getStorageID();
if (!builders.containsKey(volStorageID)) {
if (!missingVolumesReported.contains(volStorageID)) {
@@ -1877,7 +1882,6 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
try (AutoCloseableLock lock = datasetLock.acquire()) {
r = volumeMap.get(bpid, blockId);
}
-
if (r != null) {
if (r.blockDataExists()) {
return r;
@@ -2230,13 +2234,20 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
* @param vol Volume of the block file
*/
@Override
- public void checkAndUpdate(String bpid, long blockId, File diskFile,
- File diskMetaFile, FsVolumeSpi vol) throws IOException {
+ public void checkAndUpdate(String bpid, ScanInfo scanInfo)
+ throws IOException {
+
+ long blockId = scanInfo.getBlockId();
+ File diskFile = scanInfo.getBlockFile();
+ File diskMetaFile = scanInfo.getMetaFile();
+ FsVolumeSpi vol = scanInfo.getVolume();
+
Block corruptBlock = null;
ReplicaInfo memBlockInfo;
try (AutoCloseableLock lock = datasetLock.acquire()) {
memBlockInfo = volumeMap.get(bpid, blockId);
- if (memBlockInfo != null && memBlockInfo.getState() != ReplicaState.FINALIZED) {
+ if (memBlockInfo != null &&
+ memBlockInfo.getState() != ReplicaState.FINALIZED) {
// Block is not finalized - ignore the difference
return;
}
@@ -2251,6 +2262,26 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
Block.getGenerationStamp(diskMetaFile.getName()) :
HdfsConstants.GRANDFATHER_GENERATION_STAMP;
+ if (vol.getStorageType() == StorageType.PROVIDED) {
+ if (memBlockInfo == null) {
+ //replica exists on provided store but not in memory
+ ReplicaInfo diskBlockInfo =
+ new ReplicaBuilder(ReplicaState.FINALIZED)
+ .setFileRegion(scanInfo.getFileRegion())
+ .setFsVolume(vol)
+ .setConf(conf)
+ .build();
+
+ volumeMap.add(bpid, diskBlockInfo);
+ LOG.warn("Added missing block to memory " + diskBlockInfo);
+ } else {
+ //replica exists in memory but not in the provided store
+ volumeMap.remove(bpid, blockId);
+ LOG.warn("Deleting missing provided block " + memBlockInfo);
+ }
+ return;
+ }
+
if (!diskFileExists) {
if (memBlockInfo == null) {
// Block file does not exist and block does not exist in memory
@@ -3026,7 +3057,6 @@ class FsDatasetImpl implements FsDatasetSpi<FsVolumeImpl> {
newReplicaInfo =
replicaState.getLazyPersistVolume().activateSavedReplica(bpid,
replicaInfo, replicaState);
-
// Update the volumeMap entry.
volumeMap.add(bpid, newReplicaInfo);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetUtil.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetUtil.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetUtil.java
index 32759c4..9f115a0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetUtil.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetUtil.java
@@ -17,6 +17,8 @@
*/
package org.apache.hadoop.hdfs.server.datanode.fsdataset.impl;
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
import java.io.File;
import java.io.FileDescriptor;
import java.io.FileInputStream;
@@ -32,10 +34,12 @@ import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader;
import org.apache.hadoop.hdfs.server.datanode.DatanodeUtil;
import org.apache.hadoop.hdfs.server.datanode.FinalizedReplica;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;
import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.util.DataChecksum;
/** Utility methods. */
@InterfaceAudience.Private
@@ -44,6 +48,22 @@ public class FsDatasetUtil {
return f.getName().endsWith(DatanodeUtil.UNLINK_BLOCK_SUFFIX);
}
+ public static byte[] createNullChecksumByteArray() {
+ DataChecksum csum =
+ DataChecksum.newDataChecksum(DataChecksum.Type.NULL, 512);
+ ByteArrayOutputStream out = new ByteArrayOutputStream();
+ DataOutputStream dataOut = new DataOutputStream(out);
+ try {
+ BlockMetadataHeader.writeHeader(dataOut, csum);
+ dataOut.close();
+ } catch (IOException e) {
+ FsVolumeImpl.LOG.error(
+ "Exception creating null checksum stream", e);
+ return null;
+ }
+ return out.toByteArray();
+ }
+
static File getOrigFile(File unlinkTmpFile) {
final String name = unlinkTmpFile.getName();
if (!name.endsWith(DatanodeUtil.UNLINK_BLOCK_SUFFIX)) {
@@ -135,8 +155,9 @@ public class FsDatasetUtil {
* Compute the checksum for a block file that does not already have
* its checksum computed, and save it to dstMeta file.
*/
- public static void computeChecksum(File srcMeta, File dstMeta, File blockFile,
- int smallBufferSize, Configuration conf) throws IOException {
+ public static void computeChecksum(File srcMeta, File dstMeta,
+ File blockFile, int smallBufferSize, Configuration conf)
+ throws IOException {
Preconditions.checkNotNull(srcMeta);
Preconditions.checkNotNull(dstMeta);
Preconditions.checkNotNull(blockFile);
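The createNullChecksumByteArray() helper above serializes a block-metadata header carrying a NULL checksum into a byte array through a ByteArrayOutputStream/DataOutputStream pair. The same serialization pattern can be sketched without the Hadoop classes; note the header fields and sizes below (version, checksum type, bytes-per-checksum) are illustrative stand-ins, not the real BlockMetadataHeader layout:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class NullChecksumHeaderSketch {
  // Illustrative constants; the real BlockMetadataHeader layout is
  // Hadoop-internal and may differ.
  static final short VERSION = 1;
  static final byte CHECKSUM_NULL = 0;
  static final int BYTES_PER_CHECKSUM = 512;

  /** Serialize a minimal "no checksum" header into a byte array. */
  static byte[] createNullChecksumByteArray() {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    try (DataOutputStream dataOut = new DataOutputStream(out)) {
      dataOut.writeShort(VERSION);           // header version (2 bytes)
      dataOut.writeByte(CHECKSUM_NULL);      // checksum type: NULL (1 byte)
      dataOut.writeInt(BYTES_PER_CHECKSUM);  // chunk size (4 bytes)
    } catch (IOException e) {
      return null; // mirror the defensive style of the patch
    }
    return out.toByteArray();
  }

  public static void main(String[] args) {
    byte[] header = createNullChecksumByteArray();
    // short (2) + byte (1) + int (4) = 7 bytes
    System.out.println(header.length);
  }
}
```

Because the destination is an in-memory buffer, the IOException branch is effectively unreachable here; the patch keeps it because BlockMetadataHeader.writeHeader declares it.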
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
index 7224e69..319bc0e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
@@ -154,18 +154,24 @@ public class FsVolumeImpl implements FsVolumeSpi {
this.reservedForReplicas = new AtomicLong(0L);
this.storageLocation = sd.getStorageLocation();
this.currentDir = sd.getCurrentDir();
- File parent = currentDir.getParentFile();
- this.usage = new DF(parent, conf);
this.storageType = storageLocation.getStorageType();
this.reserved = conf.getLong(DFSConfigKeys.DFS_DATANODE_DU_RESERVED_KEY
+ "." + StringUtils.toLowerCase(storageType.toString()), conf.getLong(
DFSConfigKeys.DFS_DATANODE_DU_RESERVED_KEY,
DFSConfigKeys.DFS_DATANODE_DU_RESERVED_DEFAULT));
this.configuredCapacity = -1;
+ if (currentDir != null) {
+ File parent = currentDir.getParentFile();
+ this.usage = new DF(parent, conf);
+ cacheExecutor = initializeCacheExecutor(parent);
+ this.metrics = DataNodeVolumeMetrics.create(conf, parent.getPath());
+ } else {
+ this.usage = null;
+ cacheExecutor = null;
+ this.metrics = null;
+ }
this.conf = conf;
this.fileIoProvider = fileIoProvider;
- cacheExecutor = initializeCacheExecutor(parent);
- this.metrics = DataNodeVolumeMetrics.create(conf, getBaseURI().getPath());
}
protected ThreadPoolExecutor initializeCacheExecutor(File parent) {
@@ -440,7 +446,8 @@ public class FsVolumeImpl implements FsVolumeSpi {
/**
* Unplanned Non-DFS usage, i.e. Extra usage beyond reserved.
*
- * @return
+ * @return Disk usage excluding space used by HDFS and excluding space
+ * reserved for blocks open for write.
* @throws IOException
*/
public long getNonDfsUsed() throws IOException {
@@ -518,7 +525,7 @@ public class FsVolumeImpl implements FsVolumeSpi {
public String[] getBlockPoolList() {
return bpSlices.keySet().toArray(new String[bpSlices.keySet().size()]);
}
-
+
/**
* Temporary files. They get moved to the finalized block directory when
* the block is finalized.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImplBuilder.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImplBuilder.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImplBuilder.java
index 427f81b..2da9170 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImplBuilder.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImplBuilder.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hdfs.server.datanode.fsdataset.impl;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.server.common.Storage.StorageDirectory;
import org.apache.hadoop.hdfs.server.datanode.FileIoProvider;
@@ -67,6 +68,11 @@ public class FsVolumeImplBuilder {
}
FsVolumeImpl build() throws IOException {
+ if (sd.getStorageLocation().getStorageType() == StorageType.PROVIDED) {
+ return new ProvidedVolumeImpl(dataset, storageID, sd,
+ fileIoProvider != null ? fileIoProvider :
+ new FileIoProvider(null, null), conf);
+ }
return new FsVolumeImpl(
dataset, storageID, sd,
fileIoProvider != null ? fileIoProvider :
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeDF.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeDF.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeDF.java
new file mode 100644
index 0000000..4d28883
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeDF.java
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.datanode.fsdataset.impl;
+
+/**
+ * This interface is used to define the usage statistics
+ * of the provided storage.
+ */
+public interface ProvidedVolumeDF {
+
+ long getCapacity();
+
+ long getSpaceUsed();
+
+ long getBlockPoolUsed(String bpid);
+
+ long getAvailable();
+}
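ProvidedVolumeImpl selects its ProvidedVolumeDF implementation at runtime via Configuration.getClass() and ReflectionUtils.newInstance(). A simplified, self-contained stand-in for that plug-in pattern is sketched below; the config key string "dfs.provided.df.class" is hypothetical (the patch only references the DFSConfigKeys.DFS_PROVIDER_DF_CLASS constant, not its value), and a plain Map substitutes for a Hadoop Configuration:

```java
import java.util.Map;

public class ProvidedDFSelector {

  // Mirrors the ProvidedVolumeDF contract from the patch.
  interface ProvidedVolumeDF {
    long getCapacity();
    long getSpaceUsed();
    long getBlockPoolUsed(String bpid);
    long getAvailable();
  }

  /** Default: effectively unlimited capacity, nothing used. */
  public static class DefaultProvidedVolumeDF implements ProvidedVolumeDF {
    public long getCapacity() { return Long.MAX_VALUE; }
    public long getSpaceUsed() { return 0; }
    public long getBlockPoolUsed(String bpid) { return 0; }
    public long getAvailable() { return Long.MAX_VALUE; }
  }

  /** Load the configured class by name, falling back to the default. */
  static ProvidedVolumeDF newDF(Map<String, String> conf) {
    String name = conf.getOrDefault("dfs.provided.df.class",
        DefaultProvidedVolumeDF.class.getName());
    try {
      return (ProvidedVolumeDF) Class.forName(name)
          .getDeclaredConstructor().newInstance();
    } catch (ReflectiveOperationException e) {
      return new DefaultProvidedVolumeDF();
    }
  }

  public static void main(String[] args) {
    ProvidedVolumeDF df = newDF(Map.of());
    System.out.println(df.getCapacity() == Long.MAX_VALUE);
    System.out.println(df.getSpaceUsed());
  }
}
```

The design choice in the patch is the same: usage statistics for provided (external) storage cannot be measured with DF on a local directory, so they are delegated to a pluggable class, with a no-op default.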
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
new file mode 100644
index 0000000..a48e117
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
@@ -0,0 +1,526 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.datanode.fsdataset.impl;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.Map;
+import java.util.Set;
+import java.util.Map.Entry;
+import java.util.concurrent.ConcurrentHashMap;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;
+import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.apache.hadoop.hdfs.server.common.FileRegionProvider;
+import org.apache.hadoop.hdfs.server.common.Storage.StorageDirectory;
+import org.apache.hadoop.hdfs.server.common.TextFileRegionProvider;
+import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
+import org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline;
+import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;
+import org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.ReportCompiler;
+import org.apache.hadoop.hdfs.server.datanode.checker.VolumeCheckResult;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
+import org.apache.hadoop.hdfs.server.datanode.FileIoProvider;
+import org.apache.hadoop.hdfs.server.datanode.ReplicaBuilder;
+import org.apache.hadoop.util.Timer;
+import org.apache.hadoop.util.DiskChecker.DiskErrorException;
+import org.apache.hadoop.util.AutoCloseableLock;
+import org.codehaus.jackson.annotate.JsonProperty;
+import org.codehaus.jackson.map.ObjectMapper;
+import org.codehaus.jackson.map.ObjectReader;
+import org.codehaus.jackson.map.ObjectWriter;
+
+import com.google.common.annotations.VisibleForTesting;
+
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.util.Time;
+
+/**
+ * This class is used to create provided volumes.
+ */
+public class ProvidedVolumeImpl extends FsVolumeImpl {
+
+ static class ProvidedBlockPoolSlice {
+ private FsVolumeImpl providedVolume;
+
+ private FileRegionProvider provider;
+ private Configuration conf;
+ private String bpid;
+ private ReplicaMap bpVolumeMap;
+
+ ProvidedBlockPoolSlice(String bpid, ProvidedVolumeImpl volume,
+ Configuration conf) {
+ this.providedVolume = volume;
+ bpVolumeMap = new ReplicaMap(new AutoCloseableLock());
+ Class<? extends FileRegionProvider> fmt =
+ conf.getClass(DFSConfigKeys.DFS_PROVIDER_CLASS,
+ TextFileRegionProvider.class, FileRegionProvider.class);
+ provider = ReflectionUtils.newInstance(fmt, conf);
+ this.conf = conf;
+ this.bpid = bpid;
+ bpVolumeMap.initBlockPool(bpid);
+ LOG.info("Created provider: " + provider.getClass());
+ }
+
+ FileRegionProvider getFileRegionProvider() {
+ return provider;
+ }
+
+ public void getVolumeMap(ReplicaMap volumeMap,
+ RamDiskReplicaTracker ramDiskReplicaMap) throws IOException {
+ Iterator<FileRegion> iter = provider.iterator();
+ while(iter.hasNext()) {
+ FileRegion region = iter.next();
+ if (region.getBlockPoolId() != null &&
+ region.getBlockPoolId().equals(bpid)) {
+ ReplicaInfo newReplica = new ReplicaBuilder(ReplicaState.FINALIZED)
+ .setBlockId(region.getBlock().getBlockId())
+ .setURI(region.getPath().toUri())
+ .setOffset(region.getOffset())
+ .setLength(region.getBlock().getNumBytes())
+ .setGenerationStamp(region.getBlock().getGenerationStamp())
+ .setFsVolume(providedVolume)
+ .setConf(conf).build();
+
+ ReplicaInfo oldReplica =
+ volumeMap.get(bpid, newReplica.getBlockId());
+ if (oldReplica == null) {
+ volumeMap.add(bpid, newReplica);
+ bpVolumeMap.add(bpid, newReplica);
+ } else {
+ throw new IOException(
+ "A block with id " + newReplica.getBlockId() +
+ " already exists in the volumeMap");
+ }
+ }
+ }
+ }
+
+ public boolean isEmpty() {
+ return bpVolumeMap.replicas(bpid).size() == 0;
+ }
+
+ public void shutdown(BlockListAsLongs blocksListsAsLongs) {
+ //nothing to do!
+ }
+
+ public void compileReport(LinkedList<ScanInfo> report,
+ ReportCompiler reportCompiler)
+ throws IOException, InterruptedException {
+ /* refresh the provider and return the list of blocks found.
+ * the assumption here is that the block ids in the external
+ * block map, after the refresh, are consistent with those
+ * from before the refresh, i.e., for blocks which did not change,
+ * the ids remain the same.
+ */
+ provider.refresh();
+ Iterator<FileRegion> iter = provider.iterator();
+ while(iter.hasNext()) {
+ reportCompiler.throttle();
+ FileRegion region = iter.next();
+ if (region.getBlockPoolId() != null &&
+ region.getBlockPoolId().equals(bpid)) {
+ LOG.info("Adding ScanInfo for blkid " +
+ region.getBlock().getBlockId());
+ report.add(new ScanInfo(region.getBlock().getBlockId(), null, null,
+ providedVolume, region, region.getLength()));
+ }
+ }
+ }
+ }
+
+ private URI baseURI;
+ private final Map<String, ProvidedBlockPoolSlice> bpSlices =
+ new ConcurrentHashMap<String, ProvidedBlockPoolSlice>();
+
+ private ProvidedVolumeDF df;
+
+ ProvidedVolumeImpl(FsDatasetImpl dataset, String storageID,
+ StorageDirectory sd, FileIoProvider fileIoProvider,
+ Configuration conf) throws IOException {
+ super(dataset, storageID, sd, fileIoProvider, conf);
+ assert getStorageLocation().getStorageType() == StorageType.PROVIDED :
+ "Only PROVIDED storage locations can back a ProvidedVolumeImpl";
+
+ baseURI = getStorageLocation().getUri();
+ Class<? extends ProvidedVolumeDF> dfClass =
+ conf.getClass(DFSConfigKeys.DFS_PROVIDER_DF_CLASS,
+ DefaultProvidedVolumeDF.class, ProvidedVolumeDF.class);
+ df = ReflectionUtils.newInstance(dfClass, conf);
+ }
+
+ @Override
+ public String[] getBlockPoolList() {
+ return bpSlices.keySet().toArray(new String[bpSlices.keySet().size()]);
+ }
+
+ @Override
+ public long getCapacity() {
+ if (configuredCapacity < 0) {
+ return df.getCapacity();
+ }
+ return configuredCapacity;
+ }
+
+ @Override
+ public long getDfsUsed() throws IOException {
+ return df.getSpaceUsed();
+ }
+
+ @Override
+ long getBlockPoolUsed(String bpid) throws IOException {
+ return df.getBlockPoolUsed(bpid);
+ }
+
+ @Override
+ public long getAvailable() throws IOException {
+ return df.getAvailable();
+ }
+
+ @Override
+ long getActualNonDfsUsed() throws IOException {
+ return df.getSpaceUsed();
+ }
+
+ @Override
+ public long getNonDfsUsed() throws IOException {
+ return 0L;
+ }
+
+ @Override
+ public URI getBaseURI() {
+ return baseURI;
+ }
+
+ @Override
+ public File getFinalizedDir(String bpid) throws IOException {
+ return null;
+ }
+
+ @Override
+ public void reserveSpaceForReplica(long bytesToReserve) {
+ throw new UnsupportedOperationException(
+ "ProvidedVolume does not yet support writes");
+ }
+
+ @Override
+ public void releaseReservedSpace(long bytesToRelease) {
+ throw new UnsupportedOperationException(
+ "ProvidedVolume does not yet support writes");
+ }
+
+ private static final ObjectWriter WRITER =
+ new ObjectMapper().writerWithDefaultPrettyPrinter();
+ private static final ObjectReader READER =
+ new ObjectMapper().reader(ProvidedBlockIteratorState.class);
+
+ private static class ProvidedBlockIteratorState {
+ ProvidedBlockIteratorState() {
+ iterStartMs = Time.now();
+ lastSavedMs = iterStartMs;
+ atEnd = false;
+ lastBlockId = -1;
+ }
+
+ // The wall-clock ms since the epoch at which this iterator was last saved.
+ @JsonProperty
+ private long lastSavedMs;
+
+ // The wall-clock ms since the epoch at which this iterator was created.
+ @JsonProperty
+ private long iterStartMs;
+
+ @JsonProperty
+ private boolean atEnd;
+
+ //The id of the last block read when the state of the iterator is saved.
+ //This implementation assumes that provided blocks are returned
+ //in sorted order of the block ids.
+ @JsonProperty
+ private long lastBlockId;
+ }
+
+ private class ProviderBlockIteratorImpl
+ implements FsVolumeSpi.BlockIterator {
+
+ private String bpid;
+ private String name;
+ private FileRegionProvider provider;
+ private Iterator<FileRegion> blockIterator;
+ private ProvidedBlockIteratorState state;
+
+ ProviderBlockIteratorImpl(String bpid, String name,
+ FileRegionProvider provider) {
+ this.bpid = bpid;
+ this.name = name;
+ this.provider = provider;
+ rewind();
+ }
+
+ @Override
+ public void close() throws IOException {
+ //No action needed
+ }
+
+ @Override
+ public ExtendedBlock nextBlock() throws IOException {
+ if (null == blockIterator || !blockIterator.hasNext()) {
+ return null;
+ }
+ FileRegion nextRegion = null;
+ while (null == nextRegion && blockIterator.hasNext()) {
+ FileRegion temp = blockIterator.next();
+ if (temp.getBlock().getBlockId() < state.lastBlockId) {
+ continue;
+ }
+ if (temp.getBlockPoolId().equals(bpid)) {
+ nextRegion = temp;
+ }
+ }
+ if (null == nextRegion) {
+ return null;
+ }
+ state.lastBlockId = nextRegion.getBlock().getBlockId();
+ return new ExtendedBlock(bpid, nextRegion.getBlock());
+ }
+
+ @Override
+ public boolean atEnd() {
+ return blockIterator == null || !blockIterator.hasNext();
+ }
+
+ @Override
+ public void rewind() {
+ blockIterator = provider.iterator();
+ state = new ProvidedBlockIteratorState();
+ }
+
+ @Override
+ public void save() throws IOException {
+ //The state of this iterator is not persisted locally;
+ //provided volumes are simply re-scanned as necessary.
+ state.lastSavedMs = Time.now();
+ }
+
+ @Override
+ public void setMaxStalenessMs(long maxStalenessMs) {
+ //do not use max staleness
+ }
+
+ @Override
+ public long getIterStartMs() {
+ return state.iterStartMs;
+ }
+
+ @Override
+ public long getLastSavedMs() {
+ return state.lastSavedMs;
+ }
+
+ @Override
+ public String getBlockPoolId() {
+ return bpid;
+ }
+
+ public void load() throws IOException {
+ //on load, we just rewind the iterator for provided volumes.
+ rewind();
+ LOG.trace("load({}, {}): loaded iterator {}: {}", getStorageID(),
+ bpid, name, WRITER.writeValueAsString(state));
+ }
+ }
+
+ @Override
+ public BlockIterator newBlockIterator(String bpid, String name) {
+ return new ProviderBlockIteratorImpl(bpid, name,
+ bpSlices.get(bpid).getFileRegionProvider());
+ }
+
+ @Override
+ public BlockIterator loadBlockIterator(String bpid, String name)
+ throws IOException {
+ ProviderBlockIteratorImpl iter = new ProviderBlockIteratorImpl(bpid, name,
+ bpSlices.get(bpid).getFileRegionProvider());
+ iter.load();
+ return iter;
+ }
+
+ @Override
+ ReplicaInfo addFinalizedBlock(String bpid, Block b,
+ ReplicaInfo replicaInfo, long bytesReserved) throws IOException {
+ throw new UnsupportedOperationException(
+ "ProvidedVolume does not yet support writes");
+ }
+
+ @Override
+ public VolumeCheckResult check(VolumeCheckContext ignored)
+ throws DiskErrorException {
+ return VolumeCheckResult.HEALTHY;
+ }
+
+ @Override
+ void getVolumeMap(ReplicaMap volumeMap,
+ final RamDiskReplicaTracker ramDiskReplicaMap)
+ throws IOException {
+ LOG.info("Creating volumemap for provided volume " + this);
+ for(ProvidedBlockPoolSlice s : bpSlices.values()) {
+ s.getVolumeMap(volumeMap, ramDiskReplicaMap);
+ }
+ }
+
+ private ProvidedBlockPoolSlice getProvidedBlockPoolSlice(String bpid)
+ throws IOException {
+ ProvidedBlockPoolSlice bp = bpSlices.get(bpid);
+ if (bp == null) {
+ throw new IOException("block pool " + bpid + " is not found");
+ }
+ return bp;
+ }
+
+ @Override
+ void getVolumeMap(String bpid, ReplicaMap volumeMap,
+ final RamDiskReplicaTracker ramDiskReplicaMap)
+ throws IOException {
+ getProvidedBlockPoolSlice(bpid).getVolumeMap(volumeMap, ramDiskReplicaMap);
+ }
+
+ @VisibleForTesting
+ FileRegionProvider getFileRegionProvider(String bpid) throws IOException {
+ return getProvidedBlockPoolSlice(bpid).getFileRegionProvider();
+ }
+
+ @Override
+ public String toString() {
+ return this.baseURI.toString();
+ }
+
+ @Override
+ void addBlockPool(String bpid, Configuration conf) throws IOException {
+ addBlockPool(bpid, conf, null);
+ }
+
+ @Override
+ void addBlockPool(String bpid, Configuration conf, Timer timer)
+ throws IOException {
+ LOG.info("Adding block pool " + bpid +
+ " to volume with id " + getStorageID());
+ ProvidedBlockPoolSlice bp;
+ bp = new ProvidedBlockPoolSlice(bpid, this, conf);
+ bpSlices.put(bpid, bp);
+ }
+
+ void shutdown() {
+ if (cacheExecutor != null) {
+ cacheExecutor.shutdown();
+ }
+ Set<Entry<String, ProvidedBlockPoolSlice>> set = bpSlices.entrySet();
+ for (Entry<String, ProvidedBlockPoolSlice> entry : set) {
+ entry.getValue().shutdown(null);
+ }
+ }
+
+ @Override
+ void shutdownBlockPool(String bpid, BlockListAsLongs blocksListsAsLongs) {
+ ProvidedBlockPoolSlice bp = bpSlices.get(bpid);
+ if (bp != null) {
+ bp.shutdown(blocksListsAsLongs);
+ }
+ bpSlices.remove(bpid);
+ }
+
+ @Override
+ boolean isBPDirEmpty(String bpid) throws IOException {
+ return getProvidedBlockPoolSlice(bpid).isEmpty();
+ }
+
+ @Override
+ void deleteBPDirectories(String bpid, boolean force) throws IOException {
+ throw new UnsupportedOperationException(
+ "ProvidedVolume does not yet support writes");
+ }
+
+ @Override
+ public LinkedList<ScanInfo> compileReport(String bpid,
+ LinkedList<ScanInfo> report, ReportCompiler reportCompiler)
+ throws InterruptedException, IOException {
+ LOG.info("Compiling report for volume: " + this + " bpid " + bpid);
+ //get the report from the appropriate block pool.
+ if(bpSlices.containsKey(bpid)) {
+ bpSlices.get(bpid).compileReport(report, reportCompiler);
+ }
+ return report;
+ }
+
+ @Override
+ public ReplicaInPipeline append(String bpid, ReplicaInfo replicaInfo,
+ long newGS, long estimateBlockLen) throws IOException {
+ throw new UnsupportedOperationException(
+ "ProvidedVolume does not yet support writes");
+ }
+
+ @Override
+ public ReplicaInPipeline createRbw(ExtendedBlock b) throws IOException {
+ throw new UnsupportedOperationException(
+ "ProvidedVolume does not yet support writes");
+ }
+
+ @Override
+ public ReplicaInPipeline convertTemporaryToRbw(ExtendedBlock b,
+ ReplicaInfo temp) throws IOException {
+ throw new UnsupportedOperationException(
+ "ProvidedVolume does not yet support writes");
+ }
+
+ @Override
+ public ReplicaInPipeline createTemporary(ExtendedBlock b)
+ throws IOException {
+ throw new UnsupportedOperationException(
+ "ProvidedVolume does not yet support writes");
+ }
+
+ @Override
+ public ReplicaInPipeline updateRURCopyOnTruncate(ReplicaInfo rur,
+ String bpid, long newBlockId, long recoveryId, long newlength)
+ throws IOException {
+ throw new UnsupportedOperationException(
+ "ProvidedVolume does not yet support writes");
+ }
+
+ @Override
+ public ReplicaInfo moveBlockToTmpLocation(ExtendedBlock block,
+ ReplicaInfo replicaInfo, int smallBufferSize,
+ Configuration conf) throws IOException {
+ throw new UnsupportedOperationException(
+ "ProvidedVolume does not yet support writes");
+ }
+
+ @Override
+ public File[] copyBlockToLazyPersistLocation(String bpId, long blockId,
+ long genStamp, ReplicaInfo replicaInfo, int smallBufferSize,
+ Configuration conf) throws IOException {
+ throw new UnsupportedOperationException(
+ "ProvidedVolume does not yet support writes");
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
index 8b89378..c5d14d2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
@@ -686,7 +686,7 @@ public class Mover {
}
}
- static class Cli extends Configured implements Tool {
+ public static class Cli extends Configured implements Tool {
private static final String USAGE = "Usage: hdfs mover "
+ "[-p <files/dirs> | -f <local file>]"
+ "\n\t-p <files/dirs>\ta space separated list of HDFS files/dirs to migrate."
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageCompression.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageCompression.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageCompression.java
index 872ee74..45e001d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageCompression.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageCompression.java
@@ -39,7 +39,7 @@ import org.apache.hadoop.io.compress.CompressionCodecFactory;
*/
@InterfaceAudience.Private
@InterfaceStability.Evolving
-class FSImageCompression {
+public class FSImageCompression {
/** Codec to use to save or load image, or null if the image is not compressed */
private CompressionCodec imageCodec;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
index 63d1a28..4aae7d8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java
@@ -658,6 +658,10 @@ public class NNStorage extends Storage implements Closeable,
void readProperties(StorageDirectory sd, StartupOption startupOption)
throws IOException {
Properties props = readPropertiesFile(sd.getVersionFile());
+ if (props == null) {
+ throw new IOException(
+ "Properties not found for storage directory " + sd);
+ }
if (HdfsServerConstants.RollingUpgradeStartupOption.ROLLBACK
.matches(startupOption)) {
int lv = Integer.parseInt(getProperty(props, sd, "layoutVersion"));
@@ -975,7 +979,11 @@ public class NNStorage extends Storage implements Closeable,
StorageDirectory sd = sdit.next();
try {
Properties props = readPropertiesFile(sd.getVersionFile());
- cid = props.getProperty("clusterID");
+ if (props == null) {
+ cid = null;
+ } else {
+ cid = props.getProperty("clusterID");
+ }
LOG.info("current cluster id for sd="+sd.getCurrentDir() +
";lv=" + layoutVersion + ";cid=" + cid);
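The null checks added to NNStorage above guard against storage directories whose VERSION file is absent, since readPropertiesFile can return null. The pattern can be sketched independently of NNStorage; the class and method names below are illustrative stand-ins, not the actual HDFS helpers.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

/** Sketch: read a properties file, returning null when it does not exist. */
public class VersionFileSketch {

  /** Mirrors readPropertiesFile: null signals a missing VERSION file. */
  public static Properties readPropertiesFile(File f) throws IOException {
    if (!f.exists()) {
      return null; // callers must handle this, as in the diff above
    }
    Properties props = new Properties();
    try (InputStream in = new FileInputStream(f)) {
      props.load(in);
    }
    return props;
  }

  /** Caller pattern from readProperties: fail fast when properties are required. */
  public static String requireProperty(File f, String key) throws IOException {
    Properties props = readPropertiesFile(f);
    if (props == null) {
      throw new IOException("Properties not found for storage directory " + f);
    }
    return props.getProperty(key);
  }
}
```

The second caller in the diff (cluster-ID probing) instead tolerates the null and records a null clusterID, which is why the two call sites handle the missing file differently.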
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index dedf987..169dfc2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -4622,6 +4622,84 @@
</property>
<property>
+ <name>dfs.provider.class</name>
+ <value>org.apache.hadoop.hdfs.server.common.TextFileRegionProvider</value>
+ <description>
+ The class that is used to load information about blocks stored in
+ provided storages.
+ org.apache.hadoop.hdfs.server.common.TextFileRegionProvider
+ is used as the default, which expects the blocks to be specified
+ using a delimited text file.
+ </description>
+ </property>
+
+ <property>
+ <name>dfs.provided.df.class</name>
+ <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.DefaultProvidedVolumeDF</value>
+ <description>
+ The class that is used to measure usage statistics of provided stores.
+ </description>
+ </property>
+
+ <property>
+ <name>dfs.provided.storage.id</name>
+ <value>DS-PROVIDED</value>
+ <description>
+ The storage ID used for provided stores.
+ </description>
+ </property>
+
+ <property>
+ <name>dfs.provided.blockformat.class</name>
+ <value>org.apache.hadoop.hdfs.server.common.TextFileRegionFormat</value>
+ <description>
+ The class that is used to specify the input format of the blocks on
+ provided storages. The default is
+ org.apache.hadoop.hdfs.server.common.TextFileRegionFormat which uses
+ file regions to describe blocks. The file regions are specified as a
+ delimited text file. Each file region is a 6-tuple containing the
+ block id, remote file path, offset into file, length of block, the
+ block pool id containing the block, and the generation stamp of the
+ block.
+ </description>
+ </property>
+
+ <property>
+ <name>dfs.provided.textprovider.delimiter</name>
+ <value>,</value>
+ <description>
+ The delimiter used when the provided block map is specified as
+ a text file.
+ </description>
+ </property>
+
+ <property>
+ <name>dfs.provided.textprovider.read.path</name>
+ <value></value>
+ <description>
+ The path, specified as a URI, of the text file containing the provided
+ block map.
+ </description>
+ </property>
+
+ <property>
+ <name>dfs.provided.textprovider.read.codec</name>
+ <value></value>
+ <description>
+ The codec used to decompress the provided block map.
+ </description>
+ </property>
+
+ <property>
+ <name>dfs.provided.textprovider.write.path</name>
+ <value></value>
+ <description>
+ The path to which the provided block map should be written as a text
+ file, specified as a URI.
+ </description>
+ </property>
+
+ <property>
<name>dfs.lock.suppress.warning.interval</name>
<value>10s</value>
<description>Instrumentation reporting long critical sections will suppress
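The dfs.provided.blockformat.class description above defines each file region as a delimited 6-tuple: block id, remote file path, offset into file, length of block, block pool id, and generation stamp. A small parser sketch makes the format concrete; the Region holder and its field names are assumptions for illustration, not the actual FileRegion class.

```java
import java.util.regex.Pattern;

/** Sketch: parse one line of the delimited provided-block-map format. */
public class FileRegionLineSketch {

  /** Hypothetical holder mirroring the 6-tuple described above. */
  public static final class Region {
    public final long blockId;
    public final String path;
    public final long offset;
    public final long length;
    public final String bpid;
    public final long genStamp;

    Region(long blockId, String path, long offset, long length,
        String bpid, long genStamp) {
      this.blockId = blockId;
      this.path = path;
      this.offset = offset;
      this.length = length;
      this.bpid = bpid;
      this.genStamp = genStamp;
    }
  }

  /** Split a line on the configured delimiter (the default is ","). */
  public static Region parse(String line, String delim) {
    String[] f = line.split(Pattern.quote(delim));
    if (f.length != 6) {
      throw new IllegalArgumentException("Expected 6 fields, got " + f.length);
    }
    return new Region(Long.parseLong(f[0]), f[1], Long.parseLong(f[2]),
        Long.parseLong(f[3]), f[4], Long.parseLong(f[5]));
  }

  public static void main(String[] args) {
    Region r = parse("4344,hdfs://server/f.txt,0,1024,bpid-0,1001", ",");
    System.out.println(r.blockId + " -> " + r.path);
  }
}
```

Pattern.quote keeps delimiters such as "\t" or "|" from being interpreted as regular expressions, matching the intent of dfs.provided.textprovider.delimiter.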
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRollback.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRollback.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRollback.java
index 25eb5b6..8bc8b0d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRollback.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSRollback.java
@@ -208,7 +208,7 @@ public class TestDFSRollback {
UpgradeUtilities.createDataNodeVersionFile(
dataCurrentDirs,
storageInfo,
- UpgradeUtilities.getCurrentBlockPoolID(cluster));
+ UpgradeUtilities.getCurrentBlockPoolID(cluster), conf);
cluster.startDataNodes(conf, 1, false, StartupOption.ROLLBACK, null);
assertTrue(cluster.isDataNodeUp());
@@ -256,7 +256,7 @@ public class TestDFSRollback {
NodeType.DATA_NODE);
UpgradeUtilities.createDataNodeVersionFile(baseDirs, storageInfo,
- UpgradeUtilities.getCurrentBlockPoolID(cluster));
+ UpgradeUtilities.getCurrentBlockPoolID(cluster), conf);
startBlockPoolShouldFail(StartupOption.ROLLBACK,
cluster.getNamesystem().getBlockPoolId());
@@ -283,7 +283,7 @@ public class TestDFSRollback {
NodeType.DATA_NODE);
UpgradeUtilities.createDataNodeVersionFile(baseDirs, storageInfo,
- UpgradeUtilities.getCurrentBlockPoolID(cluster));
+ UpgradeUtilities.getCurrentBlockPoolID(cluster), conf);
startBlockPoolShouldFail(StartupOption.ROLLBACK,
cluster.getNamesystem().getBlockPoolId());
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStartupVersions.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStartupVersions.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStartupVersions.java
index d202223..0c09eda 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStartupVersions.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStartupVersions.java
@@ -265,7 +265,7 @@ public class TestDFSStartupVersions {
conf.getStrings(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY), "current");
log("DataNode version info", DATA_NODE, i, versions[i]);
UpgradeUtilities.createDataNodeVersionFile(storage,
- versions[i].storageInfo, bpid, versions[i].blockPoolId);
+ versions[i].storageInfo, bpid, versions[i].blockPoolId, conf);
try {
cluster.startDataNodes(conf, 1, false, StartupOption.REGULAR, null);
} catch (Exception ignore) {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java
index fe1ede0..0d9f502 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java
@@ -290,7 +290,7 @@ public class TestDFSUpgrade {
UpgradeUtilities.getCurrentFsscTime(cluster), NodeType.DATA_NODE);
UpgradeUtilities.createDataNodeVersionFile(baseDirs, storageInfo,
- UpgradeUtilities.getCurrentBlockPoolID(cluster));
+ UpgradeUtilities.getCurrentBlockPoolID(cluster), conf);
startBlockPoolShouldFail(StartupOption.REGULAR, UpgradeUtilities
.getCurrentBlockPoolID(null));
@@ -308,7 +308,7 @@ public class TestDFSUpgrade {
NodeType.DATA_NODE);
UpgradeUtilities.createDataNodeVersionFile(baseDirs, storageInfo,
- UpgradeUtilities.getCurrentBlockPoolID(cluster));
+ UpgradeUtilities.getCurrentBlockPoolID(cluster), conf);
// Ensure corresponding block pool failed to initialize
startBlockPoolShouldFail(StartupOption.REGULAR, UpgradeUtilities
.getCurrentBlockPoolID(null));
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/UpgradeUtilities.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/UpgradeUtilities.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/UpgradeUtilities.java
index 9f4df70..621bd51 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/UpgradeUtilities.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/UpgradeUtilities.java
@@ -384,8 +384,10 @@ public class UpgradeUtilities {
new File(datanodeStorage.toString()));
sd.setStorageUuid(DatanodeStorage.generateUuid());
Properties properties = Storage.readPropertiesFile(sd.getVersionFile());
- properties.setProperty("storageID", sd.getStorageUuid());
- Storage.writeProperties(sd.getVersionFile(), properties);
+ if (properties != null) {
+ properties.setProperty("storageID", sd.getStorageUuid());
+ Storage.writeProperties(sd.getVersionFile(), properties);
+ }
retVal[i] = newDir;
}
@@ -461,8 +463,9 @@ public class UpgradeUtilities {
* @param bpid Block pool Id
*/
public static void createDataNodeVersionFile(File[] parent,
- StorageInfo version, String bpid) throws IOException {
- createDataNodeVersionFile(parent, version, bpid, bpid);
+ StorageInfo version, String bpid, Configuration conf)
+ throws IOException {
+ createDataNodeVersionFile(parent, version, bpid, bpid, conf);
}
/**
@@ -477,7 +480,8 @@ public class UpgradeUtilities {
* @param bpidToWrite Block pool Id to write into the version file
*/
public static void createDataNodeVersionFile(File[] parent,
- StorageInfo version, String bpid, String bpidToWrite) throws IOException {
+ StorageInfo version, String bpid, String bpidToWrite, Configuration conf)
+ throws IOException {
DataStorage storage = new DataStorage(version);
storage.setDatanodeUuid("FixedDatanodeUuid");
@@ -485,7 +489,7 @@ public class UpgradeUtilities {
for (int i = 0; i < parent.length; i++) {
File versionFile = new File(parent[i], "VERSION");
StorageDirectory sd = new StorageDirectory(parent[i].getParentFile());
- DataStorage.createStorageID(sd, false);
+ DataStorage.createStorageID(sd, false, conf);
storage.writeProperties(versionFile, sd);
versionFiles[i] = versionFile;
File bpDir = BlockPoolSliceStorage.getBpRoot(bpid, parent[i]);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestTextBlockFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestTextBlockFormat.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestTextBlockFormat.java
new file mode 100644
index 0000000..eaaac22
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestTextBlockFormat.java
@@ -0,0 +1,160 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.common;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStreamWriter;
+import java.util.Iterator;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.server.common.TextFileRegionFormat.*;
+import org.apache.hadoop.io.DataInputBuffer;
+import org.apache.hadoop.io.DataOutputBuffer;
+import org.apache.hadoop.io.compress.CompressionCodec;
+
+import org.junit.Test;
+import static org.junit.Assert.*;
+
+/**
+ * Test for the text based block format for provided block maps.
+ */
+public class TestTextBlockFormat {
+
+ static final Path OUTFILE = new Path("hdfs://dummyServer:0000/dummyFile.txt");
+
+ void check(TextWriter.Options opts, final Path vp,
+ final Class<? extends CompressionCodec> vc) throws IOException {
+ TextFileRegionFormat mFmt = new TextFileRegionFormat() {
+ @Override
+ public TextWriter createWriter(Path file, CompressionCodec codec,
+ String delim, Configuration conf) throws IOException {
+ assertEquals(vp, file);
+ if (null == vc) {
+ assertNull(codec);
+ } else {
+ assertEquals(vc, codec.getClass());
+ }
+ return null; // ignored
+ }
+ };
+ mFmt.getWriter(opts);
+ }
+
+ @Test
+ public void testWriterOptions() throws Exception {
+ TextWriter.Options opts = TextWriter.defaults();
+ assertTrue(opts instanceof WriterOptions);
+ WriterOptions wopts = (WriterOptions) opts;
+ Path def = new Path(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_PATH_DEFAULT);
+ assertEquals(def, wopts.getFile());
+ assertNull(wopts.getCodec());
+
+ opts.filename(OUTFILE);
+ check(opts, OUTFILE, null);
+
+ opts.filename(OUTFILE);
+ opts.codec("gzip");
+ Path cp = new Path(OUTFILE.getParent(), OUTFILE.getName() + ".gz");
+ check(opts, cp, org.apache.hadoop.io.compress.GzipCodec.class);
+
+ }
+
+ @Test
+ public void testCSVReadWrite() throws Exception {
+ final DataOutputBuffer out = new DataOutputBuffer();
+ FileRegion r1 = new FileRegion(4344L, OUTFILE, 0, 1024);
+ FileRegion r2 = new FileRegion(4345L, OUTFILE, 1024, 1024);
+ FileRegion r3 = new FileRegion(4346L, OUTFILE, 2048, 512);
+ try (TextWriter csv = new TextWriter(new OutputStreamWriter(out), ",")) {
+ csv.store(r1);
+ csv.store(r2);
+ csv.store(r3);
+ }
+ Iterator<FileRegion> i3;
+ try (TextReader csv = new TextReader(null, null, null, ",") {
+ @Override
+ public InputStream createStream() {
+ DataInputBuffer in = new DataInputBuffer();
+ in.reset(out.getData(), 0, out.getLength());
+ return in;
+ }}) {
+ Iterator<FileRegion> i1 = csv.iterator();
+ assertEquals(r1, i1.next());
+ Iterator<FileRegion> i2 = csv.iterator();
+ assertEquals(r1, i2.next());
+ assertEquals(r2, i2.next());
+ assertEquals(r3, i2.next());
+ assertEquals(r2, i1.next());
+ assertEquals(r3, i1.next());
+
+ assertFalse(i1.hasNext());
+ assertFalse(i2.hasNext());
+ i3 = csv.iterator();
+ }
+ try {
+ i3.next();
+ } catch (IllegalStateException e) {
+ return;
+ }
+ fail("Invalid iterator");
+ }
+
+ @Test
+ public void testCSVReadWriteTsv() throws Exception {
+ final DataOutputBuffer out = new DataOutputBuffer();
+ FileRegion r1 = new FileRegion(4344L, OUTFILE, 0, 1024);
+ FileRegion r2 = new FileRegion(4345L, OUTFILE, 1024, 1024);
+ FileRegion r3 = new FileRegion(4346L, OUTFILE, 2048, 512);
+ try (TextWriter csv = new TextWriter(new OutputStreamWriter(out), "\t")) {
+ csv.store(r1);
+ csv.store(r2);
+ csv.store(r3);
+ }
+ Iterator<FileRegion> i3;
+ try (TextReader csv = new TextReader(null, null, null, "\t") {
+ @Override
+ public InputStream createStream() {
+ DataInputBuffer in = new DataInputBuffer();
+ in.reset(out.getData(), 0, out.getLength());
+ return in;
+ }}) {
+ Iterator<FileRegion> i1 = csv.iterator();
+ assertEquals(r1, i1.next());
+ Iterator<FileRegion> i2 = csv.iterator();
+ assertEquals(r1, i2.next());
+ assertEquals(r2, i2.next());
+ assertEquals(r3, i2.next());
+ assertEquals(r2, i1.next());
+ assertEquals(r3, i1.next());
+
+ assertFalse(i1.hasNext());
+ assertFalse(i2.hasNext());
+ i3 = csv.iterator();
+ }
+ try {
+ i3.next();
+ } catch (IllegalStateException e) {
+ return;
+ }
+ fail("Invalid iterator");
+ }
+
+}
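The tests above verify two contracts of TextReader: independent iterators over the same block map each start from the first region, and any iterator throws IllegalStateException once the reader is closed. The simplified in-memory stand-in below illustrates that contract; TextReader itself parses a delimited stream rather than a list.

```java
import java.util.Iterator;
import java.util.List;

/** Sketch: a reader whose iterators become invalid once it is closed. */
public class ClosableReaderSketch implements AutoCloseable {
  private final List<String> records;
  private boolean closed = false;

  public ClosableReaderSketch(List<String> records) {
    this.records = records;
  }

  /** Each call returns a fresh iterator starting at the first record. */
  public Iterator<String> iterator() {
    final Iterator<String> it = records.iterator();
    return new Iterator<String>() {
      @Override
      public boolean hasNext() {
        return !closed && it.hasNext();
      }

      @Override
      public String next() {
        if (closed) {
          throw new IllegalStateException("Reader is closed");
        }
        return it.next();
      }
    };
  }

  @Override
  public void close() {
    closed = true; // invalidates every outstanding iterator
  }
}
```

As in testCSVReadWrite, two iterators can be interleaved independently before close, and calling next() on an iterator obtained just before close must fail.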
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
index 212f953..c31df4c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
@@ -54,6 +54,7 @@ import org.apache.hadoop.hdfs.server.datanode.fsdataset.DataNodeVolumeMetrics;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeReference;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi.ScanInfo;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaInputStreams;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams;
@@ -616,7 +617,7 @@ public class SimulatedFSDataset implements FsDatasetSpi<FsVolumeSpi> {
this.datanode = datanode;
if (storage != null) {
for (int i = 0; i < storage.getNumStorageDirs(); ++i) {
- DataStorage.createStorageID(storage.getStorageDir(i), false);
+ DataStorage.createStorageID(storage.getStorageDir(i), false, conf);
}
this.datanodeUuid = storage.getDatanodeUuid();
} else {
@@ -1352,8 +1353,7 @@ public class SimulatedFSDataset implements FsDatasetSpi<FsVolumeSpi> {
}
@Override
- public void checkAndUpdate(String bpid, long blockId, File diskFile,
- File diskMetaFile, FsVolumeSpi vol) throws IOException {
+ public void checkAndUpdate(String bpid, ScanInfo info) throws IOException {
throw new UnsupportedOperationException();
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
index 13502d9..bfdaad9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
@@ -32,6 +32,7 @@ import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
import org.apache.hadoop.hdfs.server.datanode.*;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi.ScanInfo;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.LengthInputStream;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaInputStreams;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaOutputStreams;
@@ -94,8 +95,8 @@ public class ExternalDatasetImpl implements FsDatasetSpi<ExternalVolumeImpl> {
}
@Override
- public void checkAndUpdate(String bpid, long blockId, File diskFile,
- File diskMetaFile, FsVolumeSpi vol) {
+ public void checkAndUpdate(String bpid, ScanInfo info) {
+ return;
}
@Override
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
index a30329c..cfae1e2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestFsDatasetImpl.java
@@ -119,11 +119,12 @@ public class TestFsDatasetImpl {
private final static String BLOCKPOOL = "BP-TEST";
- private static Storage.StorageDirectory createStorageDirectory(File root)
+ private static Storage.StorageDirectory createStorageDirectory(File root,
+ Configuration conf)
throws SecurityException, IOException {
Storage.StorageDirectory sd = new Storage.StorageDirectory(
StorageLocation.parse(root.toURI().toString()));
- DataStorage.createStorageID(sd, false);
+ DataStorage.createStorageID(sd, false, conf);
return sd;
}
@@ -137,7 +138,7 @@ public class TestFsDatasetImpl {
File loc = new File(BASE_DIR + "/data" + i);
dirStrings.add(new Path(loc.toString()).toUri().toString());
loc.mkdirs();
- dirs.add(createStorageDirectory(loc));
+ dirs.add(createStorageDirectory(loc, conf));
when(storage.getStorageDir(i)).thenReturn(dirs.get(i));
}
@@ -197,7 +198,8 @@ public class TestFsDatasetImpl {
String pathUri = new Path(path).toUri().toString();
expectedVolumes.add(new File(pathUri).getAbsolutePath());
StorageLocation loc = StorageLocation.parse(pathUri);
- Storage.StorageDirectory sd = createStorageDirectory(new File(path));
+ Storage.StorageDirectory sd = createStorageDirectory(
+ new File(path), conf);
DataStorage.VolumeBuilder builder =
new DataStorage.VolumeBuilder(storage, sd);
when(storage.prepareVolume(eq(datanode), eq(loc),
@@ -315,7 +317,8 @@ public class TestFsDatasetImpl {
String newVolumePath = BASE_DIR + "/newVolumeToRemoveLater";
StorageLocation loc = StorageLocation.parse(newVolumePath);
- Storage.StorageDirectory sd = createStorageDirectory(new File(newVolumePath));
+ Storage.StorageDirectory sd = createStorageDirectory(
+ new File(newVolumePath), conf);
DataStorage.VolumeBuilder builder =
new DataStorage.VolumeBuilder(storage, sd);
when(storage.prepareVolume(eq(datanode), eq(loc),
@@ -348,7 +351,7 @@ public class TestFsDatasetImpl {
any(ReplicaMap.class),
any(RamDiskReplicaLruTracker.class));
- Storage.StorageDirectory sd = createStorageDirectory(badDir);
+ Storage.StorageDirectory sd = createStorageDirectory(badDir, conf);
sd.lock();
DataStorage.VolumeBuilder builder = new DataStorage.VolumeBuilder(storage, sd);
when(storage.prepareVolume(eq(datanode),
@@ -492,7 +495,7 @@ public class TestFsDatasetImpl {
String path = BASE_DIR + "/newData0";
String pathUri = new Path(path).toUri().toString();
StorageLocation loc = StorageLocation.parse(pathUri);
- Storage.StorageDirectory sd = createStorageDirectory(new File(path));
+ Storage.StorageDirectory sd = createStorageDirectory(new File(path), conf);
DataStorage.VolumeBuilder builder =
new DataStorage.VolumeBuilder(storage, sd);
when(
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
new file mode 100644
index 0000000..2c119fe
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
@@ -0,0 +1,426 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.datanode.fsdataset.impl;
+
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DATANODE_SCAN_PERIOD_HOURS_KEY;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStreamWriter;
+import java.io.Writer;
+import java.nio.ByteBuffer;
+import java.nio.channels.Channels;
+import java.nio.channels.ReadableByteChannel;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystemTestHelper;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.apache.hadoop.hdfs.server.common.FileRegionProvider;
+import org.apache.hadoop.hdfs.server.common.Storage;
+import org.apache.hadoop.hdfs.server.datanode.BlockScanner;
+import org.apache.hadoop.hdfs.server.datanode.DNConf;
+import org.apache.hadoop.hdfs.server.datanode.DataNode;
+import org.apache.hadoop.hdfs.server.datanode.DataStorage;
+import org.apache.hadoop.hdfs.server.datanode.DirectoryScanner;
+import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;
+import org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry;
+import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi.BlockIterator;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsDatasetSpi.FsVolumeReferences;
+import org.apache.hadoop.util.AutoCloseableLock;
+import org.apache.hadoop.util.StringUtils;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Basic test cases for provided implementation.
+ */
+public class TestProvidedImpl {
+ private static final Logger LOG =
+ LoggerFactory.getLogger(TestFsDatasetImpl.class);
+ private static final String BASE_DIR =
+ new FileSystemTestHelper().getTestRootDir();
+ private static final int NUM_LOCAL_INIT_VOLUMES = 1;
+ private static final int NUM_PROVIDED_INIT_VOLUMES = 1;
+ private static final String[] BLOCK_POOL_IDS = {"bpid-0", "bpid-1"};
+ private static final int NUM_PROVIDED_BLKS = 10;
+ private static final long BLK_LEN = 128 * 1024;
+ private static final int MIN_BLK_ID = 0;
+ private static final int CHOSEN_BP_ID = 0;
+
+ private static String providedBasePath = BASE_DIR;
+
+ private Configuration conf;
+ private DataNode datanode;
+ private DataStorage storage;
+ private FsDatasetImpl dataset;
+ private static Map<Long, String> blkToPathMap;
+ private static List<FsVolumeImpl> providedVolumes;
+
+ /**
+ * A simple FileRegion iterator for tests.
+ */
+ public static class TestFileRegionIterator implements Iterator<FileRegion> {
+
+ private int numBlocks;
+ private int currentCount;
+ private String basePath;
+
+ public TestFileRegionIterator(String basePath, int minID, int numBlocks) {
+ this.currentCount = minID;
+ this.numBlocks = numBlocks;
+ this.basePath = basePath;
+ }
+
+ @Override
+ public boolean hasNext() {
+ return currentCount < numBlocks;
+ }
+
+ @Override
+ public FileRegion next() {
+ FileRegion region = null;
+ if (hasNext()) {
+ File newFile = new File(basePath, "file" + currentCount);
+ if(!newFile.exists()) {
+ try {
+ LOG.info("Creating file for blkid " + currentCount);
+ blkToPathMap.put((long) currentCount, newFile.getAbsolutePath());
+ LOG.info("Block id " + currentCount + " corresponds to file " +
+ newFile.getAbsolutePath());
+ newFile.createNewFile();
+ Writer writer = new OutputStreamWriter(
+ new FileOutputStream(newFile.getAbsolutePath()), "utf-8");
+ for(int i=0; i< BLK_LEN/(Integer.SIZE/8); i++) {
+ writer.write(currentCount);
+ }
+ writer.flush();
+ writer.close();
+ } catch (IOException e) {
+ throw new RuntimeException("Failed to create test block file", e);
+ }
+ }
+ region = new FileRegion(currentCount, new Path(newFile.toString()),
+ 0, BLK_LEN, BLOCK_POOL_IDS[CHOSEN_BP_ID]);
+ currentCount++;
+ }
+ return region;
+ }
+
+ @Override
+ public void remove() {
+ //do nothing.
+ }
+
+ public void resetMinBlockId(int minId) {
+ currentCount = minId;
+ }
+
+ public void resetBlockCount(int numBlocks) {
+ this.numBlocks = numBlocks;
+ }
+
+ }
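`TestFileRegionIterator` above materializes one backing file per block id on first access, filling it by writing the block id as a character `BLK_LEN/(Integer.SIZE/8)` times so that read paths can later be verified byte-for-byte. A minimal standalone sketch of that fill-then-verify scheme (class and constant names here are illustrative, not from the patch):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.UncheckedIOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class BlockFill {
  static final long BLK_LEN = 16; // bytes per block; kept small for the demo

  // Fill a buffer with the block id, one character per Integer.SIZE/8 slot,
  // mirroring the loop in TestFileRegionIterator.next().
  static byte[] fill(int blkId) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    try (Writer w = new OutputStreamWriter(out, StandardCharsets.UTF_8)) {
      for (long i = 0; i < BLK_LEN / (Integer.SIZE / 8); i++) {
        w.write(blkId); // writes a single char whose code point is blkId
      }
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
    return out.toByteArray();
  }

  // Verify every character in the payload decodes back to the block id.
  static boolean verify(byte[] data, int blkId) {
    String s = new String(data, StandardCharsets.UTF_8);
    for (int i = 0; i < s.length(); i++) {
      if (s.charAt(i) != blkId) {
        return false;
      }
    }
    return !s.isEmpty();
  }

  public static void main(String[] args) {
    byte[] payload = fill(7);
    System.out.println(verify(payload, 7)); // true
  }
}
```

Note that `Writer.write(int)` emits a single character, so ids below 128 encode to one UTF-8 byte each and the payload ends up `BLK_LEN/4` bytes long rather than `BLK_LEN`; the test's file-creation loop inherits the same subtlety.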
+
+ /**
+ * A simple FileRegion provider for tests.
+ */
+ public static class TestFileRegionProvider
+ extends FileRegionProvider implements Configurable {
+
+ private Configuration conf;
+ private int minId;
+ private int numBlocks;
+
+ TestFileRegionProvider() {
+ minId = MIN_BLK_ID;
+ numBlocks = NUM_PROVIDED_BLKS;
+ }
+
+ @Override
+ public Iterator<FileRegion> iterator() {
+ return new TestFileRegionIterator(providedBasePath, minId, numBlocks);
+ }
+
+ @Override
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ }
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ @Override
+ public void refresh() {
+ //do nothing!
+ }
+
+ public void setMinBlkId(int minId) {
+ this.minId = minId;
+ }
+
+ public void setBlockCount(int numBlocks) {
+ this.numBlocks = numBlocks;
+ }
+ }
+
+ private static Storage.StorageDirectory createLocalStorageDirectory(
+ File root, Configuration conf)
+ throws SecurityException, IOException {
+ Storage.StorageDirectory sd =
+ new Storage.StorageDirectory(
+ StorageLocation.parse(root.toURI().toString()));
+ DataStorage.createStorageID(sd, false, conf);
+ return sd;
+ }
+
+ private static Storage.StorageDirectory createProvidedStorageDirectory(
+ String confString, Configuration conf)
+ throws SecurityException, IOException {
+ Storage.StorageDirectory sd =
+ new Storage.StorageDirectory(StorageLocation.parse(confString));
+ DataStorage.createStorageID(sd, false, conf);
+ return sd;
+ }
+
+ private static void createStorageDirs(DataStorage storage,
+ Configuration conf, int numDirs, int numProvidedDirs)
+ throws IOException {
+ List<Storage.StorageDirectory> dirs =
+ new ArrayList<Storage.StorageDirectory>();
+ List<String> dirStrings = new ArrayList<String>();
+ FileUtils.deleteDirectory(new File(BASE_DIR));
+ for (int i = 0; i < numDirs; i++) {
+ File loc = new File(BASE_DIR, "data" + i);
+ dirStrings.add(new Path(loc.toString()).toUri().toString());
+ loc.mkdirs();
+ dirs.add(createLocalStorageDirectory(loc, conf));
+ when(storage.getStorageDir(i)).thenReturn(dirs.get(i));
+ }
+
+ for (int i = numDirs; i < numDirs + numProvidedDirs; i++) {
+ File loc = new File(BASE_DIR, "data" + i);
+ providedBasePath = loc.getAbsolutePath();
+ loc.mkdirs();
+ String dirString = "[PROVIDED]" +
+ new Path(loc.toString()).toUri().toString();
+ dirStrings.add(dirString);
+ dirs.add(createProvidedStorageDirectory(dirString, conf));
+ when(storage.getStorageDir(i)).thenReturn(dirs.get(i));
+ }
+
+ String dataDir = StringUtils.join(",", dirStrings);
+ conf.set(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY, dataDir);
+ when(storage.dirIterator()).thenReturn(dirs.iterator());
+ when(storage.getNumStorageDirs()).thenReturn(numDirs + numProvidedDirs);
+ }
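`createStorageDirs` mixes plain local directories with PROVIDED ones by prefixing the latter's URI with the storage-type tag `[PROVIDED]`, then joins everything into the single comma-separated `dfs.datanode.data.dir` value. A hedged sketch of just that string assembly, with the Hadoop classes replaced by plain Java (method and variable names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class DataDirConf {
  // Build the dfs.datanode.data.dir value: local dirs as bare URIs,
  // provided dirs tagged with their storage type in square brackets.
  static String buildDataDirs(List<String> localDirs, List<String> providedDirs) {
    List<String> entries = new ArrayList<>();
    for (String d : localDirs) {
      entries.add(d);
    }
    for (String d : providedDirs) {
      entries.add("[PROVIDED]" + d);
    }
    return String.join(",", entries);
  }

  public static void main(String[] args) {
    String v = buildDataDirs(
        List.of("file:/data/data0"),
        List.of("file:/data/data1"));
    System.out.println(v); // file:/data/data0,[PROVIDED]file:/data/data1
  }
}
```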
+
+ private int getNumVolumes() {
+ try (FsDatasetSpi.FsVolumeReferences volumes =
+ dataset.getFsVolumeReferences()) {
+ return volumes.size();
+ } catch (IOException e) {
+ return 0;
+ }
+ }
+
+ private void compareBlkFile(InputStream ins, String filepath)
+ throws FileNotFoundException, IOException {
+ try (ReadableByteChannel i = Channels.newChannel(
+ new FileInputStream(new File(filepath)))) {
+ try (ReadableByteChannel j = Channels.newChannel(ins)) {
+ ByteBuffer ib = ByteBuffer.allocate(4096);
+ ByteBuffer jb = ByteBuffer.allocate(4096);
+ while (true) {
+ int il = i.read(ib);
+ int jl = j.read(jb);
+ if (il < 0 || jl < 0) {
+ assertEquals(il, jl);
+ break;
+ }
+ ib.flip();
+ jb.flip();
+ int cmp = Math.min(ib.remaining(), jb.remaining());
+ for (int k = 0; k < cmp; ++k) {
+ assertEquals(ib.get(), jb.get());
+ }
+ ib.compact();
+ jb.compact();
+ }
+ }
+ }
+ }
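`compareBlkFile` checks byte-for-byte equality of two streams using NIO channels, flipping and compacting the buffers so that unequal read sizes are handled correctly. The same technique as a standalone predicate (the method name is mine, and it adds a position check at EOF that catches trailing bytes left in one buffer):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

public class StreamCompare {
  // Returns true iff both streams yield identical byte sequences.
  static boolean streamsEqual(InputStream a, InputStream b) {
    try (ReadableByteChannel i = Channels.newChannel(a);
         ReadableByteChannel j = Channels.newChannel(b)) {
      ByteBuffer ib = ByteBuffer.allocate(4096);
      ByteBuffer jb = ByteBuffer.allocate(4096);
      while (true) {
        int il = i.read(ib);
        int jl = j.read(jb);
        if (il < 0 || jl < 0) {
          // Both must hit EOF together, with no unconsumed leftover bytes.
          return il == jl && ib.position() == jb.position();
        }
        ib.flip();
        jb.flip();
        int cmp = Math.min(ib.remaining(), jb.remaining());
        for (int k = 0; k < cmp; ++k) {  // compare the overlapping bytes
          if (ib.get() != jb.get()) {
            return false;
          }
        }
        ib.compact();                    // keep any unconsumed tail
        jb.compact();
      }
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  public static void main(String[] args) {
    byte[] x = {1, 2, 3};
    System.out.println(streamsEqual(
        new ByteArrayInputStream(x), new ByteArrayInputStream(x))); // true
  }
}
```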
+
+ @Before
+ public void setUp() throws IOException {
+ datanode = mock(DataNode.class);
+ storage = mock(DataStorage.class);
+ this.conf = new Configuration();
+ this.conf.setLong(DFS_DATANODE_SCAN_PERIOD_HOURS_KEY, 0);
+
+ when(datanode.getConf()).thenReturn(conf);
+ final DNConf dnConf = new DNConf(datanode);
+ when(datanode.getDnConf()).thenReturn(dnConf);
+
+ final BlockScanner disabledBlockScanner = new BlockScanner(datanode, conf);
+ when(datanode.getBlockScanner()).thenReturn(disabledBlockScanner);
+ final ShortCircuitRegistry shortCircuitRegistry =
+ new ShortCircuitRegistry(conf);
+ when(datanode.getShortCircuitRegistry()).thenReturn(shortCircuitRegistry);
+
+ this.conf.setClass(DFSConfigKeys.DFS_PROVIDER_CLASS,
+ TestFileRegionProvider.class, FileRegionProvider.class);
+
+ blkToPathMap = new HashMap<Long, String>();
+ providedVolumes = new LinkedList<FsVolumeImpl>();
+
+ createStorageDirs(
+ storage, conf, NUM_LOCAL_INIT_VOLUMES, NUM_PROVIDED_INIT_VOLUMES);
+
+ dataset = new FsDatasetImpl(datanode, storage, conf);
+ FsVolumeReferences volumes = dataset.getFsVolumeReferences();
+ for (int i = 0; i < volumes.size(); i++) {
+ FsVolumeSpi vol = volumes.get(i);
+ if (vol.getStorageType() == StorageType.PROVIDED) {
+ providedVolumes.add((FsVolumeImpl) vol);
+ }
+ }
+
+ for (String bpid : BLOCK_POOL_IDS) {
+ dataset.addBlockPool(bpid, conf);
+ }
+
+ assertEquals(NUM_LOCAL_INIT_VOLUMES + NUM_PROVIDED_INIT_VOLUMES,
+ getNumVolumes());
+ assertEquals(0, dataset.getNumFailedVolumes());
+ }
+
+ @Test
+ public void testProvidedStorageID() throws IOException {
+ for (int i = 0; i < providedVolumes.size(); i++) {
+ assertEquals(DFSConfigKeys.DFS_PROVIDER_STORAGEUUID_DEFAULT,
+ providedVolumes.get(i).getStorageID());
+ }
+ }
+
+ @Test
+ public void testBlockLoad() throws IOException {
+ for (int i = 0; i < providedVolumes.size(); i++) {
+ FsVolumeImpl vol = providedVolumes.get(i);
+ ReplicaMap volumeMap = new ReplicaMap(new AutoCloseableLock());
+ vol.getVolumeMap(volumeMap, null);
+
+ assertEquals(vol.getBlockPoolList().length, BLOCK_POOL_IDS.length);
+ for (int j = 0; j < BLOCK_POOL_IDS.length; j++) {
+ if (j != CHOSEN_BP_ID) {
+ //this block pool should not have any blocks
+ assertEquals(null, volumeMap.replicas(BLOCK_POOL_IDS[j]));
+ }
+ }
+ assertEquals(NUM_PROVIDED_BLKS,
+ volumeMap.replicas(BLOCK_POOL_IDS[CHOSEN_BP_ID]).size());
+ }
+ }
+
+ @Test
+ public void testProvidedBlockRead() throws IOException {
+ for (int id = 0; id < NUM_PROVIDED_BLKS; id++) {
+ ExtendedBlock eb = new ExtendedBlock(
+ BLOCK_POOL_IDS[CHOSEN_BP_ID], id, BLK_LEN,
+ HdfsConstants.GRANDFATHER_GENERATION_STAMP);
+ InputStream ins = dataset.getBlockInputStream(eb, 0);
+ String filepath = blkToPathMap.get((long) id);
+ compareBlkFile(ins, filepath);
+ }
+ }
+
+ @Test
+ public void testProvidedBlockIterator() throws IOException {
+ for (int i = 0; i < providedVolumes.size(); i++) {
+ FsVolumeImpl vol = providedVolumes.get(i);
+ BlockIterator iter =
+ vol.newBlockIterator(BLOCK_POOL_IDS[CHOSEN_BP_ID], "temp");
+ Set<Long> blockIdsUsed = new HashSet<Long>();
+ while(!iter.atEnd()) {
+ ExtendedBlock eb = iter.nextBlock();
+ long blkId = eb.getBlockId();
+ assertTrue(blkId >= MIN_BLK_ID && blkId < NUM_PROVIDED_BLKS);
+ //all block ids must be unique!
+ assertTrue(!blockIdsUsed.contains(blkId));
+ blockIdsUsed.add(blkId);
+ }
+ assertEquals(NUM_PROVIDED_BLKS, blockIdsUsed.size());
+ }
+ }
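`testProvidedBlockIterator` asserts that every id the volume's `BlockIterator` yields is in range and unique. `Set.add`'s boolean return expresses the same check a little more tightly, since it detects the duplicate at insertion time; a sketch over a plain iterator (names are mine):

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

public class UniqueIds {
  // Verify that the ids are unique and fall in [min, max).
  static boolean allUniqueInRange(Iterator<Long> ids, long min, long max) {
    Set<Long> seen = new HashSet<>();
    while (ids.hasNext()) {
      long id = ids.next();
      if (id < min || id >= max) {
        return false;               // out of the expected id range
      }
      if (!seen.add(id)) {
        return false;               // add() returns false on a duplicate
      }
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(allUniqueInRange(
        List.of(0L, 1L, 2L).iterator(), 0, 10)); // true
    System.out.println(allUniqueInRange(
        List.of(0L, 0L).iterator(), 0, 10));     // false
  }
}
```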
+
+ @Test
+ public void testRefresh() throws IOException {
+ conf.setInt(DFSConfigKeys.DFS_DATANODE_DIRECTORYSCAN_THREADS_KEY, 1);
+ for (int i = 0; i < providedVolumes.size(); i++) {
+ ProvidedVolumeImpl vol = (ProvidedVolumeImpl) providedVolumes.get(i);
+ TestFileRegionProvider provider = (TestFileRegionProvider)
+ vol.getFileRegionProvider(BLOCK_POOL_IDS[CHOSEN_BP_ID]);
+ //equivalent to two new blocks appearing
+ provider.setBlockCount(NUM_PROVIDED_BLKS + 2);
+ //equivalent to deleting the first block
+ provider.setMinBlkId(MIN_BLK_ID + 1);
+
+ DirectoryScanner scanner = new DirectoryScanner(datanode, dataset, conf);
+ scanner.reconcile();
+ ReplicaInfo info = dataset.getBlockReplica(
+ BLOCK_POOL_IDS[CHOSEN_BP_ID], NUM_PROVIDED_BLKS + 1);
+ //new replica should be added to the dataset
+ assertTrue(info != null);
+ try {
+ info = dataset.getBlockReplica(BLOCK_POOL_IDS[CHOSEN_BP_ID], 0);
+ fail("Expected exception: block 0 was removed from the provider");
+ } catch(Exception ex) {
+ LOG.info("Exception expected: " + ex);
+ }
+ }
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/970028f0/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestClusterId.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestClusterId.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestClusterId.java
index d5a3948..db8c029 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestClusterId.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestClusterId.java
@@ -68,7 +68,10 @@ public class TestClusterId {
fsImage.getStorage().dirIterator(NNStorage.NameNodeDirType.IMAGE);
StorageDirectory sd = sdit.next();
Properties props = Storage.readPropertiesFile(sd.getVersionFile());
- String cid = props.getProperty("clusterID");
+ String cid = null;
+ if (props != null) {
+ cid = props.getProperty("clusterID");
+ }
LOG.info("successfully formatted : sd="+sd.getCurrentDir() + ";cid="+cid);
return cid;
}
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[08/50] [abbrv] hadoop git commit: HDFS-12638. Delete copy-on-truncate block along with the original block, when deleting a file being truncated. Contributed by Konstantin Shvachko.
Posted by vi...@apache.org.
HDFS-12638. Delete copy-on-truncate block along with the original block, when deleting a file being truncated. Contributed by Konstantin Shvachko.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/60fd0d7f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/60fd0d7f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/60fd0d7f
Branch: refs/heads/HDFS-9806
Commit: 60fd0d7fd73198fd610e59d1a4cd007c5fcc7205
Parents: a63d19d
Author: Konstantin V Shvachko <sh...@apache.org>
Authored: Thu Nov 30 18:18:09 2017 -0800
Committer: Konstantin V Shvachko <sh...@apache.org>
Committed: Thu Nov 30 18:18:28 2017 -0800
----------------------------------------------------------------------
.../hadoop/hdfs/server/namenode/INode.java | 14 +++++++
.../hdfs/server/namenode/TestFileTruncate.java | 41 ++++++++++++++++++++
2 files changed, 55 insertions(+)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/60fd0d7f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
index 34bfe10..1682a30 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
@@ -33,9 +33,11 @@ import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.fs.permission.PermissionStatus;
import org.apache.hadoop.hdfs.DFSUtilClient;
+import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite;
+import org.apache.hadoop.hdfs.server.blockmanagement.BlockUnderConstructionFeature;
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdfs.protocol.QuotaExceededException;
import org.apache.hadoop.hdfs.server.namenode.INodeReference.DstReference;
@@ -1058,6 +1060,18 @@ public abstract class INode implements INodeAttributes, Diff.Element<byte[]> {
assert toDelete != null : "toDelete is null";
toDelete.delete();
toDeleteList.add(toDelete);
+ // If the file is being truncated
+ // the copy-on-truncate block should also be collected for deletion
+ BlockUnderConstructionFeature uc = toDelete.getUnderConstructionFeature();
+ if(uc == null) {
+ return;
+ }
+ Block truncateBlock = uc.getTruncateBlock();
+ if(truncateBlock == null || truncateBlock.equals(toDelete)) {
+ return;
+ }
+ assert truncateBlock instanceof BlockInfo : "should be BlockInfo";
+ addDeleteBlock((BlockInfo) truncateBlock);
}
public void addUpdateReplicationFactor(BlockInfo block, short targetRepl) {
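The hunk above extends block collection so that, for a file being truncated, the separate copy-on-truncate block is deleted along with the original; it bails out early when there is no under-construction feature or when the truncate block is the block itself. The control flow, reduced to plain Java with stand-in types rather than the HDFS classes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class TruncateCollect {
  // Stand-in for a block: only identity matters for this sketch.
  static final class Block {
    final long id;
    Block(long id) { this.id = id; }
    @Override public boolean equals(Object o) {
      return o instanceof Block && ((Block) o).id == id;
    }
    @Override public int hashCode() { return Objects.hash(id); }
  }

  // Collect a block for deletion; if it carries a distinct
  // copy-on-truncate block, collect that one too.
  static void addDeleteBlock(Block toDelete, Block truncateBlock,
                             List<Block> toDeleteList) {
    toDeleteList.add(toDelete);
    if (truncateBlock == null) {          // no truncate in progress
      return;
    }
    if (truncateBlock.equals(toDelete)) { // in-place truncate: same block
      return;
    }
    toDeleteList.add(truncateBlock);      // copy-on-truncate: extra block
  }

  public static void main(String[] args) {
    List<Block> dels = new ArrayList<>();
    addDeleteBlock(new Block(1), new Block(2), dels);
    System.out.println(dels.size()); // 2 -- original plus truncate copy
  }
}
```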
http://git-wip-us.apache.org/repos/asf/hadoop/blob/60fd0d7f/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
index d4215e8..51a94e7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileTruncate.java
@@ -60,6 +60,7 @@ import org.apache.hadoop.hdfs.server.common.HdfsServerConstants;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.StartupOption;
import org.apache.hadoop.hdfs.server.datanode.FsDatasetTestUtils;
import org.apache.hadoop.hdfs.server.namenode.FSDirectory.DirOp;
+import org.apache.hadoop.hdfs.tools.DFSAdmin;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.test.GenericTestUtils;
import org.apache.hadoop.util.Time;
@@ -1155,6 +1156,46 @@ public class TestFileTruncate {
fs.delete(parent, true);
}
+ /**
+ * While rolling upgrade is in-progress the test truncates a file
+ * such that copy-on-truncate is triggered, then deletes the file,
+ * and makes sure that no blocks involved in truncate are hanging around.
+ */
+ @Test
+ public void testTruncateWithRollingUpgrade() throws Exception {
+ final DFSAdmin dfsadmin = new DFSAdmin(cluster.getConfiguration(0));
+ DistributedFileSystem dfs = cluster.getFileSystem();
+ //start rolling upgrade
+ dfs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);
+ int status = dfsadmin.run(new String[]{"-rollingUpgrade", "prepare"});
+ assertEquals("could not prepare for rolling upgrade", 0, status);
+ dfs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);
+
+ Path dir = new Path("/testTruncateWithRollingUpgrade");
+ fs.mkdirs(dir);
+ final Path p = new Path(dir, "file");
+ final byte[] data = new byte[3];
+ ThreadLocalRandom.current().nextBytes(data);
+ writeContents(data, data.length, p);
+
+ assertEquals("block num should be 1", 1,
+ cluster.getNamesystem().getFSDirectory().getBlockManager()
+ .getTotalBlocks());
+
+ final boolean isReady = fs.truncate(p, 2);
+ assertFalse("should be copy-on-truncate", isReady);
+ assertEquals("block num should be 2", 2,
+ cluster.getNamesystem().getFSDirectory().getBlockManager()
+ .getTotalBlocks());
+ fs.delete(p, true);
+
+ assertEquals("block num should be 0", 0,
+ cluster.getNamesystem().getFSDirectory().getBlockManager()
+ .getTotalBlocks());
+ status = dfsadmin.run(new String[]{"-rollingUpgrade", "finalize"});
+ assertEquals("could not finalize rolling upgrade", 0, status);
+ }
+
static void writeContents(byte[] contents, int fileLength, Path p)
throws IOException {
FSDataOutputStream out = fs.create(p, true, BLOCK_SIZE, REPLICATION,
---------------------------------------------------------------------
[35/50] [abbrv] hadoop git commit: HDFS-11902. [READ] Merge BlockFormatProvider and FileRegionProvider.
Posted by vi...@apache.org.
HDFS-11902. [READ] Merge BlockFormatProvider and FileRegionProvider.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/926ead5e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/926ead5e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/926ead5e
Branch: refs/heads/HDFS-9806
Commit: 926ead5e1f9549b4c44275fd8ad1d5bfc6cd8249
Parents: 2cf4faa
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Fri Nov 3 13:45:56 2017 -0700
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:58 2017 -0800
----------------------------------------------------------------------
.../org/apache/hadoop/hdfs/DFSConfigKeys.java | 17 +-
.../blockmanagement/BlockFormatProvider.java | 91 ----
.../server/blockmanagement/BlockProvider.java | 75 ----
.../blockmanagement/ProvidedStorageMap.java | 63 ++-
.../hadoop/hdfs/server/common/BlockFormat.java | 82 ----
.../hdfs/server/common/FileRegionProvider.java | 37 --
.../server/common/TextFileRegionFormat.java | 442 ------------------
.../server/common/TextFileRegionProvider.java | 88 ----
.../common/blockaliasmap/BlockAliasMap.java | 88 ++++
.../impl/TextFileRegionAliasMap.java | 445 +++++++++++++++++++
.../common/blockaliasmap/package-info.java | 27 ++
.../fsdataset/impl/ProvidedVolumeImpl.java | 76 ++--
.../src/main/resources/hdfs-default.xml | 34 +-
.../blockmanagement/TestProvidedStorageMap.java | 41 +-
.../hdfs/server/common/TestTextBlockFormat.java | 160 -------
.../impl/TestTextBlockAliasMap.java | 161 +++++++
.../fsdataset/impl/TestProvidedImpl.java | 75 ++--
.../hdfs/server/namenode/FileSystemImage.java | 4 +-
.../hdfs/server/namenode/ImageWriter.java | 25 +-
.../hdfs/server/namenode/NullBlockAliasMap.java | 86 ++++
.../hdfs/server/namenode/NullBlockFormat.java | 87 ----
.../hadoop/hdfs/server/namenode/TreePath.java | 8 +-
.../TestNameNodeProvidedImplementation.java | 25 +-
23 files changed, 994 insertions(+), 1243 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
index 7449987..cb57675 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
@@ -331,22 +331,19 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
public static final String DFS_NAMENODE_PROVIDED_ENABLED = "dfs.namenode.provided.enabled";
public static final boolean DFS_NAMENODE_PROVIDED_ENABLED_DEFAULT = false;
- public static final String DFS_NAMENODE_BLOCK_PROVIDER_CLASS = "dfs.namenode.block.provider.class";
-
- public static final String DFS_PROVIDER_CLASS = "dfs.provider.class";
public static final String DFS_PROVIDER_DF_CLASS = "dfs.provided.df.class";
public static final String DFS_PROVIDER_STORAGEUUID = "dfs.provided.storage.id";
public static final String DFS_PROVIDER_STORAGEUUID_DEFAULT = "DS-PROVIDED";
- public static final String DFS_PROVIDER_BLK_FORMAT_CLASS = "dfs.provided.blockformat.class";
+ public static final String DFS_PROVIDED_ALIASMAP_CLASS = "dfs.provided.aliasmap.class";
- public static final String DFS_PROVIDED_BLOCK_MAP_DELIMITER = "dfs.provided.textprovider.delimiter";
- public static final String DFS_PROVIDED_BLOCK_MAP_DELIMITER_DEFAULT = ",";
+ public static final String DFS_PROVIDED_ALIASMAP_TEXT_DELIMITER = "dfs.provided.aliasmap.text.delimiter";
+ public static final String DFS_PROVIDED_ALIASMAP_TEXT_DELIMITER_DEFAULT = ",";
- public static final String DFS_PROVIDED_BLOCK_MAP_READ_PATH = "dfs.provided.textprovider.read.path";
- public static final String DFS_PROVIDED_BLOCK_MAP_PATH_DEFAULT = "file:///tmp/blocks.csv";
+ public static final String DFS_PROVIDED_ALIASMAP_TEXT_READ_PATH = "dfs.provided.aliasmap.text.read.path";
+ public static final String DFS_PROVIDED_ALIASMAP_TEXT_PATH_DEFAULT = "file:///tmp/blocks.csv";
- public static final String DFS_PROVIDED_BLOCK_MAP_CODEC = "dfs.provided.textprovider.read.codec";
- public static final String DFS_PROVIDED_BLOCK_MAP_WRITE_PATH = "dfs.provided.textprovider.write.path";
+ public static final String DFS_PROVIDED_ALIASMAP_TEXT_CODEC = "dfs.provided.aliasmap.text.codec";
+ public static final String DFS_PROVIDED_ALIASMAP_TEXT_WRITE_PATH = "dfs.provided.aliasmap.text.write.path";
public static final String DFS_LIST_LIMIT = "dfs.ls.limit";
public static final int DFS_LIST_LIMIT_DEFAULT = 1000;
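The renamed keys follow Hadoop's convention of class-valued configuration with a shipped default (e.g. `dfs.provided.aliasmap.class` falling back to `TextFileRegionAliasMap`). A sketch of that lookup pattern using plain reflection instead of Hadoop's `Configuration.getClass` (class and key names here are placeholders):

```java
import java.util.HashMap;
import java.util.Map;

public class ClassValuedConf {
  private final Map<String, String> props = new HashMap<>();

  void set(String key, String className) { props.put(key, className); }

  // Resolve a class-valued key, falling back to a default implementation,
  // and check the result is assignable to the expected interface.
  <T> Class<? extends T> getClass(String key, Class<? extends T> def,
                                  Class<T> iface) {
    String name = props.get(key);
    if (name == null) {
      return def;                       // key unset: use the shipped default
    }
    Class<?> c;
    try {
      c = Class.forName(name);
    } catch (ClassNotFoundException e) {
      throw new IllegalArgumentException("Bad class name for " + key, e);
    }
    return c.asSubclass(iface);         // throws if not an implementation
  }

  public static void main(String[] args) {
    ClassValuedConf conf = new ClassValuedConf();
    Class<? extends CharSequence> c =
        conf.getClass("some.key", String.class, CharSequence.class);
    System.out.println(c.getName()); // java.lang.String (default used)
  }
}
```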
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockFormatProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockFormatProvider.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockFormatProvider.java
deleted file mode 100644
index 930263d..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockFormatProvider.java
+++ /dev/null
@@ -1,91 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hdfs.server.blockmanagement;
-
-import java.io.IOException;
-import java.util.Iterator;
-
-import org.apache.hadoop.conf.Configurable;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hdfs.DFSConfigKeys;
-import org.apache.hadoop.hdfs.protocol.Block;
-import org.apache.hadoop.hdfs.server.common.BlockAlias;
-import org.apache.hadoop.hdfs.server.common.BlockFormat;
-import org.apache.hadoop.hdfs.server.common.TextFileRegionFormat;
-import org.apache.hadoop.util.ReflectionUtils;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * Loads provided blocks from a {@link BlockFormat}.
- */
-public class BlockFormatProvider extends BlockProvider
- implements Configurable {
-
- private Configuration conf;
- private BlockFormat<? extends BlockAlias> blockFormat;
- public static final Logger LOG =
- LoggerFactory.getLogger(BlockFormatProvider.class);
-
- @Override
- @SuppressWarnings({ "rawtypes", "unchecked" })
- public void setConf(Configuration conf) {
- Class<? extends BlockFormat> c = conf.getClass(
- DFSConfigKeys.DFS_PROVIDER_BLK_FORMAT_CLASS,
- TextFileRegionFormat.class, BlockFormat.class);
- blockFormat = ReflectionUtils.newInstance(c, conf);
- LOG.info("Loaded BlockFormat class : " + c.getClass().getName());
- this.conf = conf;
- }
-
- @Override
- public Configuration getConf() {
- return conf;
- }
-
- @Override
- public Iterator<Block> iterator() {
- try {
- final BlockFormat.Reader<? extends BlockAlias> reader =
- blockFormat.getReader(null);
-
- return new Iterator<Block>() {
-
- private final Iterator<? extends BlockAlias> inner = reader.iterator();
-
- @Override
- public boolean hasNext() {
- return inner.hasNext();
- }
-
- @Override
- public Block next() {
- return inner.next().getBlock();
- }
-
- @Override
- public void remove() {
- throw new UnsupportedOperationException();
- }
- };
- } catch (IOException e) {
- throw new RuntimeException("Failed to read provided blocks", e);
- }
- }
-
-}
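The deleted `BlockFormatProvider.iterator()` is a classic adapter: it wraps an `Iterator<BlockAlias>` and exposes it as an `Iterator<Block>` by calling `getBlock()` per element. The pattern in isolation, with the element types made generic (class name is mine):

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

public class MappingIterator<A, B> implements Iterator<B> {
  private final Iterator<A> inner;
  private final Function<A, B> map;

  public MappingIterator(Iterator<A> inner, Function<A, B> map) {
    this.inner = inner;
    this.map = map;
  }

  @Override public boolean hasNext() { return inner.hasNext(); }

  // Adapt each element as it is pulled, without copying the sequence.
  @Override public B next() { return map.apply(inner.next()); }

  @Override public void remove() {
    throw new UnsupportedOperationException();
  }

  public static void main(String[] args) {
    Iterator<Integer> lens = new MappingIterator<>(
        List.of("a", "bb", "ccc").iterator(), String::length);
    while (lens.hasNext()) {
      System.out.print(lens.next() + " "); // 1 2 3
    }
  }
}
```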
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java
deleted file mode 100644
index 2214868..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockProvider.java
+++ /dev/null
@@ -1,75 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hdfs.server.blockmanagement;
-
-import java.io.IOException;
-import org.apache.hadoop.hdfs.protocol.Block;
-import org.apache.hadoop.hdfs.server.blockmanagement.ProvidedStorageMap.ProvidedBlockList;
-import org.apache.hadoop.hdfs.server.protocol.BlockReportContext;
-import org.apache.hadoop.hdfs.util.RwLock;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * Used to load provided blocks in the {@link BlockManager}.
- */
-public abstract class BlockProvider implements Iterable<Block> {
-
- private static final Logger LOG =
- LoggerFactory.getLogger(ProvidedStorageMap.class);
-
- private RwLock lock;
- private BlockManager bm;
- private DatanodeStorageInfo storage;
- private boolean hasDNs = false;
-
- /**
- * @param lock the namesystem lock
- * @param bm block manager
- * @param storage storage for provided blocks
- */
- void init(RwLock lock, BlockManager bm, DatanodeStorageInfo storage) {
- this.bm = bm;
- this.lock = lock;
- this.storage = storage;
- }
-
- /**
- * start the processing of block report for provided blocks.
- * @throws IOException
- */
- void start(BlockReportContext context) throws IOException {
- assert lock.hasWriteLock() : "Not holding write lock";
- if (hasDNs) {
- return;
- }
- if (storage.getBlockReportCount() == 0) {
- LOG.info("Calling process first blk report from storage: " + storage);
- // first pass; periodic refresh should call bm.processReport
- bm.processFirstBlockReport(storage, new ProvidedBlockList(iterator()));
- } else {
- bm.processReport(storage, new ProvidedBlockList(iterator()), context);
- }
- hasDNs = true;
- }
-
- void stop() {
- assert lock.hasWriteLock() : "Not holding write lock";
- hasDNs = false;
- }
-}
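`BlockProvider.start()` (deleted above, with its logic folded into `ProvidedStorageMap` later in this commit) is guarded by a `hasDNs` latch: the first call processes a block report, subsequent calls are no-ops until `stop()` resets the latch when the last provided datanode goes away. A reduced sketch of that once-until-reset behavior (field names mirror the deleted class; the method bodies are stand-ins, not the BlockManager calls):

```java
public class ReportLatch {
  private boolean hasDNs = false;
  private int reportsProcessed = 0;

  // Process the provided block report only once per active period.
  synchronized void start() {
    if (hasDNs) {
      return;             // already reported; periodic refresh handles updates
    }
    reportsProcessed++;   // stand-in for bm.processFirstBlockReport(...)
    hasDNs = true;
  }

  // Last provided datanode went away: allow the next start() to re-report.
  synchronized void stop() {
    hasDNs = false;
  }

  int reportsProcessed() { return reportsProcessed; }

  public static void main(String[] args) {
    ReportLatch l = new ReportLatch();
    l.start();
    l.start();            // no-op: latch is set
    l.stop();
    l.start();            // re-reports after reset
    System.out.println(l.reportsProcessed()); // 2
  }
}
```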
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
index 5717e0c..a848d50 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
@@ -40,7 +40,10 @@ import org.apache.hadoop.hdfs.protocol.DatanodeInfoWithStorage;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap;
import org.apache.hadoop.hdfs.server.protocol.BlockReportContext;
+import org.apache.hadoop.hdfs.server.common.BlockAlias;
import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage.State;
import org.apache.hadoop.hdfs.util.RwLock;
@@ -61,7 +64,11 @@ public class ProvidedStorageMap {
LoggerFactory.getLogger(ProvidedStorageMap.class);
// limit to a single provider for now
- private final BlockProvider blockProvider;
+ private RwLock lock;
+ private BlockManager bm;
+ private boolean hasDNs = false;
+ private BlockAliasMap aliasMap;
+
private final String storageId;
private final ProvidedDescriptor providedDescriptor;
private final DatanodeStorageInfo providedStorageInfo;
@@ -79,7 +86,7 @@ public class ProvidedStorageMap {
if (!providedEnabled) {
// disable mapping
- blockProvider = null;
+ aliasMap = null;
providedDescriptor = null;
providedStorageInfo = null;
return;
@@ -90,15 +97,17 @@ public class ProvidedStorageMap {
providedDescriptor = new ProvidedDescriptor();
providedStorageInfo = providedDescriptor.createProvidedStorage(ds);
+ this.bm = bm;
+ this.lock = lock;
+
// load block reader into storage
- Class<? extends BlockProvider> fmt = conf.getClass(
- DFSConfigKeys.DFS_NAMENODE_BLOCK_PROVIDER_CLASS,
- BlockFormatProvider.class, BlockProvider.class);
-
- blockProvider = ReflectionUtils.newInstance(fmt, conf);
- blockProvider.init(lock, bm, providedStorageInfo);
- LOG.info("Loaded block provider class: " +
- blockProvider.getClass() + " storage: " + providedStorageInfo);
+ Class<? extends BlockAliasMap> aliasMapClass = conf.getClass(
+ DFSConfigKeys.DFS_PROVIDED_ALIASMAP_CLASS,
+ TextFileRegionAliasMap.class, BlockAliasMap.class);
+ aliasMap = ReflectionUtils.newInstance(aliasMapClass, conf);
+
+ LOG.info("Loaded alias map class: " +
+ aliasMap.getClass() + " storage: " + providedStorageInfo);
}
/**
@@ -114,8 +123,7 @@ public class ProvidedStorageMap {
BlockReportContext context) throws IOException {
if (providedEnabled && storageId.equals(s.getStorageID())) {
if (StorageType.PROVIDED.equals(s.getStorageType())) {
- // poll service, initiate
- blockProvider.start(context);
+ processProvidedStorageReport(context);
dn.injectStorage(providedStorageInfo);
return providedDescriptor.getProvidedStorage(dn, s);
}
@@ -124,6 +132,26 @@ public class ProvidedStorageMap {
return dn.getStorageInfo(s.getStorageID());
}
+ private void processProvidedStorageReport(BlockReportContext context)
+ throws IOException {
+ assert lock.hasWriteLock() : "Not holding write lock";
+ if (hasDNs) {
+ return;
+ }
+ if (providedStorageInfo.getBlockReportCount() == 0) {
+ LOG.info("Processing first block report from storage: "
+ + providedStorageInfo);
+ // first pass; periodic refresh should call bm.processReport
+ bm.processFirstBlockReport(providedStorageInfo,
+ new ProvidedBlockList(aliasMap.getReader(null).iterator()));
+ } else {
+ bm.processReport(providedStorageInfo,
+ new ProvidedBlockList(aliasMap.getReader(null).iterator()),
+ context);
+ }
+ hasDNs = true;
+ }
+
@VisibleForTesting
public DatanodeStorageInfo getProvidedStorageInfo() {
return providedStorageInfo;
@@ -137,10 +165,11 @@ public class ProvidedStorageMap {
}
public void removeDatanode(DatanodeDescriptor dnToRemove) {
- if (providedDescriptor != null) {
+ if (providedEnabled) {
+ assert lock.hasWriteLock() : "Not holding write lock";
int remainingDatanodes = providedDescriptor.remove(dnToRemove);
if (remainingDatanodes == 0) {
- blockProvider.stop();
+ hasDNs = false;
}
}
}
@@ -443,9 +472,9 @@ public class ProvidedStorageMap {
*/
static class ProvidedBlockList extends BlockListAsLongs {
- private final Iterator<Block> inner;
+ private final Iterator<BlockAlias> inner;
- ProvidedBlockList(Iterator<Block> inner) {
+ ProvidedBlockList(Iterator<BlockAlias> inner) {
this.inner = inner;
}
@@ -454,7 +483,7 @@ public class ProvidedStorageMap {
return new Iterator<BlockReportReplica>() {
@Override
public BlockReportReplica next() {
- return new BlockReportReplica(inner.next());
+ return new BlockReportReplica(inner.next().getBlock());
}
@Override
public boolean hasNext() {
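The hunk above replaces the fixed BlockProvider plugin point with a BlockAliasMap loaded reflectively from configuration (conf.getClass plus ReflectionUtils.newInstance). A minimal standalone sketch of that reflective-loading pattern, using plain JDK reflection and a Map standing in for Hadoop's Configuration (all names in the sketch are illustrative, not Hadoop APIs):

```java
import java.util.HashMap;
import java.util.Map;

public class PluginLoader {
    // Hypothetical plugin contract, standing in for BlockAliasMap.
    public interface AliasSource {
        String describe();
    }

    // Hypothetical default implementation, standing in for
    // TextFileRegionAliasMap.
    public static class TextAliasSource implements AliasSource {
        @Override
        public String describe() {
            return "text";
        }
    }

    // Look up an implementation class name by key, falling back to a
    // default when the key is unset, then instantiate it reflectively --
    // the same shape as conf.getClass(...) + ReflectionUtils.newInstance.
    static AliasSource load(Map<String, String> conf, String key,
                            String defaultClass) {
        String name = conf.getOrDefault(key, defaultClass);
        try {
            return (AliasSource) Class.forName(name)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("Failed to load " + name, e);
        }
    }

    public static void main(String[] args) {
        // No key set, so the default implementation is instantiated.
        AliasSource src = load(new HashMap<>(),
                "dfs.provided.aliasmap.class",
                TextAliasSource.class.getName());
        System.out.println(src.describe());
    }
}
```

Hadoop's ReflectionUtils additionally passes the Configuration to Configurable instances via setConf, which this sketch omits; that is one reason the patch can drop the explicit blockProvider.init(...) call.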
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockFormat.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockFormat.java
deleted file mode 100644
index 66e7fdf..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/BlockFormat.java
+++ /dev/null
@@ -1,82 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hdfs.server.common;
-
-import java.io.Closeable;
-import java.io.IOException;
-
-import org.apache.hadoop.hdfs.protocol.Block;
-
-/**
- * An abstract class used to read and write block maps for provided blocks.
- */
-public abstract class BlockFormat<T extends BlockAlias> {
-
- /**
- * An abstract class that is used to read {@link BlockAlias}es
- * for provided blocks.
- */
- public static abstract class Reader<U extends BlockAlias>
- implements Iterable<U>, Closeable {
-
- /**
- * reader options.
- */
- public interface Options { }
-
- public abstract U resolve(Block ident) throws IOException;
-
- }
-
- /**
- * Returns the reader for the provided block map.
- * @param opts reader options
- * @return {@link Reader} to the block map.
- * @throws IOException
- */
- public abstract Reader<T> getReader(Reader.Options opts) throws IOException;
-
- /**
- * An abstract class used as a writer for the provided block map.
- */
- public static abstract class Writer<U extends BlockAlias>
- implements Closeable {
- /**
- * writer options.
- */
- public interface Options { }
-
- public abstract void store(U token) throws IOException;
-
- }
-
- /**
- * Returns the writer for the provided block map.
- * @param opts writer options.
- * @return {@link Writer} to the block map.
- * @throws IOException
- */
- public abstract Writer<T> getWriter(Writer.Options opts) throws IOException;
-
- /**
- * Refresh based on the underlying block map.
- * @throws IOException
- */
- public abstract void refresh() throws IOException;
-
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegionProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegionProvider.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegionProvider.java
deleted file mode 100644
index 2e94239..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/FileRegionProvider.java
+++ /dev/null
@@ -1,37 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hdfs.server.common;
-
-import java.io.IOException;
-import java.util.Collections;
-import java.util.Iterator;
-
-/**
- * This class is a stub for reading file regions from the block map.
- */
-public class FileRegionProvider implements Iterable<FileRegion> {
- @Override
- public Iterator<FileRegion> iterator() {
- return Collections.emptyListIterator();
- }
-
- public void refresh() throws IOException {
- return;
- }
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionFormat.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionFormat.java
deleted file mode 100644
index eacd08f..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionFormat.java
+++ /dev/null
@@ -1,442 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hdfs.server.common;
-
-import java.io.File;
-import java.io.IOException;
-import java.io.BufferedReader;
-import java.io.BufferedWriter;
-import java.io.InputStream;
-import java.io.InputStreamReader;
-import java.io.OutputStream;
-import java.io.OutputStreamWriter;
-import java.util.ArrayList;
-import java.util.Iterator;
-import java.util.Map;
-import java.util.Collections;
-import java.util.IdentityHashMap;
-import java.util.NoSuchElementException;
-
-import org.apache.hadoop.conf.Configurable;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.LocalFileSystem;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.hdfs.DFSConfigKeys;
-import org.apache.hadoop.hdfs.protocol.Block;
-import org.apache.hadoop.io.MultipleIOException;
-import org.apache.hadoop.io.compress.CompressionCodec;
-import org.apache.hadoop.io.compress.CompressionCodecFactory;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import com.google.common.annotations.VisibleForTesting;
-
-/**
- * This class is used for block maps stored as text files,
- * with a specified delimiter.
- */
-public class TextFileRegionFormat
- extends BlockFormat<FileRegion> implements Configurable {
-
- private Configuration conf;
- private ReaderOptions readerOpts = TextReader.defaults();
- private WriterOptions writerOpts = TextWriter.defaults();
-
- public static final Logger LOG =
- LoggerFactory.getLogger(TextFileRegionFormat.class);
- @Override
- public void setConf(Configuration conf) {
- readerOpts.setConf(conf);
- writerOpts.setConf(conf);
- this.conf = conf;
- }
-
- @Override
- public Configuration getConf() {
- return conf;
- }
-
- @Override
- public Reader<FileRegion> getReader(Reader.Options opts)
- throws IOException {
- if (null == opts) {
- opts = readerOpts;
- }
- if (!(opts instanceof ReaderOptions)) {
- throw new IllegalArgumentException("Invalid options " + opts.getClass());
- }
- ReaderOptions o = (ReaderOptions) opts;
- Configuration readerConf = (null == o.getConf())
- ? new Configuration()
- : o.getConf();
- return createReader(o.file, o.delim, readerConf);
- }
-
- @VisibleForTesting
- TextReader createReader(Path file, String delim, Configuration cfg)
- throws IOException {
- FileSystem fs = file.getFileSystem(cfg);
- if (fs instanceof LocalFileSystem) {
- fs = ((LocalFileSystem)fs).getRaw();
- }
- CompressionCodecFactory factory = new CompressionCodecFactory(cfg);
- CompressionCodec codec = factory.getCodec(file);
- return new TextReader(fs, file, codec, delim);
- }
-
- @Override
- public Writer<FileRegion> getWriter(Writer.Options opts) throws IOException {
- if (null == opts) {
- opts = writerOpts;
- }
- if (!(opts instanceof WriterOptions)) {
- throw new IllegalArgumentException("Invalid options " + opts.getClass());
- }
- WriterOptions o = (WriterOptions) opts;
- Configuration cfg = (null == o.getConf())
- ? new Configuration()
- : o.getConf();
- if (o.codec != null) {
- CompressionCodecFactory factory = new CompressionCodecFactory(cfg);
- CompressionCodec codec = factory.getCodecByName(o.codec);
- String name = o.file.getName() + codec.getDefaultExtension();
- o.filename(new Path(o.file.getParent(), name));
- return createWriter(o.file, codec, o.delim, cfg);
- }
- return createWriter(o.file, null, o.delim, conf);
- }
-
- @VisibleForTesting
- TextWriter createWriter(Path file, CompressionCodec codec, String delim,
- Configuration cfg) throws IOException {
- FileSystem fs = file.getFileSystem(cfg);
- if (fs instanceof LocalFileSystem) {
- fs = ((LocalFileSystem)fs).getRaw();
- }
- OutputStream tmp = fs.create(file);
- java.io.Writer out = new BufferedWriter(new OutputStreamWriter(
- (null == codec) ? tmp : codec.createOutputStream(tmp), "UTF-8"));
- return new TextWriter(out, delim);
- }
-
- /**
- * Class specifying reader options for the {@link TextFileRegionFormat}.
- */
- public static class ReaderOptions
- implements TextReader.Options, Configurable {
-
- private Configuration conf;
- private String delim =
- DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_DELIMITER_DEFAULT;
- private Path file = new Path(
- new File(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_PATH_DEFAULT)
- .toURI().toString());
-
- @Override
- public void setConf(Configuration conf) {
- this.conf = conf;
- String tmpfile = conf.get(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_READ_PATH,
- DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_PATH_DEFAULT);
- file = new Path(tmpfile);
- delim = conf.get(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_DELIMITER,
- DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_DELIMITER_DEFAULT);
- LOG.info("TextFileRegionFormat: read path " + tmpfile.toString());
- }
-
- @Override
- public Configuration getConf() {
- return conf;
- }
-
- @Override
- public ReaderOptions filename(Path file) {
- this.file = file;
- return this;
- }
-
- @Override
- public ReaderOptions delimiter(String delim) {
- this.delim = delim;
- return this;
- }
- }
-
- /**
- * Class specifying writer options for the {@link TextFileRegionFormat}.
- */
- public static class WriterOptions
- implements TextWriter.Options, Configurable {
-
- private Configuration conf;
- private String codec = null;
- private Path file =
- new Path(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_PATH_DEFAULT);
- private String delim =
- DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_DELIMITER_DEFAULT;
-
- @Override
- public void setConf(Configuration conf) {
- this.conf = conf;
- String tmpfile = conf.get(
- DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_WRITE_PATH, file.toString());
- file = new Path(tmpfile);
- codec = conf.get(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_CODEC);
- delim = conf.get(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_DELIMITER,
- DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_DELIMITER_DEFAULT);
- }
-
- @Override
- public Configuration getConf() {
- return conf;
- }
-
- @Override
- public WriterOptions filename(Path file) {
- this.file = file;
- return this;
- }
-
- public String getCodec() {
- return codec;
- }
-
- public Path getFile() {
- return file;
- }
-
- @Override
- public WriterOptions codec(String codec) {
- this.codec = codec;
- return this;
- }
-
- @Override
- public WriterOptions delimiter(String delim) {
- this.delim = delim;
- return this;
- }
-
- }
-
- /**
- * This class is used as a reader for block maps which
- * are stored as delimited text files.
- */
- public static class TextReader extends Reader<FileRegion> {
-
- /**
- * Options for {@link TextReader}.
- */
- public interface Options extends Reader.Options {
- Options filename(Path file);
- Options delimiter(String delim);
- }
-
- static ReaderOptions defaults() {
- return new ReaderOptions();
- }
-
- private final Path file;
- private final String delim;
- private final FileSystem fs;
- private final CompressionCodec codec;
- private final Map<FRIterator, BufferedReader> iterators;
-
- protected TextReader(FileSystem fs, Path file, CompressionCodec codec,
- String delim) {
- this(fs, file, codec, delim,
- new IdentityHashMap<FRIterator, BufferedReader>());
- }
-
- TextReader(FileSystem fs, Path file, CompressionCodec codec, String delim,
- Map<FRIterator, BufferedReader> iterators) {
- this.fs = fs;
- this.file = file;
- this.codec = codec;
- this.delim = delim;
- this.iterators = Collections.synchronizedMap(iterators);
- }
-
- @Override
- public FileRegion resolve(Block ident) throws IOException {
- // consider layering index w/ composable format
- Iterator<FileRegion> i = iterator();
- try {
- while (i.hasNext()) {
- FileRegion f = i.next();
- if (f.getBlock().equals(ident)) {
- return f;
- }
- }
- } finally {
- BufferedReader r = iterators.remove(i);
- if (r != null) {
- // null on last element
- r.close();
- }
- }
- return null;
- }
-
- class FRIterator implements Iterator<FileRegion> {
-
- private FileRegion pending;
-
- @Override
- public boolean hasNext() {
- return pending != null;
- }
-
- @Override
- public FileRegion next() {
- if (null == pending) {
- throw new NoSuchElementException();
- }
- FileRegion ret = pending;
- try {
- pending = nextInternal(this);
- } catch (IOException e) {
- throw new RuntimeException(e);
- }
- return ret;
- }
-
- @Override
- public void remove() {
- throw new UnsupportedOperationException();
- }
- }
-
- private FileRegion nextInternal(Iterator<FileRegion> i) throws IOException {
- BufferedReader r = iterators.get(i);
- if (null == r) {
- throw new IllegalStateException();
- }
- String line = r.readLine();
- if (null == line) {
- iterators.remove(i);
- return null;
- }
- String[] f = line.split(delim);
- if (f.length != 6) {
- throw new IOException("Invalid line: " + line);
- }
- return new FileRegion(Long.parseLong(f[0]), new Path(f[1]),
- Long.parseLong(f[2]), Long.parseLong(f[3]), f[5],
- Long.parseLong(f[4]));
- }
-
- public InputStream createStream() throws IOException {
- InputStream i = fs.open(file);
- if (codec != null) {
- i = codec.createInputStream(i);
- }
- return i;
- }
-
- @Override
- public Iterator<FileRegion> iterator() {
- FRIterator i = new FRIterator();
- try {
- BufferedReader r =
- new BufferedReader(new InputStreamReader(createStream(), "UTF-8"));
- iterators.put(i, r);
- i.pending = nextInternal(i);
- } catch (IOException e) {
- iterators.remove(i);
- throw new RuntimeException(e);
- }
- return i;
- }
-
- @Override
- public void close() throws IOException {
- ArrayList<IOException> ex = new ArrayList<>();
- synchronized (iterators) {
- for (Iterator<BufferedReader> i = iterators.values().iterator();
- i.hasNext();) {
- try {
- BufferedReader r = i.next();
- r.close();
- } catch (IOException e) {
- ex.add(e);
- } finally {
- i.remove();
- }
- }
- iterators.clear();
- }
- if (!ex.isEmpty()) {
- throw MultipleIOException.createIOException(ex);
- }
- }
-
- }
-
- /**
- * This class is used as a writer for block maps which
- * are stored as delimited text files.
- */
- public static class TextWriter extends Writer<FileRegion> {
-
- /**
- * Interface for Writer options.
- */
- public interface Options extends Writer.Options {
- Options codec(String codec);
- Options filename(Path file);
- Options delimiter(String delim);
- }
-
- public static WriterOptions defaults() {
- return new WriterOptions();
- }
-
- private final String delim;
- private final java.io.Writer out;
-
- public TextWriter(java.io.Writer out, String delim) {
- this.out = out;
- this.delim = delim;
- }
-
- @Override
- public void store(FileRegion token) throws IOException {
- out.append(String.valueOf(token.getBlock().getBlockId())).append(delim);
- out.append(token.getPath().toString()).append(delim);
- out.append(Long.toString(token.getOffset())).append(delim);
- out.append(Long.toString(token.getLength())).append(delim);
- out.append(Long.toString(token.getGenerationStamp())).append(delim);
- out.append(token.getBlockPoolId()).append("\n");
- }
-
- @Override
- public void close() throws IOException {
- out.close();
- }
-
- }
-
- @Override
- public void refresh() throws IOException {
- //nothing to do;
- }
-
-}
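Both the deleted TextFileRegionFormat above and its replacement TextFileRegionAliasMap below read one region per line with exactly six delimited fields: block id, path, offset, length, generation stamp, and block pool id (see nextInternal and TextWriter.store). A self-contained sketch of that line format, with illustrative names rather than the Hadoop FileRegion type:

```java
// One parsed line of the text alias map. Field order matches what
// TextWriter.store emits and what nextInternal expects.
public class RegionLine {
    final long blockId;
    final String path;
    final long offset;
    final long length;
    final long genStamp;
    final String blockPoolId;

    RegionLine(long blockId, String path, long offset, long length,
               long genStamp, String blockPoolId) {
        this.blockId = blockId;
        this.path = path;
        this.offset = offset;
        this.length = length;
        this.genStamp = genStamp;
        this.blockPoolId = blockPoolId;
    }

    // Mirrors nextInternal(): exactly six fields, generation stamp in
    // column 4 and block pool id in column 5. Note String.split treats
    // the delimiter as a regex, so a plain comma is safe.
    static RegionLine parse(String line, String delim) {
        String[] f = line.split(delim);
        if (f.length != 6) {
            throw new IllegalArgumentException("Invalid line: " + line);
        }
        return new RegionLine(Long.parseLong(f[0]), f[1],
                Long.parseLong(f[2]), Long.parseLong(f[3]),
                Long.parseLong(f[4]), f[5]);
    }

    // Mirrors TextWriter.store(): same field order on the way out.
    String format(String delim) {
        return blockId + delim + path + delim + offset + delim + length
                + delim + genStamp + delim + blockPoolId;
    }

    public static void main(String[] args) {
        RegionLine r = RegionLine.parse("1,/data/f1,0,1024,1001,BP-1", ",");
        System.out.println(r.format(","));  // 1,/data/f1,0,1024,1001,BP-1
    }
}
```

The constructor call in nextInternal passes f[5] (block pool id) before f[4] (generation stamp) only because of the FileRegion argument order; the on-disk column order is as above.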
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionProvider.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionProvider.java
deleted file mode 100644
index 0fa667e..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/TextFileRegionProvider.java
+++ /dev/null
@@ -1,88 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hdfs.server.common;
-
-import java.io.IOException;
-import java.util.Iterator;
-
-import org.apache.hadoop.conf.Configurable;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hdfs.DFSConfigKeys;
-import org.apache.hadoop.util.ReflectionUtils;
-
-/**
- * This class is used to read file regions from block maps
- * specified using delimited text.
- */
-public class TextFileRegionProvider
- extends FileRegionProvider implements Configurable {
-
- private Configuration conf;
- private BlockFormat<FileRegion> fmt;
-
- @SuppressWarnings("unchecked")
- @Override
- public void setConf(Configuration conf) {
- fmt = ReflectionUtils.newInstance(
- conf.getClass(DFSConfigKeys.DFS_PROVIDER_BLK_FORMAT_CLASS,
- TextFileRegionFormat.class,
- BlockFormat.class),
- conf);
- ((Configurable)fmt).setConf(conf); //redundant?
- this.conf = conf;
- }
-
- @Override
- public Configuration getConf() {
- return conf;
- }
-
- @Override
- public Iterator<FileRegion> iterator() {
- try {
- final BlockFormat.Reader<FileRegion> r = fmt.getReader(null);
- return new Iterator<FileRegion>() {
-
- private final Iterator<FileRegion> inner = r.iterator();
-
- @Override
- public boolean hasNext() {
- return inner.hasNext();
- }
-
- @Override
- public FileRegion next() {
- return inner.next();
- }
-
- @Override
- public void remove() {
- throw new UnsupportedOperationException();
- }
- };
- } catch (IOException e) {
- throw new RuntimeException("Failed to read provided blocks", e);
- }
- }
-
- @Override
- public void refresh() throws IOException {
- fmt.refresh();
- }
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java
new file mode 100644
index 0000000..d276fb5
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/BlockAliasMap.java
@@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.common.blockaliasmap;
+
+import java.io.Closeable;
+import java.io.IOException;
+
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.server.common.BlockAlias;
+
+/**
+ * An abstract class used to read and write block maps for provided blocks.
+ */
+public abstract class BlockAliasMap<T extends BlockAlias> {
+
+ /**
+ * An abstract class that is used to read {@link BlockAlias}es
+ * for provided blocks.
+ */
+ public static abstract class Reader<U extends BlockAlias>
+ implements Iterable<U>, Closeable {
+
+ /**
+ * reader options.
+ */
+ public interface Options { }
+
+ /**
+ * @param ident block to resolve
+ * @return BlockAlias corresponding to the provided block.
+ * @throws IOException
+ */
+ public abstract U resolve(Block ident) throws IOException;
+
+ }
+
+ /**
+ * Returns a reader to the alias map.
+ * @param opts reader options
+ * @return {@link Reader} to the alias map.
+ * @throws IOException
+ */
+ public abstract Reader<T> getReader(Reader.Options opts) throws IOException;
+
+ /**
+ * An abstract class used as a writer for the provided block map.
+ */
+ public static abstract class Writer<U extends BlockAlias>
+ implements Closeable {
+ /**
+ * writer options.
+ */
+ public interface Options { }
+
+ public abstract void store(U token) throws IOException;
+
+ }
+
+ /**
+ * Returns the writer for the alias map.
+ * @param opts writer options.
+ * @return {@link Writer} to the alias map.
+ * @throws IOException
+ */
+ public abstract Writer<T> getWriter(Writer.Options opts) throws IOException;
+
+ /**
+ * Refresh the alias map.
+ * @throws IOException
+ */
+ public abstract void refresh() throws IOException;
+
+}
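The new BlockAliasMap contract above pairs an iterable, closeable Reader (with resolve-by-block) against a Writer (with store). A toy in-memory analogue of that shape, assuming a bare long block id and String location in place of the Hadoop Block and BlockAlias types (all names here are hypothetical, not part of the Hadoop API):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class MapAliasStore {
    // Stand-in for a BlockAlias: a block id bound to a location.
    public static class Alias {
        final long blockId;
        final String location;
        public Alias(long blockId, String location) {
            this.blockId = blockId;
            this.location = location;
        }
    }

    private final Map<Long, Alias> aliases = new LinkedHashMap<>();

    // Writer side: store() adds or replaces the alias for a block.
    public void store(Alias a) {
        aliases.put(a.blockId, a);
    }

    // Reader side: resolve() looks up a single block's alias
    // (null when absent), iterator() walks every stored alias.
    public Alias resolve(long blockId) {
        return aliases.get(blockId);
    }

    public Iterator<Alias> iterator() {
        // Copy so callers can iterate while the map is mutated.
        return new ArrayList<>(aliases.values()).iterator();
    }

    public static void main(String[] args) {
        MapAliasStore store = new MapAliasStore();
        store.store(new Alias(1L, "s3a://bucket/f1"));
        store.store(new Alias(2L, "s3a://bucket/f2"));
        System.out.println(store.resolve(2L).location);
    }
}
```

The real contract additionally threads IOExceptions and Closeable through both sides, since implementations such as TextFileRegionAliasMap back the Reader and Writer with file streams.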
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
new file mode 100644
index 0000000..80f48c1
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
@@ -0,0 +1,445 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.common.blockaliasmap.impl;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.BufferedReader;
+import java.io.BufferedWriter;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Collections;
+import java.util.IdentityHashMap;
+import java.util.NoSuchElementException;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.LocalFileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
+import org.apache.hadoop.io.MultipleIOException;
+import org.apache.hadoop.io.compress.CompressionCodec;
+import org.apache.hadoop.io.compress.CompressionCodecFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.annotations.VisibleForTesting;
+
+/**
+ * This class is used for block maps stored as text files,
+ * with a specified delimiter.
+ */
+public class TextFileRegionAliasMap
+ extends BlockAliasMap<FileRegion> implements Configurable {
+
+ private Configuration conf;
+ private ReaderOptions readerOpts = TextReader.defaults();
+ private WriterOptions writerOpts = TextWriter.defaults();
+
+ public static final Logger LOG =
+ LoggerFactory.getLogger(TextFileRegionAliasMap.class);
+ @Override
+ public void setConf(Configuration conf) {
+ readerOpts.setConf(conf);
+ writerOpts.setConf(conf);
+ this.conf = conf;
+ }
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ @Override
+ public Reader<FileRegion> getReader(Reader.Options opts)
+ throws IOException {
+ if (null == opts) {
+ opts = readerOpts;
+ }
+ if (!(opts instanceof ReaderOptions)) {
+ throw new IllegalArgumentException("Invalid options " + opts.getClass());
+ }
+ ReaderOptions o = (ReaderOptions) opts;
+ Configuration readerConf = (null == o.getConf())
+ ? new Configuration()
+ : o.getConf();
+ return createReader(o.file, o.delim, readerConf);
+ }
+
+ @VisibleForTesting
+ TextReader createReader(Path file, String delim, Configuration cfg)
+ throws IOException {
+ FileSystem fs = file.getFileSystem(cfg);
+ if (fs instanceof LocalFileSystem) {
+ fs = ((LocalFileSystem)fs).getRaw();
+ }
+ CompressionCodecFactory factory = new CompressionCodecFactory(cfg);
+ CompressionCodec codec = factory.getCodec(file);
+ return new TextReader(fs, file, codec, delim);
+ }
+
+ @Override
+ public Writer<FileRegion> getWriter(Writer.Options opts) throws IOException {
+ if (null == opts) {
+ opts = writerOpts;
+ }
+ if (!(opts instanceof WriterOptions)) {
+ throw new IllegalArgumentException("Invalid options " + opts.getClass());
+ }
+ WriterOptions o = (WriterOptions) opts;
+ Configuration cfg = (null == o.getConf())
+ ? new Configuration()
+ : o.getConf();
+ if (o.codec != null) {
+ CompressionCodecFactory factory = new CompressionCodecFactory(cfg);
+ CompressionCodec codec = factory.getCodecByName(o.codec);
+ String name = o.file.getName() + codec.getDefaultExtension();
+ o.filename(new Path(o.file.getParent(), name));
+ return createWriter(o.file, codec, o.delim, cfg);
+ }
+ return createWriter(o.file, null, o.delim, conf);
+ }
+
+ @VisibleForTesting
+ TextWriter createWriter(Path file, CompressionCodec codec, String delim,
+ Configuration cfg) throws IOException {
+ FileSystem fs = file.getFileSystem(cfg);
+ if (fs instanceof LocalFileSystem) {
+ fs = ((LocalFileSystem)fs).getRaw();
+ }
+ OutputStream tmp = fs.create(file);
+ java.io.Writer out = new BufferedWriter(new OutputStreamWriter(
+ (null == codec) ? tmp : codec.createOutputStream(tmp), "UTF-8"));
+ return new TextWriter(out, delim);
+ }
+
+ /**
+ * Class specifying reader options for the {@link TextFileRegionAliasMap}.
+ */
+ public static class ReaderOptions
+ implements TextReader.Options, Configurable {
+
+ private Configuration conf;
+ private String delim =
+ DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_DELIMITER_DEFAULT;
+ private Path file = new Path(
+ new File(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_PATH_DEFAULT).toURI()
+ .toString());
+
+ @Override
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ String tmpfile =
+ conf.get(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_READ_PATH,
+ DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_PATH_DEFAULT);
+ file = new Path(tmpfile);
+ delim = conf.get(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_DELIMITER,
+ DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_DELIMITER_DEFAULT);
+ LOG.info("TextFileRegionAliasMap: read path " + tmpfile);
+ }
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ @Override
+ public ReaderOptions filename(Path file) {
+ this.file = file;
+ return this;
+ }
+
+ @Override
+ public ReaderOptions delimiter(String delim) {
+ this.delim = delim;
+ return this;
+ }
+ }
+
+ /**
+ * Class specifying writer options for the {@link TextFileRegionAliasMap}.
+ */
+ public static class WriterOptions
+ implements TextWriter.Options, Configurable {
+
+ private Configuration conf;
+ private String codec = null;
+ private Path file =
+ new Path(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_PATH_DEFAULT);
+ private String delim =
+ DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_DELIMITER_DEFAULT;
+
+ @Override
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ String tmpfile = conf.get(
+ DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_WRITE_PATH, file.toString());
+ file = new Path(tmpfile);
+ codec = conf.get(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_CODEC);
+ delim = conf.get(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_DELIMITER,
+ DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_DELIMITER_DEFAULT);
+ }
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ @Override
+ public WriterOptions filename(Path file) {
+ this.file = file;
+ return this;
+ }
+
+ public String getCodec() {
+ return codec;
+ }
+
+ public Path getFile() {
+ return file;
+ }
+
+ @Override
+ public WriterOptions codec(String codec) {
+ this.codec = codec;
+ return this;
+ }
+
+ @Override
+ public WriterOptions delimiter(String delim) {
+ this.delim = delim;
+ return this;
+ }
+
+ }
+
+ /**
+ * This class is used as a reader for block maps which
+ * are stored as delimited text files.
+ */
+ public static class TextReader extends Reader<FileRegion> {
+
+ /**
+ * Options for {@link TextReader}.
+ */
+ public interface Options extends Reader.Options {
+ Options filename(Path file);
+ Options delimiter(String delim);
+ }
+
+ static ReaderOptions defaults() {
+ return new ReaderOptions();
+ }
+
+ private final Path file;
+ private final String delim;
+ private final FileSystem fs;
+ private final CompressionCodec codec;
+ private final Map<FRIterator, BufferedReader> iterators;
+
+ protected TextReader(FileSystem fs, Path file, CompressionCodec codec,
+ String delim) {
+ this(fs, file, codec, delim,
+ new IdentityHashMap<FRIterator, BufferedReader>());
+ }
+
+ TextReader(FileSystem fs, Path file, CompressionCodec codec, String delim,
+ Map<FRIterator, BufferedReader> iterators) {
+ this.fs = fs;
+ this.file = file;
+ this.codec = codec;
+ this.delim = delim;
+ this.iterators = Collections.synchronizedMap(iterators);
+ }
+
+ @Override
+ public FileRegion resolve(Block ident) throws IOException {
+ // consider layering index w/ composable format
+ Iterator<FileRegion> i = iterator();
+ try {
+ while (i.hasNext()) {
+ FileRegion f = i.next();
+ if (f.getBlock().equals(ident)) {
+ return f;
+ }
+ }
+ } finally {
+ BufferedReader r = iterators.remove(i);
+ if (r != null) {
+ // null on last element
+ r.close();
+ }
+ }
+ return null;
+ }
+
+ class FRIterator implements Iterator<FileRegion> {
+
+ private FileRegion pending;
+
+ @Override
+ public boolean hasNext() {
+ return pending != null;
+ }
+
+ @Override
+ public FileRegion next() {
+ if (null == pending) {
+ throw new NoSuchElementException();
+ }
+ FileRegion ret = pending;
+ try {
+ pending = nextInternal(this);
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ return ret;
+ }
+
+ @Override
+ public void remove() {
+ throw new UnsupportedOperationException();
+ }
+ }
+
+ private FileRegion nextInternal(Iterator<FileRegion> i) throws IOException {
+ BufferedReader r = iterators.get(i);
+ if (null == r) {
+ throw new IllegalStateException();
+ }
+ String line = r.readLine();
+ if (null == line) {
+ iterators.remove(i);
+ return null;
+ }
+ String[] f = line.split(delim);
+ if (f.length != 6) {
+ throw new IOException("Invalid line: " + line);
+ }
+ return new FileRegion(Long.parseLong(f[0]), new Path(f[1]),
+ Long.parseLong(f[2]), Long.parseLong(f[3]), f[5],
+ Long.parseLong(f[4]));
+ }
+
+ public InputStream createStream() throws IOException {
+ InputStream i = fs.open(file);
+ if (codec != null) {
+ i = codec.createInputStream(i);
+ }
+ return i;
+ }
+
+ @Override
+ public Iterator<FileRegion> iterator() {
+ FRIterator i = new FRIterator();
+ try {
+ BufferedReader r =
+ new BufferedReader(new InputStreamReader(createStream(), "UTF-8"));
+ iterators.put(i, r);
+ i.pending = nextInternal(i);
+ } catch (IOException e) {
+ iterators.remove(i);
+ throw new RuntimeException(e);
+ }
+ return i;
+ }
+
+ @Override
+ public void close() throws IOException {
+ ArrayList<IOException> ex = new ArrayList<>();
+ synchronized (iterators) {
+ for (Iterator<BufferedReader> i = iterators.values().iterator();
+ i.hasNext();) {
+ try {
+ BufferedReader r = i.next();
+ r.close();
+ } catch (IOException e) {
+ ex.add(e);
+ } finally {
+ i.remove();
+ }
+ }
+ iterators.clear();
+ }
+ if (!ex.isEmpty()) {
+ throw MultipleIOException.createIOException(ex);
+ }
+ }
+
+ }
+
+ /**
+ * This class is used as a writer for block maps which
+ * are stored as delimited text files.
+ */
+ public static class TextWriter extends Writer<FileRegion> {
+
+ /**
+ * Interface for Writer options.
+ */
+ public interface Options extends Writer.Options {
+ Options codec(String codec);
+ Options filename(Path file);
+ Options delimiter(String delim);
+ }
+
+ public static WriterOptions defaults() {
+ return new WriterOptions();
+ }
+
+ private final String delim;
+ private final java.io.Writer out;
+
+ public TextWriter(java.io.Writer out, String delim) {
+ this.out = out;
+ this.delim = delim;
+ }
+
+ @Override
+ public void store(FileRegion token) throws IOException {
+ out.append(String.valueOf(token.getBlock().getBlockId())).append(delim);
+ out.append(token.getPath().toString()).append(delim);
+ out.append(Long.toString(token.getOffset())).append(delim);
+ out.append(Long.toString(token.getLength())).append(delim);
+ out.append(Long.toString(token.getGenerationStamp())).append(delim);
+ out.append(token.getBlockPoolId()).append("\n");
+ }
+
+ @Override
+ public void close() throws IOException {
+ out.close();
+ }
+
+ }
+
+ @Override
+ public void refresh() throws IOException {
+ //nothing to do;
+ }
+
+}
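To make the record layout above concrete, here is a stand-alone sketch of the six-field delimited line that `TextWriter#store` emits and `TextReader#nextInternal` parses (illustrative code, not part of the patch; the class and method names are hypothetical, only the field order — block id, path, offset, length, generation stamp, block pool id — is taken from the code above):

```java
import java.util.Arrays;

// Minimal stand-alone sketch of the 6-field delimited record used by
// TextWriter#store and TextReader#nextInternal in the patch above.
public final class FileRegionLineDemo {

  // Encode one region as a single delimited line, mirroring the append
  // order in TextWriter#store.
  static String encode(long blockId, String path, long offset, long length,
      long genStamp, String bpid, String delim) {
    return blockId + delim + path + delim + offset + delim + length
        + delim + genStamp + delim + bpid;
  }

  // Decode a line back into its fields, mirroring TextReader#nextInternal,
  // including the six-field validity check. Note String#split treats the
  // delimiter as a regex, so regex metacharacters would need quoting.
  static String[] decode(String line, String delim) {
    String[] f = line.split(delim);
    if (f.length != 6) {
      throw new IllegalArgumentException("Invalid line: " + line);
    }
    return f;
  }

  public static void main(String[] args) {
    String line = encode(4344L, "hdfs://ns/dummyFile.txt", 0L, 1024L, 1001L,
        "bp-1", ",");
    System.out.println(Arrays.toString(decode(line, ",")));
  }
}
```

The round trip makes the `f[4]`/`f[5]` ordering in `nextInternal` easy to check against `store`: the generation stamp is written fifth and the block pool id sixth.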
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/package-info.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/package-info.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/package-info.java
new file mode 100644
index 0000000..b906791
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/package-info.java
@@ -0,0 +1,27 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+package org.apache.hadoop.hdfs.server.common.blockaliasmap;
+
+/**
+ * The AliasMap defines the mapping of PROVIDED HDFS blocks to data in remote
+ * storage systems.
+ */
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
index d1a7015..092672d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
@@ -35,9 +35,9 @@ import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.server.common.FileRegion;
-import org.apache.hadoop.hdfs.server.common.FileRegionProvider;
import org.apache.hadoop.hdfs.server.common.Storage.StorageDirectory;
-import org.apache.hadoop.hdfs.server.common.TextFileRegionProvider;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInPipeline;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;
@@ -68,7 +68,7 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
static class ProvidedBlockPoolSlice {
private ProvidedVolumeImpl providedVolume;
- private FileRegionProvider provider;
+ private BlockAliasMap<FileRegion> aliasMap;
private Configuration conf;
private String bpid;
private ReplicaMap bpVolumeMap;
@@ -77,29 +77,35 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
Configuration conf) {
this.providedVolume = volume;
bpVolumeMap = new ReplicaMap(new AutoCloseableLock());
- Class<? extends FileRegionProvider> fmt =
- conf.getClass(DFSConfigKeys.DFS_PROVIDER_CLASS,
- TextFileRegionProvider.class, FileRegionProvider.class);
- provider = ReflectionUtils.newInstance(fmt, conf);
+ Class<? extends BlockAliasMap> fmt =
+ conf.getClass(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_CLASS,
+ TextFileRegionAliasMap.class, BlockAliasMap.class);
+ aliasMap = ReflectionUtils.newInstance(fmt, conf);
this.conf = conf;
this.bpid = bpid;
bpVolumeMap.initBlockPool(bpid);
- LOG.info("Created provider: " + provider.getClass());
+ LOG.info("Created alias map using class: " + aliasMap.getClass());
}
- FileRegionProvider getFileRegionProvider() {
- return provider;
+ BlockAliasMap<FileRegion> getBlockAliasMap() {
+ return aliasMap;
}
@VisibleForTesting
- void setFileRegionProvider(FileRegionProvider newProvider) {
- this.provider = newProvider;
+ void setFileRegionProvider(BlockAliasMap<FileRegion> blockAliasMap) {
+ this.aliasMap = blockAliasMap;
}
public void getVolumeMap(ReplicaMap volumeMap,
RamDiskReplicaTracker ramDiskReplicaMap, FileSystem remoteFS)
throws IOException {
- Iterator<FileRegion> iter = provider.iterator();
+ BlockAliasMap.Reader<FileRegion> reader = aliasMap.getReader(null);
+ if (reader == null) {
+ LOG.warn("Got null reader from BlockAliasMap " + aliasMap
+ + "; no blocks will be populated");
+ return;
+ }
+ Iterator<FileRegion> iter = reader.iterator();
while (iter.hasNext()) {
FileRegion region = iter.next();
if (region.getBlockPoolId() != null
@@ -140,14 +146,20 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
public void compileReport(LinkedList<ScanInfo> report,
ReportCompiler reportCompiler)
throws IOException, InterruptedException {
- /* refresh the provider and return the list of blocks found.
+ /* refresh the aliasMap and return the list of blocks found.
* the assumption here is that the block ids in the external
* block map, after the refresh, are consistent with those
* from before the refresh, i.e., for blocks which did not change,
* the ids remain the same.
*/
- provider.refresh();
- Iterator<FileRegion> iter = provider.iterator();
+ aliasMap.refresh();
+ BlockAliasMap.Reader<FileRegion> reader = aliasMap.getReader(null);
+ if (reader == null) {
+ LOG.warn("Got null reader from BlockAliasMap " + aliasMap
+ + "; no blocks will be populated in scan report");
+ return;
+ }
+ Iterator<FileRegion> iter = reader.iterator();
while(iter.hasNext()) {
reportCompiler.throttle();
FileRegion region = iter.next();
@@ -284,15 +296,15 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
private String bpid;
private String name;
- private FileRegionProvider provider;
+ private BlockAliasMap<FileRegion> blockAliasMap;
private Iterator<FileRegion> blockIterator;
private ProvidedBlockIteratorState state;
ProviderBlockIteratorImpl(String bpid, String name,
- FileRegionProvider provider) {
+ BlockAliasMap<FileRegion> blockAliasMap) {
this.bpid = bpid;
this.name = name;
- this.provider = provider;
+ this.blockAliasMap = blockAliasMap;
rewind();
}
@@ -330,7 +342,17 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
@Override
public void rewind() {
- blockIterator = provider.iterator();
+ BlockAliasMap.Reader<FileRegion> reader = null;
+ try {
+ reader = blockAliasMap.getReader(null);
+ } catch (IOException e) {
+ LOG.warn("Exception in getting reader from provided alias map", e);
+ }
+ if (reader != null) {
+ blockIterator = reader.iterator();
+ } else {
+ blockIterator = null;
+ }
state = new ProvidedBlockIteratorState();
}
@@ -372,14 +394,14 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
@Override
public BlockIterator newBlockIterator(String bpid, String name) {
return new ProviderBlockIteratorImpl(bpid, name,
- bpSlices.get(bpid).getFileRegionProvider());
+ bpSlices.get(bpid).getBlockAliasMap());
}
@Override
public BlockIterator loadBlockIterator(String bpid, String name)
throws IOException {
ProviderBlockIteratorImpl iter = new ProviderBlockIteratorImpl(bpid, name,
- bpSlices.get(bpid).getFileRegionProvider());
+ bpSlices.get(bpid).getBlockAliasMap());
iter.load();
return iter;
}
@@ -425,8 +447,8 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
}
@VisibleForTesting
- FileRegionProvider getFileRegionProvider(String bpid) throws IOException {
- return getProvidedBlockPoolSlice(bpid).getFileRegionProvider();
+ BlockAliasMap<FileRegion> getBlockFormat(String bpid) throws IOException {
+ return getProvidedBlockPoolSlice(bpid).getBlockAliasMap();
}
@Override
@@ -571,12 +593,12 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
}
@VisibleForTesting
- void setFileRegionProvider(String bpid, FileRegionProvider provider)
- throws IOException {
+ void setFileRegionProvider(String bpid,
+ BlockAliasMap<FileRegion> blockAliasMap) throws IOException {
ProvidedBlockPoolSlice bp = bpSlices.get(bpid);
if (bp == null) {
throw new IOException("block pool " + bpid + " is not found");
}
- bp.setFileRegionProvider(provider);
+ bp.setFileRegionProvider(blockAliasMap);
}
}
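The null-reader handling this hunk adds to `getVolumeMap`, `compileReport`, and `rewind` follows one pattern: a missing reader means there are no provided blocks to enumerate. A dependency-free sketch of that pattern (hypothetical names, standing in for the patch's `BlockAliasMap.Reader<FileRegion>` types):

```java
import java.util.Collections;
import java.util.Iterator;

// Generic sketch of the null-reader handling added in ProvidedVolumeImpl:
// when the alias map hands back no reader, iterate over nothing rather
// than dereferencing null.
public final class NullSafeReaderDemo {

  // Stand-in for BlockAliasMap.Reader<FileRegion>.
  interface Reader<T> {
    Iterator<T> iterator();
  }

  // Returns the reader's iterator, or an empty iterator when the reader is
  // null, mirroring the "no blocks will be populated" branches above.
  static <T> Iterator<T> regionsOf(Reader<T> reader) {
    if (reader == null) {
      return Collections.<T>emptyIterator();
    }
    return reader.iterator();
  }

  public static void main(String[] args) {
    Iterator<String> none = regionsOf(null);
    System.out.println(none.hasNext()); // a null reader yields no regions
  }
}
```

The patch itself logs a warning and returns (or leaves `blockIterator` null) instead of substituting an empty iterator; the effect on callers is the same — no provided blocks are reported.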
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 0f1407a..835d8c4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -4630,26 +4630,6 @@
</property>
<property>
- <name>dfs.namenode.block.provider.class</name>
- <value>org.apache.hadoop.hdfs.server.blockmanagement.BlockFormatProvider</value>
- <description>
- The class that is used to load provided blocks in the Namenode.
- </description>
- </property>
-
- <property>
- <name>dfs.provider.class</name>
- <value>org.apache.hadoop.hdfs.server.common.TextFileRegionProvider</value>
- <description>
- The class that is used to load information about blocks stored in
- provided storages.
- org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TextFileRegionProvider
- is used as the default, which expects the blocks to be specified
- using a delimited text file.
- </description>
- </property>
-
- <property>
<name>dfs.provided.df.class</name>
<value>org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.DefaultProvidedVolumeDF</value>
<description>
@@ -4666,12 +4646,12 @@
</property>
<property>
- <name>dfs.provided.blockformat.class</name>
- <value>org.apache.hadoop.hdfs.server.common.TextFileRegionFormat</value>
+ <name>dfs.provided.aliasmap.class</name>
+ <value>org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap</value>
<description>
The class that is used to specify the input format of the blocks on
provided storages. The default is
- org.apache.hadoop.hdfs.server.common.TextFileRegionFormat which uses
+ org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap which uses
file regions to describe blocks. The file regions are specified as a
delimited text file. Each file region is a 6-tuple containing the
block id, remote file path, offset into file, length of block, the
@@ -4681,7 +4661,7 @@
</property>
<property>
- <name>dfs.provided.textprovider.delimiter</name>
+ <name>dfs.provided.aliasmap.text.delimiter</name>
<value>,</value>
<description>
The delimiter used when the provided block map is specified as
@@ -4690,7 +4670,7 @@
</property>
<property>
- <name>dfs.provided.textprovider.read.path</name>
+ <name>dfs.provided.aliasmap.text.read.path</name>
<value></value>
<description>
The path specifying the provided block map as a text file, specified as
@@ -4699,7 +4679,7 @@
</property>
<property>
- <name>dfs.provided.textprovider.read.codec</name>
+ <name>dfs.provided.aliasmap.text.codec</name>
<value></value>
<description>
The codec used to decompress the provided block map.
@@ -4707,7 +4687,7 @@
</property>
<property>
- <name>dfs.provided.textprovider.write.path</name>
+ <name>dfs.provided.aliasmap.text.write.path</name>
<value></value>
<description>
The path to which the provided block map should be written as a text
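Putting the renamed keys together, a deployment would carry them in hdfs-site.xml along these lines (an illustrative fragment, not part of the patch; the read path is a placeholder, and omitted or empty values fall back to the defaults shown above):

```xml
<configuration>
  <property>
    <name>dfs.provided.aliasmap.class</name>
    <value>org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap</value>
  </property>
  <property>
    <name>dfs.provided.aliasmap.text.delimiter</name>
    <value>,</value>
  </property>
  <property>
    <name>dfs.provided.aliasmap.text.read.path</name>
    <value>file:///tmp/blocks.csv</value>
  </property>
</configuration>
```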
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
index 2296c82..89741b5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
@@ -17,20 +17,19 @@
*/
package org.apache.hadoop.hdfs.server.blockmanagement;
-import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.HdfsConfiguration;
-import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
+import org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl;
import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
import org.apache.hadoop.hdfs.util.RwLock;
import org.junit.Before;
import org.junit.Test;
import java.io.IOException;
-import java.util.Iterator;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;
@@ -47,37 +46,6 @@ public class TestProvidedStorageMap {
private RwLock nameSystemLock;
private String providedStorageID;
- static class TestBlockProvider extends BlockProvider
- implements Configurable {
-
- @Override
- public void setConf(Configuration conf) {
- }
-
- @Override
- public Configuration getConf() {
- return null;
- }
-
- @Override
- public Iterator<Block> iterator() {
- return new Iterator<Block>() {
- @Override
- public boolean hasNext() {
- return false;
- }
- @Override
- public Block next() {
- return null;
- }
- @Override
- public void remove() {
- throw new UnsupportedOperationException();
- }
- };
- }
- }
-
@Before
public void setup() {
providedStorageID = DFSConfigKeys.DFS_PROVIDER_STORAGEUUID_DEFAULT;
@@ -85,8 +53,9 @@ public class TestProvidedStorageMap {
conf.set(DFSConfigKeys.DFS_PROVIDER_STORAGEUUID,
providedStorageID);
conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_PROVIDED_ENABLED, true);
- conf.setClass(DFSConfigKeys.DFS_NAMENODE_BLOCK_PROVIDER_CLASS,
- TestBlockProvider.class, BlockProvider.class);
+ conf.setClass(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_CLASS,
+ TestProvidedImpl.TestFileRegionBlockAliasMap.class,
+ BlockAliasMap.class);
bm = mock(BlockManager.class);
nameSystemLock = mock(RwLock.class);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestTextBlockFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestTextBlockFormat.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestTextBlockFormat.java
deleted file mode 100644
index eaaac22..0000000
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestTextBlockFormat.java
+++ /dev/null
@@ -1,160 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hdfs.server.common;
-
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.OutputStreamWriter;
-import java.util.Iterator;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hdfs.DFSConfigKeys;
-import org.apache.hadoop.hdfs.server.common.TextFileRegionFormat.*;
-import org.apache.hadoop.io.DataInputBuffer;
-import org.apache.hadoop.io.DataOutputBuffer;
-import org.apache.hadoop.io.compress.CompressionCodec;
-
-import org.junit.Test;
-import static org.junit.Assert.*;
-
-/**
- * Test for the text based block format for provided block maps.
- */
-public class TestTextBlockFormat {
-
- static final Path OUTFILE = new Path("hdfs://dummyServer:0000/dummyFile.txt");
-
- void check(TextWriter.Options opts, final Path vp,
- final Class<? extends CompressionCodec> vc) throws IOException {
- TextFileRegionFormat mFmt = new TextFileRegionFormat() {
- @Override
- public TextWriter createWriter(Path file, CompressionCodec codec,
- String delim, Configuration conf) throws IOException {
- assertEquals(vp, file);
- if (null == vc) {
- assertNull(codec);
- } else {
- assertEquals(vc, codec.getClass());
- }
- return null; // ignored
- }
- };
- mFmt.getWriter(opts);
- }
-
- @Test
- public void testWriterOptions() throws Exception {
- TextWriter.Options opts = TextWriter.defaults();
- assertTrue(opts instanceof WriterOptions);
- WriterOptions wopts = (WriterOptions) opts;
- Path def = new Path(DFSConfigKeys.DFS_PROVIDED_BLOCK_MAP_PATH_DEFAULT);
- assertEquals(def, wopts.getFile());
- assertNull(wopts.getCodec());
-
- opts.filename(OUTFILE);
- check(opts, OUTFILE, null);
-
- opts.filename(OUTFILE);
- opts.codec("gzip");
- Path cp = new Path(OUTFILE.getParent(), OUTFILE.getName() + ".gz");
- check(opts, cp, org.apache.hadoop.io.compress.GzipCodec.class);
-
- }
-
- @Test
- public void testCSVReadWrite() throws Exception {
- final DataOutputBuffer out = new DataOutputBuffer();
- FileRegion r1 = new FileRegion(4344L, OUTFILE, 0, 1024);
- FileRegion r2 = new FileRegion(4345L, OUTFILE, 1024, 1024);
- FileRegion r3 = new FileRegion(4346L, OUTFILE, 2048, 512);
- try (TextWriter csv = new TextWriter(new OutputStreamWriter(out), ",")) {
- csv.store(r1);
- csv.store(r2);
- csv.store(r3);
- }
- Iterator<FileRegion> i3;
- try (TextReader csv = new TextReader(null, null, null, ",") {
- @Override
- public InputStream createStream() {
- DataInputBuffer in = new DataInputBuffer();
- in.reset(out.getData(), 0, out.getLength());
- return in;
- }}) {
- Iterator<FileRegion> i1 = csv.iterator();
- assertEquals(r1, i1.next());
- Iterator<FileRegion> i2 = csv.iterator();
- assertEquals(r1, i2.next());
- assertEquals(r2, i2.next());
- assertEquals(r3, i2.next());
- assertEquals(r2, i1.next());
- assertEquals(r3, i1.next());
-
- assertFalse(i1.hasNext());
- assertFalse(i2.hasNext());
- i3 = csv.iterator();
- }
- try {
- i3.next();
- } catch (IllegalStateException e) {
- return;
- }
- fail("Invalid iterator");
- }
-
- @Test
- public void testCSVReadWriteTsv() throws Exception {
- final DataOutputBuffer out = new DataOutputBuffer();
- FileRegion r1 = new FileRegion(4344L, OUTFILE, 0, 1024);
- FileRegion r2 = new FileRegion(4345L, OUTFILE, 1024, 1024);
- FileRegion r3 = new FileRegion(4346L, OUTFILE, 2048, 512);
- try (TextWriter csv = new TextWriter(new OutputStreamWriter(out), "\t")) {
- csv.store(r1);
- csv.store(r2);
- csv.store(r3);
- }
- Iterator<FileRegion> i3;
- try (TextReader csv = new TextReader(null, null, null, "\t") {
- @Override
- public InputStream createStream() {
- DataInputBuffer in = new DataInputBuffer();
- in.reset(out.getData(), 0, out.getLength());
- return in;
- }}) {
- Iterator<FileRegion> i1 = csv.iterator();
- assertEquals(r1, i1.next());
- Iterator<FileRegion> i2 = csv.iterator();
- assertEquals(r1, i2.next());
- assertEquals(r2, i2.next());
- assertEquals(r3, i2.next());
- assertEquals(r2, i1.next());
- assertEquals(r3, i1.next());
-
- assertFalse(i1.hasNext());
- assertFalse(i2.hasNext());
- i3 = csv.iterator();
- }
- try {
- i3.next();
- } catch (IllegalStateException e) {
- return;
- }
- fail("Invalid iterator");
- }
-
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/926ead5e/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestTextBlockAliasMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestTextBlockAliasMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestTextBlockAliasMap.java
new file mode 100644
index 0000000..79308a3
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestTextBlockAliasMap.java
@@ -0,0 +1,161 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.common.blockaliasmap.impl;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStreamWriter;
+import java.util.Iterator;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap.*;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.apache.hadoop.io.DataInputBuffer;
+import org.apache.hadoop.io.DataOutputBuffer;
+import org.apache.hadoop.io.compress.CompressionCodec;
+
+import org.junit.Test;
+import static org.junit.Assert.*;
+
+/**
+ * Test for the text based block format for provided block maps.
+ */
+public class TestTextBlockAliasMap {
+
+ static final Path OUTFILE = new Path("hdfs://dummyServer:0000/dummyFile.txt");
+
+ void check(TextWriter.Options opts, final Path vp,
+ final Class<? extends CompressionCodec> vc) throws IOException {
+ TextFileRegionAliasMap mFmt = new TextFileRegionAliasMap() {
+ @Override
+ public TextWriter createWriter(Path file, CompressionCodec codec,
+ String delim, Configuration conf) throws IOException {
+ assertEquals(vp, file);
+ if (null == vc) {
+ assertNull(codec);
+ } else {
+ assertEquals(vc, codec.getClass());
+ }
+ return null; // ignored
+ }
+ };
+ mFmt.getWriter(opts);
+ }
+
+ @Test
+ public void testWriterOptions() throws Exception {
+ TextWriter.Options opts = TextWriter.defaults();
+ assertTrue(opts instanceof WriterOptions);
+ WriterOptions wopts = (WriterOptions) opts;
+ Path def = new Path(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_TEXT_PATH_DEFAULT);
+ assertEquals(def, wopts.getFile());
+ assertNull(wopts.getCodec());
+
+ opts.filename(OUTFILE);
+ check(opts, OUTFILE, null);
+
+ opts.filename(OUTFILE);
+ opts.codec("gzip");
+ Path cp = new Path(OUTFILE.getParent(), OUTFILE.getName() + ".gz");
+ check(opts, cp, org.apache.hadoop.io.compress.GzipCodec.class);
+
+ }
+
+ @Test
+ public void testCSVReadWrite() throws Exception {
+ final DataOutputBuffer out = new DataOutputBuffer();
+ FileRegion r1 = new FileRegion(4344L, OUTFILE, 0, 1024);
+ FileRegion r2 = new FileRegion(4345L, OUTFILE, 1024, 1024);
+ FileRegion r3 = new FileRegion(4346L, OUTFILE, 2048, 512);
+ try (TextWriter csv = new TextWriter(new OutputStreamWriter(out), ",")) {
+ csv.store(r1);
+ csv.store(r2);
+ csv.store(r3);
+ }
+ Iterator<FileRegion> i3;
+ try (TextReader csv = new TextReader(null, null, null, ",") {
+ @Override
+ public InputStream createStream() {
+ DataInputBuffer in = new DataInputBuffer();
+ in.reset(out.getData(), 0, out.getLength());
+ return in;
+ }}) {
+ Iterator<FileRegion> i1 = csv.iterator();
+ assertEquals(r1, i1.next());
+ Iterator<FileRegion> i2 = csv.iterator();
+ assertEquals(r1, i2.next());
+ assertEquals(r2, i2.next());
+ assertEquals(r3, i2.next());
+ assertEquals(r2, i1.next());
+ assertEquals(r3, i1.next());
+
+ assertFalse(i1.hasNext());
+ assertFalse(i2.hasNext());
+ i3 = csv.iterator();
+ }
+ try {
+ i3.next();
+ } catch (IllegalStateException e) {
+ return;
+ }
+ fail("Invalid iterator");
+ }
+
+ @Test
+ public void testCSVReadWriteTsv() throws Exception {
+ final DataOutputBuffer out = new DataOutputBuffer();
+ FileRegion r1 = new FileRegion(4344L, OUTFILE, 0, 1024);
+ FileRegion r2 = new FileRegion(4345L, OUTFILE, 1024, 1024);
+ FileRegion r3 = new FileRegion(4346L, OUTFILE, 2048, 512);
+ try (TextWriter csv = new TextWriter(new OutputStreamWriter(out), "\t")) {
+ csv.store(r1);
+ csv.store(r2);
+ csv.store(r3);
+ }
+ Iterator<FileRegion> i3;
+ try (TextReader csv = new TextReader(null, null, null, "\t") {
+ @Override
+ public InputStream createStream() {
+ DataInputBuffer in = new DataInputBuffer();
+ in.reset(out.getData(), 0, out.getLength());
+ return in;
+ }}) {
+ Iterator<FileRegion> i1 = csv.iterator();
+ assertEquals(r1, i1.next());
+ Iterator<FileRegion> i2 = csv.iterator();
+ assertEquals(r1, i2.next());
+ assertEquals(r2, i2.next());
+ assertEquals(r3, i2.next());
+ assertEquals(r2, i1.next());
+ assertEquals(r3, i1.next());
+
+ assertFalse(i1.hasNext());
+ assertFalse(i2.hasNext());
+ i3 = csv.iterator();
+ }
+ try {
+ i3.next();
+ } catch (IllegalStateException e) {
+ return;
+ }
+ fail("Invalid iterator");
+ }
+
+}
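The tests above exercise a CSV round trip through TextWriter/TextReader: regions are stored with a delimiter, then read back in order. A minimal standalone sketch of that round trip is below; the class and field layout (blockId, offset, length) are illustrative approximations, not the actual Hadoop TextFileRegionAliasMap types.

```java
import java.util.*;

public class CsvRoundTripSketch {
    // Write delimited records to a string buffer (stand-in for TextWriter.store).
    static String store(List<long[]> regions, String delim) {
        StringBuilder sb = new StringBuilder();
        for (long[] r : regions) {
            sb.append(r[0]).append(delim).append(r[1]).append(delim)
              .append(r[2]).append('\n');
        }
        return sb.toString();
    }

    // Parse the records back (stand-in for TextReader.iterator()).
    static List<long[]> load(String data, String delim) {
        List<long[]> out = new ArrayList<>();
        for (String line : data.split("\n")) {
            if (line.isEmpty()) {
                continue;
            }
            String[] f = line.split(java.util.regex.Pattern.quote(delim));
            out.add(new long[] {Long.parseLong(f[0]), Long.parseLong(f[1]),
                Long.parseLong(f[2])});
        }
        return out;
    }

    public static void main(String[] args) {
        List<long[]> regions = Arrays.asList(
            new long[] {4344L, 0L, 1024L},
            new long[] {4345L, 1024L, 1024L});
        // Round trip: what we store is what we load, in order.
        List<long[]> back = load(store(regions, ","), ",");
        System.out.println(back.size());
    }
}
```

The same shape works for the tab-delimited variant in testCSVReadWriteTsv by passing "\t" as the delimiter.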
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[41/50] [abbrv] hadoop git commit: HDFS-12777. [READ] Reduce memory and CPU footprint for PROVIDED volumes.
Posted by vi...@apache.org.
HDFS-12777. [READ] Reduce memory and CPU footprint for PROVIDED volumes.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4310e059
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4310e059
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4310e059
Branch: refs/heads/HDFS-9806
Commit: 4310e059d02a28dc14b9d0a19612873569a01e7e
Parents: ec6f48f
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Fri Nov 10 10:19:33 2017 -0800
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:59 2017 -0800
----------------------------------------------------------------------
.../hdfs/server/datanode/DirectoryScanner.java | 4 +
.../datanode/FinalizedProvidedReplica.java | 8 ++
.../hdfs/server/datanode/ProvidedReplica.java | 77 +++++++++++++++++++-
.../hdfs/server/datanode/ReplicaBuilder.java | 37 +++++++++-
.../fsdataset/impl/ProvidedVolumeImpl.java | 30 +++++++-
.../fsdataset/impl/TestProvidedImpl.java | 76 ++++++++++++-------
6 files changed, 196 insertions(+), 36 deletions(-)
----------------------------------------------------------------------
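The core memory-reduction idea in this patch is splitting each replica's remote path into a shared Path prefix plus a short String suffix (see getSuffix in ProvidedVolumeImpl below), so the long common prefix is stored once per volume rather than once per replica. A standalone sketch of that split, approximating the patch's string handling (the real code operates on org.apache.hadoop.fs.Path):

```java
public class PathSuffixSketch {
    // Return the part of fullPath after prefix; if prefix does not match,
    // fall back to returning the full path, as the patch does.
    static String getSuffix(String prefix, String fullPath) {
        if (!fullPath.startsWith(prefix)) {
            return fullPath;
        }
        String suffix = fullPath.substring(prefix.length());
        if (suffix.startsWith("/")) {
            suffix = suffix.substring(1);
        }
        return suffix;
    }

    public static void main(String[] args) {
        // One shared prefix, many short suffixes: per-replica storage
        // scales with the suffix length instead of the full path length.
        System.out.println(getSuffix("file:///A/B/C", "file:///A/B/C/D.txt")); // D.txt
        System.out.println(getSuffix("file:///X/", "file:///A/B.txt")); // full path kept
    }
}
```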
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4310e059/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
index 3b6d06c..8fb8551 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
@@ -530,6 +530,10 @@ public class DirectoryScanner implements Runnable {
new HashMap<Integer, Future<ScanInfoPerBlockPool>>();
for (int i = 0; i < volumes.size(); i++) {
+ if (volumes.get(i).getStorageType() == StorageType.PROVIDED) {
+ // Disable scanning PROVIDED volumes to keep overhead low
+ continue;
+ }
ReportCompiler reportCompiler =
new ReportCompiler(datanode, volumes.get(i));
Future<ScanInfoPerBlockPool> result =
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4310e059/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
index e23d6be..bcc9a38 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FinalizedProvidedReplica.java
@@ -21,6 +21,7 @@ import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
import org.apache.hadoop.hdfs.server.protocol.ReplicaRecoveryInfo;
@@ -37,6 +38,13 @@ public class FinalizedProvidedReplica extends ProvidedReplica {
remoteFS);
}
+ public FinalizedProvidedReplica(long blockId, Path pathPrefix,
+ String pathSuffix, long fileOffset, long blockLen, long genStamp,
+ FsVolumeSpi volume, Configuration conf, FileSystem remoteFS) {
+ super(blockId, pathPrefix, pathSuffix, fileOffset, blockLen,
+ genStamp, volume, conf, remoteFS);
+ }
+
@Override
public ReplicaState getState() {
return ReplicaState.FINALIZED;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4310e059/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
index 2b3bd13..8681421 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ProvidedReplica.java
@@ -23,6 +23,7 @@ import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
+import com.google.common.annotations.VisibleForTesting;
import org.apache.commons.io.input.BoundedInputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
@@ -51,18 +52,23 @@ public abstract class ProvidedReplica extends ReplicaInfo {
static final byte[] NULL_CHECKSUM_ARRAY =
FsDatasetUtil.createNullChecksumByteArray();
private URI fileURI;
+ private Path pathPrefix;
+ private String pathSuffix;
private long fileOffset;
private Configuration conf;
private FileSystem remoteFS;
/**
* Constructor.
+ *
* @param blockId block id
* @param fileURI remote URI this block is to be read from
* @param fileOffset the offset in the remote URI
* @param blockLen the length of the block
* @param genStamp the generation stamp of the block
* @param volume the volume this block belongs to
+ * @param conf the configuration
+ * @param remoteFS reference to the remote filesystem to use for this replica.
*/
public ProvidedReplica(long blockId, URI fileURI, long fileOffset,
long blockLen, long genStamp, FsVolumeSpi volume, Configuration conf,
@@ -85,23 +91,86 @@ public abstract class ProvidedReplica extends ReplicaInfo {
}
}
+ /**
+ * Constructor.
+ *
+ * @param blockId block id
+ * @param pathPrefix A prefix of the {@link Path} associated with this replica
+ * on the remote {@link FileSystem}.
+ * @param pathSuffix A suffix of the {@link Path} associated with this replica
+ * on the remote {@link FileSystem}. Resolving the {@code pathSuffix}
+ * against the {@code pathPrefix} should provide the exact
+ * {@link Path} of the data associated with this replica on the
+ * remote {@link FileSystem}.
+ * @param fileOffset the offset in the remote URI
+ * @param blockLen the length of the block
+ * @param genStamp the generation stamp of the block
+ * @param volume the volume this block belongs to
+ * @param conf the configuration
+ * @param remoteFS reference to the remote filesystem to use for this replica.
+ */
+ public ProvidedReplica(long blockId, Path pathPrefix, String pathSuffix,
+ long fileOffset, long blockLen, long genStamp, FsVolumeSpi volume,
+ Configuration conf, FileSystem remoteFS) {
+ super(volume, blockId, blockLen, genStamp);
+ this.fileURI = null;
+ this.pathPrefix = pathPrefix;
+ this.pathSuffix = pathSuffix;
+ this.fileOffset = fileOffset;
+ this.conf = conf;
+ if (remoteFS != null) {
+ this.remoteFS = remoteFS;
+ } else {
+ LOG.warn(
+ "Creating a reference to the remote FS for provided block " + this);
+ try {
+ this.remoteFS = FileSystem.get(pathPrefix.toUri(), this.conf);
+ } catch (IOException e) {
+ LOG.warn("Failed to obtain filesystem for " + pathPrefix);
+ this.remoteFS = null;
+ }
+ }
+ }
+
public ProvidedReplica(ProvidedReplica r) {
super(r);
this.fileURI = r.fileURI;
this.fileOffset = r.fileOffset;
this.conf = r.conf;
this.remoteFS = r.remoteFS;
+ this.pathPrefix = r.pathPrefix;
+ this.pathSuffix = r.pathSuffix;
}
@Override
public URI getBlockURI() {
- return this.fileURI;
+ return getRemoteURI();
+ }
+
+ @VisibleForTesting
+ public String getPathSuffix() {
+ return pathSuffix;
+ }
+
+ @VisibleForTesting
+ public Path getPathPrefix() {
+ return pathPrefix;
+ }
+
+ private URI getRemoteURI() {
+ if (fileURI != null) {
+ return fileURI;
+ } else if (pathPrefix == null) {
+ return new Path(pathSuffix).toUri();
+ } else {
+ return new Path(pathPrefix, pathSuffix).toUri();
+ }
}
@Override
public InputStream getDataInputStream(long seekOffset) throws IOException {
if (remoteFS != null) {
- FSDataInputStream ins = remoteFS.open(new Path(fileURI));
+ FSDataInputStream ins = remoteFS.open(new Path(getRemoteURI()));
ins.seek(fileOffset + seekOffset);
return new BoundedInputStream(
new FSDataInputStream(ins), getBlockDataLength());
@@ -132,7 +201,7 @@ public abstract class ProvidedReplica extends ReplicaInfo {
public boolean blockDataExists() {
if(remoteFS != null) {
try {
- return remoteFS.exists(new Path(fileURI));
+ return remoteFS.exists(new Path(getRemoteURI()));
} catch (IOException e) {
return false;
}
@@ -220,7 +289,7 @@ public abstract class ProvidedReplica extends ReplicaInfo {
public int compareWith(ScanInfo info) {
//local scanning cannot find any provided blocks.
if (info.getFileRegion().equals(
- new FileRegion(this.getBlockId(), new Path(fileURI),
+ new FileRegion(this.getBlockId(), new Path(getRemoteURI()),
fileOffset, this.getNumBytes(), this.getGenerationStamp()))) {
return 0;
} else {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4310e059/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
index c5cb6a5..de68e2d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaBuilder.java
@@ -21,6 +21,7 @@ import java.io.File;
import java.net.URI;
import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.StorageType;
@@ -52,6 +53,8 @@ public class ReplicaBuilder {
private Configuration conf;
private FileRegion fileRegion;
private FileSystem remoteFS;
+ private String pathSuffix;
+ private Path pathPrefix;
public ReplicaBuilder(ReplicaState state) {
volume = null;
@@ -145,6 +148,28 @@ public class ReplicaBuilder {
return this;
}
+ /**
+ * Set the suffix of the {@link Path} associated with the replica.
+ * Intended to be used only for {@link ProvidedReplica}s.
+ * @param suffix the path suffix.
+ * @return the builder with the path suffix set.
+ */
+ public ReplicaBuilder setPathSuffix(String suffix) {
+ this.pathSuffix = suffix;
+ return this;
+ }
+
+ /**
+ * Set the prefix of the {@link Path} associated with the replica.
+ * Intended to be used only for {@link ProvidedReplica}s.
+ * @param prefix the path prefix.
+ * @return the builder with the path prefix set.
+ */
+ public ReplicaBuilder setPathPrefix(Path prefix) {
+ this.pathPrefix = prefix;
+ return this;
+ }
+
public LocalReplicaInPipeline buildLocalReplicaInPipeline()
throws IllegalArgumentException {
LocalReplicaInPipeline info = null;
@@ -275,14 +300,20 @@ public class ReplicaBuilder {
throw new IllegalArgumentException("Finalized PROVIDED replica " +
"cannot be constructed from another replica");
}
- if (fileRegion == null && uri == null) {
+ if (fileRegion == null && uri == null &&
+ (pathPrefix == null || pathSuffix == null)) {
throw new IllegalArgumentException(
"Trying to construct a provided replica on " + volume +
" without enough information");
}
if (fileRegion == null) {
- info = new FinalizedProvidedReplica(blockId, uri, offset,
- length, genStamp, volume, conf, remoteFS);
+ if (uri != null) {
+ info = new FinalizedProvidedReplica(blockId, uri, offset,
+ length, genStamp, volume, conf, remoteFS);
+ } else {
+ info = new FinalizedProvidedReplica(blockId, pathPrefix, pathSuffix,
+ offset, length, genStamp, volume, conf, remoteFS);
+ }
} else {
info = new FinalizedProvidedReplica(fileRegion.getBlock().getBlockId(),
fileRegion.getPath().toUri(),
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4310e059/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
index 092672d..d103b64 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
@@ -29,6 +29,7 @@ import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.protocol.Block;
@@ -65,6 +66,29 @@ import org.apache.hadoop.util.Time;
*/
public class ProvidedVolumeImpl extends FsVolumeImpl {
+ /**
+ * Get a suffix of the full path, excluding the given prefix.
+ *
+ * @param prefix a prefix of the path.
+ * @param fullPath the full path whose suffix is needed.
+ * @return the suffix of the path, which when resolved against {@code prefix}
+ * gets back the {@code fullPath}.
+ */
+ @VisibleForTesting
+ protected static String getSuffix(final Path prefix, final Path fullPath) {
+ String prefixStr = prefix.toString();
+ String pathStr = fullPath.toString();
+ if (!pathStr.startsWith(prefixStr)) {
+ LOG.debug("Path {} is not a prefix of the path {}", prefix, fullPath);
+ return pathStr;
+ }
+ String suffix = pathStr.replaceFirst("^" + prefixStr, "");
+ if (suffix.startsWith("/")) {
+ suffix = suffix.substring(1);
+ }
+ return suffix;
+ }
+
static class ProvidedBlockPoolSlice {
private ProvidedVolumeImpl providedVolume;
@@ -106,15 +130,19 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
return;
}
Iterator<FileRegion> iter = reader.iterator();
+ Path blockPrefixPath = new Path(providedVolume.getBaseURI());
while (iter.hasNext()) {
FileRegion region = iter.next();
if (region.getBlockPoolId() != null
&& region.getBlockPoolId().equals(bpid)
&& containsBlock(providedVolume.baseURI,
region.getPath().toUri())) {
+ String blockSuffix =
+ getSuffix(blockPrefixPath, new Path(region.getPath().toUri()));
ReplicaInfo newReplica = new ReplicaBuilder(ReplicaState.FINALIZED)
.setBlockId(region.getBlock().getBlockId())
- .setURI(region.getPath().toUri())
+ .setPathPrefix(blockPrefixPath)
+ .setPathSuffix(blockSuffix)
.setOffset(region.getOffset())
.setLength(region.getBlock().getNumBytes())
.setGenerationStamp(region.getBlock().getGenerationStamp())
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4310e059/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
index 40d77f7a..ecab06b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
@@ -62,7 +62,7 @@ import org.apache.hadoop.hdfs.server.datanode.BlockScanner;
import org.apache.hadoop.hdfs.server.datanode.DNConf;
import org.apache.hadoop.hdfs.server.datanode.DataNode;
import org.apache.hadoop.hdfs.server.datanode.DataStorage;
-import org.apache.hadoop.hdfs.server.datanode.DirectoryScanner;
+import org.apache.hadoop.hdfs.server.datanode.ProvidedReplica;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;
import org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry;
import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
@@ -509,33 +509,6 @@ public class TestProvidedImpl {
}
}
- @Test
- public void testRefresh() throws IOException {
- conf.setInt(DFSConfigKeys.DFS_DATANODE_DIRECTORYSCAN_THREADS_KEY, 1);
- for (int i = 0; i < providedVolumes.size(); i++) {
- ProvidedVolumeImpl vol = (ProvidedVolumeImpl) providedVolumes.get(i);
- TestFileRegionBlockAliasMap testBlockFormat =
- (TestFileRegionBlockAliasMap) vol
- .getBlockFormat(BLOCK_POOL_IDS[CHOSEN_BP_ID]);
- //equivalent to two new blocks appearing
- testBlockFormat.setBlockCount(NUM_PROVIDED_BLKS + 2);
- //equivalent to deleting the first block
- testBlockFormat.setMinBlkId(MIN_BLK_ID + 1);
-
- DirectoryScanner scanner = new DirectoryScanner(datanode, dataset, conf);
- scanner.reconcile();
- ReplicaInfo info = dataset.getBlockReplica(
- BLOCK_POOL_IDS[CHOSEN_BP_ID], NUM_PROVIDED_BLKS + 1);
- //new replica should be added to the dataset
- assertTrue(info != null);
- try {
- info = dataset.getBlockReplica(BLOCK_POOL_IDS[CHOSEN_BP_ID], 0);
- } catch(Exception ex) {
- LOG.info("Exception expected: " + ex);
- }
- }
- }
-
private int getBlocksInProvidedVolumes(String basePath, int numBlocks,
int minBlockId) throws IOException {
TestFileRegionIterator fileRegionIterator =
@@ -621,4 +594,51 @@ public class TestProvidedImpl {
ProvidedVolumeImpl.containsBlock(new URI("/bucket1/dir1/"),
new URI("s3a:/bucket1/dir1/temp.txt")));
}
+
+ @Test
+ public void testProvidedReplicaSuffixExtraction() {
+ assertEquals("B.txt", ProvidedVolumeImpl.getSuffix(
+ new Path("file:///A/"), new Path("file:///A/B.txt")));
+ assertEquals("B/C.txt", ProvidedVolumeImpl.getSuffix(
+ new Path("file:///A/"), new Path("file:///A/B/C.txt")));
+ assertEquals("B/C/D.txt", ProvidedVolumeImpl.getSuffix(
+ new Path("file:///A/"), new Path("file:///A/B/C/D.txt")));
+ assertEquals("D.txt", ProvidedVolumeImpl.getSuffix(
+ new Path("file:///A/B/C/"), new Path("file:///A/B/C/D.txt")));
+ assertEquals("file:/A/B/C/D.txt", ProvidedVolumeImpl.getSuffix(
+ new Path("file:///X/B/C/"), new Path("file:///A/B/C/D.txt")));
+ assertEquals("D.txt", ProvidedVolumeImpl.getSuffix(
+ new Path("/A/B/C"), new Path("/A/B/C/D.txt")));
+ assertEquals("D.txt", ProvidedVolumeImpl.getSuffix(
+ new Path("/A/B/C/"), new Path("/A/B/C/D.txt")));
+
+ assertEquals("data/current.csv", ProvidedVolumeImpl.getSuffix(
+ new Path("wasb:///users/alice/"),
+ new Path("wasb:///users/alice/data/current.csv")));
+ assertEquals("current.csv", ProvidedVolumeImpl.getSuffix(
+ new Path("wasb:///users/alice/data"),
+ new Path("wasb:///users/alice/data/current.csv")));
+
+ assertEquals("wasb:/users/alice/data/current.csv",
+ ProvidedVolumeImpl.getSuffix(
+ new Path("wasb:///users/bob/"),
+ new Path("wasb:///users/alice/data/current.csv")));
+ }
+
+ @Test
+ public void testProvidedReplicaPrefix() throws Exception {
+ for (int i = 0; i < providedVolumes.size(); i++) {
+ FsVolumeImpl vol = providedVolumes.get(i);
+ ReplicaMap volumeMap = new ReplicaMap(new AutoCloseableLock());
+ vol.getVolumeMap(volumeMap, null);
+
+ Path expectedPrefix = new Path(
+ StorageLocation.normalizeFileURI(new File(providedBasePath).toURI()));
+ for (ReplicaInfo info : volumeMap
+ .replicas(BLOCK_POOL_IDS[CHOSEN_BP_ID])) {
+ ProvidedReplica pInfo = (ProvidedReplica) info;
+ assertEquals(expectedPrefix, pInfo.getPathPrefix());
+ }
+ }
+ }
}
[47/50] [abbrv] hadoop git commit: HDFS-12778. [READ] Report multiple locations for PROVIDED blocks
Posted by vi...@apache.org.
HDFS-12778. [READ] Report multiple locations for PROVIDED blocks
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1151f04a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1151f04a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1151f04a
Branch: refs/heads/HDFS-9806
Commit: 1151f04a701146cb40395bbfb88393dab5cf7704
Parents: ecb5602
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Tue Nov 21 14:54:57 2017 -0800
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:59 2017 -0800
----------------------------------------------------------------------
.../blockmanagement/ProvidedStorageMap.java | 149 +++++++------------
.../server/namenode/FixedBlockResolver.java | 3 +-
.../TestNameNodeProvidedImplementation.java | 127 +++++++++++-----
3 files changed, 151 insertions(+), 128 deletions(-)
----------------------------------------------------------------------
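This patch replaces the single shadow location for PROVIDED blocks with multiple real datanode locations: newLocatedBlock collects one DatanodeInfoWithStorage per storage and then, for PROVIDED blocks, keeps choosing additional distinct datanodes (tracked via excludedUUids) until the default replication is reached. A standalone sketch of that selection loop, with plain strings standing in for DatanodeDescriptor UUIDs:

```java
import java.util.*;

public class ProvidedLocationSketch {
    // Pick up to 'replication' distinct datanode ids from the candidates,
    // skipping ids already chosen (mirrors the excludedUUids bookkeeping).
    static List<String> chooseLocations(List<String> candidates, int replication) {
        Set<String> excluded = new HashSet<>();
        List<String> locs = new ArrayList<>();
        for (String dn : candidates) {
            if (locs.size() >= replication) {
                break;
            }
            if (excluded.add(dn)) { // add() returns false for duplicates
                locs.add(dn);
            }
        }
        return locs;
    }

    public static void main(String[] args) {
        List<String> dns = Arrays.asList("dn-1", "dn-2", "dn-2", "dn-3");
        // Duplicates are excluded; at most 'replication' locations are returned.
        System.out.println(chooseLocations(dns, 3));
    }
}
```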
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1151f04a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
index 2bc8faa..6fec977 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
@@ -35,7 +35,6 @@ import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.protocol.BlockListAsLongs;
import org.apache.hadoop.hdfs.protocol.DatanodeID;
-import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.DatanodeInfoWithStorage;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
@@ -72,6 +71,7 @@ public class ProvidedStorageMap {
private final DatanodeStorageInfo providedStorageInfo;
private boolean providedEnabled;
private long capacity;
+ private int defaultReplication;
ProvidedStorageMap(RwLock lock, BlockManager bm, Configuration conf)
throws IOException {
@@ -95,6 +95,8 @@ public class ProvidedStorageMap {
storageId, State.NORMAL, StorageType.PROVIDED);
providedDescriptor = new ProvidedDescriptor();
providedStorageInfo = providedDescriptor.createProvidedStorage(ds);
+ this.defaultReplication = conf.getInt(DFSConfigKeys.DFS_REPLICATION_KEY,
+ DFSConfigKeys.DFS_REPLICATION_DEFAULT);
this.bm = bm;
this.lock = lock;
@@ -198,63 +200,72 @@ public class ProvidedStorageMap {
*/
class ProvidedBlocksBuilder extends LocatedBlockBuilder {
- private ShadowDatanodeInfoWithStorage pending;
- private boolean hasProvidedLocations;
-
ProvidedBlocksBuilder(int maxBlocks) {
super(maxBlocks);
- pending = new ShadowDatanodeInfoWithStorage(
- providedDescriptor, storageId);
- hasProvidedLocations = false;
+ }
+
+ private DatanodeDescriptor chooseProvidedDatanode(
+ Set<String> excludedUUids) {
+ DatanodeDescriptor dn = providedDescriptor.choose(null, excludedUUids);
+ if (dn == null) {
+ dn = providedDescriptor.choose(null);
+ }
+ return dn;
}
@Override
LocatedBlock newLocatedBlock(ExtendedBlock eb,
DatanodeStorageInfo[] storages, long pos, boolean isCorrupt) {
- DatanodeInfoWithStorage[] locs =
- new DatanodeInfoWithStorage[storages.length];
- String[] sids = new String[storages.length];
- StorageType[] types = new StorageType[storages.length];
+ List<DatanodeInfoWithStorage> locs = new ArrayList<>();
+ List<String> sids = new ArrayList<>();
+ List<StorageType> types = new ArrayList<>();
+ boolean isProvidedBlock = false;
+ Set<String> excludedUUids = new HashSet<>();
+
for (int i = 0; i < storages.length; ++i) {
- sids[i] = storages[i].getStorageID();
- types[i] = storages[i].getStorageType();
- if (StorageType.PROVIDED.equals(storages[i].getStorageType())) {
- locs[i] = pending;
- hasProvidedLocations = true;
+ DatanodeStorageInfo currInfo = storages[i];
+ StorageType storageType = currInfo.getStorageType();
+ sids.add(currInfo.getStorageID());
+ types.add(storageType);
+ if (StorageType.PROVIDED.equals(storageType)) {
+ DatanodeDescriptor dn = chooseProvidedDatanode(excludedUUids);
+ locs.add(
+ new DatanodeInfoWithStorage(
+ dn, currInfo.getStorageID(), currInfo.getStorageType()));
+ excludedUUids.add(dn.getDatanodeUuid());
+ isProvidedBlock = true;
} else {
- locs[i] = new DatanodeInfoWithStorage(
- storages[i].getDatanodeDescriptor(), sids[i], types[i]);
+ locs.add(new DatanodeInfoWithStorage(
+ currInfo.getDatanodeDescriptor(),
+ currInfo.getStorageID(), storageType));
+ excludedUUids.add(currInfo.getDatanodeDescriptor().getDatanodeUuid());
}
}
- return new LocatedBlock(eb, locs, sids, types, pos, isCorrupt, null);
- }
- @Override
- LocatedBlocks build(DatanodeDescriptor client) {
- // TODO: to support multiple provided storages, need to pass/maintain map
- if (hasProvidedLocations) {
- // set all fields of pending DatanodeInfo
- List<String> excludedUUids = new ArrayList<String>();
- for (LocatedBlock b : blocks) {
- DatanodeInfo[] infos = b.getLocations();
- StorageType[] types = b.getStorageTypes();
-
- for (int i = 0; i < types.length; i++) {
- if (!StorageType.PROVIDED.equals(types[i])) {
- excludedUUids.add(infos[i].getDatanodeUuid());
- }
- }
+ int numLocations = locs.size();
+ if (isProvidedBlock) {
+ // add more replicas until we reach the defaultReplication
+ for (int count = numLocations + 1;
+ count <= defaultReplication && count <= providedDescriptor
+ .activeProvidedDatanodes(); count++) {
+ DatanodeDescriptor dn = chooseProvidedDatanode(excludedUUids);
+ locs.add(new DatanodeInfoWithStorage(
+ dn, storageId, StorageType.PROVIDED));
+ sids.add(storageId);
+ types.add(StorageType.PROVIDED);
+ excludedUUids.add(dn.getDatanodeUuid());
}
-
- DatanodeDescriptor dn =
- providedDescriptor.choose(client, excludedUUids);
- if (dn == null) {
- dn = providedDescriptor.choose(client);
- }
- pending.replaceInternal(dn);
}
+ return new LocatedBlock(eb,
+ locs.toArray(new DatanodeInfoWithStorage[locs.size()]),
+ sids.toArray(new String[sids.size()]),
+ types.toArray(new StorageType[types.size()]),
+ pos, isCorrupt, null);
+ }
+ @Override
+ LocatedBlocks build(DatanodeDescriptor client) {
return new LocatedBlocks(
flen, isUC, blocks, last, lastComplete, feInfo, ecPolicy);
}
@@ -266,53 +277,6 @@ public class ProvidedStorageMap {
}
/**
- * An abstract {@link DatanodeInfoWithStorage} to represent provided storage.
- */
- static class ShadowDatanodeInfoWithStorage extends DatanodeInfoWithStorage {
- private String shadowUuid;
-
- ShadowDatanodeInfoWithStorage(DatanodeDescriptor d, String storageId) {
- super(d, storageId, StorageType.PROVIDED);
- }
-
- @Override
- public String getDatanodeUuid() {
- return shadowUuid;
- }
-
- public void setDatanodeUuid(String uuid) {
- shadowUuid = uuid;
- }
-
- void replaceInternal(DatanodeDescriptor dn) {
- updateRegInfo(dn); // overwrite DatanodeID (except UUID)
- setDatanodeUuid(dn.getDatanodeUuid());
- setCapacity(dn.getCapacity());
- setDfsUsed(dn.getDfsUsed());
- setRemaining(dn.getRemaining());
- setBlockPoolUsed(dn.getBlockPoolUsed());
- setCacheCapacity(dn.getCacheCapacity());
- setCacheUsed(dn.getCacheUsed());
- setLastUpdate(dn.getLastUpdate());
- setLastUpdateMonotonic(dn.getLastUpdateMonotonic());
- setXceiverCount(dn.getXceiverCount());
- setNetworkLocation(dn.getNetworkLocation());
- adminState = dn.getAdminState();
- setUpgradeDomain(dn.getUpgradeDomain());
- }
-
- @Override
- public boolean equals(Object obj) {
- return super.equals(obj);
- }
-
- @Override
- public int hashCode() {
- return super.hashCode();
- }
- }
-
- /**
* An abstract DatanodeDescriptor to track datanodes with provided storages.
* NOTE: never resolved through registerDatanode, so not in the topology.
*/
@@ -336,6 +300,7 @@ public class ProvidedStorageMap {
DatanodeStorageInfo getProvidedStorage(
DatanodeDescriptor dn, DatanodeStorage s) {
+ LOG.info("XXXXX adding Datanode " + dn.getDatanodeUuid());
dns.put(dn.getDatanodeUuid(), dn);
// TODO: maintain separate RPC ident per dn
return storageMap.get(s.getStorageID());
@@ -352,7 +317,7 @@ public class ProvidedStorageMap {
DatanodeDescriptor choose(DatanodeDescriptor client) {
// exact match for now
DatanodeDescriptor dn = client != null ?
- dns.get(client.getDatanodeUuid()) : null;
+ dns.get(client.getDatanodeUuid()) : null;
if (null == dn) {
dn = chooseRandom();
}
@@ -360,10 +325,10 @@ public class ProvidedStorageMap {
}
DatanodeDescriptor choose(DatanodeDescriptor client,
- List<String> excludedUUids) {
+ Set<String> excludedUUids) {
// exact match for now
DatanodeDescriptor dn = client != null ?
- dns.get(client.getDatanodeUuid()) : null;
+ dns.get(client.getDatanodeUuid()) : null;
if (null == dn || excludedUUids.contains(client.getDatanodeUuid())) {
dn = null;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1151f04a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockResolver.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockResolver.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockResolver.java
index 8ff9695..4b3a01f 100644
--- a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockResolver.java
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockResolver.java
@@ -34,6 +34,7 @@ public class FixedBlockResolver extends BlockResolver implements Configurable {
"hdfs.image.writer.resolver.fixed.block.size";
public static final String START_BLOCK =
"hdfs.image.writer.resolver.fixed.block.start";
+ public static final long BLOCKSIZE_DEFAULT = 256 * (1L << 20);
private Configuration conf;
private long blocksize = 256 * (1L << 20);
@@ -42,7 +43,7 @@ public class FixedBlockResolver extends BlockResolver implements Configurable {
@Override
public void setConf(Configuration conf) {
this.conf = conf;
- blocksize = conf.getLong(BLOCKSIZE, 256 * (1L << 20));
+ blocksize = conf.getLong(BLOCKSIZE, BLOCKSIZE_DEFAULT);
blockIds.set(conf.getLong(START_BLOCK, (1L << 30)));
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1151f04a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index f6d38f6..9c82967 100644
--- a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -474,12 +474,12 @@ public class TestNameNodeProvidedImplementation {
}
private DatanodeInfo[] getAndCheckBlockLocations(DFSClient client,
- String filename, int expectedLocations) throws IOException {
- LocatedBlocks locatedBlocks = client.getLocatedBlocks(
- filename, 0, baseFileLen);
- //given the start and length in the above call,
- //only one LocatedBlock in LocatedBlocks
- assertEquals(1, locatedBlocks.getLocatedBlocks().size());
+ String filename, long fileLen, long expectedBlocks, int expectedLocations)
+ throws IOException {
+ LocatedBlocks locatedBlocks = client.getLocatedBlocks(filename, 0, fileLen);
+ // given the start and length in the above call, expect
+ // expectedBlocks LocatedBlocks in the result
+ assertEquals(expectedBlocks, locatedBlocks.getLocatedBlocks().size());
LocatedBlock locatedBlock = locatedBlocks.getLocatedBlocks().get(0);
assertEquals(expectedLocations, locatedBlock.getLocations().length);
return locatedBlock.getLocations();
@@ -513,17 +513,20 @@ public class TestNameNodeProvidedImplementation {
file, newReplication, 10000);
DFSClient client = new DFSClient(new InetSocketAddress("localhost",
cluster.getNameNodePort()), cluster.getConfiguration(0));
- getAndCheckBlockLocations(client, filename, newReplication);
+ getAndCheckBlockLocations(client, filename, baseFileLen, 1, newReplication);
// set the replication back to 1
newReplication = 1;
LOG.info("Setting replication of file {} back to {}",
filename, newReplication);
fs.setReplication(file, newReplication);
+ // the number of replicas returned should equal the default replication
+ int defaultReplication = conf.getInt(DFSConfigKeys.DFS_REPLICATION_KEY,
+ DFSConfigKeys.DFS_REPLICATION_DEFAULT);
DFSTestUtil.waitForReplication((DistributedFileSystem) fs,
- file, newReplication, 10000);
- // the only replica left should be the PROVIDED datanode
- getAndCheckBlockLocations(client, filename, newReplication);
+ file, (short) defaultReplication, 10000);
+ getAndCheckBlockLocations(client, filename, baseFileLen, 1,
+ defaultReplication);
}
@Test(timeout=30000)
@@ -545,8 +548,9 @@ public class TestNameNodeProvidedImplementation {
if (numFiles >= 1) {
String filename = "/" + filePrefix + (numFiles - 1) + fileSuffix;
-
- DatanodeInfo[] dnInfos = getAndCheckBlockLocations(client, filename, 1);
+ // 2 locations returned as there are 2 PROVIDED datanodes
+ DatanodeInfo[] dnInfos =
+ getAndCheckBlockLocations(client, filename, baseFileLen, 1, 2);
//the location should be one of the provided DNs available
assertTrue(
dnInfos[0].getDatanodeUuid().equals(
@@ -564,7 +568,7 @@ public class TestNameNodeProvidedImplementation {
providedDatanode1.getDatanodeId().getXferAddr());
//should find the block on the 2nd provided datanode
- dnInfos = getAndCheckBlockLocations(client, filename, 1);
+ dnInfos = getAndCheckBlockLocations(client, filename, baseFileLen, 1, 1);
assertEquals(providedDatanode2.getDatanodeUuid(),
dnInfos[0].getDatanodeUuid());
@@ -575,14 +579,14 @@ public class TestNameNodeProvidedImplementation {
BlockManagerTestUtil.noticeDeadDatanode(
cluster.getNameNode(),
providedDatanode2.getDatanodeId().getXferAddr());
- getAndCheckBlockLocations(client, filename, 0);
+ getAndCheckBlockLocations(client, filename, baseFileLen, 1, 0);
//restart the provided datanode
cluster.restartDataNode(providedDNProperties1, true);
cluster.waitActive();
//should find the block on the 1st provided datanode now
- dnInfos = getAndCheckBlockLocations(client, filename, 1);
+ dnInfos = getAndCheckBlockLocations(client, filename, baseFileLen, 1, 1);
//not comparing UUIDs as the datanode can now have a different one.
assertEquals(providedDatanode1.getDatanodeId().getXferAddr(),
dnInfos[0].getXferAddr());
@@ -593,20 +597,18 @@ public class TestNameNodeProvidedImplementation {
public void testTransientDeadDatanodes() throws Exception {
createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
FixedBlockResolver.class);
- // 2 Datanodes, 1 PROVIDED and other DISK
- startCluster(NNDIRPATH, 2, null,
+ // 3 Datanodes, 2 PROVIDED and 1 DISK
+ startCluster(NNDIRPATH, 3, null,
new StorageType[][] {
{StorageType.PROVIDED, StorageType.DISK},
+ {StorageType.PROVIDED, StorageType.DISK},
{StorageType.DISK}},
false);
DataNode providedDatanode = cluster.getDataNodes().get(0);
-
- DFSClient client = new DFSClient(new InetSocketAddress("localhost",
- cluster.getNameNodePort()), cluster.getConfiguration(0));
-
for (int i= 0; i < numFiles; i++) {
- verifyFileLocation(i);
+ // expect to have 2 locations as we have 2 provided Datanodes.
+ verifyFileLocation(i, 2);
// NameNode thinks the datanode is down
BlockManagerTestUtil.noticeDeadDatanode(
cluster.getNameNode(),
@@ -614,7 +616,7 @@ public class TestNameNodeProvidedImplementation {
cluster.waitActive();
cluster.triggerHeartbeats();
Thread.sleep(1000);
- verifyFileLocation(i);
+ verifyFileLocation(i, 2);
}
}
@@ -622,17 +624,18 @@ public class TestNameNodeProvidedImplementation {
public void testNamenodeRestart() throws Exception {
createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
FixedBlockResolver.class);
- // 2 Datanodes, 1 PROVIDED and other DISK
- startCluster(NNDIRPATH, 2, null,
+ // 3 Datanodes, 2 PROVIDED and 1 DISK
+ startCluster(NNDIRPATH, 3, null,
new StorageType[][] {
{StorageType.PROVIDED, StorageType.DISK},
+ {StorageType.PROVIDED, StorageType.DISK},
{StorageType.DISK}},
false);
- verifyFileLocation(numFiles - 1);
+ verifyFileLocation(numFiles - 1, 2);
cluster.restartNameNodes();
cluster.waitActive();
- verifyFileLocation(numFiles - 1);
+ verifyFileLocation(numFiles - 1, 2);
}
/**
@@ -640,18 +643,21 @@ public class TestNameNodeProvidedImplementation {
* @param fileIndex the index of the file to verify.
* @throws Exception
*/
- private void verifyFileLocation(int fileIndex)
+ private void verifyFileLocation(int fileIndex, int replication)
throws Exception {
- DataNode providedDatanode = cluster.getDataNodes().get(0);
DFSClient client = new DFSClient(
new InetSocketAddress("localhost", cluster.getNameNodePort()),
cluster.getConfiguration(0));
- if (fileIndex <= numFiles && fileIndex >= 0) {
- String filename = "/" + filePrefix + fileIndex + fileSuffix;
- DatanodeInfo[] dnInfos = getAndCheckBlockLocations(client, filename, 1);
- // location should be the provided DN
- assertEquals(providedDatanode.getDatanodeUuid(),
- dnInfos[0].getDatanodeUuid());
+ if (fileIndex < numFiles && fileIndex >= 0) {
+ String filename = filePrefix + fileIndex + fileSuffix;
+ File file = new File(new Path(NAMEPATH, filename).toUri());
+ long fileLen = file.length();
+ long blockSize = conf.getLong(FixedBlockResolver.BLOCKSIZE,
+ FixedBlockResolver.BLOCKSIZE_DEFAULT);
+ long numLocatedBlocks =
+ fileLen == 0 ? 1 : (long) Math.ceil(fileLen * 1.0 / blockSize);
+ getAndCheckBlockLocations(client, "/" + filename, fileLen,
+ numLocatedBlocks, replication);
}
}
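The updated verifyFileLocation derives the expected number of LocatedBlocks from the file length and the configured block size: an empty file still reports one (zero-length) block, otherwise the count is the ceiling of fileLen / blockSize. A standalone sketch of that arithmetic (the class and method names here are illustrative, not part of the patch):

```java
public class LocatedBlockMath {

    // Mirrors the numLocatedBlocks computation in verifyFileLocation:
    // an empty file still yields a single (zero-length) block.
    static long expectedBlocks(long fileLen, long blockSize) {
        return fileLen == 0 ? 1 : (long) Math.ceil(fileLen * 1.0 / blockSize);
    }

    public static void main(String[] args) {
        System.out.println(expectedBlocks(0, 256));   // 1
        System.out.println(expectedBlocks(256, 256)); // 1
        System.out.println(expectedBlocks(257, 256)); // 2
    }
}
```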
@@ -669,4 +675,55 @@ public class TestNameNodeProvidedImplementation {
NameNode nn = cluster.getNameNode();
assertEquals(clusterID, nn.getNamesystem().getClusterId());
}
+
+ @Test(timeout=30000)
+ public void testNumberOfProvidedLocations() throws Exception {
+ // set default replication to 4
+ conf.setInt(DFSConfigKeys.DFS_REPLICATION_KEY, 4);
+ createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
+ FixedBlockResolver.class);
+ // start with 4 PROVIDED locations
+ startCluster(NNDIRPATH, 4,
+ new StorageType[]{
+ StorageType.PROVIDED, StorageType.DISK},
+ null,
+ false);
+ int expectedLocations = 4;
+ for (int i = 0; i < numFiles; i++) {
+ verifyFileLocation(i, expectedLocations);
+ }
+ // stop 2 datanodes, one after the other and verify number of locations.
+ for (int i = 1; i <= 2; i++) {
+ DataNode dn = cluster.getDataNodes().get(0);
+ cluster.stopDataNode(0);
+ // make NameNode detect that datanode is down
+ BlockManagerTestUtil.noticeDeadDatanode(cluster.getNameNode(),
+ dn.getDatanodeId().getXferAddr());
+
+ expectedLocations = 4 - i;
+ for (int j = 0; j < numFiles; j++) {
+ verifyFileLocation(j, expectedLocations);
+ }
+ }
+ }
+
+ @Test(timeout=30000)
+ public void testNumberOfProvidedLocationsManyBlocks() throws Exception {
+ // set the block size so that each file has at least 10 blocks
+ conf.setLong(FixedBlockResolver.BLOCKSIZE, baseFileLen/10);
+ // set default replication to 4
+ conf.setInt(DFSConfigKeys.DFS_REPLICATION_KEY, 4);
+ createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
+ FixedBlockResolver.class);
+ // start with 4 PROVIDED locations
+ startCluster(NNDIRPATH, 4,
+ new StorageType[]{
+ StorageType.PROVIDED, StorageType.DISK},
+ null,
+ false);
+ int expectedLocations = 4;
+ for (int i = 0; i < numFiles; i++) {
+ verifyFileLocation(i, expectedLocations);
+ }
+ }
}
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[23/50] [abbrv] hadoop git commit: HDFS-10706. [READ] Add tool generating FSImage from external store
Posted by vi...@apache.org.
HDFS-10706. [READ] Add tool generating FSImage from external store
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e189df26
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e189df26
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e189df26
Branch: refs/heads/HDFS-9806
Commit: e189df267082ced19e69e4e3e31448199969d00f
Parents: 970028f
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Sat Apr 15 12:15:08 2017 -0700
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:57 2017 -0800
----------------------------------------------------------------------
hadoop-tools/hadoop-fs2img/pom.xml | 87 +++
.../hdfs/server/namenode/BlockResolver.java | 95 +++
.../hadoop/hdfs/server/namenode/FSTreeWalk.java | 105 ++++
.../hdfs/server/namenode/FileSystemImage.java | 139 +++++
.../FixedBlockMultiReplicaResolver.java | 44 ++
.../server/namenode/FixedBlockResolver.java | 93 +++
.../hdfs/server/namenode/FsUGIResolver.java | 58 ++
.../hdfs/server/namenode/ImageWriter.java | 600 +++++++++++++++++++
.../hdfs/server/namenode/NullBlockFormat.java | 87 +++
.../hdfs/server/namenode/SingleUGIResolver.java | 90 +++
.../hadoop/hdfs/server/namenode/TreePath.java | 167 ++++++
.../hadoop/hdfs/server/namenode/TreeWalk.java | 103 ++++
.../hdfs/server/namenode/UGIResolver.java | 131 ++++
.../hdfs/server/namenode/package-info.java | 23 +
.../hdfs/server/namenode/RandomTreeWalk.java | 186 ++++++
.../server/namenode/TestFixedBlockResolver.java | 121 ++++
.../server/namenode/TestRandomTreeWalk.java | 130 ++++
.../server/namenode/TestSingleUGIResolver.java | 148 +++++
.../src/test/resources/log4j.properties | 24 +
hadoop-tools/hadoop-tools-dist/pom.xml | 6 +
hadoop-tools/pom.xml | 1 +
21 files changed, 2438 insertions(+)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/pom.xml b/hadoop-tools/hadoop-fs2img/pom.xml
new file mode 100644
index 0000000..36096b7
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/pom.xml
@@ -0,0 +1,87 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+<project>
+ <modelVersion>4.0.0</modelVersion>
+ <parent>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-project</artifactId>
+ <version>3.0.0-alpha3-SNAPSHOT</version>
+ <relativePath>../../hadoop-project</relativePath>
+ </parent>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-fs2img</artifactId>
+ <version>3.0.0-alpha3-SNAPSHOT</version>
+ <description>fs2img</description>
+ <name>fs2img</name>
+ <packaging>jar</packaging>
+
+ <properties>
+ <hadoop.log.dir>${project.build.directory}/log</hadoop.log.dir>
+ </properties>
+
+ <dependencies>
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-common</artifactId>
+ <scope>provided</scope>
+ </dependency>
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-hdfs</artifactId>
+ <scope>provided</scope>
+ </dependency>
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-minicluster</artifactId>
+ <scope>provided</scope>
+ </dependency>
+ <dependency>
+ <groupId>com.google.protobuf</groupId>
+ <artifactId>protobuf-java</artifactId>
+ <scope>provided</scope>
+ </dependency>
+ <dependency>
+ <groupId>commons-cli</groupId>
+ <artifactId>commons-cli</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>junit</groupId>
+ <artifactId>junit</artifactId>
+ <scope>test</scope>
+ </dependency>
+ <dependency>
+ <groupId>org.mockito</groupId>
+ <artifactId>mockito-all</artifactId>
+ <scope>test</scope>
+ </dependency>
+ </dependencies>
+
+ <build>
+ <plugins>
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-jar-plugin</artifactId>
+ <configuration>
+ <archive>
+ <manifest>
+ <mainClass>org.apache.hadoop.hdfs.server.namenode.FileSystemImage</mainClass>
+ </manifest>
+ </archive>
+ </configuration>
+ </plugin>
+ </plugins>
+ </build>
+
+</project>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/BlockResolver.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/BlockResolver.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/BlockResolver.java
new file mode 100644
index 0000000..94b92b8
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/BlockResolver.java
@@ -0,0 +1,95 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto;
+
+/**
+ * Given an external reference, create a sequence of blocks and associated
+ * metadata.
+ */
+public abstract class BlockResolver {
+
+ protected BlockProto buildBlock(long blockId, long bytes) {
+ return buildBlock(blockId, bytes, 1001);
+ }
+
+ protected BlockProto buildBlock(long blockId, long bytes, long genstamp) {
+ BlockProto.Builder b = BlockProto.newBuilder()
+ .setBlockId(blockId)
+ .setNumBytes(bytes)
+ .setGenStamp(genstamp);
+ return b.build();
+ }
+
+ /**
+ * @param s the external reference.
+ * @return sequence of blocks that make up the reference.
+ */
+ public Iterable<BlockProto> resolve(FileStatus s) {
+ List<Long> lengths = blockLengths(s);
+ ArrayList<BlockProto> ret = new ArrayList<>(lengths.size());
+ long tot = 0;
+ for (long l : lengths) {
+ tot += l;
+ ret.add(buildBlock(nextId(), l));
+ }
+ if (tot != s.getLen()) {
+ // log a warning?
+ throw new IllegalStateException(
+ "Expected " + s.getLen() + " found " + tot);
+ }
+ return ret;
+ }
+
+ /**
+ * @return the next block id.
+ */
+ public abstract long nextId();
+
+ /**
+ * @return the maximum sequentially allocated block ID for this filesystem.
+ */
+ protected abstract long lastId();
+
+ /**
+ * @param status the external reference.
+ * @return the lengths of the resultant blocks.
+ */
+ protected abstract List<Long> blockLengths(FileStatus status);
+
+
+ /**
+ * @param status the external reference.
+ * @return the block size to assign to this external reference.
+ */
+ public long preferredBlockSize(FileStatus status) {
+ return status.getBlockSize();
+ }
+
+ /**
+ * @param status the external reference.
+ * @return the replication to assign to this external reference.
+ */
+ public abstract int getReplication(FileStatus status);
+
+}
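resolve() above converts the per-block lengths into BlockProtos and insists that they sum exactly to the file length, failing fast on any mismatch. A minimal, protobuf-free sketch of that invariant (ResolveCheck and checkedTotal are illustrative names, not part of the patch):

```java
import java.util.List;

public class ResolveCheck {

    // Mirrors the consistency check in BlockResolver.resolve(): the sum
    // of the generated block lengths must equal the file length.
    static long checkedTotal(List<Long> blockLengths, long fileLen) {
        long tot = 0;
        for (long l : blockLengths) {
            tot += l;
        }
        if (tot != fileLen) {
            throw new IllegalStateException(
                "Expected " + fileLen + " found " + tot);
        }
        return tot;
    }

    public static void main(String[] args) {
        System.out.println(checkedTotal(List.of(300L, 300L, 100L), 700L)); // 700
    }
}
```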
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSTreeWalk.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSTreeWalk.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSTreeWalk.java
new file mode 100644
index 0000000..f736112
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSTreeWalk.java
@@ -0,0 +1,105 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.ConcurrentModificationException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+/**
+ * Traversal of an external FileSystem.
+ */
+public class FSTreeWalk extends TreeWalk {
+
+ private final Path root;
+ private final FileSystem fs;
+
+ public FSTreeWalk(Path root, Configuration conf) throws IOException {
+ this.root = root;
+ fs = root.getFileSystem(conf);
+ }
+
+ @Override
+ protected Iterable<TreePath> getChildren(TreePath path, long id,
+ TreeIterator i) {
+ // TODO symlinks
+ if (!path.getFileStatus().isDirectory()) {
+ return Collections.emptyList();
+ }
+ try {
+ ArrayList<TreePath> ret = new ArrayList<>();
+ for (FileStatus s : fs.listStatus(path.getFileStatus().getPath())) {
+ ret.add(new TreePath(s, id, i));
+ }
+ return ret;
+ } catch (FileNotFoundException e) {
+ throw new ConcurrentModificationException("FS modified");
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ class FSTreeIterator extends TreeIterator {
+
+ private FSTreeIterator() {
+ }
+
+ FSTreeIterator(TreePath p) {
+ getPendingQueue().addFirst(
+ new TreePath(p.getFileStatus(), p.getParentId(), this));
+ }
+
+ FSTreeIterator(Path p) throws IOException {
+ try {
+ FileStatus s = fs.getFileStatus(p);
+ getPendingQueue().addFirst(new TreePath(s, -1L, this));
+ } catch (FileNotFoundException e) {
+ if (p.equals(root)) {
+ throw e;
+ }
+ throw new ConcurrentModificationException("FS modified");
+ }
+ }
+
+ @Override
+ public TreeIterator fork() {
+ if (getPendingQueue().isEmpty()) {
+ return new FSTreeIterator();
+ }
+ return new FSTreeIterator(getPendingQueue().removeFirst());
+ }
+
+ }
+
+ @Override
+ public TreeIterator iterator() {
+ try {
+ return new FSTreeIterator(root);
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
new file mode 100644
index 0000000..e1e85c1
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileSystemImage.java
@@ -0,0 +1,139 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.io.File;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+import org.apache.commons.cli.PosixParser;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.server.common.BlockFormat;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * Create FSImage from an external namespace.
+ */
+public class FileSystemImage implements Tool {
+
+ private Configuration conf;
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ @Override
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ // require absolute URI to write anywhere but local
+ FileSystem.setDefaultUri(conf, new File(".").toURI().toString());
+ }
+
+ protected void printUsage() {
+ HelpFormatter formatter = new HelpFormatter();
+ formatter.printHelp("fs2img [OPTIONS] URI", new Options());
+ formatter.setSyntaxPrefix("");
+ formatter.printHelp("Options", options());
+ ToolRunner.printGenericCommandUsage(System.out);
+ }
+
+ static Options options() {
+ Options options = new Options();
+ options.addOption("o", "outdir", true, "Output directory");
+ options.addOption("u", "ugiclass", true, "UGI resolver class");
+ options.addOption("b", "blockclass", true, "Block output class");
+ options.addOption("i", "blockidclass", true, "Block resolver class");
+ options.addOption("c", "cachedirs", true, "Max active dirents");
+ options.addOption("h", "help", false, "Print usage");
+ return options;
+ }
+
+ @Override
+ public int run(String[] argv) throws Exception {
+ Options options = options();
+ CommandLineParser parser = new PosixParser();
+ CommandLine cmd;
+ try {
+ cmd = parser.parse(options, argv);
+ } catch (ParseException e) {
+ System.out.println(
+ "Error parsing command-line options: " + e.getMessage());
+ printUsage();
+ return -1;
+ }
+
+ if (cmd.hasOption("h")) {
+ printUsage();
+ return -1;
+ }
+
+ ImageWriter.Options opts =
+ ReflectionUtils.newInstance(ImageWriter.Options.class, getConf());
+ for (Option o : cmd.getOptions()) {
+ switch (o.getOpt()) {
+ case "o":
+ opts.output(o.getValue());
+ break;
+ case "u":
+ opts.ugi(Class.forName(o.getValue()).asSubclass(UGIResolver.class));
+ break;
+ case "b":
+ opts.blocks(
+ Class.forName(o.getValue()).asSubclass(BlockFormat.class));
+ break;
+ case "i":
+ opts.blockIds(
+ Class.forName(o.getValue()).asSubclass(BlockResolver.class));
+ break;
+ case "c":
+ opts.cache(Integer.parseInt(o.getValue()));
+ break;
+ default:
+ throw new UnsupportedOperationException("Internal error");
+ }
+ }
+
+ String[] rem = cmd.getArgs();
+ if (rem.length != 1) {
+ printUsage();
+ return -1;
+ }
+
+ try (ImageWriter w = new ImageWriter(opts)) {
+ for (TreePath e : new FSTreeWalk(new Path(rem[0]), getConf())) {
+ w.accept(e); // add and continue
+ }
+ }
+ return 0;
+ }
+
+ public static void main(String[] argv) throws Exception {
+ int ret = ToolRunner.run(new FileSystemImage(), argv);
+ System.exit(ret);
+ }
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockMultiReplicaResolver.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockMultiReplicaResolver.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockMultiReplicaResolver.java
new file mode 100644
index 0000000..0c8ce6e
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockMultiReplicaResolver.java
@@ -0,0 +1,44 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+
+/**
+ * Resolver mapping all files to a configurable, uniform blocksize
+ * and replication.
+ */
+public class FixedBlockMultiReplicaResolver extends FixedBlockResolver {
+
+ public static final String REPLICATION =
+ "hdfs.image.writer.resolver.fixed.block.replication";
+
+ private int replication;
+
+ @Override
+ public void setConf(Configuration conf) {
+ super.setConf(conf);
+ replication = conf.getInt(REPLICATION, 1);
+ }
+
+ public int getReplication(FileStatus s) {
+ return replication;
+ }
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockResolver.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockResolver.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockResolver.java
new file mode 100644
index 0000000..8ff9695
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FixedBlockResolver.java
@@ -0,0 +1,93 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+
+/**
+ * Resolver mapping all files to a configurable, uniform blocksize.
+ */
+public class FixedBlockResolver extends BlockResolver implements Configurable {
+
+ public static final String BLOCKSIZE =
+ "hdfs.image.writer.resolver.fixed.block.size";
+ public static final String START_BLOCK =
+ "hdfs.image.writer.resolver.fixed.block.start";
+
+ private Configuration conf;
+ private long blocksize = 256 * (1L << 20);
+ private final AtomicLong blockIds = new AtomicLong(0);
+
+ @Override
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ blocksize = conf.getLong(BLOCKSIZE, 256 * (1L << 20));
+ blockIds.set(conf.getLong(START_BLOCK, (1L << 30)));
+ }
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ @Override
+ protected List<Long> blockLengths(FileStatus s) {
+ ArrayList<Long> ret = new ArrayList<>();
+ if (!s.isFile()) {
+ return ret;
+ }
+ if (0 == s.getLen()) {
+ // the file has length 0; so we will have one block of size 0
+ ret.add(0L);
+ return ret;
+ }
+ int nblocks = (int)((s.getLen() - 1) / blocksize) + 1;
+ for (int i = 0; i < nblocks - 1; ++i) {
+ ret.add(blocksize);
+ }
+ long rem = s.getLen() % blocksize;
+ ret.add(0 == rem ? blocksize : rem);
+ return ret;
+ }
+
+ @Override
+ public long nextId() {
+ return blockIds.incrementAndGet();
+ }
+
+ @Override
+ public long lastId() {
+ return blockIds.get();
+ }
+
+ @Override
+ public long preferredBlockSize(FileStatus s) {
+ return blocksize;
+ }
+
+ @Override
+ public int getReplication(FileStatus s) {
+ return 1;
+ }
+}
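The splitting arithmetic in `blockLengths` above can be checked in isolation. The sketch below mirrors that logic as a standalone method; the class and method names (`SplitSketch`, `splitLengths`) are hypothetical, introduced only for illustration, and are not part of the patch:

```java
import java.util.ArrayList;
import java.util.List;

public class SplitSketch {
    // Mirror of FixedBlockResolver.blockLengths: carve a file of the given
    // length into fixed-size blocks, with a possibly shorter final block.
    static List<Long> splitLengths(long len, long blocksize) {
        List<Long> ret = new ArrayList<>();
        if (len == 0) {
            ret.add(0L);           // a zero-length file still gets one empty block
            return ret;
        }
        int nblocks = (int) ((len - 1) / blocksize) + 1;
        for (int i = 0; i < nblocks - 1; ++i) {
            ret.add(blocksize);    // all but the last block are full-size
        }
        long rem = len % blocksize;
        ret.add(rem == 0 ? blocksize : rem);  // tail: remainder, or full if exact
        return ret;
    }

    public static void main(String[] args) {
        long mb = 1L << 20;
        // 600 MB at 256 MB blocks -> two full blocks plus an 88 MB tail
        System.out.println(splitLengths(600 * mb, 256 * mb));
        // exact multiple -> all blocks full-size
        System.out.println(splitLengths(512 * mb, 256 * mb));
    }
}
```

Note the off-by-one-safe block count: `(len - 1) / blocksize + 1` rounds up without overflowing for any positive length.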
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsUGIResolver.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsUGIResolver.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsUGIResolver.java
new file mode 100644
index 0000000..ca16d96
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/FsUGIResolver.java
@@ -0,0 +1,58 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.util.HashSet;
+import java.util.Set;
+
+/**
+ * Dynamically assign ids to users/groups as they appear in the external
+ * filesystem.
+ */
+public class FsUGIResolver extends UGIResolver {
+
+ private int id;
+ private final Set<String> usernames;
+ private final Set<String> groupnames;
+
+ FsUGIResolver() {
+ super();
+ id = 0;
+ usernames = new HashSet<>();
+ groupnames = new HashSet<>();
+ }
+
+ @Override
+ public synchronized void addUser(String name) {
+ if (!usernames.contains(name)) {
+ addUser(name, id);
+ id++;
+ usernames.add(name);
+ }
+ }
+
+ @Override
+ public synchronized void addGroup(String name) {
+ if (!groupnames.contains(name)) {
+ addGroup(name, id);
+ id++;
+ groupnames.add(name);
+ }
+ }
+
+}
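The key property of `FsUGIResolver` above is that a single counter serves both namespaces, so a user and a group never receive the same id, and re-adding a known name is a no-op. The standalone sketch below demonstrates that behavior; it stores ids in plain maps instead of delegating to the `UGIResolver` parent (which is not shown in this patch), so treat the `userIds`/`groupIds` fields as illustrative stand-ins:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class UgiIdSketch {
    // Mirror of FsUGIResolver: one counter serves both namespaces, so a
    // user and a group never share an id; names are deduplicated.
    private int id = 0;
    private final Set<String> users = new HashSet<>();
    private final Set<String> groups = new HashSet<>();
    final Map<String, Integer> userIds = new HashMap<>();
    final Map<String, Integer> groupIds = new HashMap<>();

    synchronized void addUser(String name) {
        if (users.add(name)) {        // Set.add returns false on duplicates
            userIds.put(name, id++);
        }
    }

    synchronized void addGroup(String name) {
        if (groups.add(name)) {
            groupIds.put(name, id++);
        }
    }
}
```

Adding users `alice`, `bob` and then group `staff` assigns ids 0, 1, 2; adding `alice` again changes nothing.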
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
new file mode 100644
index 0000000..a3603a1
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
@@ -0,0 +1,600 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.io.BufferedOutputStream;
+import java.io.Closeable;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.FilterOutputStream;
+import java.io.OutputStream;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.security.DigestOutputStream;
+import java.security.MessageDigest;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.LinkedHashMap;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.concurrent.atomic.AtomicLong;
+
+import com.google.common.base.Charsets;
+import com.google.protobuf.CodedOutputStream;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocalFileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.server.common.BlockFormat;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf.SectionName;
+import org.apache.hadoop.hdfs.server.namenode.FsImageProto.CacheManagerSection;
+import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FileSummary;
+import org.apache.hadoop.hdfs.server.namenode.FsImageProto.FilesUnderConstructionSection;
+import org.apache.hadoop.hdfs.server.namenode.FsImageProto.INodeDirectorySection.DirEntry;
+import org.apache.hadoop.hdfs.server.namenode.FsImageProto.INodeSection.INode;
+import org.apache.hadoop.hdfs.server.namenode.FsImageProto.INodeSection;
+import org.apache.hadoop.hdfs.server.namenode.FsImageProto.NameSystemSection;
+import org.apache.hadoop.hdfs.server.namenode.FsImageProto.SecretManagerSection;
+import org.apache.hadoop.hdfs.server.namenode.FsImageProto.SnapshotDiffSection;
+import org.apache.hadoop.hdfs.server.namenode.FsImageProto.StringTableSection;
+import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.MD5Hash;
+import org.apache.hadoop.io.compress.CompressionCodec;
+import org.apache.hadoop.io.compress.CompressorStream;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.util.StringUtils;
+
+import static org.apache.hadoop.hdfs.server.namenode.FSImageUtil.MAGIC_HEADER;
+
+/**
+ * Utility crawling an existing hierarchical FileSystem and emitting
+ * a valid FSImage/NN storage.
+ */
+// TODO: generalize to types beyond FileRegion
+public class ImageWriter implements Closeable {
+
+ private static final int ONDISK_VERSION = 1;
+ private static final int LAYOUT_VERSION = -64; // see NameNodeLayoutVersion
+
+ private final Path outdir;
+ private final FileSystem outfs;
+ private final File dirsTmp;
+ private final OutputStream dirs;
+ private final File inodesTmp;
+ private final OutputStream inodes;
+ private final MessageDigest digest;
+ private final FSImageCompression compress;
+ private final long startBlock;
+ private final long startInode;
+ private final UGIResolver ugis;
+ private final BlockFormat.Writer<FileRegion> blocks;
+ private final BlockResolver blockIds;
+ private final Map<Long, DirEntry.Builder> dircache;
+ private final TrackedOutputStream<DigestOutputStream> raw;
+
+ private boolean closed = false;
+ private long curSec;
+ private long curBlock;
+ private final AtomicLong curInode;
+ private final FileSummary.Builder summary = FileSummary.newBuilder()
+ .setOndiskVersion(ONDISK_VERSION)
+ .setLayoutVersion(LAYOUT_VERSION);
+
+ private final String blockPoolID;
+
+ public static Options defaults() {
+ return new Options();
+ }
+
+ @SuppressWarnings("unchecked")
+ public ImageWriter(Options opts) throws IOException {
+ final OutputStream out;
+ if (null == opts.outStream) {
+ FileSystem fs = opts.outdir.getFileSystem(opts.getConf());
+ outfs = (fs instanceof LocalFileSystem)
+ ? ((LocalFileSystem)fs).getRaw()
+ : fs;
+ Path tmp = opts.outdir;
+ if (!outfs.mkdirs(tmp)) {
+ throw new IOException("Failed to create output dir: " + tmp);
+ }
+ try (NNStorage stor = new NNStorage(opts.getConf(),
+ Arrays.asList(tmp.toUri()), Arrays.asList(tmp.toUri()))) {
+ NamespaceInfo info = NNStorage.newNamespaceInfo();
+ if (info.getLayoutVersion() != LAYOUT_VERSION) {
+ throw new IllegalStateException("Incompatible layout " +
+ info.getLayoutVersion() + " (expected " + LAYOUT_VERSION + ")");
+ }
+ stor.format(info);
+ blockPoolID = info.getBlockPoolID();
+ }
+ outdir = new Path(tmp, "current");
+ out = outfs.create(new Path(outdir, "fsimage_0000000000000000000"));
+ } else {
+ // XXX necessary? writing a NNStorage now...
+ outdir = null;
+ outfs = null;
+ out = opts.outStream;
+ blockPoolID = "";
+ }
+ digest = MD5Hash.getDigester();
+ raw = new TrackedOutputStream<>(new DigestOutputStream(
+ new BufferedOutputStream(out), digest));
+ compress = opts.compress;
+ CompressionCodec codec = compress.getImageCodec();
+ if (codec != null) {
+ summary.setCodec(codec.getClass().getCanonicalName());
+ }
+ startBlock = opts.startBlock;
+ curBlock = startBlock;
+ startInode = opts.startInode;
+ curInode = new AtomicLong(startInode);
+ dircache = Collections.synchronizedMap(new DirEntryCache(opts.maxdircache));
+
+ ugis = null == opts.ugis
+ ? ReflectionUtils.newInstance(opts.ugisClass, opts.getConf())
+ : opts.ugis;
+ BlockFormat<FileRegion> fmt = null == opts.blocks
+ ? ReflectionUtils.newInstance(opts.blockFormatClass, opts.getConf())
+ : opts.blocks;
+ blocks = fmt.getWriter(null);
+ blockIds = null == opts.blockIds
+ ? ReflectionUtils.newInstance(opts.blockIdsClass, opts.getConf())
+ : opts.blockIds;
+
+ // create directory and inode sections as side-files.
+ // The details are written to files to avoid keeping them in memory.
+ dirsTmp = File.createTempFile("fsimg_dir", null);
+ dirsTmp.deleteOnExit();
+ dirs = beginSection(new FileOutputStream(dirsTmp));
+ try {
+ inodesTmp = File.createTempFile("fsimg_inode", null);
+ inodesTmp.deleteOnExit();
+ inodes = new FileOutputStream(inodesTmp);
+ } catch (IOException e) {
+ // appropriate to close raw?
+ IOUtils.cleanup(null, raw, dirs);
+ throw e;
+ }
+
+ raw.write(MAGIC_HEADER);
+ curSec = raw.pos;
+ assert raw.pos == MAGIC_HEADER.length;
+ }
+
+ public void accept(TreePath e) throws IOException {
+ assert e.getParentId() < curInode.get();
+ // allocate ID
+ long id = curInode.getAndIncrement();
+ e.accept(id);
+ assert e.getId() < curInode.get();
+ INode n = e.toINode(ugis, blockIds, blocks, blockPoolID);
+ writeInode(n);
+
+ if (e.getParentId() > 0) {
+ // add DirEntry to map, which may page out entries
+ DirEntry.Builder de = DirEntry.newBuilder()
+ .setParent(e.getParentId())
+ .addChildren(e.getId());
+ dircache.put(e.getParentId(), de);
+ }
+ }
+
+ @SuppressWarnings("serial")
+ class DirEntryCache extends LinkedHashMap<Long, DirEntry.Builder> {
+
+ // should cache the path to the root rather than evicting LRU-cached entries
+ private final int nEntries;
+
+ DirEntryCache(int nEntries) {
+ this.nEntries = nEntries;
+ }
+
+ @Override
+ public DirEntry.Builder put(Long p, DirEntry.Builder b) {
+ DirEntry.Builder e = get(p);
+ if (null == e) {
+ return super.put(p, b);
+ }
+ // merge the new children into the existing entry for this parent
+ e.addAllChildren(b.getChildrenList());
+ // returns the merged builder rather than the previous mapping, so this
+ // does not strictly conform to the Map.put contract
+ return e;
+ }
+
+ @Override
+ protected boolean removeEldestEntry(Entry<Long, DirEntry.Builder> be) {
+ if (size() > nEntries) {
+ DirEntry d = be.getValue().build();
+ try {
+ writeDirEntry(d);
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ return true;
+ }
+ return false;
+ }
+ }
+
+ synchronized void writeInode(INode n) throws IOException {
+ n.writeDelimitedTo(inodes);
+ }
+
+ synchronized void writeDirEntry(DirEntry e) throws IOException {
+ e.writeDelimitedTo(dirs);
+ }
+
+ // from FSImageFormatProtobuf... why not just read position from the stream?
+ private static int getOndiskSize(com.google.protobuf.GeneratedMessage s) {
+ return CodedOutputStream.computeRawVarint32Size(s.getSerializedSize())
+ + s.getSerializedSize();
+ }
+
+ @Override
+ public synchronized void close() throws IOException {
+ if (closed) {
+ return;
+ }
+ for (DirEntry.Builder b : dircache.values()) {
+ DirEntry e = b.build();
+ writeDirEntry(e);
+ }
+ dircache.clear();
+
+ // close side files
+ IOUtils.cleanup(null, dirs, inodes, blocks);
+ if (null == dirs || null == inodes) {
+ // init failed
+ if (raw != null) {
+ raw.close();
+ }
+ return;
+ }
+ try {
+ writeNameSystemSection();
+ writeINodeSection();
+ writeDirSection();
+ writeStringTableSection();
+
+ // write summary directly to raw
+ FileSummary s = summary.build();
+ s.writeDelimitedTo(raw);
+ int length = getOndiskSize(s);
+ byte[] lengthBytes = new byte[4];
+ ByteBuffer.wrap(lengthBytes).asIntBuffer().put(length);
+ raw.write(lengthBytes);
+ } finally {
+ raw.close();
+ }
+ writeMD5("fsimage_0000000000000000000");
+ closed = true;
+ }
+
+ /**
+ * Write checksum for image file. Pulled from MD5Utils/internals. Awkward to
+ * reuse existing tools/utils.
+ */
+ void writeMD5(String imagename) throws IOException {
+ if (null == outdir) {
+ //LOG.warn("Not writing MD5");
+ return;
+ }
+ MD5Hash md5 = new MD5Hash(digest.digest());
+ String digestString = StringUtils.byteToHexString(md5.getDigest());
+ Path chk = new Path(outdir, imagename + ".md5");
+ try (OutputStream out = outfs.create(chk)) {
+ String md5Line = digestString + " *" + imagename + "\n";
+ out.write(md5Line.getBytes(Charsets.UTF_8));
+ }
+ }
+
+ OutputStream beginSection(OutputStream out) throws IOException {
+ CompressionCodec codec = compress.getImageCodec();
+ if (null == codec) {
+ return out;
+ }
+ return codec.createOutputStream(out);
+ }
+
+ void endSection(OutputStream out, SectionName name) throws IOException {
+ CompressionCodec codec = compress.getImageCodec();
+ if (codec != null) {
+ ((CompressorStream)out).finish();
+ }
+ out.flush();
+ long length = raw.pos - curSec;
+ summary.addSections(FileSummary.Section.newBuilder()
+ .setName(name.toString()) // not strictly correct, but name not visible
+ .setOffset(curSec).setLength(length));
+ curSec += length;
+ }
+
+ void writeNameSystemSection() throws IOException {
+ NameSystemSection.Builder b = NameSystemSection.newBuilder()
+ .setGenstampV1(1000)
+ .setGenstampV1Limit(0)
+ .setGenstampV2(1001)
+ .setLastAllocatedBlockId(blockIds.lastId())
+ .setTransactionId(0);
+ NameSystemSection s = b.build();
+
+ OutputStream sec = beginSection(raw);
+ s.writeDelimitedTo(sec);
+ endSection(sec, SectionName.NS_INFO);
+ }
+
+ void writeINodeSection() throws IOException {
+ // could reset dict to avoid compression cost in close
+ INodeSection.Builder b = INodeSection.newBuilder()
+ .setNumInodes(curInode.get() - startInode)
+ .setLastInodeId(curInode.get());
+ INodeSection s = b.build();
+
+ OutputStream sec = beginSection(raw);
+ s.writeDelimitedTo(sec);
+ // copy inodes
+ try (FileInputStream in = new FileInputStream(inodesTmp)) {
+ IOUtils.copyBytes(in, sec, 4096, false);
+ }
+ endSection(sec, SectionName.INODE);
+ }
+
+ void writeDirSection() throws IOException {
+ // No header, so dirs can be written/compressed independently
+ //INodeDirectorySection.Builder b = INodeDirectorySection.newBuilder();
+ OutputStream sec = raw;
+ // copy dirs
+ try (FileInputStream in = new FileInputStream(dirsTmp)) {
+ IOUtils.copyBytes(in, sec, 4096, false);
+ }
+ endSection(sec, SectionName.INODE_DIR);
+ }
+
+ void writeFilesUCSection() throws IOException {
+ FilesUnderConstructionSection.Builder b =
+ FilesUnderConstructionSection.newBuilder();
+ FilesUnderConstructionSection s = b.build();
+
+ OutputStream sec = beginSection(raw);
+ s.writeDelimitedTo(sec);
+ endSection(sec, SectionName.FILES_UNDERCONSTRUCTION);
+ }
+
+ void writeSnapshotDiffSection() throws IOException {
+ SnapshotDiffSection.Builder b = SnapshotDiffSection.newBuilder();
+ SnapshotDiffSection s = b.build();
+
+ OutputStream sec = beginSection(raw);
+ s.writeDelimitedTo(sec);
+ endSection(sec, SectionName.SNAPSHOT_DIFF);
+ }
+
+ void writeSecretManagerSection() throws IOException {
+ SecretManagerSection.Builder b = SecretManagerSection.newBuilder()
+ .setCurrentId(0)
+ .setTokenSequenceNumber(0);
+ SecretManagerSection s = b.build();
+
+ OutputStream sec = beginSection(raw);
+ s.writeDelimitedTo(sec);
+ endSection(sec, SectionName.SECRET_MANAGER);
+ }
+
+ void writeCacheManagerSection() throws IOException {
+ CacheManagerSection.Builder b = CacheManagerSection.newBuilder()
+ .setNumPools(0)
+ .setNumDirectives(0)
+ .setNextDirectiveId(1);
+ CacheManagerSection s = b.build();
+
+ OutputStream sec = beginSection(raw);
+ s.writeDelimitedTo(sec);
+ endSection(sec, SectionName.CACHE_MANAGER);
+ }
+
+ void writeStringTableSection() throws IOException {
+ StringTableSection.Builder b = StringTableSection.newBuilder();
+ Map<Integer, String> u = ugis.ugiMap();
+ b.setNumEntry(u.size());
+ StringTableSection s = b.build();
+
+ OutputStream sec = beginSection(raw);
+ s.writeDelimitedTo(sec);
+ for (Map.Entry<Integer, String> e : u.entrySet()) {
+ StringTableSection.Entry.Builder x =
+ StringTableSection.Entry.newBuilder()
+ .setId(e.getKey())
+ .setStr(e.getValue());
+ x.build().writeDelimitedTo(sec);
+ }
+ endSection(sec, SectionName.STRING_TABLE);
+ }
+
+ @Override
+ public synchronized String toString() {
+ StringBuilder sb = new StringBuilder();
+ sb.append("{ codec=\"").append(compress.getImageCodec());
+ sb.append("\", startBlock=").append(startBlock);
+ sb.append(", curBlock=").append(curBlock);
+ sb.append(", startInode=").append(startInode);
+ sb.append(", curInode=").append(curInode);
+ sb.append(", ugi=").append(ugis);
+ sb.append(", blockIds=").append(blockIds);
+ sb.append(", offset=").append(raw.pos);
+ sb.append(" }");
+ return sb.toString();
+ }
+
+ static class TrackedOutputStream<T extends OutputStream>
+ extends FilterOutputStream {
+
+ private long pos = 0L;
+
+ TrackedOutputStream(T out) {
+ super(out);
+ }
+
+ @SuppressWarnings("unchecked")
+ public T getInner() {
+ return (T) out;
+ }
+
+ @Override
+ public void write(int b) throws IOException {
+ out.write(b);
+ ++pos;
+ }
+
+ @Override
+ public void write(byte[] b) throws IOException {
+ write(b, 0, b.length);
+ }
+
+ @Override
+ public void write(byte[] b, int off, int len) throws IOException {
+ out.write(b, off, len);
+ pos += len;
+ }
+
+ @Override
+ public void flush() throws IOException {
+ super.flush();
+ }
+
+ @Override
+ public void close() throws IOException {
+ super.close();
+ }
+
+ }
+
+ /**
+ * Configurable options for image generation mapping pluggable components.
+ */
+ public static class Options implements Configurable {
+
+ public static final String START_INODE = "hdfs.image.writer.start.inode";
+ public static final String CACHE_ENTRY = "hdfs.image.writer.cache.entries";
+ public static final String UGI_CLASS = "hdfs.image.writer.ugi.class";
+ public static final String BLOCK_RESOLVER_CLASS =
+ "hdfs.image.writer.blockresolver.class";
+
+ private Path outdir;
+ private Configuration conf;
+ private OutputStream outStream;
+ private int maxdircache;
+ private long startBlock;
+ private long startInode;
+ private UGIResolver ugis;
+ private Class<? extends UGIResolver> ugisClass;
+ private BlockFormat<FileRegion> blocks;
+
+ @SuppressWarnings("rawtypes")
+ private Class<? extends BlockFormat> blockFormatClass;
+ private BlockResolver blockIds;
+ private Class<? extends BlockResolver> blockIdsClass;
+ private FSImageCompression compress =
+ FSImageCompression.createNoopCompression();
+
+ protected Options() {
+ }
+
+ @Override
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ String def = new File("hdfs/name").toURI().toString();
+ outdir = new Path(conf.get(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, def));
+ startBlock = conf.getLong(FixedBlockResolver.START_BLOCK, (1L << 30) + 1);
+ startInode = conf.getLong(START_INODE, (1L << 14) + 1);
+ maxdircache = conf.getInt(CACHE_ENTRY, 100);
+ ugisClass = conf.getClass(UGI_CLASS,
+ SingleUGIResolver.class, UGIResolver.class);
+ blockFormatClass = conf.getClass(
+ DFSConfigKeys.DFS_PROVIDER_BLK_FORMAT_CLASS,
+ NullBlockFormat.class, BlockFormat.class);
+ blockIdsClass = conf.getClass(BLOCK_RESOLVER_CLASS,
+ FixedBlockResolver.class, BlockResolver.class);
+ }
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ public Options output(String out) {
+ this.outdir = new Path(out);
+ return this;
+ }
+
+ public Options outStream(OutputStream outStream) {
+ this.outStream = outStream;
+ return this;
+ }
+
+ public Options codec(String codec) throws IOException {
+ this.compress = FSImageCompression.createCompression(getConf(), codec);
+ return this;
+ }
+
+ public Options cache(int nDirEntries) {
+ this.maxdircache = nDirEntries;
+ return this;
+ }
+
+ public Options ugi(UGIResolver ugis) {
+ this.ugis = ugis;
+ return this;
+ }
+
+ public Options ugi(Class<? extends UGIResolver> ugisClass) {
+ this.ugisClass = ugisClass;
+ return this;
+ }
+
+ public Options blockIds(BlockResolver blockIds) {
+ this.blockIds = blockIds;
+ return this;
+ }
+
+ public Options blockIds(Class<? extends BlockResolver> blockIdsClass) {
+ this.blockIdsClass = blockIdsClass;
+ return this;
+ }
+
+ public Options blocks(BlockFormat<FileRegion> blocks) {
+ this.blocks = blocks;
+ return this;
+ }
+
+ @SuppressWarnings("rawtypes")
+ public Options blocks(Class<? extends BlockFormat> blocksClass) {
+ this.blockFormatClass = blocksClass;
+ return this;
+ }
+
+ }
+
+}
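The section bookkeeping in `endSection` above relies on `TrackedOutputStream` counting every byte that passes through `raw`, so each section's offset is wherever the previous one ended. That interaction can be reproduced with a plain `ByteArrayOutputStream`; the names below (`Tracked`, `Section`, `demo`) are illustrative only and are not Hadoop APIs:

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;

public class SectionSketch {
    // Mirror of ImageWriter.TrackedOutputStream: count bytes as they pass.
    // Overriding write(byte[], int, int) avoids double counting, since
    // FilterOutputStream would otherwise delegate it byte by byte.
    static class Tracked extends FilterOutputStream {
        long pos = 0;
        Tracked(OutputStream out) { super(out); }
        @Override public void write(int b) throws IOException { out.write(b); ++pos; }
        @Override public void write(byte[] b, int off, int len) throws IOException {
            out.write(b, off, len);
            pos += len;
        }
    }

    // (offset, length) pairs recorded the way endSection records them in
    // the FileSummary: each section starts where the previous one ended.
    record Section(String name, long offset, long length) {}

    static List<Section> demo() throws IOException {
        Tracked raw = new Tracked(new ByteArrayOutputStream());
        List<Section> summary = new ArrayList<>();
        raw.write(new byte[8]);            // magic header, not part of any section
        long curSec = raw.pos;
        raw.write(new byte[100]);          // first section body
        summary.add(new Section("NS_INFO", curSec, raw.pos - curSec));
        curSec = raw.pos;
        raw.write(new byte[250]);          // second section body
        summary.add(new Section("INODE", curSec, raw.pos - curSec));
        return summary;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());
    }
}
```

An 8-byte header followed by 100- and 250-byte sections yields offsets 8 and 108, which is exactly how the real writer derives `FileSummary.Section` entries from `raw.pos`.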
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockFormat.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockFormat.java
new file mode 100644
index 0000000..aabdf74
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockFormat.java
@@ -0,0 +1,87 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.NoSuchElementException;
+
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.server.common.BlockFormat;
+import org.apache.hadoop.hdfs.server.common.BlockFormat.Reader.Options;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+
+/**
+ * Null sink for region information emitted from FSImage.
+ */
+public class NullBlockFormat extends BlockFormat<FileRegion> {
+
+ @Override
+ public Reader<FileRegion> getReader(Options opts) throws IOException {
+ return new Reader<FileRegion>() {
+ @Override
+ public Iterator<FileRegion> iterator() {
+ return new Iterator<FileRegion>() {
+ @Override
+ public boolean hasNext() {
+ return false;
+ }
+ @Override
+ public FileRegion next() {
+ throw new NoSuchElementException();
+ }
+ @Override
+ public void remove() {
+ throw new UnsupportedOperationException();
+ }
+ };
+ }
+
+ @Override
+ public void close() throws IOException {
+ // do nothing
+ }
+
+ @Override
+ public FileRegion resolve(Block ident) throws IOException {
+ throw new UnsupportedOperationException();
+ }
+ };
+ }
+
+ @Override
+ public Writer<FileRegion> getWriter(Writer.Options opts) throws IOException {
+ return new Writer<FileRegion>() {
+ @Override
+ public void store(FileRegion token) throws IOException {
+ // do nothing
+ }
+
+ @Override
+ public void close() throws IOException {
+ // do nothing
+ }
+ };
+ }
+
+ @Override
+ public void refresh() throws IOException {
+ // do nothing
+ }
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/SingleUGIResolver.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/SingleUGIResolver.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/SingleUGIResolver.java
new file mode 100644
index 0000000..0fd3f2b
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/SingleUGIResolver.java
@@ -0,0 +1,90 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.security.UserGroupInformation;
+
+/**
+ * Map all owners/groups in external system to a single user in FSImage.
+ */
+public class SingleUGIResolver extends UGIResolver implements Configurable {
+
+ public static final String UID = "hdfs.image.writer.ugi.single.uid";
+ public static final String USER = "hdfs.image.writer.ugi.single.user";
+ public static final String GID = "hdfs.image.writer.ugi.single.gid";
+ public static final String GROUP = "hdfs.image.writer.ugi.single.group";
+
+ private int uid;
+ private int gid;
+ private String user;
+ private String group;
+ private Configuration conf;
+
+ @Override
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ uid = conf.getInt(UID, 0);
+ user = conf.get(USER);
+ if (null == user) {
+ try {
+ user = UserGroupInformation.getCurrentUser().getShortUserName();
+ } catch (IOException e) {
+ user = "hadoop";
+ }
+ }
+ gid = conf.getInt(GID, 1);
+ group = conf.get(GROUP);
+ if (null == group) {
+ group = user;
+ }
+
+ resetUGInfo();
+ addUser(user, uid);
+ addGroup(group, gid);
+ }
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ @Override
+ public String user(FileStatus s) {
+ return user;
+ }
+
+ @Override
+ public String group(FileStatus s) {
+ return group;
+ }
+
+ @Override
+ public void addUser(String name) {
+ //do nothing
+ }
+
+ @Override
+ public void addGroup(String name) {
+ //do nothing
+ }
+}
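The defaulting chain in `setConf` above is worth making explicit: the user falls back to the current process user when unset, and the group falls back to whatever user was chosen. The sketch below isolates that chain with a plain `Map` standing in for Hadoop's `Configuration`; the `SingleUgiSketch.resolve` helper is hypothetical, not part of the patch:

```java
import java.util.Map;

public class SingleUgiSketch {
    // Mirror of SingleUGIResolver.setConf's fallback chain.
    static String[] resolve(Map<String, String> conf, String processUser) {
        String user = conf.get("hdfs.image.writer.ugi.single.user");
        if (user == null) {
            user = processUser;   // fall back to the current process user
        }
        String group = conf.get("hdfs.image.writer.ugi.single.group");
        if (group == null) {
            group = user;         // group defaults to the chosen user
        }
        return new String[] {user, group};
    }
}
```

With an empty configuration both values collapse to the process user, which matches the behavior of the real class when `USER` and `GROUP` are unset.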
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java
new file mode 100644
index 0000000..14e6bed
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreePath.java
@@ -0,0 +1,167 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.io.IOException;
+
+import com.google.protobuf.ByteString;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto;
+import org.apache.hadoop.hdfs.server.common.BlockFormat;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.apache.hadoop.hdfs.server.namenode.FsImageProto.INodeSection.INode;
+import org.apache.hadoop.hdfs.server.namenode.FsImageProto.INodeSection.INodeDirectory;
+import org.apache.hadoop.hdfs.server.namenode.FsImageProto.INodeSection.INodeFile;
+import static org.apache.hadoop.hdfs.DFSUtil.string2Bytes;
+import static org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.DEFAULT_NAMESPACE_QUOTA;
+import static org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.DEFAULT_STORAGE_SPACE_QUOTA;
+
+/**
+ * Traversal cursor in external filesystem.
+ * TODO: generalize, move FS/FileRegion to FSTreePath
+ */
+public class TreePath {
+ private long id = -1;
+ private final long parentId;
+ private final FileStatus stat;
+ private final TreeWalk.TreeIterator i;
+
+ protected TreePath(FileStatus stat, long parentId, TreeWalk.TreeIterator i) {
+ this.i = i;
+ this.stat = stat;
+ this.parentId = parentId;
+ }
+
+ public FileStatus getFileStatus() {
+ return stat;
+ }
+
+ public long getParentId() {
+ return parentId;
+ }
+
+ public long getId() {
+ if (id < 0) {
+ throw new IllegalStateException("inode id not yet assigned; accept() not called");
+ }
+ return id;
+ }
+
+ void accept(long id) {
+ this.id = id;
+ i.onAccept(this, id);
+ }
+
+ public INode toINode(UGIResolver ugi, BlockResolver blk,
+ BlockFormat.Writer<FileRegion> out, String blockPoolID)
+ throws IOException {
+ if (stat.isFile()) {
+ return toFile(ugi, blk, out, blockPoolID);
+ } else if (stat.isDirectory()) {
+ return toDirectory(ugi);
+ } else if (stat.isSymlink()) {
+ throw new UnsupportedOperationException("symlinks not supported");
+ } else {
+ throw new UnsupportedOperationException("Unknown type: " + stat);
+ }
+ }
+
+ @Override
+ public boolean equals(Object other) {
+ if (!(other instanceof TreePath)) {
+ return false;
+ }
+ TreePath o = (TreePath) other;
+ return getParentId() == o.getParentId()
+ && getFileStatus().equals(o.getFileStatus());
+ }
+
+ @Override
+ public int hashCode() {
+ long pId = getParentId() * getFileStatus().hashCode();
+ return (int)(pId ^ (pId >>> 32));
+ }
+
+ void writeBlock(long blockId, long offset, long length,
+ long genStamp, String blockPoolID,
+ BlockFormat.Writer<FileRegion> out) throws IOException {
+ FileStatus s = getFileStatus();
+ out.store(new FileRegion(blockId, s.getPath(), offset, length,
+ blockPoolID, genStamp));
+ }
+
+ INode toFile(UGIResolver ugi, BlockResolver blk,
+ BlockFormat.Writer<FileRegion> out, String blockPoolID)
+ throws IOException {
+ final FileStatus s = getFileStatus();
+ // TODO should this store resolver's user/group?
+ ugi.addUser(s.getOwner());
+ ugi.addGroup(s.getGroup());
+ INodeFile.Builder b = INodeFile.newBuilder()
+ .setReplication(blk.getReplication(s))
+ .setModificationTime(s.getModificationTime())
+ .setAccessTime(s.getAccessTime())
+ .setPreferredBlockSize(blk.preferredBlockSize(s))
+ .setPermission(ugi.resolve(s))
+ .setStoragePolicyID(HdfsConstants.PROVIDED_STORAGE_POLICY_ID);
+ //TODO: storage policy should be configurable per path; use BlockResolver
+ long off = 0L;
+ for (BlockProto block : blk.resolve(s)) {
+ b.addBlocks(block);
+ writeBlock(block.getBlockId(), off, block.getNumBytes(),
+ block.getGenStamp(), blockPoolID, out);
+ off += block.getNumBytes();
+ }
+ INode.Builder ib = INode.newBuilder()
+ .setType(INode.Type.FILE)
+ .setId(id)
+ .setName(ByteString.copyFrom(string2Bytes(s.getPath().getName())))
+ .setFile(b);
+ return ib.build();
+ }
+
+ INode toDirectory(UGIResolver ugi) {
+ final FileStatus s = getFileStatus();
+ ugi.addUser(s.getOwner());
+ ugi.addGroup(s.getGroup());
+ INodeDirectory.Builder b = INodeDirectory.newBuilder()
+ .setModificationTime(s.getModificationTime())
+ .setNsQuota(DEFAULT_NAMESPACE_QUOTA)
+ .setDsQuota(DEFAULT_STORAGE_SPACE_QUOTA)
+ .setPermission(ugi.resolve(s));
+ INode.Builder ib = INode.newBuilder()
+ .setType(INode.Type.DIRECTORY)
+ .setId(id)
+ .setName(ByteString.copyFrom(string2Bytes(s.getPath().getName())))
+ .setDirectory(b);
+ return ib.build();
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder();
+ sb.append("{ stat=\"").append(getFileStatus()).append("\"");
+ sb.append(", id=").append(getId());
+ sb.append(", parentId=").append(getParentId());
+ sb.append(", iterObjId=").append(System.identityHashCode(i));
+ sb.append(" }");
+ return sb.toString();
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreeWalk.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreeWalk.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreeWalk.java
new file mode 100644
index 0000000..7fd26f9
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/TreeWalk.java
@@ -0,0 +1,103 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.util.ArrayDeque;
+import java.util.Deque;
+import java.util.Iterator;
+
+/**
+ * Traversal yielding a hierarchical sequence of paths.
+ */
+public abstract class TreeWalk implements Iterable<TreePath> {
+
+ /**
+ * @param path path to the node being explored.
+ * @param id the id of the node.
+ * @param iterator the {@link TreeIterator} to use.
+ * @return paths representing the children of the current node.
+ */
+ protected abstract Iterable<TreePath> getChildren(
+ TreePath path, long id, TreeWalk.TreeIterator iterator);
+
+ public abstract TreeIterator iterator();
+
+ /**
+ * Enumerator class for hierarchies. Implementations SHOULD support a fork()
+ * operation yielding a subtree of the current cursor.
+ */
+ public abstract class TreeIterator implements Iterator<TreePath> {
+
+ private final Deque<TreePath> pending;
+
+ TreeIterator() {
+ this(new ArrayDeque<TreePath>());
+ }
+
+ protected TreeIterator(Deque<TreePath> pending) {
+ this.pending = pending;
+ }
+
+ public abstract TreeIterator fork();
+
+ @Override
+ public boolean hasNext() {
+ return !pending.isEmpty();
+ }
+
+ @Override
+ public TreePath next() {
+ return pending.removeFirst();
+ }
+
+ @Override
+ public void remove() {
+ throw new UnsupportedOperationException();
+ }
+
+ protected void onAccept(TreePath p, long id) {
+ for (TreePath k : getChildren(p, id, this)) {
+ pending.addFirst(k);
+ }
+ }
+
+ /**
+ * @return the Deque containing the pending paths.
+ */
+ protected Deque<TreePath> getPendingQueue() {
+ return pending;
+ }
+
+ @Override
+ public String toString() {
+ StringBuilder sb = new StringBuilder();
+ sb.append("{ Treewalk=\"").append(TreeWalk.this.toString());
+ sb.append(", pending=[");
+ Iterator<TreePath> i = pending.iterator();
+ if (i.hasNext()) {
+ sb.append("\"").append(i.next()).append("\"");
+ }
+ while (i.hasNext()) {
+ sb.append(", \"").append(i.next()).append("\"");
+ }
+ sb.append("]");
+ sb.append(" }");
+ return sb.toString();
+ }
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/UGIResolver.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/UGIResolver.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/UGIResolver.java
new file mode 100644
index 0000000..2d50668
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/UGIResolver.java
@@ -0,0 +1,131 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.permission.FsPermission;
+
+/**
+ * Pluggable class for mapping ownership and permissions from an external
+ * store to an FSImage.
+ */
+public abstract class UGIResolver {
+
+ static final int USER_STRID_OFFSET = 40;
+ static final int GROUP_STRID_OFFSET = 16;
+ static final long USER_GROUP_STRID_MASK = (1 << 24) - 1;
+
+ /**
+ * Permission is serialized as a 64-bit long. [0:24):[25:48):[48:64) (in Big
+ * Endian).
+ * The first and the second parts are the string ids of the user and
+ * group name, and the last 16 bits are the permission bits.
+ * @param owner name of owner
+ * @param group name of group
+ * @param permission Permission octects
+ * @return FSImage encoding of permissions
+ */
+ protected final long buildPermissionStatus(
+ String owner, String group, short permission) {
+
+ long userId = users.get(owner);
+ if (0L != ((~USER_GROUP_STRID_MASK) & userId)) {
+ throw new IllegalArgumentException("UID must fit in 24 bits");
+ }
+
+ long groupId = groups.get(group);
+ if (0L != ((~USER_GROUP_STRID_MASK) & groupId)) {
+ throw new IllegalArgumentException("GID must fit in 24 bits");
+ }
+ return ((userId & USER_GROUP_STRID_MASK) << USER_STRID_OFFSET)
+ | ((groupId & USER_GROUP_STRID_MASK) << GROUP_STRID_OFFSET)
+ | permission;
+ }
+
+ private final Map<String, Integer> users;
+ private final Map<String, Integer> groups;
+
+ public UGIResolver() {
+ this(new HashMap<String, Integer>(), new HashMap<String, Integer>());
+ }
+
+ UGIResolver(Map<String, Integer> users, Map<String, Integer> groups) {
+ this.users = users;
+ this.groups = groups;
+ }
+
+ public Map<Integer, String> ugiMap() {
+ Map<Integer, String> ret = new HashMap<>();
+ for (Map<String, Integer> m : Arrays.asList(users, groups)) {
+ for (Map.Entry<String, Integer> e : m.entrySet()) {
+ String s = ret.put(e.getValue(), e.getKey());
+ if (s != null) {
+ throw new IllegalStateException("Duplicate mapping: " +
+ e.getValue() + " " + s + " " + e.getKey());
+ }
+ }
+ }
+ return ret;
+ }
+
+ public abstract void addUser(String name);
+
+ protected void addUser(String name, int id) {
+ Integer uid = users.put(name, id);
+ if (uid != null) {
+ throw new IllegalArgumentException("Duplicate mapping: " + name +
+ " " + uid + " " + id);
+ }
+ }
+
+ public abstract void addGroup(String name);
+
+ protected void addGroup(String name, int id) {
+ Integer gid = groups.put(name, id);
+ if (gid != null) {
+ throw new IllegalArgumentException("Duplicate mapping: " + name +
+ " " + gid + " " + id);
+ }
+ }
+
+ protected void resetUGInfo() {
+ users.clear();
+ groups.clear();
+ }
+
+ public long resolve(FileStatus s) {
+ return buildPermissionStatus(user(s), group(s), permission(s).toShort());
+ }
+
+ public String user(FileStatus s) {
+ return s.getOwner();
+ }
+
+ public String group(FileStatus s) {
+ return s.getGroup();
+ }
+
+ public FsPermission permission(FileStatus s) {
+ return s.getPermission();
+ }
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/package-info.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/package-info.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/package-info.java
new file mode 100644
index 0000000..956292e
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/package-info.java
@@ -0,0 +1,23 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Unstable
+package org.apache.hadoop.hdfs.server.namenode;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
new file mode 100644
index 0000000..c82c489
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/RandomTreeWalk.java
@@ -0,0 +1,186 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Random;
+
+import org.apache.hadoop.fs.BlockLocation;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+
+/**
+ * Random, repeatable hierarchy generator.
+ */
+public class RandomTreeWalk extends TreeWalk {
+
+ private final Path root;
+ private final long seed;
+ private final float depth;
+ private final int children;
+ private final Map<Long, Long> mSeed;
+ //private final AtomicLong blockIds = new AtomicLong(1L << 30);
+
+ RandomTreeWalk(long seed) {
+ this(seed, 10);
+ }
+
+ RandomTreeWalk(long seed, int children) {
+ this(seed, children, 0.15f);
+ }
+
+ RandomTreeWalk(long seed, int children, float depth) {
+ this(randomRoot(seed), seed, children, 0.15f);
+ }
+
+ RandomTreeWalk(Path root, long seed, int children, float depth) {
+ this.seed = seed;
+ this.depth = depth;
+ this.children = children;
+ mSeed = Collections.synchronizedMap(new HashMap<Long, Long>());
+ mSeed.put(-1L, seed);
+ this.root = root;
+ }
+
+ static Path randomRoot(long seed) {
+ Random r = new Random(seed);
+ String scheme;
+ do {
+ scheme = genName(r, 3, 5).toLowerCase();
+ } while (Character.isDigit(scheme.charAt(0)));
+ String authority = genName(r, 3, 15).toLowerCase();
+ int port = r.nextInt(1 << 13) + 1000;
+ return new Path(scheme, authority + ":" + port, "/");
+ }
+
+ @Override
+ public TreeIterator iterator() {
+ return new RandomTreeIterator(seed);
+ }
+
+ @Override
+ protected Iterable<TreePath> getChildren(TreePath p, long id,
+ TreeIterator walk) {
+ final FileStatus pFs = p.getFileStatus();
+ if (pFs.isFile()) {
+ return Collections.emptyList();
+ }
+ // seed is f(parent seed, attrib)
+ long cseed = mSeed.get(p.getParentId()) * p.getFileStatus().hashCode();
+ mSeed.put(p.getId(), cseed);
+ Random r = new Random(cseed);
+
+ int nChildren = r.nextInt(children);
+ ArrayList<TreePath> ret = new ArrayList<TreePath>();
+ for (int i = 0; i < nChildren; ++i) {
+ ret.add(new TreePath(genFileStatus(p, r), p.getId(), walk));
+ }
+ return ret;
+ }
+
+ FileStatus genFileStatus(TreePath parent, Random r) {
+ final int blocksize = 128 * (1 << 20);
+ final Path name;
+ final boolean isDir;
+ if (null == parent) {
+ name = root;
+ isDir = true;
+ } else {
+ Path p = parent.getFileStatus().getPath();
+ name = new Path(p, genName(r, 3, 10));
+ isDir = r.nextFloat() < depth;
+ }
+ final long len = isDir ? 0 : r.nextInt(Integer.MAX_VALUE);
+ final int nblocks = 0 == len ? 0 : (((int)((len - 1) / blocksize)) + 1);
+ BlockLocation[] blocks = genBlocks(r, nblocks, blocksize, len);
+ try {
+ return new LocatedFileStatus(new FileStatus(
+ len, /* long length, */
+ isDir, /* boolean isdir, */
+ 1, /* int block_replication, */
+ blocksize, /* long blocksize, */
+ 0L, /* long modification_time, */
+ 0L, /* long access_time, */
+ null, /* FsPermission permission, */
+ "hadoop", /* String owner, */
+ "hadoop", /* String group, */
+ name), /* Path path */
+ blocks);
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ BlockLocation[] genBlocks(Random r, int nblocks, int blocksize, long len) {
+ BlockLocation[] blocks = new BlockLocation[nblocks];
+ if (0 == nblocks) {
+ return blocks;
+ }
+ for (int i = 0; i < nblocks - 1; ++i) {
+ blocks[i] = new BlockLocation(null, null, i * blocksize, blocksize);
+ }
+ blocks[nblocks - 1] = new BlockLocation(null, null,
+ (nblocks - 1) * blocksize,
+ 0 == (len % blocksize) ? blocksize : len % blocksize);
+ return blocks;
+ }
+
+ static String genName(Random r, int min, int max) {
+ int len = r.nextInt(max - min + 1) + min;
+ char[] ret = new char[len];
+ while (len > 0) {
+ int c = r.nextInt() & 0x7F; // restrict to ASCII
+ if (Character.isLetterOrDigit(c)) {
+ ret[--len] = (char) c;
+ }
+ }
+ return new String(ret);
+ }
+
+ class RandomTreeIterator extends TreeIterator {
+
+ RandomTreeIterator() {
+ }
+
+ RandomTreeIterator(long seed) {
+ Random r = new Random(seed);
+ FileStatus iroot = genFileStatus(null, r);
+ getPendingQueue().addFirst(new TreePath(iroot, -1, this));
+ }
+
+ RandomTreeIterator(TreePath p) {
+ getPendingQueue().addFirst(
+ new TreePath(p.getFileStatus(), p.getParentId(), this));
+ }
+
+ @Override
+ public TreeIterator fork() {
+ if (getPendingQueue().isEmpty()) {
+ return new RandomTreeIterator();
+ }
+ return new RandomTreeIterator(getPendingQueue().removeFirst());
+ }
+
+ }
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFixedBlockResolver.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFixedBlockResolver.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFixedBlockResolver.java
new file mode 100644
index 0000000..8b52ffd
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFixedBlockResolver.java
@@ -0,0 +1,121 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.util.Iterator;
+import java.util.Random;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.BlockProto;
+
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TestName;
+import static org.junit.Assert.*;
+
+/**
+ * Validate fixed-size block partitioning.
+ */
+public class TestFixedBlockResolver {
+
+ @Rule public TestName name = new TestName();
+
+ private final FixedBlockResolver blockId = new FixedBlockResolver();
+
+ @Before
+ public void setup() {
+ Configuration conf = new Configuration(false);
+ conf.setLong(FixedBlockResolver.BLOCKSIZE, 512L * (1L << 20));
+ conf.setLong(FixedBlockResolver.START_BLOCK, 512L * (1L << 20));
+ blockId.setConf(conf);
+ System.out.println(name.getMethodName());
+ }
+
+ @Test
+ public void testExactBlock() throws Exception {
+ FileStatus f = file(512, 256);
+ int nblocks = 0;
+ for (BlockProto b : blockId.resolve(f)) {
+ ++nblocks;
+ assertEquals(512L * (1L << 20), b.getNumBytes());
+ }
+ assertEquals(1, nblocks);
+
+ FileStatus g = file(1024, 256);
+ nblocks = 0;
+ for (BlockProto b : blockId.resolve(g)) {
+ ++nblocks;
+ assertEquals(512L * (1L << 20), b.getNumBytes());
+ }
+ assertEquals(2, nblocks);
+
+ FileStatus h = file(5120, 256);
+ nblocks = 0;
+ for (BlockProto b : blockId.resolve(h)) {
+ ++nblocks;
+ assertEquals(512L * (1L << 20), b.getNumBytes());
+ }
+ assertEquals(10, nblocks);
+ }
+
+ @Test
+ public void testEmpty() throws Exception {
+ FileStatus f = file(0, 100);
+ Iterator<BlockProto> b = blockId.resolve(f).iterator();
+ assertTrue(b.hasNext());
+ assertEquals(0, b.next().getNumBytes());
+ assertFalse(b.hasNext());
+ }
+
+ @Test
+ public void testRandomFile() throws Exception {
+ Random r = new Random();
+ long seed = r.nextLong();
+ System.out.println("seed: " + seed);
+ r.setSeed(seed);
+
+ int len = r.nextInt(4096) + 512;
+ int blk = r.nextInt(len - 128) + 128;
+ FileStatus s = file(len, blk);
+ long nbytes = 0;
+ for (BlockProto b : blockId.resolve(s)) {
+ nbytes += b.getNumBytes();
+ assertTrue(512L * (1L << 20) >= b.getNumBytes());
+ }
+ assertEquals(s.getLen(), nbytes);
+ }
+
+ FileStatus file(long lenMB, long blocksizeMB) {
+ Path p = new Path("foo://bar:4344/baz/dingo");
+ return new FileStatus(
+ lenMB * (1 << 20), /* long length, */
+ false, /* boolean isdir, */
+ 1, /* int block_replication, */
+ blocksizeMB * (1 << 20), /* long blocksize, */
+ 0L, /* long modification_time, */
+ 0L, /* long access_time, */
+ null, /* FsPermission permission, */
+ "hadoop", /* String owner, */
+ "hadoop", /* String group, */
+ p); /* Path path */
+ }
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e189df26/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRandomTreeWalk.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRandomTreeWalk.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRandomTreeWalk.java
new file mode 100644
index 0000000..b8e6ac9
--- /dev/null
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestRandomTreeWalk.java
@@ -0,0 +1,130 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.namenode;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.Random;
+import java.util.Set;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TestName;
+import static org.junit.Assert.*;
+
+/**
+ * Validate randomly generated hierarchies, including fork() support in
+ * base class.
+ */
+public class TestRandomTreeWalk {
+
+ @Rule public TestName name = new TestName();
+
+ private Random r = new Random();
+
+ @Before
+ public void setSeed() {
+ long seed = r.nextLong();
+ r.setSeed(seed);
+ System.out.println(name.getMethodName() + " seed: " + seed);
+ }
+
+ @Test
+ public void testRandomTreeWalkRepeat() throws Exception {
+ Set<TreePath> ns = new HashSet<>();
+ final long seed = r.nextLong();
+ RandomTreeWalk t1 = new RandomTreeWalk(seed, 10, .1f);
+ int i = 0;
+ for (TreePath p : t1) {
+ p.accept(i++);
+ assertTrue(ns.add(p));
+ }
+
+ RandomTreeWalk t2 = new RandomTreeWalk(seed, 10, .1f);
+ int j = 0;
+ for (TreePath p : t2) {
+ p.accept(j++);
+ assertTrue(ns.remove(p));
+ }
+ assertTrue(ns.isEmpty());
+ }
+
+ @Test
+ public void testRandomTreeWalkFork() throws Exception {
+ Set<FileStatus> ns = new HashSet<>();
+
+ final long seed = r.nextLong();
+ RandomTreeWalk t1 = new RandomTreeWalk(seed, 10, .15f);
+ int i = 0;
+ for (TreePath p : t1) {
+ p.accept(i++);
+ assertTrue(ns.add(p.getFileStatus()));
+ }
+
+ RandomTreeWalk t2 = new RandomTreeWalk(seed, 10, .15f);
+ int j = 0;
+ ArrayList<TreeWalk.TreeIterator> iters = new ArrayList<>();
+ iters.add(t2.iterator());
+ while (!iters.isEmpty()) {
+ for (TreeWalk.TreeIterator sub = iters.remove(iters.size() - 1);
+ sub.hasNext();) {
+ TreePath p = sub.next();
+ if (0 == (r.nextInt() % 4)) {
+ iters.add(sub.fork());
+ Collections.shuffle(iters, r);
+ }
+ p.accept(j++);
+ assertTrue(ns.remove(p.getFileStatus()));
+ }
+ }
+ assertTrue(ns.isEmpty());
+ }
+
+ @Test
+ public void testRandomRootWalk() throws Exception {
+ Set<FileStatus> ns = new HashSet<>();
+ final long seed = r.nextLong();
+ Path root = new Path("foo://bar:4344/dingos");
+ String sroot = root.toString();
+ int nroot = sroot.length();
+ RandomTreeWalk t1 = new RandomTreeWalk(root, seed, 10, .1f);
+ int i = 0;
+ for (TreePath p : t1) {
+ p.accept(i++);
+ FileStatus stat = p.getFileStatus();
+ assertTrue(ns.add(stat));
+ assertEquals(sroot, stat.getPath().toString().substring(0, nroot));
+ }
+
+ RandomTreeWalk t2 = new RandomTreeWalk(root, seed, 10, .1f);
+ int j = 0;
+ for (TreePath p : t2) {
+ p.accept(j++);
+ FileStatus stat = p.getFileStatus();
+ assertTrue(ns.remove(stat));
+ assertEquals(sroot, stat.getPath().toString().substring(0, nroot));
+ }
+ assertTrue(ns.isEmpty());
+ }
+
+}
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[40/50] [abbrv] hadoop git commit: HDFS-12607. [READ] Even one dead
datanode with PROVIDED storage results in ProvidedStorageInfo being marked as
FAILED
Posted by vi...@apache.org.
HDFS-12607. [READ] Even one dead datanode with PROVIDED storage results in ProvidedStorageInfo being marked as FAILED
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dacc6bc1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dacc6bc1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dacc6bc1
Branch: refs/heads/HDFS-9806
Commit: dacc6bc1d02025404666700e00b706be9547de4f
Parents: 926ead5
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Mon Nov 6 11:05:59 2017 -0800
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:59 2017 -0800
----------------------------------------------------------------------
.../blockmanagement/DatanodeDescriptor.java | 6 ++-
.../blockmanagement/ProvidedStorageMap.java | 40 +++++++++++++-------
.../TestNameNodeProvidedImplementation.java | 40 ++++++++++++++++++++
3 files changed, 71 insertions(+), 15 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/dacc6bc1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
index e3d6582..c17ab4c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java
@@ -455,8 +455,10 @@ public class DatanodeDescriptor extends DatanodeInfo {
totalDfsUsed += report.getDfsUsed();
totalNonDfsUsed += report.getNonDfsUsed();
- if (StorageType.PROVIDED.equals(
- report.getStorage().getStorageType())) {
+ // for PROVIDED storages, do not call updateStorage() unless
+ // DatanodeStorageInfo already exists!
+ if (StorageType.PROVIDED.equals(report.getStorage().getStorageType())
+ && storageMap.get(report.getStorage().getStorageID()) == null) {
continue;
}
DatanodeStorageInfo storage = updateStorage(report.getStorage());
http://git-wip-us.apache.org/repos/asf/hadoop/blob/dacc6bc1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
index a848d50..3d19775 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
@@ -66,7 +66,6 @@ public class ProvidedStorageMap {
// limit to a single provider for now
private RwLock lock;
private BlockManager bm;
- private boolean hasDNs = false;
private BlockAliasMap aliasMap;
private final String storageId;
@@ -123,6 +122,11 @@ public class ProvidedStorageMap {
BlockReportContext context) throws IOException {
if (providedEnabled && storageId.equals(s.getStorageID())) {
if (StorageType.PROVIDED.equals(s.getStorageType())) {
+ if (providedStorageInfo.getState() == State.FAILED
+ && s.getState() == State.NORMAL) {
+ providedStorageInfo.setState(State.NORMAL);
+ LOG.info("Provided storage transitioning to state " + State.NORMAL);
+ }
processProvidedStorageReport(context);
dn.injectStorage(providedStorageInfo);
return providedDescriptor.getProvidedStorage(dn, s);
@@ -135,21 +139,14 @@ public class ProvidedStorageMap {
private void processProvidedStorageReport(BlockReportContext context)
throws IOException {
assert lock.hasWriteLock() : "Not holding write lock";
- if (hasDNs) {
- return;
- }
- if (providedStorageInfo.getBlockReportCount() == 0) {
+ if (providedStorageInfo.getBlockReportCount() == 0
+ || providedDescriptor.activeProvidedDatanodes() == 0) {
LOG.info("Calling process first blk report from storage: "
+ providedStorageInfo);
// first pass; periodic refresh should call bm.processReport
bm.processFirstBlockReport(providedStorageInfo,
new ProvidedBlockList(aliasMap.getReader(null).iterator()));
- } else {
- bm.processReport(providedStorageInfo,
- new ProvidedBlockList(aliasMap.getReader(null).iterator()),
- context);
}
- hasDNs = true;
}
@VisibleForTesting
@@ -167,9 +164,10 @@ public class ProvidedStorageMap {
public void removeDatanode(DatanodeDescriptor dnToRemove) {
if (providedEnabled) {
assert lock.hasWriteLock() : "Not holding write lock";
- int remainingDatanodes = providedDescriptor.remove(dnToRemove);
- if (remainingDatanodes == 0) {
- hasDNs = false;
+ providedDescriptor.remove(dnToRemove);
+ // if all datanodes fail, set the block report count to 0
+ if (providedDescriptor.activeProvidedDatanodes() == 0) {
+ providedStorageInfo.setBlockReportCount(0);
}
}
}
@@ -466,6 +464,22 @@ public class ProvidedStorageMap {
return false;
}
}
+
+ @Override
+ void setState(DatanodeStorage.State state) {
+ if (state == State.FAILED) {
+ // The state should change to FAILED only when there are no active
+ // datanodes with PROVIDED storage.
+ ProvidedDescriptor dn = (ProvidedDescriptor) getDatanodeDescriptor();
+ if (dn.activeProvidedDatanodes() == 0) {
+ LOG.info("Provided storage {} transitioning to state {}",
+ this, State.FAILED);
+ super.setState(state);
+ }
+ } else {
+ super.setState(state);
+ }
+ }
}
/**
* Used to emulate block reports for provided blocks.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/dacc6bc1/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index 2170baa..aae04be 100644
--- a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -492,4 +492,44 @@ public class TestNameNodeProvidedImplementation {
dnInfos[0].getXferAddr());
}
}
+
+ @Test(timeout=300000)
+ public void testTransientDeadDatanodes() throws Exception {
+ createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
+ FixedBlockResolver.class);
+ // 2 Datanodes, 1 PROVIDED and other DISK
+ startCluster(NNDIRPATH, 2, null,
+ new StorageType[][] {
+ {StorageType.PROVIDED},
+ {StorageType.DISK}},
+ false);
+
+ DataNode providedDatanode = cluster.getDataNodes().get(0);
+
+ DFSClient client = new DFSClient(new InetSocketAddress("localhost",
+ cluster.getNameNodePort()), cluster.getConfiguration(0));
+
+ for (int i= 0; i < numFiles; i++) {
+ String filename = "/" + filePrefix + i + fileSuffix;
+
+ DatanodeInfo[] dnInfos = getAndCheckBlockLocations(client, filename, 1);
+ // location should be the provided DN.
+ assertTrue(dnInfos[0].getDatanodeUuid()
+ .equals(providedDatanode.getDatanodeUuid()));
+
+ // NameNode thinks the datanode is down
+ BlockManagerTestUtil.noticeDeadDatanode(
+ cluster.getNameNode(),
+ providedDatanode.getDatanodeId().getXferAddr());
+ cluster.waitActive();
+ cluster.triggerHeartbeats();
+ Thread.sleep(1000);
+
+ // should find the block on the 2nd provided datanode.
+ dnInfos = getAndCheckBlockLocations(client, filename, 1);
+ assertTrue(
+ dnInfos[0].getDatanodeUuid()
+ .equals(providedDatanode.getDatanodeUuid()));
+ }
+ }
}
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
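The provided-storage diff above changes two things: the first-block-report path is re-entered whenever either no report has been processed yet or every PROVIDED datanode has failed, and removing the last provided datanode resets the report count so the next report is treated as a first report. A minimal, hypothetical model of that invariant (all names here are invented stand-ins; the real `ProvidedStorageMap` tracks far more state and delegates to the `BlockManager`):

```java
// Simplified sketch of the guard logic in the diff above. Invented names;
// not the real ProvidedStorageMap API.
public class ProvidedStorageSketch {
    private int blockReportCount = 0; // reports processed for the shared storage
    private int activeDatanodes = 0;  // datanodes currently exposing PROVIDED storage

    void registerDatanode() {
        activeDatanodes++;
    }

    // Mirrors processProvidedStorageReport: redo the full first block report
    // if none has been processed yet OR all provided datanodes have failed.
    boolean needsFirstBlockReport() {
        return blockReportCount == 0 || activeDatanodes == 0;
    }

    void processReport() {
        if (needsFirstBlockReport()) {
            blockReportCount++; // stand-in for bm.processFirstBlockReport(...)
        }
    }

    // Mirrors removeDatanode: once the last provided datanode is gone,
    // reset the count so the next report is handled as a first report.
    void removeDatanode() {
        activeDatanodes--;
        if (activeDatanodes == 0) {
            blockReportCount = 0;
        }
    }

    public static void main(String[] args) {
        ProvidedStorageSketch s = new ProvidedStorageSketch();
        s.registerDatanode();
        s.processReport();
        if (s.needsFirstBlockReport()) throw new AssertionError("report recorded");
        s.removeDatanode(); // last provided DN fails
        if (!s.needsFirstBlockReport()) throw new AssertionError("count was reset");
        System.out.println("ok");
    }
}
```

The same condition guards the `setState(FAILED)` override in the diff: the storage may only transition to FAILED when no active provided datanodes remain.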
[45/50] [abbrv] hadoop git commit: HDFS-12789. [READ] Image generation tool does not close an opened stream
Posted by vi...@apache.org.
HDFS-12789. [READ] Image generation tool does not close an opened stream
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3ed1348e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3ed1348e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3ed1348e
Branch: refs/heads/HDFS-9806
Commit: 3ed1348e320e44c5ffc3d1ea8ca11d2359defaf5
Parents: f0805c8
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Wed Nov 8 10:28:50 2017 -0800
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:59 2017 -0800
----------------------------------------------------------------------
.../hadoop/hdfs/server/namenode/ImageWriter.java | 17 ++++++++++++-----
1 file changed, 12 insertions(+), 5 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3ed1348e/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
index ea1888a..390bb39 100644
--- a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/ImageWriter.java
@@ -165,16 +165,23 @@ public class ImageWriter implements Closeable {
// create directory and inode sections as side-files.
// The details are written to files to avoid keeping them in memory.
- dirsTmp = File.createTempFile("fsimg_dir", null);
- dirsTmp.deleteOnExit();
- dirs = beginSection(new FileOutputStream(dirsTmp));
+ FileOutputStream dirsTmpStream = null;
+ try {
+ dirsTmp = File.createTempFile("fsimg_dir", null);
+ dirsTmp.deleteOnExit();
+ dirsTmpStream = new FileOutputStream(dirsTmp);
+ dirs = beginSection(dirsTmpStream);
+ } catch (IOException e) {
+ IOUtils.cleanupWithLogger(null, raw, dirsTmpStream);
+ throw e;
+ }
+
try {
inodesTmp = File.createTempFile("fsimg_inode", null);
inodesTmp.deleteOnExit();
inodes = new FileOutputStream(inodesTmp);
} catch (IOException e) {
- // appropriate to close raw?
- IOUtils.cleanup(null, raw, dirs);
+ IOUtils.cleanupWithLogger(null, raw, dirsTmpStream, dirs);
throw e;
}
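The ImageWriter fix above follows a common pattern for constructors that open several streams in sequence: if opening a later stream fails, every stream opened so far must be closed before the exception is rethrown, or the file handles leak. A self-contained sketch of that pattern, with a local `closeQuietly` standing in for Hadoop's `IOUtils.cleanupWithLogger` (class and field names are invented for illustration):

```java
import java.io.Closeable;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch of the cleanup-on-failure pattern in the ImageWriter diff above.
public class SideFileSketch implements Closeable {
    private OutputStream dirs;
    private OutputStream inodes;

    public SideFileSketch() throws IOException {
        try {
            File dirsTmp = File.createTempFile("fsimg_dir", null);
            dirsTmp.deleteOnExit();
            dirs = new FileOutputStream(dirsTmp);
        } catch (IOException e) {
            closeQuietly(dirs); // null-safe, mirrors cleanupWithLogger
            throw e;
        }
        try {
            File inodesTmp = File.createTempFile("fsimg_inode", null);
            inodesTmp.deleteOnExit();
            inodes = new FileOutputStream(inodesTmp);
        } catch (IOException e) {
            closeQuietly(dirs); // do not leak the stream opened above
            throw e;
        }
    }

    private static void closeQuietly(Closeable c) {
        if (c == null) return;
        try { c.close(); } catch (IOException ignored) { }
    }

    @Override
    public void close() throws IOException {
        closeQuietly(inodes);
        dirs.close();
    }

    public static void main(String[] args) throws IOException {
        try (SideFileSketch w = new SideFileSketch()) {
            System.out.println("both side-files opened");
        }
    }
}
```

Try-with-resources cannot be used directly in the original because the streams outlive the constructor, which is why the explicit catch-cleanup-rethrow shape is needed.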
[02/50] [abbrv] hadoop git commit: YARN-7495. Improve robustness of the AggregatedLogDeletionService. Contributed by Jonathan Eagles
Posted by vi...@apache.org.
YARN-7495. Improve robustness of the AggregatedLogDeletionService. Contributed by Jonathan Eagles
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5cfaee2e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5cfaee2e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5cfaee2e
Branch: refs/heads/HDFS-9806
Commit: 5cfaee2e6db8b2ac55708de0968ff5539ee3bd76
Parents: 75a3ab8
Author: Jason Lowe <jl...@apache.org>
Authored: Thu Nov 30 12:39:18 2017 -0600
Committer: Jason Lowe <jl...@apache.org>
Committed: Thu Nov 30 12:39:18 2017 -0600
----------------------------------------------------------------------
.../AggregatedLogDeletionService.java | 90 ++++++++++++--------
.../TestAggregatedLogDeletionService.java | 68 +++++++++++++++
2 files changed, 122 insertions(+), 36 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5cfaee2e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogDeletionService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogDeletionService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogDeletionService.java
index a80f9d7..562bd2c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogDeletionService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogDeletionService.java
@@ -85,49 +85,67 @@ public class AggregatedLogDeletionService extends AbstractService {
deleteOldLogDirsFrom(userDirPath, cutoffMillis, fs, rmClient);
}
}
- } catch (IOException e) {
- logIOException("Error reading root log dir this deletion " +
- "attempt is being aborted", e);
+ } catch (Throwable t) {
+ logException("Error reading root log dir this deletion " +
+ "attempt is being aborted", t);
}
LOG.info("aggregated log deletion finished.");
}
private static void deleteOldLogDirsFrom(Path dir, long cutoffMillis,
FileSystem fs, ApplicationClientProtocol rmClient) {
+ FileStatus[] appDirs;
try {
- for(FileStatus appDir : fs.listStatus(dir)) {
- if(appDir.isDirectory() &&
- appDir.getModificationTime() < cutoffMillis) {
- boolean appTerminated =
- isApplicationTerminated(ApplicationId.fromString(appDir
- .getPath().getName()), rmClient);
- if(appTerminated && shouldDeleteLogDir(appDir, cutoffMillis, fs)) {
- try {
- LOG.info("Deleting aggregated logs in "+appDir.getPath());
- fs.delete(appDir.getPath(), true);
- } catch (IOException e) {
- logIOException("Could not delete "+appDir.getPath(), e);
- }
- } else if (!appTerminated){
- try {
- for(FileStatus node: fs.listStatus(appDir.getPath())) {
- if(node.getModificationTime() < cutoffMillis) {
- try {
- fs.delete(node.getPath(), true);
- } catch (IOException ex) {
- logIOException("Could not delete "+appDir.getPath(), ex);
- }
- }
+ appDirs = fs.listStatus(dir);
+ } catch (IOException e) {
+ logException("Could not read the contents of " + dir, e);
+ return;
+ }
+ for (FileStatus appDir : appDirs) {
+ deleteAppDirLogs(cutoffMillis, fs, rmClient, appDir);
+ }
+ }
+
+ private static void deleteAppDirLogs(long cutoffMillis, FileSystem fs,
+ ApplicationClientProtocol rmClient,
+ FileStatus appDir) {
+ try {
+ if (appDir.isDirectory() &&
+ appDir.getModificationTime() < cutoffMillis) {
+ ApplicationId appId = ApplicationId.fromString(
+ appDir.getPath().getName());
+ boolean appTerminated = isApplicationTerminated(appId, rmClient);
+ if (!appTerminated) {
+ // Application is still running
+ FileStatus[] logFiles;
+ try {
+ logFiles = fs.listStatus(appDir.getPath());
+ } catch (IOException e) {
+ logException("Error reading the contents of "
+ + appDir.getPath(), e);
+ return;
+ }
+ for (FileStatus node : logFiles) {
+ if (node.getModificationTime() < cutoffMillis) {
+ try {
+ fs.delete(node.getPath(), true);
+ } catch (IOException ex) {
+ logException("Could not delete " + appDir.getPath(), ex);
}
- } catch(IOException e) {
- logIOException(
- "Error reading the contents of " + appDir.getPath(), e);
}
}
+ } else if (shouldDeleteLogDir(appDir, cutoffMillis, fs)) {
+ // Application is no longer running
+ try {
+ LOG.info("Deleting aggregated logs in " + appDir.getPath());
+ fs.delete(appDir.getPath(), true);
+ } catch (IOException e) {
+ logException("Could not delete " + appDir.getPath(), e);
+ }
}
}
- } catch (IOException e) {
- logIOException("Could not read the contents of " + dir, e);
+ } catch (Exception e) {
+ logException("Could not delete " + appDir.getPath(), e);
}
}
@@ -142,7 +160,7 @@ public class AggregatedLogDeletionService extends AbstractService {
}
}
} catch(IOException e) {
- logIOException("Error reading the contents of " + dir.getPath(), e);
+ logException("Error reading the contents of " + dir.getPath(), e);
shouldDelete = false;
}
return shouldDelete;
@@ -172,14 +190,14 @@ public class AggregatedLogDeletionService extends AbstractService {
}
}
- private static void logIOException(String comment, IOException e) {
- if(e instanceof AccessControlException) {
- String message = e.getMessage();
+ private static void logException(String comment, Throwable t) {
+ if(t instanceof AccessControlException) {
+ String message = t.getMessage();
//TODO fix this after HADOOP-8661
message = message.split("\n")[0];
LOG.warn(comment + " " + message);
} else {
- LOG.error(comment, e);
+ LOG.error(comment, t);
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5cfaee2e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogDeletionService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogDeletionService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogDeletionService.java
index 026996e..4e2d302 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogDeletionService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/logaggregation/TestAggregatedLogDeletionService.java
@@ -385,6 +385,74 @@ public class TestAggregatedLogDeletionService {
deletionSvc.stop();
}
+ @Test
+ public void testRobustLogDeletion() throws Exception {
+ final long RETENTION_SECS = 10 * 24 * 3600;
+
+ String root = "mockfs://foo/";
+ String remoteRootLogDir = root+"tmp/logs";
+ String suffix = "logs";
+ Configuration conf = new Configuration();
+ conf.setClass("fs.mockfs.impl", MockFileSystem.class,
+ FileSystem.class);
+ conf.set(YarnConfiguration.LOG_AGGREGATION_ENABLED, "true");
+ conf.set(YarnConfiguration.LOG_AGGREGATION_RETAIN_SECONDS, "864000");
+ conf.set(YarnConfiguration.LOG_AGGREGATION_RETAIN_CHECK_INTERVAL_SECONDS,
+ "1");
+ conf.set(YarnConfiguration.NM_REMOTE_APP_LOG_DIR, remoteRootLogDir);
+ conf.set(YarnConfiguration.NM_REMOTE_APP_LOG_DIR_SUFFIX, suffix);
+
+ // prevent us from picking up the same mockfs instance from another test
+ FileSystem.closeAll();
+ Path rootPath = new Path(root);
+ FileSystem rootFs = rootPath.getFileSystem(conf);
+ FileSystem mockFs = ((FilterFileSystem)rootFs).getRawFileSystem();
+
+ Path remoteRootLogPath = new Path(remoteRootLogDir);
+
+ Path userDir = new Path(remoteRootLogPath, "me");
+ FileStatus userDirStatus = new FileStatus(0, true, 0, 0, 0, userDir);
+
+ when(mockFs.listStatus(remoteRootLogPath)).thenReturn(
+ new FileStatus[]{userDirStatus});
+
+ Path userLogDir = new Path(userDir, suffix);
+ ApplicationId appId1 =
+ ApplicationId.newInstance(System.currentTimeMillis(), 1);
+ Path app1Dir = new Path(userLogDir, appId1.toString());
+ FileStatus app1DirStatus = new FileStatus(0, true, 0, 0, 0, app1Dir);
+ ApplicationId appId2 =
+ ApplicationId.newInstance(System.currentTimeMillis(), 2);
+ Path app2Dir = new Path(userLogDir, "application_a");
+ FileStatus app2DirStatus = new FileStatus(0, true, 0, 0, 0, app2Dir);
+ ApplicationId appId3 =
+ ApplicationId.newInstance(System.currentTimeMillis(), 3);
+ Path app3Dir = new Path(userLogDir, appId3.toString());
+ FileStatus app3DirStatus = new FileStatus(0, true, 0, 0, 0, app3Dir);
+
+ when(mockFs.listStatus(userLogDir)).thenReturn(
+ new FileStatus[]{app1DirStatus, app2DirStatus, app3DirStatus});
+
+ when(mockFs.listStatus(app1Dir)).thenThrow(
+ new RuntimeException("Should Be Caught and Logged"));
+ Path app3Log3 = new Path(app3Dir, "host1");
+ FileStatus app3Log3Status = new FileStatus(10, false, 1, 1, 0, app3Log3);
+ when(mockFs.listStatus(app3Dir)).thenReturn(
+ new FileStatus[]{app3Log3Status});
+
+ final List<ApplicationId> finishedApplications =
+ Collections.unmodifiableList(Arrays.asList(appId1, appId3));
+
+ ApplicationClientProtocol rmClient =
+ createMockRMClient(finishedApplications, null);
+ AggregatedLogDeletionService.LogDeletionTask deletionTask =
+ new AggregatedLogDeletionService.LogDeletionTask(conf,
+ RETENTION_SECS,
+ rmClient);
+ deletionTask.run();
+ verify(mockFs).delete(app3Dir, true);
+ }
+
static class MockFileSystem extends FilterFileSystem {
MockFileSystem() {
super(mock(FileSystem.class));
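The core of the YARN-7495 refactoring above is isolation: each application directory is processed in its own try/catch, so one bad entry (for example a directory name that fails `ApplicationId.fromString`, which throws an unchecked exception) can no longer abort the entire deletion pass. A minimal sketch of that pattern, with invented directory names and a toy `application_` parse standing in for the real listing and RM lookup:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the per-entry exception isolation in the YARN-7495 diff above.
public class RobustSweepSketch {
    static int deleted = 0;

    static void sweep(List<String> appDirs) {
        for (String dir : appDirs) {
            try {
                deleteAppDir(dir);
            } catch (Exception e) {
                // log and continue -- mirrors logException(...) in the diff;
                // the remaining directories are still processed
                System.err.println("Could not delete " + dir + ": " + e);
            }
        }
    }

    static void deleteAppDir(String dir) {
        if (!dir.startsWith("application_")) {
            // stand-in for ApplicationId.fromString throwing on a bad name
            throw new IllegalArgumentException("Invalid ApplicationId: " + dir);
        }
        deleted++; // stand-in for fs.delete(appDir.getPath(), true)
    }

    public static void main(String[] args) {
        sweep(Arrays.asList("application_1", "bogus-name", "application_2"));
        if (deleted != 2) throw new AssertionError("valid dirs still deleted");
        System.out.println("deleted=" + deleted);
    }
}
```

The accompanying `testRobustLogDeletion` test exercises exactly this: `app1Dir` is mocked to throw a `RuntimeException`, and the test then verifies `app3Dir` is still deleted.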
[16/50] [abbrv] hadoop git commit: MAPREDUCE-5124. AM lacks flow control for task events. Contributed by Peter Bacsko
Posted by vi...@apache.org.
MAPREDUCE-5124. AM lacks flow control for task events. Contributed by Peter Bacsko
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/21d36273
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/21d36273
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/21d36273
Branch: refs/heads/HDFS-9806
Commit: 21d36273551fa45c4130e5523b6724358cf34b1e
Parents: 0faf506
Author: Jason Lowe <jl...@apache.org>
Authored: Fri Dec 1 14:03:01 2017 -0600
Committer: Jason Lowe <jl...@apache.org>
Committed: Fri Dec 1 14:04:25 2017 -0600
----------------------------------------------------------------------
.../hadoop/mapred/TaskAttemptListenerImpl.java | 69 +++-
.../job/event/TaskAttemptStatusUpdateEvent.java | 12 +-
.../v2/app/job/impl/TaskAttemptImpl.java | 20 +-
.../mapred/TestTaskAttemptListenerImpl.java | 315 ++++++++++++-------
.../mapreduce/v2/app/TestFetchFailure.java | 3 +-
.../mapreduce/v2/app/TestMRClientService.java | 4 +-
.../v2/TestSpeculativeExecutionWithMRApp.java | 13 +-
7 files changed, 302 insertions(+), 134 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/21d36273/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java
index 9b6148c..67f8ff0 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java
@@ -22,9 +22,11 @@ import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.Collections;
+import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.atomic.AtomicReference;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
@@ -36,6 +38,7 @@ import org.apache.hadoop.mapreduce.MRJobConfig;
import org.apache.hadoop.mapreduce.TypeConverter;
import org.apache.hadoop.mapreduce.checkpoint.TaskCheckpointID;
import org.apache.hadoop.mapreduce.security.token.JobTokenSecretManager;
+import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptId;
import org.apache.hadoop.mapreduce.v2.api.records.TaskId;
import org.apache.hadoop.mapreduce.v2.app.AppContext;
import org.apache.hadoop.mapreduce.v2.app.TaskAttemptListener;
@@ -58,6 +61,8 @@ import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
+import com.google.common.annotations.VisibleForTesting;
+
/**
* This class is responsible for talking to the task umblical.
* It also converts all the old data structures
@@ -66,7 +71,6 @@ import org.slf4j.LoggerFactory;
* This class HAS to be in this package to access package private
* methods/classes.
*/
-@SuppressWarnings({"unchecked"})
public class TaskAttemptListenerImpl extends CompositeService
implements TaskUmbilicalProtocol, TaskAttemptListener {
@@ -84,6 +88,11 @@ public class TaskAttemptListenerImpl extends CompositeService
private ConcurrentMap<WrappedJvmID, org.apache.hadoop.mapred.Task>
jvmIDToActiveAttemptMap
= new ConcurrentHashMap<WrappedJvmID, org.apache.hadoop.mapred.Task>();
+
+ private ConcurrentMap<TaskAttemptId,
+ AtomicReference<TaskAttemptStatus>> attemptIdToStatus
+ = new ConcurrentHashMap<>();
+
private Set<WrappedJvmID> launchedJVMs = Collections
.newSetFromMap(new ConcurrentHashMap<WrappedJvmID, Boolean>());
@@ -359,6 +368,13 @@ public class TaskAttemptListenerImpl extends CompositeService
org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptId yarnAttemptID =
TypeConverter.toYarn(taskAttemptID);
+ AtomicReference<TaskAttemptStatus> lastStatusRef =
+ attemptIdToStatus.get(yarnAttemptID);
+ if (lastStatusRef == null) {
+ throw new IllegalStateException("Status update was called"
+ + " with illegal TaskAttemptId: " + yarnAttemptID);
+ }
+
AMFeedback feedback = new AMFeedback();
feedback.setTaskFound(true);
@@ -437,9 +453,8 @@ public class TaskAttemptListenerImpl extends CompositeService
// // isn't ever changed by the Task itself.
// taskStatus.getIncludeCounters();
- context.getEventHandler().handle(
- new TaskAttemptStatusUpdateEvent(taskAttemptStatus.id,
- taskAttemptStatus));
+ coalesceStatusUpdate(yarnAttemptID, taskAttemptStatus, lastStatusRef);
+
return feedback;
}
@@ -520,6 +535,8 @@ public class TaskAttemptListenerImpl extends CompositeService
launchedJVMs.add(jvmId);
taskHeartbeatHandler.register(attemptID);
+
+ attemptIdToStatus.put(attemptID, new AtomicReference<>());
}
@Override
@@ -541,6 +558,8 @@ public class TaskAttemptListenerImpl extends CompositeService
//unregister this attempt
taskHeartbeatHandler.unregister(attemptID);
+
+ attemptIdToStatus.remove(attemptID);
}
@Override
@@ -563,4 +582,46 @@ public class TaskAttemptListenerImpl extends CompositeService
preemptionPolicy.setCheckpointID(tid, cid);
}
+ private void coalesceStatusUpdate(TaskAttemptId yarnAttemptID,
+ TaskAttemptStatus taskAttemptStatus,
+ AtomicReference<TaskAttemptStatus> lastStatusRef) {
+ boolean asyncUpdatedNeeded = false;
+ TaskAttemptStatus lastStatus = lastStatusRef.get();
+
+ if (lastStatus == null) {
+ lastStatusRef.set(taskAttemptStatus);
+ asyncUpdatedNeeded = true;
+ } else {
+ List<TaskAttemptId> oldFetchFailedMaps =
+ taskAttemptStatus.fetchFailedMaps;
+
+ // merge fetchFailedMaps from the previous update
+ if (lastStatus.fetchFailedMaps != null) {
+ if (taskAttemptStatus.fetchFailedMaps == null) {
+ taskAttemptStatus.fetchFailedMaps = lastStatus.fetchFailedMaps;
+ } else {
+ taskAttemptStatus.fetchFailedMaps.addAll(lastStatus.fetchFailedMaps);
+ }
+ }
+
+ if (!lastStatusRef.compareAndSet(lastStatus, taskAttemptStatus)) {
+ // update failed - async dispatcher has processed it in the meantime
+ taskAttemptStatus.fetchFailedMaps = oldFetchFailedMaps;
+ lastStatusRef.set(taskAttemptStatus);
+ asyncUpdatedNeeded = true;
+ }
+ }
+
+ if (asyncUpdatedNeeded) {
+ context.getEventHandler().handle(
+ new TaskAttemptStatusUpdateEvent(taskAttemptStatus.id,
+ lastStatusRef));
+ }
+ }
+
+ @VisibleForTesting
+ ConcurrentMap<TaskAttemptId,
+ AtomicReference<TaskAttemptStatus>> getAttemptIdToStatus() {
+ return attemptIdToStatus;
+ }
}
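The coalescing scheme added to `TaskAttemptListenerImpl` above can be summarized as: the RPC thread publishes the latest status into a per-attempt `AtomicReference` and only enqueues a dispatcher event when the slot was empty; the async dispatcher drains the slot with `getAndSet(null)`. A deliberately simplified sketch of that scheme (status is reduced to a `String`, and the real code additionally merges `fetchFailedMaps` across coalesced updates via `compareAndSet`, which is omitted here):

```java
import java.util.concurrent.atomic.AtomicReference;

// Simplified sketch of the status-update coalescing in the diff above.
public class CoalesceSketch {
    private final AtomicReference<String> lastStatus = new AtomicReference<>();
    private int eventsEmitted = 0;

    // Called on every heartbeat (RPC thread). A new dispatcher event is
    // only needed when the dispatcher has already drained the slot.
    void statusUpdate(String status) {
        if (lastStatus.getAndSet(status) == null) {
            eventsEmitted++; // stand-in for context.getEventHandler().handle(...)
        }
    }

    // Called by the async dispatcher when it processes the event; mirrors
    // taskAttemptStatusRef.getAndSet(null) in StatusUpdater.transition.
    String drain() {
        return lastStatus.getAndSet(null);
    }

    public static void main(String[] args) {
        CoalesceSketch c = new CoalesceSketch();
        c.statusUpdate("10%");
        c.statusUpdate("20%"); // coalesced: overwrites the slot, no second event
        if (c.eventsEmitted != 1) throw new AssertionError("updates coalesced");
        if (!"20%".equals(c.drain())) throw new AssertionError("latest wins");
        c.statusUpdate("30%"); // slot empty again, so a new event is emitted
        if (c.eventsEmitted != 2) throw new AssertionError();
        System.out.println("events=" + c.eventsEmitted);
    }
}
```

This bounds the dispatcher queue to at most one pending status event per attempt, which is the flow control the commit message refers to.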
http://git-wip-us.apache.org/repos/asf/hadoop/blob/21d36273/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/event/TaskAttemptStatusUpdateEvent.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/event/TaskAttemptStatusUpdateEvent.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/event/TaskAttemptStatusUpdateEvent.java
index 715f63d..cef4fd0 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/event/TaskAttemptStatusUpdateEvent.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/event/TaskAttemptStatusUpdateEvent.java
@@ -19,6 +19,7 @@
package org.apache.hadoop.mapreduce.v2.app.job.event;
import java.util.List;
+import java.util.concurrent.atomic.AtomicReference;
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.v2.api.records.Phase;
@@ -26,17 +27,16 @@ import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptId;
import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptState;
public class TaskAttemptStatusUpdateEvent extends TaskAttemptEvent {
-
- private TaskAttemptStatus reportedTaskAttemptStatus;
+ private AtomicReference<TaskAttemptStatus> taskAttemptStatusRef;
public TaskAttemptStatusUpdateEvent(TaskAttemptId id,
- TaskAttemptStatus taskAttemptStatus) {
+ AtomicReference<TaskAttemptStatus> taskAttemptStatusRef) {
super(id, TaskAttemptEventType.TA_UPDATE);
- this.reportedTaskAttemptStatus = taskAttemptStatus;
+ this.taskAttemptStatusRef = taskAttemptStatusRef;
}
- public TaskAttemptStatus getReportedTaskAttemptStatus() {
- return reportedTaskAttemptStatus;
+ public AtomicReference<TaskAttemptStatus> getTaskAttemptStatusRef() {
+ return taskAttemptStatusRef;
}
/**
http://git-wip-us.apache.org/repos/asf/hadoop/blob/21d36273/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
index 90e0d21..431128b 100755
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
@@ -37,6 +37,7 @@ import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
@@ -1780,7 +1781,6 @@ public abstract class TaskAttemptImpl implements
taskAttempt.updateProgressSplits();
}
-
static class RequestContainerTransition implements
SingleArcTransition<TaskAttemptImpl, TaskAttemptEvent> {
private final boolean rescheduled;
@@ -1965,6 +1965,7 @@ public abstract class TaskAttemptImpl implements
// register it to TaskAttemptListener so that it can start monitoring it.
taskAttempt.taskAttemptListener
.registerLaunchedTask(taskAttempt.attemptId, taskAttempt.jvmID);
+
//TODO Resolve to host / IP in case of a local address.
InetSocketAddress nodeHttpInetAddr = // TODO: Costly to create sock-addr?
NetUtils.createSocketAddr(taskAttempt.container.getNodeHttpAddress());
@@ -2430,15 +2431,20 @@ public abstract class TaskAttemptImpl implements
}
private static class StatusUpdater
- implements SingleArcTransition<TaskAttemptImpl, TaskAttemptEvent> {
+ implements SingleArcTransition<TaskAttemptImpl, TaskAttemptEvent> {
@SuppressWarnings("unchecked")
@Override
public void transition(TaskAttemptImpl taskAttempt,
TaskAttemptEvent event) {
- // Status update calls don't really change the state of the attempt.
+ TaskAttemptStatusUpdateEvent statusEvent =
+ ((TaskAttemptStatusUpdateEvent)event);
+
+ AtomicReference<TaskAttemptStatus> taskAttemptStatusRef =
+ statusEvent.getTaskAttemptStatusRef();
+
TaskAttemptStatus newReportedStatus =
- ((TaskAttemptStatusUpdateEvent) event)
- .getReportedTaskAttemptStatus();
+ taskAttemptStatusRef.getAndSet(null);
+
// Now switch the information in the reportedStatus
taskAttempt.reportedStatus = newReportedStatus;
taskAttempt.reportedStatus.taskState = taskAttempt.getState();
@@ -2447,12 +2453,10 @@ public abstract class TaskAttemptImpl implements
taskAttempt.eventHandler.handle
(new SpeculatorEvent
(taskAttempt.reportedStatus, taskAttempt.clock.getTime()));
-
taskAttempt.updateProgressSplits();
-
//if fetch failures are present, send the fetch failure event to job
//this only will happen in reduce attempt type
- if (taskAttempt.reportedStatus.fetchFailedMaps != null &&
+ if (taskAttempt.reportedStatus.fetchFailedMaps != null &&
taskAttempt.reportedStatus.fetchFailedMaps.size() > 0) {
String hostname = taskAttempt.container == null ? "UNKNOWN"
: taskAttempt.container.getNodeId().getHost();
http://git-wip-us.apache.org/repos/asf/hadoop/blob/21d36273/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptListenerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptListenerImpl.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptListenerImpl.java
index fa8418a..4ff6fb2 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptListenerImpl.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptListenerImpl.java
@@ -24,6 +24,8 @@ import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.atomic.AtomicReference;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
@@ -35,6 +37,7 @@ import org.apache.hadoop.mapreduce.checkpoint.FSCheckpointID;
import org.apache.hadoop.mapreduce.checkpoint.TaskCheckpointID;
import org.apache.hadoop.mapreduce.security.token.JobTokenSecretManager;
import org.apache.hadoop.mapreduce.v2.api.records.JobId;
+import org.apache.hadoop.mapreduce.v2.api.records.Phase;
import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptCompletionEvent;
import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptCompletionEventStatus;
import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptId;
@@ -42,6 +45,8 @@ import org.apache.hadoop.mapreduce.v2.api.records.TaskId;
import org.apache.hadoop.mapreduce.v2.app.AppContext;
import org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler;
import org.apache.hadoop.mapreduce.v2.app.job.Job;
+import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent;
+import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.TaskAttemptStatus;
import org.apache.hadoop.mapreduce.v2.app.rm.preemption.AMPreemptionPolicy;
import org.apache.hadoop.mapreduce.v2.app.rm.preemption.CheckpointAMPreemptionPolicy;
import org.apache.hadoop.mapreduce.v2.app.rm.RMHeartbeatHandler;
@@ -52,12 +57,69 @@ import org.apache.hadoop.yarn.event.EventHandler;
import org.apache.hadoop.yarn.factories.RecordFactory;
import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;
import org.apache.hadoop.yarn.util.SystemClock;
-
+import org.junit.After;
import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.mockito.ArgumentCaptor;
+import org.mockito.Captor;
+import org.mockito.Mock;
+import org.mockito.runners.MockitoJUnitRunner;
+
import static org.junit.Assert.*;
import static org.mockito.Mockito.*;
+/**
+ * Tests the behavior of TaskAttemptListenerImpl.
+ */
+@RunWith(MockitoJUnitRunner.class)
public class TestTaskAttemptListenerImpl {
+ private static final String ATTEMPT1_ID =
+ "attempt_123456789012_0001_m_000001_0";
+ private static final String ATTEMPT2_ID =
+ "attempt_123456789012_0001_m_000002_0";
+
+ private static final TaskAttemptId TASKATTEMPTID1 =
+ TypeConverter.toYarn(TaskAttemptID.forName(ATTEMPT1_ID));
+ private static final TaskAttemptId TASKATTEMPTID2 =
+ TypeConverter.toYarn(TaskAttemptID.forName(ATTEMPT2_ID));
+
+ @Mock
+ private AppContext appCtx;
+
+ @Mock
+ private JobTokenSecretManager secret;
+
+ @Mock
+ private RMHeartbeatHandler rmHeartbeatHandler;
+
+ @Mock
+ private TaskHeartbeatHandler hbHandler;
+
+ @Mock
+ private Dispatcher dispatcher;
+
+ @Mock
+ private Task task;
+
+ @SuppressWarnings("rawtypes")
+ @Mock
+ private EventHandler<Event> ea;
+
+ @SuppressWarnings("rawtypes")
+ @Captor
+ private ArgumentCaptor<Event> eventCaptor;
+
+ private CheckpointAMPreemptionPolicy policy;
+ private JVMId id;
+ private WrappedJvmID wid;
+ private TaskAttemptID attemptID;
+ private TaskAttemptId attemptId;
+ private ReduceTaskStatus firstReduceStatus;
+ private ReduceTaskStatus secondReduceStatus;
+ private ReduceTaskStatus thirdReduceStatus;
+
+ private MockTaskAttemptListenerImpl listener;
+
public static class MockTaskAttemptListenerImpl
extends TaskAttemptListenerImpl {
@@ -93,34 +155,24 @@ public class TestTaskAttemptListenerImpl {
//Empty
}
}
-
+
+ @After
+ public void after() throws IOException {
+ if (listener != null) {
+ listener.close();
+ listener = null;
+ }
+ }
+
@Test (timeout=5000)
public void testGetTask() throws IOException {
- AppContext appCtx = mock(AppContext.class);
- JobTokenSecretManager secret = mock(JobTokenSecretManager.class);
- RMHeartbeatHandler rmHeartbeatHandler =
- mock(RMHeartbeatHandler.class);
- TaskHeartbeatHandler hbHandler = mock(TaskHeartbeatHandler.class);
- Dispatcher dispatcher = mock(Dispatcher.class);
- @SuppressWarnings("unchecked")
- EventHandler<Event> ea = mock(EventHandler.class);
- when(dispatcher.getEventHandler()).thenReturn(ea);
-
- when(appCtx.getEventHandler()).thenReturn(ea);
- CheckpointAMPreemptionPolicy policy = new CheckpointAMPreemptionPolicy();
- policy.init(appCtx);
- MockTaskAttemptListenerImpl listener =
- new MockTaskAttemptListenerImpl(appCtx, secret,
- rmHeartbeatHandler, hbHandler, policy);
- Configuration conf = new Configuration();
- listener.init(conf);
- listener.start();
- JVMId id = new JVMId("foo",1, true, 1);
- WrappedJvmID wid = new WrappedJvmID(id.getJobId(), id.isMap, id.getId());
+ configureMocks();
+ startListener(false);
// Verify ask before registration.
//The JVM ID has not been registered yet so we should kill it.
JvmContext context = new JvmContext();
+
context.jvmId = id;
JvmTask result = listener.getTask(context);
assertNotNull(result);
@@ -128,20 +180,18 @@ public class TestTaskAttemptListenerImpl {
// Verify ask after registration but before launch.
// Don't kill, should be null.
- TaskAttemptId attemptID = mock(TaskAttemptId.class);
- Task task = mock(Task.class);
//Now put a task with the ID
listener.registerPendingTask(task, wid);
result = listener.getTask(context);
assertNull(result);
// Unregister for more testing.
- listener.unregister(attemptID, wid);
+ listener.unregister(attemptId, wid);
// Verify ask after registration and launch
//Now put a task with the ID
listener.registerPendingTask(task, wid);
- listener.registerLaunchedTask(attemptID, wid);
- verify(hbHandler).register(attemptID);
+ listener.registerLaunchedTask(attemptId, wid);
+ verify(hbHandler).register(attemptId);
result = listener.getTask(context);
assertNotNull(result);
assertFalse(result.shouldDie);
@@ -152,15 +202,13 @@ public class TestTaskAttemptListenerImpl {
assertNotNull(result);
assertTrue(result.shouldDie);
- listener.unregister(attemptID, wid);
+ listener.unregister(attemptId, wid);
// Verify after unregistration.
result = listener.getTask(context);
assertNotNull(result);
assertTrue(result.shouldDie);
- listener.stop();
-
// test JVMID
JVMId jvmid = JVMId.forName("jvm_001_002_m_004");
assertNotNull(jvmid);
@@ -206,20 +254,10 @@ public class TestTaskAttemptListenerImpl {
when(mockJob.getMapAttemptCompletionEvents(2, 100)).thenReturn(
TypeConverter.fromYarn(empty));
- AppContext appCtx = mock(AppContext.class);
+ configureMocks();
when(appCtx.getJob(any(JobId.class))).thenReturn(mockJob);
- JobTokenSecretManager secret = mock(JobTokenSecretManager.class);
- RMHeartbeatHandler rmHeartbeatHandler =
- mock(RMHeartbeatHandler.class);
- final TaskHeartbeatHandler hbHandler = mock(TaskHeartbeatHandler.class);
- Dispatcher dispatcher = mock(Dispatcher.class);
- @SuppressWarnings("unchecked")
- EventHandler<Event> ea = mock(EventHandler.class);
- when(dispatcher.getEventHandler()).thenReturn(ea);
- when(appCtx.getEventHandler()).thenReturn(ea);
- CheckpointAMPreemptionPolicy policy = new CheckpointAMPreemptionPolicy();
- policy.init(appCtx);
- TaskAttemptListenerImpl listener = new MockTaskAttemptListenerImpl(
+
+ listener = new MockTaskAttemptListenerImpl(
appCtx, secret, rmHeartbeatHandler, policy) {
@Override
protected void registerHeartbeatHandler(Configuration conf) {
@@ -262,26 +300,17 @@ public class TestTaskAttemptListenerImpl {
public void testCommitWindow() throws IOException {
SystemClock clock = SystemClock.getInstance();
+ configureMocks();
+
org.apache.hadoop.mapreduce.v2.app.job.Task mockTask =
mock(org.apache.hadoop.mapreduce.v2.app.job.Task.class);
when(mockTask.canCommit(any(TaskAttemptId.class))).thenReturn(true);
Job mockJob = mock(Job.class);
when(mockJob.getTask(any(TaskId.class))).thenReturn(mockTask);
- AppContext appCtx = mock(AppContext.class);
when(appCtx.getJob(any(JobId.class))).thenReturn(mockJob);
when(appCtx.getClock()).thenReturn(clock);
- JobTokenSecretManager secret = mock(JobTokenSecretManager.class);
- RMHeartbeatHandler rmHeartbeatHandler =
- mock(RMHeartbeatHandler.class);
- final TaskHeartbeatHandler hbHandler = mock(TaskHeartbeatHandler.class);
- Dispatcher dispatcher = mock(Dispatcher.class);
- @SuppressWarnings("unchecked")
- EventHandler<Event> ea = mock(EventHandler.class);
- when(dispatcher.getEventHandler()).thenReturn(ea);
- when(appCtx.getEventHandler()).thenReturn(ea);
- CheckpointAMPreemptionPolicy policy = new CheckpointAMPreemptionPolicy();
- policy.init(appCtx);
- TaskAttemptListenerImpl listener = new MockTaskAttemptListenerImpl(
+
+ listener = new MockTaskAttemptListenerImpl(
appCtx, secret, rmHeartbeatHandler, policy) {
@Override
protected void registerHeartbeatHandler(Configuration conf) {
@@ -300,44 +329,29 @@ public class TestTaskAttemptListenerImpl {
verify(mockTask, never()).canCommit(any(TaskAttemptId.class));
// verify commit allowed when RM heartbeat is recent
- when(rmHeartbeatHandler.getLastHeartbeatTime()).thenReturn(clock.getTime());
+ when(rmHeartbeatHandler.getLastHeartbeatTime())
+ .thenReturn(clock.getTime());
canCommit = listener.canCommit(tid);
assertTrue(canCommit);
verify(mockTask, times(1)).canCommit(any(TaskAttemptId.class));
-
- listener.stop();
}
@Test
public void testCheckpointIDTracking()
throws IOException, InterruptedException{
-
SystemClock clock = SystemClock.getInstance();
+ configureMocks();
+
org.apache.hadoop.mapreduce.v2.app.job.Task mockTask =
mock(org.apache.hadoop.mapreduce.v2.app.job.Task.class);
when(mockTask.canCommit(any(TaskAttemptId.class))).thenReturn(true);
Job mockJob = mock(Job.class);
when(mockJob.getTask(any(TaskId.class))).thenReturn(mockTask);
-
- Dispatcher dispatcher = mock(Dispatcher.class);
- @SuppressWarnings("unchecked")
- EventHandler<Event> ea = mock(EventHandler.class);
- when(dispatcher.getEventHandler()).thenReturn(ea);
-
- RMHeartbeatHandler rmHeartbeatHandler =
- mock(RMHeartbeatHandler.class);
-
- AppContext appCtx = mock(AppContext.class);
when(appCtx.getJob(any(JobId.class))).thenReturn(mockJob);
when(appCtx.getClock()).thenReturn(clock);
- when(appCtx.getEventHandler()).thenReturn(ea);
- JobTokenSecretManager secret = mock(JobTokenSecretManager.class);
- final TaskHeartbeatHandler hbHandler = mock(TaskHeartbeatHandler.class);
- when(appCtx.getEventHandler()).thenReturn(ea);
- CheckpointAMPreemptionPolicy policy = new CheckpointAMPreemptionPolicy();
- policy.init(appCtx);
- TaskAttemptListenerImpl listener = new MockTaskAttemptListenerImpl(
+
+ listener = new MockTaskAttemptListenerImpl(
appCtx, secret, rmHeartbeatHandler, policy) {
@Override
protected void registerHeartbeatHandler(Configuration conf) {
@@ -387,42 +401,13 @@ public class TestTaskAttemptListenerImpl {
//assert it worked
assert outcid == incid;
-
- listener.stop();
-
}
- @SuppressWarnings("rawtypes")
@Test
public void testStatusUpdateProgress()
throws IOException, InterruptedException {
- AppContext appCtx = mock(AppContext.class);
- JobTokenSecretManager secret = mock(JobTokenSecretManager.class);
- RMHeartbeatHandler rmHeartbeatHandler =
- mock(RMHeartbeatHandler.class);
- TaskHeartbeatHandler hbHandler = mock(TaskHeartbeatHandler.class);
- Dispatcher dispatcher = mock(Dispatcher.class);
- @SuppressWarnings("unchecked")
- EventHandler<Event> ea = mock(EventHandler.class);
- when(dispatcher.getEventHandler()).thenReturn(ea);
-
- when(appCtx.getEventHandler()).thenReturn(ea);
- CheckpointAMPreemptionPolicy policy = new CheckpointAMPreemptionPolicy();
- policy.init(appCtx);
- MockTaskAttemptListenerImpl listener =
- new MockTaskAttemptListenerImpl(appCtx, secret,
- rmHeartbeatHandler, hbHandler, policy);
- Configuration conf = new Configuration();
- listener.init(conf);
- listener.start();
- JVMId id = new JVMId("foo",1, true, 1);
- WrappedJvmID wid = new WrappedJvmID(id.getJobId(), id.isMap, id.getId());
-
- TaskAttemptID attemptID = new TaskAttemptID("1", 1, TaskType.MAP, 1, 1);
- TaskAttemptId attemptId = TypeConverter.toYarn(attemptID);
- Task task = mock(Task.class);
- listener.registerPendingTask(task, wid);
- listener.registerLaunchedTask(attemptId, wid);
+ configureMocks();
+ startListener(true);
verify(hbHandler).register(attemptId);
// make sure a ping doesn't report progress
@@ -437,6 +422,116 @@ public class TestTaskAttemptListenerImpl {
feedback = listener.statusUpdate(attemptID, mockStatus);
assertTrue(feedback.getTaskFound());
verify(hbHandler).progressing(eq(attemptId));
- listener.close();
+ }
+
+ @Test
+ public void testSingleStatusUpdate()
+ throws IOException, InterruptedException {
+ configureMocks();
+ startListener(true);
+
+ listener.statusUpdate(attemptID, firstReduceStatus);
+
+ verify(ea).handle(eventCaptor.capture());
+ TaskAttemptStatusUpdateEvent updateEvent =
+ (TaskAttemptStatusUpdateEvent) eventCaptor.getValue();
+
+ TaskAttemptStatus status = updateEvent.getTaskAttemptStatusRef().get();
+ assertTrue(status.fetchFailedMaps.contains(TASKATTEMPTID1));
+ assertEquals(1, status.fetchFailedMaps.size());
+ assertEquals(Phase.SHUFFLE, status.phase);
+ }
+
+ @Test
+ public void testStatusUpdateEventCoalescing()
+ throws IOException, InterruptedException {
+ configureMocks();
+ startListener(true);
+
+ listener.statusUpdate(attemptID, firstReduceStatus);
+ listener.statusUpdate(attemptID, secondReduceStatus);
+
+ verify(ea).handle(any(Event.class));
+ ConcurrentMap<TaskAttemptId,
+ AtomicReference<TaskAttemptStatus>> attemptIdToStatus =
+ listener.getAttemptIdToStatus();
+ TaskAttemptStatus status = attemptIdToStatus.get(attemptId).get();
+
+ assertTrue(status.fetchFailedMaps.contains(TASKATTEMPTID1));
+ assertTrue(status.fetchFailedMaps.contains(TASKATTEMPTID2));
+ assertEquals(2, status.fetchFailedMaps.size());
+ assertEquals(Phase.SORT, status.phase);
+ }
+
+ @Test
+ public void testCoalescedStatusUpdatesCleared()
+ throws IOException, InterruptedException {
+ // First two events are coalesced, the third is not
+ configureMocks();
+ startListener(true);
+
+ listener.statusUpdate(attemptID, firstReduceStatus);
+ listener.statusUpdate(attemptID, secondReduceStatus);
+ ConcurrentMap<TaskAttemptId,
+ AtomicReference<TaskAttemptStatus>> attemptIdToStatus =
+ listener.getAttemptIdToStatus();
+ attemptIdToStatus.get(attemptId).set(null);
+ listener.statusUpdate(attemptID, thirdReduceStatus);
+
+ verify(ea, times(2)).handle(eventCaptor.capture());
+ TaskAttemptStatusUpdateEvent updateEvent =
+ (TaskAttemptStatusUpdateEvent) eventCaptor.getValue();
+
+ TaskAttemptStatus status = updateEvent.getTaskAttemptStatusRef().get();
+ assertNull(status.fetchFailedMaps);
+ assertEquals(Phase.REDUCE, status.phase);
+ }
+
+ @Test(expected = IllegalStateException.class)
+ public void testStatusUpdateFromUnregisteredTask()
+ throws IOException, InterruptedException{
+ configureMocks();
+ startListener(false);
+
+ listener.statusUpdate(attemptID, firstReduceStatus);
+ }
+
+ private void configureMocks() {
+ firstReduceStatus = new ReduceTaskStatus(attemptID, 0.0f, 1,
+ TaskStatus.State.RUNNING, "", "RUNNING", "", TaskStatus.Phase.SHUFFLE,
+ new Counters());
+ firstReduceStatus.addFetchFailedMap(TaskAttemptID.forName(ATTEMPT1_ID));
+
+ secondReduceStatus = new ReduceTaskStatus(attemptID, 0.0f, 1,
+ TaskStatus.State.RUNNING, "", "RUNNING", "", TaskStatus.Phase.SORT,
+ new Counters());
+ secondReduceStatus.addFetchFailedMap(TaskAttemptID.forName(ATTEMPT2_ID));
+
+ thirdReduceStatus = new ReduceTaskStatus(attemptID, 0.0f, 1,
+ TaskStatus.State.RUNNING, "", "RUNNING", "",
+ TaskStatus.Phase.REDUCE, new Counters());
+
+ when(dispatcher.getEventHandler()).thenReturn(ea);
+ when(appCtx.getEventHandler()).thenReturn(ea);
+ policy = new CheckpointAMPreemptionPolicy();
+ policy.init(appCtx);
+ listener = new MockTaskAttemptListenerImpl(appCtx, secret,
+ rmHeartbeatHandler, hbHandler, policy);
+ id = new JVMId("foo", 1, true, 1);
+ wid = new WrappedJvmID(id.getJobId(), id.isMap, id.getId());
+ attemptID = new TaskAttemptID("1", 1, TaskType.MAP, 1, 1);
+ attemptId = TypeConverter.toYarn(attemptID);
+ }
+
+ private void startListener(boolean registerTask) {
+ Configuration conf = new Configuration();
+
+ listener.init(conf);
+ listener.start();
+
+ if (registerTask) {
+ listener.registerPendingTask(task, wid);
+ listener.registerLaunchedTask(attemptId, wid);
+ }
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/21d36273/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFetchFailure.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFetchFailure.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFetchFailure.java
index cb2a29e..67a8901 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFetchFailure.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestFetchFailure.java
@@ -23,6 +23,7 @@ import static org.junit.Assert.assertEquals;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
+import java.util.concurrent.atomic.AtomicReference;
import com.google.common.base.Supplier;
import org.apache.hadoop.conf.Configuration;
@@ -442,7 +443,7 @@ public class TestFetchFailure {
status.stateString = "OK";
status.taskState = attempt.getState();
TaskAttemptStatusUpdateEvent event = new TaskAttemptStatusUpdateEvent(attempt.getID(),
- status);
+ new AtomicReference<>(status));
app.getContext().getEventHandler().handle(event);
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/21d36273/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRClientService.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRClientService.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRClientService.java
index 77f9a09..ca3c28c 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRClientService.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRClientService.java
@@ -24,6 +24,7 @@ import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import java.util.Iterator;
import java.util.List;
+import java.util.concurrent.atomic.AtomicReference;
import org.junit.Assert;
@@ -103,7 +104,8 @@ public class TestMRClientService {
taskAttemptStatus.phase = Phase.MAP;
// send the status update
app.getContext().getEventHandler().handle(
- new TaskAttemptStatusUpdateEvent(attempt.getID(), taskAttemptStatus));
+ new TaskAttemptStatusUpdateEvent(attempt.getID(),
+ new AtomicReference<>(taskAttemptStatus)));
//verify that all object are fully populated by invoking RPCs.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/21d36273/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestSpeculativeExecutionWithMRApp.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestSpeculativeExecutionWithMRApp.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestSpeculativeExecutionWithMRApp.java
index e8003c0..de171c7 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestSpeculativeExecutionWithMRApp.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestSpeculativeExecutionWithMRApp.java
@@ -22,6 +22,7 @@ import java.util.Collection;
import java.util.Iterator;
import java.util.Map;
import java.util.Random;
+import java.util.concurrent.atomic.AtomicReference;
import org.junit.Assert;
import org.apache.hadoop.conf.Configuration;
@@ -84,7 +85,8 @@ public class TestSpeculativeExecutionWithMRApp {
createTaskAttemptStatus(taskAttempt.getKey(), (float) 0.8,
TaskAttemptState.RUNNING);
TaskAttemptStatusUpdateEvent event =
- new TaskAttemptStatusUpdateEvent(taskAttempt.getKey(), status);
+ new TaskAttemptStatusUpdateEvent(taskAttempt.getKey(),
+ new AtomicReference<>(status));
appEventHandler.handle(event);
}
}
@@ -155,7 +157,8 @@ public class TestSpeculativeExecutionWithMRApp {
createTaskAttemptStatus(taskAttempt.getKey(), (float) 0.5,
TaskAttemptState.RUNNING);
TaskAttemptStatusUpdateEvent event =
- new TaskAttemptStatusUpdateEvent(taskAttempt.getKey(), status);
+ new TaskAttemptStatusUpdateEvent(taskAttempt.getKey(),
+ new AtomicReference<>(status));
appEventHandler.handle(event);
}
}
@@ -180,7 +183,8 @@ public class TestSpeculativeExecutionWithMRApp {
TaskAttemptState.RUNNING);
speculatedTask = task.getValue();
TaskAttemptStatusUpdateEvent event =
- new TaskAttemptStatusUpdateEvent(taskAttempt.getKey(), status);
+ new TaskAttemptStatusUpdateEvent(taskAttempt.getKey(),
+ new AtomicReference<>(status));
appEventHandler.handle(event);
}
}
@@ -195,7 +199,8 @@ public class TestSpeculativeExecutionWithMRApp {
createTaskAttemptStatus(taskAttempt.getKey(), (float) 0.75,
TaskAttemptState.RUNNING);
TaskAttemptStatusUpdateEvent event =
- new TaskAttemptStatusUpdateEvent(taskAttempt.getKey(), status);
+ new TaskAttemptStatusUpdateEvent(taskAttempt.getKey(),
+ new AtomicReference<>(status));
appEventHandler.handle(event);
}
}
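The MapReduce diffs above change `TaskAttemptStatusUpdateEvent` to carry an `AtomicReference<TaskAttemptStatus>` held in a `ConcurrentMap`, so that rapid heartbeat updates coalesce into a single in-flight event. A minimal sketch of that scheme (class and field names here are illustrative stand-ins, not the actual Hadoop classes):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical model of the coalescing pattern the commit introduces.
public class CoalescingSketch {
    static final ConcurrentMap<String, AtomicReference<String>> pending =
        new ConcurrentHashMap<>();
    static int eventsDispatched = 0;

    // Called on every status update from a task attempt.
    static void statusUpdate(String attemptId, String status) {
        AtomicReference<String> ref =
            pending.computeIfAbsent(attemptId, k -> new AtomicReference<>());
        // If no event is in flight (previous value was null), dispatch one;
        // otherwise the in-flight event will observe the latest value.
        if (ref.getAndSet(status) == null) {
            eventsDispatched++;  // stand-in for eventHandler.handle(...)
        }
    }

    // Called by the event consumer: take the latest status and clear the slot.
    static String consume(String attemptId) {
        return pending.get(attemptId).getAndSet(null);
    }

    public static void main(String[] args) {
        statusUpdate("a1", "SHUFFLE");  // dispatches an event
        statusUpdate("a1", "SORT");     // coalesced into the pending reference
        if (eventsDispatched != 1) throw new AssertionError();
        if (!"SORT".equals(consume("a1"))) throw new AssertionError();
        statusUpdate("a1", "REDUCE");   // slot was cleared, so a new event fires
        if (eventsDispatched != 2) throw new AssertionError();
        System.out.println("ok");
    }
}
```

This is why `testStatusUpdateEventCoalescing` expects a single `handle(...)` call after two updates, while `testCoalescedStatusUpdatesCleared` sees a second event once the reference has been drained to null.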
[49/50] [abbrv] hadoop git commit: HDFS-12665. [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb)
Posted by vi...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
index 1ef2f2b..faf1f83 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
@@ -28,7 +28,6 @@ import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
import org.apache.hadoop.hdfs.util.RwLock;
import org.junit.Before;
import org.junit.Test;
-
import java.io.IOException;
import static org.junit.Assert.assertNotNull;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestInMemoryLevelDBAliasMapClient.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestInMemoryLevelDBAliasMapClient.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestInMemoryLevelDBAliasMapClient.java
new file mode 100644
index 0000000..4a9661b
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestInMemoryLevelDBAliasMapClient.java
@@ -0,0 +1,341 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.common.blockaliasmap.impl;
+
+import com.google.common.collect.Lists;
+import com.google.common.io.Files;
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation;
+import org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap;
+import org.apache.hadoop.hdfs.server.aliasmap.InMemoryLevelDBAliasMapServer;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.junit.Assert.assertArrayEquals;
+import static org.junit.Assert.assertEquals;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.List;
+import java.util.Optional;
+import java.util.Random;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.stream.Collectors;
+
+/**
+ * Tests the {@link InMemoryLevelDBAliasMapClient}.
+ */
+public class TestInMemoryLevelDBAliasMapClient {
+
+ private InMemoryLevelDBAliasMapServer levelDBAliasMapServer;
+ private InMemoryLevelDBAliasMapClient inMemoryLevelDBAliasMapClient;
+ private File tempDir;
+ private Configuration conf;
+
+ @Before
+ public void setUp() throws IOException {
+ levelDBAliasMapServer =
+ new InMemoryLevelDBAliasMapServer(InMemoryAliasMap::init);
+ conf = new Configuration();
+ int port = 9876;
+
+ conf.set(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_RPC_ADDRESS,
+ "localhost:" + port);
+ tempDir = Files.createTempDir();
+ conf.set(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_LEVELDB_DIR,
+ tempDir.getAbsolutePath());
+ inMemoryLevelDBAliasMapClient = new InMemoryLevelDBAliasMapClient();
+ }
+
+ @After
+ public void tearDown() throws IOException {
+ levelDBAliasMapServer.close();
+ inMemoryLevelDBAliasMapClient.close();
+ FileUtils.deleteDirectory(tempDir);
+ }
+
+ @Test
+ public void writeRead() throws Exception {
+ inMemoryLevelDBAliasMapClient.setConf(conf);
+ levelDBAliasMapServer.setConf(conf);
+ levelDBAliasMapServer.start();
+ Block block = new Block(42, 43, 44);
+ byte[] nonce = "blackbird".getBytes();
+ ProvidedStorageLocation providedStorageLocation
+ = new ProvidedStorageLocation(new Path("cuckoo"),
+ 45, 46, nonce);
+ BlockAliasMap.Writer<FileRegion> writer =
+ inMemoryLevelDBAliasMapClient.getWriter(null);
+ writer.store(new FileRegion(block, providedStorageLocation));
+
+ BlockAliasMap.Reader<FileRegion> reader =
+ inMemoryLevelDBAliasMapClient.getReader(null);
+ Optional<FileRegion> fileRegion = reader.resolve(block);
+ assertEquals(new FileRegion(block, providedStorageLocation),
+ fileRegion.get());
+ }
+
+ @Test
+ public void iterateSingleBatch() throws Exception {
+ inMemoryLevelDBAliasMapClient.setConf(conf);
+ levelDBAliasMapServer.setConf(conf);
+ levelDBAliasMapServer.start();
+ Block block1 = new Block(42, 43, 44);
+ Block block2 = new Block(43, 44, 45);
+ byte[] nonce1 = "blackbird".getBytes();
+ byte[] nonce2 = "cuckoo".getBytes();
+ ProvidedStorageLocation providedStorageLocation1 =
+ new ProvidedStorageLocation(new Path("eagle"),
+ 46, 47, nonce1);
+ ProvidedStorageLocation providedStorageLocation2 =
+ new ProvidedStorageLocation(new Path("falcon"),
+ 46, 47, nonce2);
+ BlockAliasMap.Writer<FileRegion> writer1 =
+ inMemoryLevelDBAliasMapClient.getWriter(null);
+ writer1.store(new FileRegion(block1, providedStorageLocation1));
+ BlockAliasMap.Writer<FileRegion> writer2 =
+ inMemoryLevelDBAliasMapClient.getWriter(null);
+ writer2.store(new FileRegion(block2, providedStorageLocation2));
+
+ BlockAliasMap.Reader<FileRegion> reader =
+ inMemoryLevelDBAliasMapClient.getReader(null);
+ List<FileRegion> actualFileRegions =
+ Lists.newArrayListWithCapacity(2);
+ for (FileRegion fileRegion : reader) {
+ actualFileRegions.add(fileRegion);
+ }
+
+ assertArrayEquals(
+ new FileRegion[] {new FileRegion(block1, providedStorageLocation1),
+ new FileRegion(block2, providedStorageLocation2)},
+ actualFileRegions.toArray());
+ }
+
+ @Test
+ public void iterateThreeBatches() throws Exception {
+ conf.set(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_BATCH_SIZE, "2");
+ levelDBAliasMapServer.setConf(conf);
+ inMemoryLevelDBAliasMapClient.setConf(conf);
+ levelDBAliasMapServer.start();
+ Block block1 = new Block(42, 43, 44);
+ Block block2 = new Block(43, 44, 45);
+ Block block3 = new Block(44, 45, 46);
+ Block block4 = new Block(47, 48, 49);
+ Block block5 = new Block(50, 51, 52);
+ Block block6 = new Block(53, 54, 55);
+ byte[] nonce1 = "blackbird".getBytes();
+ byte[] nonce2 = "cuckoo".getBytes();
+ byte[] nonce3 = "sparrow".getBytes();
+ byte[] nonce4 = "magpie".getBytes();
+ byte[] nonce5 = "seagull".getBytes();
+ byte[] nonce6 = "finch".getBytes();
+ ProvidedStorageLocation providedStorageLocation1 =
+ new ProvidedStorageLocation(new Path("eagle"),
+ 46, 47, nonce1);
+ ProvidedStorageLocation providedStorageLocation2 =
+ new ProvidedStorageLocation(new Path("falcon"),
+ 48, 49, nonce2);
+ ProvidedStorageLocation providedStorageLocation3 =
+ new ProvidedStorageLocation(new Path("robin"),
+ 50, 51, nonce3);
+ ProvidedStorageLocation providedStorageLocation4 =
+ new ProvidedStorageLocation(new Path("parakeet"),
+ 52, 53, nonce4);
+ ProvidedStorageLocation providedStorageLocation5 =
+ new ProvidedStorageLocation(new Path("heron"),
+ 54, 55, nonce5);
+ ProvidedStorageLocation providedStorageLocation6 =
+ new ProvidedStorageLocation(new Path("duck"),
+ 56, 57, nonce6);
+ inMemoryLevelDBAliasMapClient
+ .getWriter(null)
+ .store(new FileRegion(block1, providedStorageLocation1));
+ inMemoryLevelDBAliasMapClient
+ .getWriter(null)
+ .store(new FileRegion(block2, providedStorageLocation2));
+ inMemoryLevelDBAliasMapClient
+ .getWriter(null)
+ .store(new FileRegion(block3, providedStorageLocation3));
+ inMemoryLevelDBAliasMapClient
+ .getWriter(null)
+ .store(new FileRegion(block4, providedStorageLocation4));
+ inMemoryLevelDBAliasMapClient
+ .getWriter(null)
+ .store(new FileRegion(block5, providedStorageLocation5));
+ inMemoryLevelDBAliasMapClient
+ .getWriter(null)
+ .store(new FileRegion(block6, providedStorageLocation6));
+
+ BlockAliasMap.Reader<FileRegion> reader =
+ inMemoryLevelDBAliasMapClient.getReader(null);
+ List<FileRegion> actualFileRegions =
+ Lists.newArrayListWithCapacity(6);
+ for (FileRegion fileRegion : reader) {
+ actualFileRegions.add(fileRegion);
+ }
+
+ FileRegion[] expectedFileRegions =
+ new FileRegion[] {new FileRegion(block1, providedStorageLocation1),
+ new FileRegion(block2, providedStorageLocation2),
+ new FileRegion(block3, providedStorageLocation3),
+ new FileRegion(block4, providedStorageLocation4),
+ new FileRegion(block5, providedStorageLocation5),
+ new FileRegion(block6, providedStorageLocation6)};
+ assertArrayEquals(expectedFileRegions, actualFileRegions.toArray());
+ }
+
+
+ class ReadThread implements Runnable {
+ private final Block block;
+ private final BlockAliasMap.Reader<FileRegion> reader;
+ private int delay;
+ private Optional<FileRegion> fileRegionOpt;
+
+ ReadThread(Block block, BlockAliasMap.Reader<FileRegion> reader,
+ int delay) {
+ this.block = block;
+ this.reader = reader;
+ this.delay = delay;
+ }
+
+ public Optional<FileRegion> getFileRegion() {
+ return fileRegionOpt;
+ }
+
+ @Override
+ public void run() {
+ try {
+ Thread.sleep(delay);
+ fileRegionOpt = reader.resolve(block);
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ } catch (InterruptedException e) {
+ throw new RuntimeException(e);
+ }
+ }
+ }
+
+ class WriteThread implements Runnable {
+ private final Block block;
+ private final BlockAliasMap.Writer<FileRegion> writer;
+ private final ProvidedStorageLocation providedStorageLocation;
+ private int delay;
+
+ WriteThread(Block block, ProvidedStorageLocation providedStorageLocation,
+ BlockAliasMap.Writer<FileRegion> writer, int delay) {
+ this.block = block;
+ this.writer = writer;
+ this.providedStorageLocation = providedStorageLocation;
+ this.delay = delay;
+ }
+
+ @Override
+ public void run() {
+ try {
+ Thread.sleep(delay);
+ writer.store(new FileRegion(block, providedStorageLocation));
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ } catch (InterruptedException e) {
+ throw new RuntimeException(e);
+ }
+ }
+ }
+
+ public FileRegion generateRandomFileRegion(int seed) {
+ Block block = new Block(seed, seed + 1, seed + 2);
+ Path path = new Path("koekoek");
+ byte[] nonce = new byte[0];
+ ProvidedStorageLocation providedStorageLocation =
+ new ProvidedStorageLocation(path, seed + 3, seed + 4, nonce);
+ return new FileRegion(block, providedStorageLocation);
+ }
+
+ @Test
+ public void multipleReads() throws IOException {
+ inMemoryLevelDBAliasMapClient.setConf(conf);
+ levelDBAliasMapServer.setConf(conf);
+ levelDBAliasMapServer.start();
+
+ Random r = new Random();
+ List<FileRegion> expectedFileRegions = r.ints(0, 200)
+ .limit(50)
+ .boxed()
+ .map(i -> generateRandomFileRegion(i))
+ .collect(Collectors.toList());
+
+
+ BlockAliasMap.Reader<FileRegion> reader =
+ inMemoryLevelDBAliasMapClient.getReader(null);
+ BlockAliasMap.Writer<FileRegion> writer =
+ inMemoryLevelDBAliasMapClient.getWriter(null);
+
+ ExecutorService executor = Executors.newCachedThreadPool();
+
+ List<ReadThread> readThreads = expectedFileRegions
+ .stream()
+ .map(fileRegion -> new ReadThread(fileRegion.getBlock(),
+ reader,
+ 4000))
+ .collect(Collectors.toList());
+
+
+ List<? extends Future<?>> readFutures =
+ readThreads.stream()
+ .map(readThread -> executor.submit(readThread))
+ .collect(Collectors.toList());
+
+ List<? extends Future<?>> writeFutures = expectedFileRegions
+ .stream()
+ .map(fileRegion -> new WriteThread(fileRegion.getBlock(),
+ fileRegion.getProvidedStorageLocation(),
+ writer,
+ 1000))
+ .map(writeThread -> executor.submit(writeThread))
+ .collect(Collectors.toList());
+
+ readFutures.stream()
+ .map(readFuture -> {
+ try {
+ return readFuture.get();
+ } catch (InterruptedException e) {
+ throw new RuntimeException(e);
+ } catch (ExecutionException e) {
+ throw new RuntimeException(e);
+ }
+ })
+ .collect(Collectors.toList());
+
+ List<FileRegion> actualFileRegions = readThreads.stream()
+ .map(readThread -> readThread.getFileRegion().get())
+ .collect(Collectors.toList());
+
+ assertThat(actualFileRegions).containsExactlyInAnyOrder(
+ expectedFileRegions.toArray(new FileRegion[0]));
+ }
+}
\ No newline at end of file
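The delayed read/write race exercised by `multipleReads` above can be sketched independently of HDFS. The following is a minimal stand-in, not the real alias-map client: a `ConcurrentHashMap` plays the leveldb-backed map, reads poll with retries while writes land after a deliberate delay, and an `ExecutorService` drives both sides, mirroring the `ReadThread`/`WriteThread` structure in the test.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.stream.*;

public class DelayedReadWriteSketch {

    // Stand-in for the alias map: block id -> storage location string.
    static final ConcurrentMap<Long, String> MAP = new ConcurrentHashMap<>();

    // A read that polls until the (later-scheduled) write lands.
    static Optional<String> readWithRetry(long blockId, int attempts, long sleepMs)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            String v = MAP.get(blockId);
            if (v != null) {
                return Optional.of(v);
            }
            Thread.sleep(sleepMs);
        }
        return Optional.empty();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newCachedThreadPool();
        List<Long> ids = LongStream.range(0, 50).boxed()
            .collect(Collectors.toList());

        // Submit all reads first (they out-wait the writes)...
        List<Future<Optional<String>>> reads = ids.stream()
            .map(id -> executor.submit(() -> readWithRetry(id, 40, 100)))
            .collect(Collectors.toList());

        // ...then the delayed writes, as in the WriteThread above.
        ids.forEach(id -> executor.submit(() -> {
            Thread.sleep(50);                       // writer delay
            MAP.put(id, "region-" + id);
            return null;
        }));

        for (int i = 0; i < ids.size(); i++) {
            Optional<String> got = reads.get(i).get();
            if (!got.isPresent() || !got.get().equals("region-" + ids.get(i))) {
                throw new AssertionError("missing region for block " + ids.get(i));
            }
        }
        executor.shutdown();
        System.out.println("all " + ids.size() + " delayed reads resolved");
    }
}
```

The retry loop is what makes the ordering safe: unlike the fixed 4000ms sleep in `ReadThread`, polling bounds the worst-case wait without racing on a single timer.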
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestLevelDbMockAliasMapClient.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestLevelDbMockAliasMapClient.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestLevelDbMockAliasMapClient.java
new file mode 100644
index 0000000..43fc68c
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TestLevelDbMockAliasMapClient.java
@@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.common.blockaliasmap.impl;
+
+import com.google.common.io.Files;
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.protocol.ProvidedStorageLocation;
+import org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap;
+import org.apache.hadoop.hdfs.server.aliasmap.InMemoryLevelDBAliasMapServer;
+import org.apache.hadoop.hdfs.server.common.FileRegion;
+import org.iq80.leveldb.DBException;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import java.io.File;
+import java.io.IOException;
+
+import static org.assertj.core.api.AssertionsForClassTypes.assertThatExceptionOfType;
+import static org.mockito.Mockito.doThrow;
+import static org.mockito.Mockito.mock;
+
+/**
+ * Tests the in-memory alias map with a mock level-db implementation.
+ */
+public class TestLevelDbMockAliasMapClient {
+ private InMemoryLevelDBAliasMapServer levelDBAliasMapServer;
+ private InMemoryLevelDBAliasMapClient inMemoryLevelDBAliasMapClient;
+ private File tempDir;
+ private Configuration conf;
+ private InMemoryAliasMap aliasMapMock;
+
+ @Before
+ public void setUp() throws IOException {
+ aliasMapMock = mock(InMemoryAliasMap.class);
+ levelDBAliasMapServer = new InMemoryLevelDBAliasMapServer(
+ config -> aliasMapMock);
+ conf = new Configuration();
+ int port = 9877;
+
+ conf.set(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_RPC_ADDRESS,
+ "localhost:" + port);
+ tempDir = Files.createTempDir();
+ conf.set(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_LEVELDB_DIR,
+ tempDir.getAbsolutePath());
+ inMemoryLevelDBAliasMapClient = new InMemoryLevelDBAliasMapClient();
+ inMemoryLevelDBAliasMapClient.setConf(conf);
+ levelDBAliasMapServer.setConf(conf);
+ levelDBAliasMapServer.start();
+ }
+
+ @After
+ public void tearDown() throws IOException {
+ levelDBAliasMapServer.close();
+ inMemoryLevelDBAliasMapClient.close();
+ FileUtils.deleteDirectory(tempDir);
+ }
+
+ @Test
+ public void readFailure() throws Exception {
+ Block block = new Block(42, 43, 44);
+ doThrow(new IOException())
+ .doThrow(new DBException())
+ .when(aliasMapMock)
+ .read(block);
+
+ assertThatExceptionOfType(IOException.class)
+ .isThrownBy(() ->
+ inMemoryLevelDBAliasMapClient.getReader(null).resolve(block));
+
+ assertThatExceptionOfType(IOException.class)
+ .isThrownBy(() ->
+ inMemoryLevelDBAliasMapClient.getReader(null).resolve(block));
+ }
+
+ @Test
+ public void writeFailure() throws IOException {
+ Block block = new Block(42, 43, 44);
+ byte[] nonce = new byte[0];
+ Path path = new Path("koekoek");
+ ProvidedStorageLocation providedStorageLocation =
+ new ProvidedStorageLocation(path, 45, 46, nonce);
+
+ doThrow(new IOException())
+ .when(aliasMapMock)
+ .write(block, providedStorageLocation);
+
+ assertThatExceptionOfType(IOException.class)
+ .isThrownBy(() ->
+ inMemoryLevelDBAliasMapClient.getWriter(null)
+ .store(new FileRegion(block, providedStorageLocation)));
+
+ assertThatExceptionOfType(IOException.class)
+ .isThrownBy(() ->
+ inMemoryLevelDBAliasMapClient.getWriter(null)
+ .store(new FileRegion(block, providedStorageLocation)));
+ }
+
+}
\ No newline at end of file
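The failure-propagation contract checked by `readFailure` above (both a checked `IOException` and an unchecked leveldb `DBException` from the backend must surface to the caller as `IOException`) can be shown without Mockito or AssertJ. This is a dependency-free sketch with a hand-rolled stub; the interface and wrapper here are illustrative, not the real client API.

```java
import java.io.IOException;

public class FailurePropagationSketch {

    interface AliasMap {
        String read(long blockId) throws IOException;
    }

    // Client wrapper that must surface backend failures as IOException,
    // including unchecked errors (the DBException case in the test above).
    static String resolve(AliasMap backend, long blockId) throws IOException {
        try {
            return backend.read(blockId);
        } catch (RuntimeException e) {            // e.g. leveldb DBException
            throw new IOException("alias map read failed", e);
        }
    }

    public static void main(String[] args) {
        AliasMap failsChecked = id -> { throw new IOException("disk error"); };
        AliasMap failsUnchecked = id -> { throw new RuntimeException("db error"); };

        for (AliasMap backend : new AliasMap[] {failsChecked, failsUnchecked}) {
            try {
                resolve(backend, 42L);
                throw new AssertionError("expected IOException");
            } catch (IOException expected) {
                System.out.println("caught: " + expected.getMessage());
            }
        }
    }
}
```

The real test gets the same two-shot behavior from Mockito's `doThrow(...).doThrow(...)` chaining, which queues one throwable per invocation of the stubbed `read`.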
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
index 4190730..8bdbaa4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
@@ -43,6 +43,7 @@ import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
+import java.util.Optional;
import java.util.Set;
import org.apache.commons.io.FileUtils;
@@ -214,7 +215,8 @@ public class TestProvidedImpl {
}
@Override
- public FileRegion resolve(Block ident) throws IOException {
+ public Optional<FileRegion> resolve(Block ident)
+ throws IOException {
return null;
}
};
@@ -232,6 +234,11 @@ public class TestProvidedImpl {
public void refresh() throws IOException {
// do nothing!
}
+
+ @Override
+ public void close() throws IOException {
+ // do nothing
+ }
}
private static Storage.StorageDirectory createLocalStorageDirectory(
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-project/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 04b93c4..b1a90c3 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1336,7 +1336,6 @@
<artifactId>mssql-jdbc</artifactId>
<version>${mssql.version}</version>
</dependency>
-
<dependency>
<groupId>io.swagger</groupId>
<artifactId>swagger-annotations</artifactId>
@@ -1352,7 +1351,12 @@
<artifactId>snakeyaml</artifactId>
<version>${snakeyaml.version}</version>
</dependency>
-
+ <dependency>
+ <groupId>org.assertj</groupId>
+ <artifactId>assertj-core</artifactId>
+ <version>3.8.0</version>
+ <scope>test</scope>
+ </dependency>
</dependencies>
</dependencyManagement>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-tools/hadoop-fs2img/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/pom.xml b/hadoop-tools/hadoop-fs2img/pom.xml
index e1411f8..8661c82 100644
--- a/hadoop-tools/hadoop-fs2img/pom.xml
+++ b/hadoop-tools/hadoop-fs2img/pom.xml
@@ -66,6 +66,12 @@
<artifactId>mockito-all</artifactId>
<scope>test</scope>
</dependency>
+ <dependency>
+ <groupId>org.assertj</groupId>
+ <artifactId>assertj-core</artifactId>
+ <version>3.8.0</version>
+ <scope>test</scope>
+ </dependency>
</dependencies>
<build>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockAliasMap.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockAliasMap.java b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockAliasMap.java
index 4cdf473..63d1f27 100644
--- a/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockAliasMap.java
+++ b/hadoop-tools/hadoop-fs2img/src/main/java/org/apache/hadoop/hdfs/server/namenode/NullBlockAliasMap.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hdfs.server.namenode;
import java.io.IOException;
import java.util.Iterator;
import java.util.NoSuchElementException;
+import java.util.Optional;
import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.server.common.FileRegion;
@@ -57,14 +58,14 @@ public class NullBlockAliasMap extends BlockAliasMap<FileRegion> {
}
@Override
- public FileRegion resolve(Block ident) throws IOException {
+ public Optional<FileRegion> resolve(Block ident) throws IOException {
throw new UnsupportedOperationException();
}
};
}
@Override
- public Writer<FileRegion> getWriter(Writer.Options opts) throws IOException {
+ public Writer getWriter(Writer.Options opts) throws IOException {
return new Writer<FileRegion>() {
@Override
public void store(FileRegion token) throws IOException {
@@ -83,4 +84,8 @@ public class NullBlockAliasMap extends BlockAliasMap<FileRegion> {
// do nothing
}
+ @Override
+ public void close() throws IOException {
+
+ }
}
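The signature change above (`resolve(Block)` now returns `Optional<FileRegion>`) makes "block not present in the alias map" an explicit value rather than a `null` that callers may forget to check. A minimal sketch of the idea, with a plain `Map` and `String` standing in for the alias map and `FileRegion`:

```java
import java.util.*;

public class OptionalResolveSketch {

    // Returning Optional makes a missing block explicit at the call site,
    // instead of a null return that invites NullPointerExceptions.
    static Optional<String> resolve(Map<Long, String> aliasMap, long blockId) {
        return Optional.ofNullable(aliasMap.get(blockId));
    }

    public static void main(String[] args) {
        Map<Long, String> aliasMap = new HashMap<>();
        aliasMap.put(7L, "remote-store/data/part-0007");   // hypothetical path

        System.out.println(resolve(aliasMap, 7L).orElse("<missing>"));
        System.out.println(resolve(aliasMap, 8L).orElse("<missing>"));
    }
}
```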
http://git-wip-us.apache.org/repos/asf/hadoop/blob/36957f0d/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index 09e8f97..70e4c33 100644
--- a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -27,11 +27,13 @@ import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;
+import java.nio.file.Files;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Random;
import java.util.Set;
+import org.apache.commons.io.FileUtils;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
@@ -39,6 +41,7 @@ import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.BlockMissingException;
import org.apache.hadoop.hdfs.DFSClient;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.DFSTestUtil;
@@ -48,6 +51,8 @@ import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;
+import org.apache.hadoop.hdfs.server.aliasmap.InMemoryAliasMap;
+import org.apache.hadoop.hdfs.server.aliasmap.InMemoryLevelDBAliasMapServer;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockManager;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerTestUtil;
@@ -56,6 +61,7 @@ import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStatistics;
import org.apache.hadoop.hdfs.server.blockmanagement.DatanodeStorageInfo;
import org.apache.hadoop.hdfs.server.blockmanagement.ProvidedStorageMap;
import org.apache.hadoop.hdfs.server.common.blockaliasmap.BlockAliasMap;
+import org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.InMemoryLevelDBAliasMapClient;
import org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap;
import org.apache.hadoop.hdfs.server.datanode.DataNode;
@@ -172,16 +178,16 @@ public class TestNameNodeProvidedImplementation {
void createImage(TreeWalk t, Path out,
Class<? extends BlockResolver> blockIdsClass) throws Exception {
- createImage(t, out, blockIdsClass, "");
+ createImage(t, out, blockIdsClass, "", TextFileRegionAliasMap.class);
}
void createImage(TreeWalk t, Path out,
- Class<? extends BlockResolver> blockIdsClass, String clusterID)
- throws Exception {
+ Class<? extends BlockResolver> blockIdsClass, String clusterID,
+ Class<? extends BlockAliasMap> aliasMapClass) throws Exception {
ImageWriter.Options opts = ImageWriter.defaults();
opts.setConf(conf);
opts.output(out.toString())
- .blocks(TextFileRegionAliasMap.class)
+ .blocks(aliasMapClass)
.blockIds(blockIdsClass)
.clusterID(clusterID);
try (ImageWriter w = new ImageWriter(opts)) {
@@ -389,17 +395,8 @@ public class TestNameNodeProvidedImplementation {
return ret;
}
- @Test(timeout=30000)
- public void testBlockRead() throws Exception {
- conf.setClass(ImageWriter.Options.UGI_CLASS,
- FsUGIResolver.class, UGIResolver.class);
- createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
- FixedBlockResolver.class);
- startCluster(NNDIRPATH, 3,
- new StorageType[] {StorageType.PROVIDED, StorageType.DISK}, null,
- false);
+ private void verifyFileSystemContents() throws Exception {
FileSystem fs = cluster.getFileSystem();
- Thread.sleep(2000);
int count = 0;
// read NN metadata, verify contents match
for (TreePath e : new FSTreeWalk(NAMEPATH, conf)) {
@@ -683,7 +680,7 @@ public class TestNameNodeProvidedImplementation {
public void testSetClusterID() throws Exception {
String clusterID = "PROVIDED-CLUSTER";
createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
- FixedBlockResolver.class, clusterID);
+ FixedBlockResolver.class, clusterID, TextFileRegionAliasMap.class);
// 2 Datanodes, 1 PROVIDED and other DISK
startCluster(NNDIRPATH, 2, null,
new StorageType[][] {
@@ -744,4 +741,42 @@ public class TestNameNodeProvidedImplementation {
verifyFileLocation(i, expectedLocations);
}
}
+
+
+ // This test will fail until there is a refactoring of the FileRegion
+ // (HDFS-12713).
+ @Test(expected=BlockMissingException.class)
+ public void testInMemoryAliasMap() throws Exception {
+ conf.setClass(ImageWriter.Options.UGI_CLASS,
+ FsUGIResolver.class, UGIResolver.class);
+ conf.setClass(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_CLASS,
+ InMemoryLevelDBAliasMapClient.class, BlockAliasMap.class);
+ conf.set(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_RPC_ADDRESS,
+ "localhost:32445");
+ File tempDirectory =
+ Files.createTempDirectory("in-memory-alias-map").toFile();
+ conf.set(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_LEVELDB_DIR,
+ tempDirectory.getAbsolutePath());
+ conf.setBoolean(DFSConfigKeys.DFS_PROVIDED_ALIASMAP_INMEMORY_ENABLED, true);
+
+ InMemoryLevelDBAliasMapServer levelDBAliasMapServer =
+ new InMemoryLevelDBAliasMapServer(InMemoryAliasMap::init);
+ levelDBAliasMapServer.setConf(conf);
+ levelDBAliasMapServer.start();
+
+ createImage(new FSTreeWalk(NAMEPATH, conf),
+ NNDIRPATH,
+ FixedBlockResolver.class, "",
+ InMemoryLevelDBAliasMapClient.class);
+ levelDBAliasMapServer.close();
+
+ // start cluster with two datanodes,
+ // each with 1 PROVIDED volume and other DISK volume
+ startCluster(NNDIRPATH, 2,
+ new StorageType[] {StorageType.PROVIDED, StorageType.DISK},
+ null, false);
+ verifyFileSystemContents();
+ FileUtils.deleteDirectory(tempDirectory);
+ }
+
}
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[44/50] [abbrv] hadoop git commit: HDFS-12685. [READ] FsVolumeImpl
exception when scanning Provided storage volume
Posted by vi...@apache.org.
HDFS-12685. [READ] FsVolumeImpl exception when scanning Provided storage volume
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8da735e9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8da735e9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8da735e9
Branch: refs/heads/HDFS-9806
Commit: 8da735e9421fbf8545d09d985017746e2932c702
Parents: 6a3ab22
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Thu Nov 30 10:11:12 2017 -0800
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:59 2017 -0800
----------------------------------------------------------------------
.../impl/TextFileRegionAliasMap.java | 3 +-
.../hdfs/server/datanode/DirectoryScanner.java | 3 +-
.../server/datanode/fsdataset/FsVolumeSpi.java | 40 ++++++++++----------
.../fsdataset/impl/ProvidedVolumeImpl.java | 4 +-
.../fsdataset/impl/TestProvidedImpl.java | 19 ++++++----
5 files changed, 37 insertions(+), 32 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/8da735e9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
index 80f48c1..bd04d60 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/blockaliasmap/impl/TextFileRegionAliasMap.java
@@ -439,7 +439,8 @@ public class TextFileRegionAliasMap
@Override
public void refresh() throws IOException {
- //nothing to do;
+ throw new UnsupportedOperationException(
+ "Refresh not supported by " + getClass());
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/8da735e9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
index 8fb8551..ab9743c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
@@ -515,7 +515,8 @@ public class DirectoryScanner implements Runnable {
*
* @return a map of sorted arrays of block information
*/
- private Map<String, ScanInfo[]> getDiskReport() {
+ @VisibleForTesting
+ public Map<String, ScanInfo[]> getDiskReport() {
ScanInfoPerBlockPool list = new ScanInfoPerBlockPool();
ScanInfoPerBlockPool[] dirReports = null;
// First get list of data directories
http://git-wip-us.apache.org/repos/asf/hadoop/blob/8da735e9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
index 15e71f0..20a153d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsVolumeSpi.java
@@ -296,8 +296,23 @@ public interface FsVolumeSpi
*/
public ScanInfo(long blockId, File blockFile, File metaFile,
FsVolumeSpi vol) {
- this(blockId, blockFile, metaFile, vol, null,
- (blockFile != null) ? blockFile.length() : 0);
+ this.blockId = blockId;
+ String condensedVolPath =
+ (vol == null || vol.getBaseURI() == null) ? null :
+ getCondensedPath(new File(vol.getBaseURI()).getAbsolutePath());
+ this.blockSuffix = blockFile == null ? null :
+ getSuffix(blockFile, condensedVolPath);
+ this.blockLength = (blockFile != null) ? blockFile.length() : 0;
+ if (metaFile == null) {
+ this.metaSuffix = null;
+ } else if (blockFile == null) {
+ this.metaSuffix = getSuffix(metaFile, condensedVolPath);
+ } else {
+ this.metaSuffix = getSuffix(metaFile,
+ condensedVolPath + blockSuffix);
+ }
+ this.volume = vol;
+ this.fileRegion = null;
}
/**
@@ -305,31 +320,18 @@ public interface FsVolumeSpi
* the block data and meta-data files.
*
* @param blockId the block ID
- * @param blockFile the path to the block data file
- * @param metaFile the path to the block meta-data file
* @param vol the volume that contains the block
* @param fileRegion the file region (for provided blocks)
* @param length the length of the block data
*/
- public ScanInfo(long blockId, File blockFile, File metaFile,
- FsVolumeSpi vol, FileRegion fileRegion, long length) {
+ public ScanInfo(long blockId, FsVolumeSpi vol, FileRegion fileRegion,
+ long length) {
this.blockId = blockId;
- String condensedVolPath =
- (vol == null || vol.getBaseURI() == null) ? null :
- getCondensedPath(new File(vol.getBaseURI()).getAbsolutePath());
- this.blockSuffix = blockFile == null ? null :
- getSuffix(blockFile, condensedVolPath);
this.blockLength = length;
- if (metaFile == null) {
- this.metaSuffix = null;
- } else if (blockFile == null) {
- this.metaSuffix = getSuffix(metaFile, condensedVolPath);
- } else {
- this.metaSuffix = getSuffix(metaFile,
- condensedVolPath + blockSuffix);
- }
this.volume = vol;
this.fileRegion = fileRegion;
+ this.blockSuffix = null;
+ this.metaSuffix = null;
}
/**
http://git-wip-us.apache.org/repos/asf/hadoop/blob/8da735e9/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
index 65487f9..ab59fa5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ProvidedVolumeImpl.java
@@ -226,9 +226,7 @@ public class ProvidedVolumeImpl extends FsVolumeImpl {
reportCompiler.throttle();
FileRegion region = iter.next();
if (region.getBlockPoolId().equals(bpid)) {
- LOG.info("Adding ScanInfo for blkid " +
- region.getBlock().getBlockId());
- report.add(new ScanInfo(region.getBlock().getBlockId(), null, null,
+ report.add(new ScanInfo(region.getBlock().getBlockId(),
providedVolume, region, region.getLength()));
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/8da735e9/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
index 52112f7..4190730 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestProvidedImpl.java
@@ -61,6 +61,7 @@ import org.apache.hadoop.hdfs.server.datanode.BlockScanner;
import org.apache.hadoop.hdfs.server.datanode.DNConf;
import org.apache.hadoop.hdfs.server.datanode.DataNode;
import org.apache.hadoop.hdfs.server.datanode.DataStorage;
+import org.apache.hadoop.hdfs.server.datanode.DirectoryScanner;
import org.apache.hadoop.hdfs.server.datanode.ProvidedReplica;
import org.apache.hadoop.hdfs.server.datanode.ReplicaInfo;
import org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry;
@@ -231,14 +232,6 @@ public class TestProvidedImpl {
public void refresh() throws IOException {
// do nothing!
}
-
- public void setMinBlkId(int minId) {
- this.minId = minId;
- }
-
- public void setBlockCount(int numBlocks) {
- this.numBlocks = numBlocks;
- }
}
private static Storage.StorageDirectory createLocalStorageDirectory(
@@ -606,4 +599,14 @@ public class TestProvidedImpl {
}
}
}
+
+ @Test
+ public void testScannerWithProvidedVolumes() throws Exception {
+ DirectoryScanner scanner = new DirectoryScanner(datanode, dataset, conf);
+ Map<String, FsVolumeSpi.ScanInfo[]> report = scanner.getDiskReport();
+ // no blocks should be reported for the Provided volume as long as
+ // the directoryScanner is disabled.
+ assertEquals(0, report.get(BLOCK_POOL_IDS[CHOSEN_BP_ID]).length);
+ }
+
}
[21/50] [abbrv] hadoop git commit: HDFS-11663. [READ] Fix
NullPointerException in ProvidedBlocksBuilder
Posted by vi...@apache.org.
HDFS-11663. [READ] Fix NullPointerException in ProvidedBlocksBuilder
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4ba175f5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4ba175f5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4ba175f5
Branch: refs/heads/HDFS-9806
Commit: 4ba175f533014b3470487dc88d2fb0ecd669a7e1
Parents: f63ec95
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Thu May 4 13:06:53 2017 -0700
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:57 2017 -0800
----------------------------------------------------------------------
.../blockmanagement/ProvidedStorageMap.java | 40 ++++++-----
.../TestNameNodeProvidedImplementation.java | 70 +++++++++++++++-----
2 files changed, 77 insertions(+), 33 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4ba175f5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
index d222344..518b7e9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
@@ -134,11 +134,13 @@ public class ProvidedStorageMap {
class ProvidedBlocksBuilder extends LocatedBlockBuilder {
private ShadowDatanodeInfoWithStorage pending;
+ private boolean hasProvidedLocations;
ProvidedBlocksBuilder(int maxBlocks) {
super(maxBlocks);
pending = new ShadowDatanodeInfoWithStorage(
providedDescriptor, storageId);
+ hasProvidedLocations = false;
}
@Override
@@ -154,6 +156,7 @@ public class ProvidedStorageMap {
types[i] = storages[i].getStorageType();
if (StorageType.PROVIDED.equals(storages[i].getStorageType())) {
locs[i] = pending;
+ hasProvidedLocations = true;
} else {
locs[i] = new DatanodeInfoWithStorage(
storages[i].getDatanodeDescriptor(), sids[i], types[i]);
@@ -165,25 +168,28 @@ public class ProvidedStorageMap {
@Override
LocatedBlocks build(DatanodeDescriptor client) {
// TODO: to support multiple provided storages, need to pass/maintain map
- // set all fields of pending DatanodeInfo
- List<String> excludedUUids = new ArrayList<String>();
- for (LocatedBlock b: blocks) {
- DatanodeInfo[] infos = b.getLocations();
- StorageType[] types = b.getStorageTypes();
-
- for (int i = 0; i < types.length; i++) {
- if (!StorageType.PROVIDED.equals(types[i])) {
- excludedUUids.add(infos[i].getDatanodeUuid());
+ if (hasProvidedLocations) {
+ // set all fields of pending DatanodeInfo
+ List<String> excludedUUids = new ArrayList<String>();
+ for (LocatedBlock b : blocks) {
+ DatanodeInfo[] infos = b.getLocations();
+ StorageType[] types = b.getStorageTypes();
+
+ for (int i = 0; i < types.length; i++) {
+ if (!StorageType.PROVIDED.equals(types[i])) {
+ excludedUUids.add(infos[i].getDatanodeUuid());
+ }
}
}
- }
- DatanodeDescriptor dn = providedDescriptor.choose(client, excludedUUids);
- if (dn == null) {
- dn = providedDescriptor.choose(client);
+ DatanodeDescriptor dn =
+ providedDescriptor.choose(client, excludedUUids);
+ if (dn == null) {
+ dn = providedDescriptor.choose(client);
+ }
+ pending.replaceInternal(dn);
}
- pending.replaceInternal(dn);
return new LocatedBlocks(
flen, isUC, blocks, last, lastComplete, feInfo, ecPolicy);
}
@@ -278,7 +284,8 @@ public class ProvidedStorageMap {
DatanodeDescriptor choose(DatanodeDescriptor client) {
// exact match for now
- DatanodeDescriptor dn = dns.get(client.getDatanodeUuid());
+ DatanodeDescriptor dn = client != null ?
+ dns.get(client.getDatanodeUuid()) : null;
if (null == dn) {
dn = chooseRandom();
}
@@ -288,7 +295,8 @@ public class ProvidedStorageMap {
DatanodeDescriptor choose(DatanodeDescriptor client,
List<String> excludedUUids) {
// exact match for now
- DatanodeDescriptor dn = dns.get(client.getDatanodeUuid());
+ DatanodeDescriptor dn = client != null ?
+ dns.get(client.getDatanodeUuid()) : null;
if (null == dn || excludedUUids.contains(client.getDatanodeUuid())) {
dn = null;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4ba175f5/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
index 3b75806..5062439 100644
--- a/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
+++ b/hadoop-tools/hadoop-fs2img/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeProvidedImplementation.java
@@ -35,6 +35,7 @@ import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockFormatProvider;
@@ -69,6 +70,10 @@ public class TestNameNodeProvidedImplementation {
final Path BLOCKFILE = new Path(NNDIRPATH, "blocks.csv");
final String SINGLEUSER = "usr1";
final String SINGLEGROUP = "grp1";
+ private final int numFiles = 10;
+ private final String filePrefix = "file";
+ private final String fileSuffix = ".dat";
+ private final int baseFileLen = 1024;
Configuration conf;
MiniDFSCluster cluster;
@@ -114,15 +119,16 @@ public class TestNameNodeProvidedImplementation {
}
// create 10 random files under BASE
- for (int i=0; i < 10; i++) {
- File newFile = new File(new Path(NAMEPATH, "file" + i).toUri());
+ for (int i=0; i < numFiles; i++) {
+ File newFile = new File(
+ new Path(NAMEPATH, filePrefix + i + fileSuffix).toUri());
if(!newFile.exists()) {
try {
LOG.info("Creating " + newFile.toString());
newFile.createNewFile();
Writer writer = new OutputStreamWriter(
new FileOutputStream(newFile.getAbsolutePath()), "utf-8");
- for(int j=0; j < 10*i; j++) {
+ for(int j=0; j < baseFileLen*i; j++) {
writer.write("0");
}
writer.flush();
@@ -161,29 +167,30 @@ public class TestNameNodeProvidedImplementation {
void startCluster(Path nspath, int numDatanodes,
StorageType[] storageTypes,
- StorageType[][] storageTypesPerDatanode)
+ StorageType[][] storageTypesPerDatanode,
+ boolean doFormat)
throws IOException {
conf.set(DFS_NAMENODE_NAME_DIR_KEY, nspath.toString());
if (storageTypesPerDatanode != null) {
cluster = new MiniDFSCluster.Builder(conf)
- .format(false)
- .manageNameDfsDirs(false)
+ .format(doFormat)
+ .manageNameDfsDirs(doFormat)
.numDataNodes(numDatanodes)
.storageTypes(storageTypesPerDatanode)
.build();
} else if (storageTypes != null) {
cluster = new MiniDFSCluster.Builder(conf)
- .format(false)
- .manageNameDfsDirs(false)
+ .format(doFormat)
+ .manageNameDfsDirs(doFormat)
.numDataNodes(numDatanodes)
.storagesPerDatanode(storageTypes.length)
.storageTypes(storageTypes)
.build();
} else {
cluster = new MiniDFSCluster.Builder(conf)
- .format(false)
- .manageNameDfsDirs(false)
+ .format(doFormat)
+ .manageNameDfsDirs(doFormat)
.numDataNodes(numDatanodes)
.build();
}
@@ -195,7 +202,8 @@ public class TestNameNodeProvidedImplementation {
final long seed = r.nextLong();
LOG.info("NAMEPATH: " + NAMEPATH);
createImage(new RandomTreeWalk(seed), NNDIRPATH, FixedBlockResolver.class);
- startCluster(NNDIRPATH, 0, new StorageType[] {StorageType.PROVIDED}, null);
+ startCluster(NNDIRPATH, 0, new StorageType[] {StorageType.PROVIDED},
+ null, false);
FileSystem fs = cluster.getFileSystem();
for (TreePath e : new RandomTreeWalk(seed)) {
@@ -220,7 +228,8 @@ public class TestNameNodeProvidedImplementation {
SingleUGIResolver.class, UGIResolver.class);
createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
FixedBlockResolver.class);
- startCluster(NNDIRPATH, 1, new StorageType[] {StorageType.PROVIDED}, null);
+ startCluster(NNDIRPATH, 1, new StorageType[] {StorageType.PROVIDED},
+ null, false);
}
@Test(timeout=500000)
@@ -232,10 +241,10 @@ public class TestNameNodeProvidedImplementation {
// make the last Datanode with only DISK
startCluster(NNDIRPATH, 3, null,
new StorageType[][] {
- {StorageType.PROVIDED},
- {StorageType.PROVIDED},
- {StorageType.DISK}}
- );
+ {StorageType.PROVIDED},
+ {StorageType.PROVIDED},
+ {StorageType.DISK}},
+ false);
// wait for the replication to finish
Thread.sleep(50000);
@@ -290,7 +299,8 @@ public class TestNameNodeProvidedImplementation {
FsUGIResolver.class, UGIResolver.class);
createImage(new FSTreeWalk(NAMEPATH, conf), NNDIRPATH,
FixedBlockResolver.class);
- startCluster(NNDIRPATH, 3, new StorageType[] {StorageType.PROVIDED}, null);
+ startCluster(NNDIRPATH, 3, new StorageType[] {StorageType.PROVIDED},
+ null, false);
FileSystem fs = cluster.getFileSystem();
Thread.sleep(2000);
int count = 0;
@@ -342,4 +352,30 @@ public class TestNameNodeProvidedImplementation {
}
}
}
+
+ private BlockLocation[] createFile(Path path, short replication,
+ long fileLen, long blockLen) throws IOException {
+ FileSystem fs = cluster.getFileSystem();
+ //create a sample file that is not provided
+ DFSTestUtil.createFile(fs, path, false, (int) blockLen,
+ fileLen, blockLen, replication, 0, true);
+ return fs.getFileBlockLocations(path, 0, fileLen);
+ }
+
+ @Test
+ public void testClusterWithEmptyImage() throws IOException {
+ // start a cluster with 2 datanodes without any provided storage
+ startCluster(NNDIRPATH, 2, null,
+ new StorageType[][] {
+ {StorageType.DISK},
+ {StorageType.DISK}},
+ true);
+ assertTrue(cluster.isClusterUp());
+ assertTrue(cluster.isDataNodeUp());
+
+ BlockLocation[] locations = createFile(new Path("/testFile1.dat"),
+ (short) 2, 1024*1024, 1024*1024);
+ assertEquals(1, locations.length);
+ assertEquals(2, locations[0].getHosts().length);
+ }
}
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
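[Editorial sketch] The ProvidedStorageMap hunks above add a null guard on `client` before the exact-UUID lookup and fall back to an unconstrained choice when the excluded-UUID lookup fails. The selection pattern can be sketched in isolation as below; this is a hypothetical standalone class (not the actual Hadoop types), with `String` standing in for `DatanodeDescriptor`.

```java
import java.util.*;

// Illustrative sketch of the datanode-selection pattern in the diff above:
// try an exact match for the client (guarding against a null client, the
// bug fixed in the commit), honor an exclusion list, and otherwise fall
// back to a random candidate. All names here are hypothetical.
public class ProvidedChooser {
    private final Map<String, String> dns = new HashMap<>(); // uuid -> node
    private final Random rand = new Random(0);

    void add(String uuid, String node) {
        dns.put(uuid, node);
    }

    // Exact match for now; a null client or an excluded UUID falls
    // through to a random choice among the remaining candidates.
    String choose(String clientUuid, List<String> excludedUuids) {
        String dn = clientUuid != null ? dns.get(clientUuid) : null;
        if (dn == null || excludedUuids.contains(clientUuid)) {
            dn = chooseRandom(excludedUuids);
        }
        return dn;
    }

    String choose(String clientUuid) {
        return choose(clientUuid, Collections.emptyList());
    }

    private String chooseRandom(List<String> excluded) {
        List<String> candidates = new ArrayList<>();
        for (Map.Entry<String, String> e : dns.entrySet()) {
            if (!excluded.contains(e.getKey())) {
                candidates.add(e.getValue());
            }
        }
        return candidates.isEmpty()
            ? null : candidates.get(rand.nextInt(candidates.size()));
    }

    public static void main(String[] args) {
        ProvidedChooser c = new ProvidedChooser();
        c.add("uuid-1", "dn1");
        c.add("uuid-2", "dn2");
        // Exact match wins when the client is known and not excluded.
        if (!"dn1".equals(c.choose("uuid-1"))) throw new AssertionError();
        // A null client no longer NPEs; it falls back to a random node.
        if (c.choose(null) == null) throw new AssertionError();
        // Excluding the client's own UUID forces a different node.
        if (!"dn2".equals(c.choose("uuid-1", List.of("uuid-1"))))
            throw new AssertionError();
        System.out.println("ok");
    }
}
```

This mirrors the two-stage call in the block-builder hunk: `choose(client, excludedUUids)` first, then a plain `choose(client)` if that returned null.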
[31/50] [abbrv] hadoop git commit: HDFS-11703. [READ] Tests for ProvidedStorageMap
Posted by vi...@apache.org.
HDFS-11703. [READ] Tests for ProvidedStorageMap
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3f008df0
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3f008df0
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3f008df0
Branch: refs/heads/HDFS-9806
Commit: 3f008df00dd5f9e0079647a3fcfb6d153cf690f1
Parents: 4ba175f
Author: Virajith Jalaparti <vi...@apache.org>
Authored: Thu May 4 13:14:41 2017 -0700
Committer: Virajith Jalaparti <vi...@apache.org>
Committed: Fri Dec 1 18:16:58 2017 -0800
----------------------------------------------------------------------
.../blockmanagement/ProvidedStorageMap.java | 6 +
.../blockmanagement/TestProvidedStorageMap.java | 153 +++++++++++++++++++
2 files changed, 159 insertions(+)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f008df0/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
index 518b7e9..0faf16d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ProvidedStorageMap.java
@@ -28,6 +28,7 @@ import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentSkipListMap;
+import com.google.common.annotations.VisibleForTesting;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.StorageType;
import org.apache.hadoop.hdfs.DFSConfigKeys;
@@ -121,6 +122,11 @@ public class ProvidedStorageMap {
return dn.getStorageInfo(s.getStorageID());
}
+ @VisibleForTesting
+ public DatanodeStorageInfo getProvidedStorageInfo() {
+ return providedStorageInfo;
+ }
+
public LocatedBlockBuilder newLocatedBlocks(int maxValue) {
if (!providedEnabled) {
return new LocatedBlockBuilder(maxValue);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f008df0/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
new file mode 100644
index 0000000..50e2fed
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestProvidedStorageMap.java
@@ -0,0 +1,153 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.blockmanagement;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DFSTestUtil;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.protocol.Block;
+import org.apache.hadoop.hdfs.server.protocol.DatanodeStorage;
+import org.apache.hadoop.hdfs.util.RwLock;
+import org.junit.Before;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.Iterator;
+
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+/**
+ * This class tests the {@link ProvidedStorageMap}.
+ */
+public class TestProvidedStorageMap {
+
+ private Configuration conf;
+ private BlockManager bm;
+ private RwLock nameSystemLock;
+ private String providedStorageID;
+
+ static class TestBlockProvider extends BlockProvider
+ implements Configurable {
+
+ @Override
+ public void setConf(Configuration conf) {
+ }
+
+ @Override
+ public Configuration getConf() {
+ return null;
+ }
+
+ @Override
+ public Iterator<Block> iterator() {
+ return new Iterator<Block>() {
+ @Override
+ public boolean hasNext() {
+ return false;
+ }
+ @Override
+ public Block next() {
+ return null;
+ }
+ @Override
+ public void remove() {
+ throw new UnsupportedOperationException();
+ }
+ };
+ }
+ }
+
+ @Before
+ public void setup() {
+ providedStorageID = DFSConfigKeys.DFS_PROVIDER_STORAGEUUID_DEFAULT;
+ conf = new HdfsConfiguration();
+ conf.set(DFSConfigKeys.DFS_PROVIDER_STORAGEUUID,
+ providedStorageID);
+ conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_PROVIDED_ENABLED, true);
+ conf.setClass(DFSConfigKeys.DFS_NAMENODE_BLOCK_PROVIDER_CLASS,
+ TestBlockProvider.class, BlockProvider.class);
+
+ bm = mock(BlockManager.class);
+ nameSystemLock = mock(RwLock.class);
+ }
+
+ private DatanodeDescriptor createDatanodeDescriptor(int port) {
+ return DFSTestUtil.getDatanodeDescriptor("127.0.0.1", port, "defaultRack",
+ "localhost");
+ }
+
+ @Test
+ public void testProvidedStorageMap() throws IOException {
+ ProvidedStorageMap providedMap = new ProvidedStorageMap(
+ nameSystemLock, bm, conf);
+ DatanodeStorageInfo providedMapStorage =
+ providedMap.getProvidedStorageInfo();
+ //the provided storage cannot be null
+ assertNotNull(providedMapStorage);
+
+ //create a datanode
+ DatanodeDescriptor dn1 = createDatanodeDescriptor(5000);
+
+ //associate two storages to the datanode
+ DatanodeStorage dn1ProvidedStorage = new DatanodeStorage(
+ providedStorageID,
+ DatanodeStorage.State.NORMAL,
+ StorageType.PROVIDED);
+ DatanodeStorage dn1DiskStorage = new DatanodeStorage(
+ "sid-1", DatanodeStorage.State.NORMAL, StorageType.DISK);
+
+ when(nameSystemLock.hasWriteLock()).thenReturn(true);
+ DatanodeStorageInfo dns1Provided = providedMap.getStorage(dn1,
+ dn1ProvidedStorage);
+ DatanodeStorageInfo dns1Disk = providedMap.getStorage(dn1,
+ dn1DiskStorage);
+
+ assertTrue("The provided storages should be equal",
+ dns1Provided == providedMapStorage);
+ assertTrue("Disk storage has not yet been registered with block manager",
+ dns1Disk == null);
+ //add the disk storage to the datanode.
+ DatanodeStorageInfo dnsDisk = new DatanodeStorageInfo(dn1, dn1DiskStorage);
+ dn1.injectStorage(dnsDisk);
+ assertTrue("Disk storage must match the injected storage info",
+ dnsDisk == providedMap.getStorage(dn1, dn1DiskStorage));
+
+ //create a 2nd datanode
+ DatanodeDescriptor dn2 = createDatanodeDescriptor(5010);
+ //associate a provided storage with the datanode
+ DatanodeStorage dn2ProvidedStorage = new DatanodeStorage(
+ providedStorageID,
+ DatanodeStorage.State.NORMAL,
+ StorageType.PROVIDED);
+
+ DatanodeStorageInfo dns2Provided = providedMap.getStorage(
+ dn2, dn2ProvidedStorage);
+ assertTrue("The provided storages should be equal",
+ dns2Provided == providedMapStorage);
+ assertTrue("The DatanodeDescriptor should contain the provided storage",
+ dn2.getStorageInfo(providedStorageID) == providedMapStorage);
+
+
+ }
+}
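[Editorial sketch] The key invariant TestProvidedStorageMap checks above is that every datanode reporting the provided storage ID is handed the *same* shared `DatanodeStorageInfo` instance, while DISK storages resolve per datanode. A minimal standalone sketch of that aliasing, using plain `Object` placeholders and hypothetical names rather than the real Hadoop classes:

```java
import java.util.*;

// Sketch of the sharing invariant tested above: one shared storage-info
// object for all provided storages, distinct objects per (datanode, id)
// for everything else. Names and types here are hypothetical.
public class SharedProvidedStorage {
    static final String PROVIDED_ID = "DS-PROVIDED";

    // Single instance handed to every datanode with provided storage.
    private final Object providedStorageInfo = new Object();
    private final Map<String, Object> perNode = new HashMap<>();

    Object getStorage(String datanode, String storageId) {
        if (PROVIDED_ID.equals(storageId)) {
            return providedStorageInfo;           // shared across datanodes
        }
        // Distinct info per datanode for DISK-style storages.
        return perNode.computeIfAbsent(datanode + ":" + storageId,
            k -> new Object());
    }

    public static void main(String[] args) {
        SharedProvidedStorage map = new SharedProvidedStorage();
        Object p1 = map.getStorage("dn1", PROVIDED_ID);
        Object p2 = map.getStorage("dn2", PROVIDED_ID);
        if (p1 != p2) throw new AssertionError("provided must be shared");
        Object d1 = map.getStorage("dn1", "sid-1");
        Object d2 = map.getStorage("dn2", "sid-1");
        if (d1 == d2) throw new AssertionError("disk must be per-node");
        System.out.println("ok");
    }
}
```

This is why the test can assert reference equality (`==`) between `dns1Provided`, `dns2Provided`, and `providedMapStorage`: the map deliberately aliases one object rather than constructing per-node copies.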