Posted to common-commits@hadoop.apache.org by bo...@apache.org on 2018/07/14 00:44:55 UTC
[01/50] [abbrv] hadoop git commit: HDFS-13719. Docs around dfs.image.transfer.timeout are misleading. Contributed by Kitti Nansi. [Forced Update!]
Repository: hadoop
Updated Branches:
refs/heads/YARN-7402 262ca7f16 -> 9c24328be (forced update)
HDFS-13719. Docs around dfs.image.transfer.timeout are misleading. Contributed by Kitti Nansi.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/eecb5baa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/eecb5baa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/eecb5baa
Branch: refs/heads/YARN-7402
Commit: eecb5baaaaa54599aeae758abd4007e55e5b531f
Parents: 43f7fe8
Author: Andrew Wang <wa...@apache.org>
Authored: Mon Jul 9 15:17:21 2018 +0200
Committer: Andrew Wang <wa...@apache.org>
Committed: Mon Jul 9 15:17:21 2018 +0200
----------------------------------------------------------------------
.../hadoop-hdfs/src/main/resources/hdfs-default.xml | 13 +++++--------
1 file changed, 5 insertions(+), 8 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/eecb5baa/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 6dd2d92..384cedf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -1289,11 +1289,10 @@
<name>dfs.image.transfer.timeout</name>
<value>60000</value>
<description>
- Socket timeout for image transfer in milliseconds. This timeout and the related
- dfs.image.transfer.bandwidthPerSec parameter should be configured such
- that normal image transfer can complete successfully.
- This timeout prevents client hangs when the sender fails during
- image transfer. This is socket timeout during image transfer.
+ Socket timeout for the HttpURLConnection instance used in the image
+ transfer. This is measured in milliseconds.
+ This timeout prevents client hangs if the connection is idle
+ for this configured timeout, during image transfer.
</description>
</property>
@@ -1304,9 +1303,7 @@
Maximum bandwidth used for regular image transfers (instead of
bootstrapping the standby namenode), in bytes per second.
This can help keep normal namenode operations responsive during
- checkpointing. The maximum bandwidth and timeout in
- dfs.image.transfer.timeout should be set such that normal image
- transfers can complete successfully.
+ checkpointing.
A default value of 0 indicates that throttling is disabled.
The maximum bandwidth used for bootstrapping standby namenode is
configured with dfs.image.transfer-bootstrap-standby.bandwidthPerSec.
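For reference, both properties above are ordinary Configuration keys. A minimal client-side sketch of reading them (stock org.apache.hadoop.conf.Configuration API; the class name is illustrative and the literal defaults mirror hdfs-default.xml):

    import org.apache.hadoop.conf.Configuration;

    public class ImageTransferConfSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Socket timeout, in milliseconds, for the HttpURLConnection used
        // during image transfer (60 s unless overridden).
        int timeoutMs = conf.getInt("dfs.image.transfer.timeout", 60000);
        // Bytes per second for regular image transfers; 0 disables throttling.
        long bytesPerSec =
            conf.getLong("dfs.image.transfer.bandwidthPerSec", 0L);
        System.out.println("timeout=" + timeoutMs + " ms, throttle="
            + bytesPerSec + " B/s");
      }
    }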
---------------------------------------------------------------------
[45/50] [abbrv] hadoop git commit: YARN-7707. [GPG] Policy generator framework. Contributed by Young Chen
Posted by bo...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/resources/schedulerInfo2.json
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/resources/schedulerInfo2.json b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/resources/schedulerInfo2.json
new file mode 100644
index 0000000..2ff879e
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/resources/schedulerInfo2.json
@@ -0,0 +1,196 @@
+ {
+ "type": "capacityScheduler",
+ "capacity": 100.0,
+ "usedCapacity": 0.0,
+ "maxCapacity": 100.0,
+ "queueName": "root",
+ "queues": {
+ "queue": [
+ {
+ "type": "capacitySchedulerLeafQueueInfo",
+ "capacity": 100.0,
+ "usedCapacity": 0.0,
+ "maxCapacity": 100.0,
+ "absoluteCapacity": 100.0,
+ "absoluteMaxCapacity": 100.0,
+ "absoluteUsedCapacity": 0.0,
+ "numApplications": 484,
+ "queueName": "default",
+ "state": "RUNNING",
+ "resourcesUsed": {
+ "memory": 0,
+ "vCores": 0
+ },
+ "hideReservationQueues": false,
+ "nodeLabels": [
+ "*"
+ ],
+ "numActiveApplications": 484,
+ "numPendingApplications": 0,
+ "numContainers": 0,
+ "maxApplications": 10000,
+ "maxApplicationsPerUser": 10000,
+ "userLimit": 100,
+ "users": {
+ "user": [
+ {
+ "username": "Default",
+ "resourcesUsed": {
+ "memory": 0,
+ "vCores": 0
+ },
+ "numPendingApplications": 0,
+ "numActiveApplications": 468,
+ "AMResourceUsed": {
+ "memory": 30191616,
+ "vCores": 468
+ },
+ "userResourceLimit": {
+ "memory": 31490048,
+ "vCores": 7612
+ }
+ }
+ ]
+ },
+ "userLimitFactor": 1.0,
+ "AMResourceLimit": {
+ "memory": 31490048,
+ "vCores": 7612
+ },
+ "usedAMResource": {
+ "memory": 30388224,
+ "vCores": 532
+ },
+ "userAMResourceLimit": {
+ "memory": 31490048,
+ "vCores": 7612
+ },
+ "preemptionDisabled": true
+ },
+ {
+ "type": "capacitySchedulerLeafQueueInfo",
+ "capacity": 100.0,
+ "usedCapacity": 0.0,
+ "maxCapacity": 100.0,
+ "absoluteCapacity": 100.0,
+ "absoluteMaxCapacity": 100.0,
+ "absoluteUsedCapacity": 0.0,
+ "numApplications": 484,
+ "queueName": "default2",
+ "state": "RUNNING",
+ "resourcesUsed": {
+ "memory": 0,
+ "vCores": 0
+ },
+ "hideReservationQueues": false,
+ "nodeLabels": [
+ "*"
+ ],
+ "numActiveApplications": 484,
+ "numPendingApplications": 0,
+ "numContainers": 0,
+ "maxApplications": 10000,
+ "maxApplicationsPerUser": 10000,
+ "userLimit": 100,
+ "users": {
+ "user": [
+ {
+ "username": "Default",
+ "resourcesUsed": {
+ "memory": 0,
+ "vCores": 0
+ },
+ "numPendingApplications": 0,
+ "numActiveApplications": 468,
+ "AMResourceUsed": {
+ "memory": 30191616,
+ "vCores": 468
+ },
+ "userResourceLimit": {
+ "memory": 31490048,
+ "vCores": 7612
+ }
+ }
+ ]
+ },
+ "userLimitFactor": 1.0,
+ "AMResourceLimit": {
+ "memory": 31490048,
+ "vCores": 7612
+ },
+ "usedAMResource": {
+ "memory": 30388224,
+ "vCores": 532
+ },
+ "userAMResourceLimit": {
+ "memory": 31490048,
+ "vCores": 7612
+ },
+ "preemptionDisabled": true
+ }
+ ]
+ },
+ "health": {
+ "lastrun": 1517951638085,
+ "operationsInfo": {
+ "entry": {
+ "key": "last-allocation",
+ "value": {
+ "nodeId": "node0:0",
+ "containerId": "container_e61477_1517922128312_0340_01_000001",
+ "queue": "root.default"
+ }
+ },
+ "entry": {
+ "key": "last-reservation",
+ "value": {
+ "nodeId": "node0:1",
+ "containerId": "container_e61477_1517879828320_0249_01_000001",
+ "queue": "root.default"
+ }
+ },
+ "entry": {
+ "key": "last-release",
+ "value": {
+ "nodeId": "node0:2",
+ "containerId": "container_e61477_1517922128312_0340_01_000001",
+ "queue": "root.default"
+ }
+ },
+ "entry": {
+ "key": "last-preemption",
+ "value": {
+ "nodeId": "N/A",
+ "containerId": "N/A",
+ "queue": "N/A"
+ }
+ }
+ },
+ "lastRunDetails": [
+ {
+ "operation": "releases",
+ "count": 0,
+ "resources": {
+ "memory": 0,
+ "vCores": 0
+ }
+ },
+ {
+ "operation": "allocations",
+ "count": 0,
+ "resources": {
+ "memory": 0,
+ "vCores": 0
+ }
+ },
+ {
+ "operation": "reservations",
+ "count": 0,
+ "resources": {
+ "memory": 0,
+ "vCores": 0
+ }
+ }
+ ]
+ }
+ }
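The resource above mimics the capacity scheduler REST payload that the policy generator framework consumes in tests. A minimal sketch of walking it with Jackson (Jackson itself and the file path are assumptions; this is not the GPG's own parsing code):

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.io.File;

    public class SchedulerInfoSketch {
      public static void main(String[] args) throws Exception {
        JsonNode root = new ObjectMapper()
            .readTree(new File("schedulerInfo2.json"));
        // Walk the leaf queues under root and print basic load figures.
        for (JsonNode q : root.path("queues").path("queue")) {
          System.out.println(q.path("queueName").asText()
              + " apps=" + q.path("numApplications").asInt()
              + " usedCapacity=" + q.path("usedCapacity").asDouble());
        }
      }
    }

(The repeated "entry" keys under health.operationsInfo are kept as committed; Jackson's readTree simply keeps the last duplicate by default.)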
---------------------------------------------------------------------
[09/50] [abbrv] hadoop git commit: YARN-7899. [AMRMProxy] Stateful FederationInterceptor for pending requests. Contributed by Botong Huang.
Posted by bo...@apache.org.
YARN-7899. [AMRMProxy] Stateful FederationInterceptor for pending requests. Contributed by Botong Huang.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ea9b6082
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ea9b6082
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ea9b6082
Branch: refs/heads/YARN-7402
Commit: ea9b608237e7f2cf9b1e36b0f78c9674ec84096f
Parents: e12d93b
Author: Giovanni Matteo Fumarola <gi...@apache.com>
Authored: Mon Jul 9 12:27:36 2018 -0700
Committer: Giovanni Matteo Fumarola <gi...@apache.com>
Committed: Mon Jul 9 12:27:36 2018 -0700
----------------------------------------------------------------------
.../hadoop/yarn/client/AMRMClientUtils.java | 91 ------------
.../hadoop/yarn/server/AMRMClientRelayer.java | 9 +-
.../yarn/server/uam/UnmanagedAMPoolManager.java | 16 ++
.../server/uam/UnmanagedApplicationManager.java | 40 ++---
.../yarn/server/MockResourceManagerFacade.java | 13 +-
.../amrmproxy/FederationInterceptor.java | 146 ++++++++++++++++---
.../amrmproxy/BaseAMRMProxyTest.java | 2 +
.../amrmproxy/TestFederationInterceptor.java | 17 +++
8 files changed, 192 insertions(+), 142 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea9b6082/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AMRMClientUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AMRMClientUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AMRMClientUtils.java
index 387e399..5d4ab4a6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AMRMClientUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/AMRMClientUtils.java
@@ -36,19 +36,9 @@ import org.apache.hadoop.security.SaslRpcServer;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;
-import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
-import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterResponse;
-import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.SchedulingRequest;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.apache.hadoop.yarn.exceptions.ApplicationMasterNotRegisteredException;
-import org.apache.hadoop.yarn.exceptions.InvalidApplicationMasterRequestException;
-import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -68,87 +58,6 @@ public final class AMRMClientUtils {
}
/**
- * Handle ApplicationNotRegistered exception and re-register.
- *
- * @param appId application Id
- * @param rmProxy RM proxy instance
- * @param registerRequest the AM re-register request
- * @throws YarnException if re-register fails
- */
- public static void handleNotRegisteredExceptionAndReRegister(
- ApplicationId appId, ApplicationMasterProtocol rmProxy,
- RegisterApplicationMasterRequest registerRequest) throws YarnException {
- LOG.info("App attempt {} not registered, most likely due to RM failover. "
- + " Trying to re-register.", appId);
- try {
- rmProxy.registerApplicationMaster(registerRequest);
- } catch (Exception e) {
- if (e instanceof InvalidApplicationMasterRequestException
- && e.getMessage().contains(APP_ALREADY_REGISTERED_MESSAGE)) {
- LOG.info("Concurrent thread successfully registered, moving on.");
- } else {
- LOG.error("Error trying to re-register AM", e);
- throw new YarnException(e);
- }
- }
- }
-
- /**
- * Helper method for client calling ApplicationMasterProtocol.allocate that
- * handles re-register if RM fails over.
- *
- * @param request allocate request
- * @param rmProxy RM proxy
- * @param registerRequest the register request for re-register
- * @param appId application id
- * @return allocate response
- * @throws YarnException if RM call fails
- * @throws IOException if RM call fails
- */
- public static AllocateResponse allocateWithReRegister(AllocateRequest request,
- ApplicationMasterProtocol rmProxy,
- RegisterApplicationMasterRequest registerRequest, ApplicationId appId)
- throws YarnException, IOException {
- try {
- return rmProxy.allocate(request);
- } catch (ApplicationMasterNotRegisteredException e) {
- handleNotRegisteredExceptionAndReRegister(appId, rmProxy,
- registerRequest);
- // reset responseId after re-register
- request.setResponseId(0);
- // retry allocate
- return allocateWithReRegister(request, rmProxy, registerRequest, appId);
- }
- }
-
- /**
- * Helper method for client calling
- * ApplicationMasterProtocol.finishApplicationMaster that handles re-register
- * if RM fails over.
- *
- * @param request finishApplicationMaster request
- * @param rmProxy RM proxy
- * @param registerRequest the register request for re-register
- * @param appId application id
- * @return finishApplicationMaster response
- * @throws YarnException if RM call fails
- * @throws IOException if RM call fails
- */
- public static FinishApplicationMasterResponse finishAMWithReRegister(
- FinishApplicationMasterRequest request, ApplicationMasterProtocol rmProxy,
- RegisterApplicationMasterRequest registerRequest, ApplicationId appId)
- throws YarnException, IOException {
- try {
- return rmProxy.finishApplicationMaster(request);
- } catch (ApplicationMasterNotRegisteredException ex) {
- handleNotRegisteredExceptionAndReRegister(appId, rmProxy,
- registerRequest);
- // retry finishAM after re-register
- return finishAMWithReRegister(request, rmProxy, registerRequest, appId);
- }
- }
-
- /**
* Create a proxy for the specified protocol.
*
* @param configuration Configuration to generate {@link ClientRMProxy}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea9b6082/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMRMClientRelayer.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMRMClientRelayer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMRMClientRelayer.java
index e8a7f64..0d1a27e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMRMClientRelayer.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMRMClientRelayer.java
@@ -147,6 +147,11 @@ public class AMRMClientRelayer extends AbstractService
super.serviceStop();
}
+ public void setAMRegistrationRequest(
+ RegisterApplicationMasterRequest registerRequest) {
+ this.amRegistrationRequest = registerRequest;
+ }
+
@Override
public RegisterApplicationMasterResponse registerApplicationMaster(
RegisterApplicationMasterRequest request)
@@ -259,8 +264,10 @@ public class AMRMClientRelayer extends AbstractService
}
}
- // re register with RM, then retry allocate recursively
+ // re-register with RM, then retry allocate recursively
registerApplicationMaster(this.amRegistrationRequest);
+ // Reset responseId after re-register
+ allocateRequest.setResponseId(0);
return allocate(allocateRequest);
}
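The re-register logic that this patch moves out of AMRMClientUtils and into the relayer boils down to: catch ApplicationMasterNotRegisteredException from allocate, re-register with the saved request, reset the responseId to 0, and retry. A self-contained sketch of that pattern, mirroring the deleted helper above (method and class names are illustrative; the protocol types are the real YARN ones):

    import java.io.IOException;
    import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
    import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
    import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
    import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
    import org.apache.hadoop.yarn.exceptions.ApplicationMasterNotRegisteredException;
    import org.apache.hadoop.yarn.exceptions.YarnException;

    public final class ReRegisterSketch {
      static AllocateResponse allocateWithReRegister(
          ApplicationMasterProtocol rm,
          RegisterApplicationMasterRequest savedRegisterRequest,
          AllocateRequest request) throws YarnException, IOException {
        try {
          return rm.allocate(request);
        } catch (ApplicationMasterNotRegisteredException e) {
          // RM failed over and lost the registration: re-register, then
          // reset responseId so the new RM accepts the retried heartbeat.
          rm.registerApplicationMaster(savedRegisterRequest);
          request.setResponseId(0);
          return allocateWithReRegister(rm, savedRegisterRequest, request);
        }
      }
    }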
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea9b6082/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java
index 02eef29..5f9d81b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java
@@ -50,6 +50,7 @@ import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.client.AMRMClientUtils;
import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.security.AMRMTokenIdentifier;
+import org.apache.hadoop.yarn.server.AMRMClientRelayer;
import org.apache.hadoop.yarn.util.AsyncCallback;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -385,4 +386,19 @@ public class UnmanagedAMPoolManager extends AbstractService {
return this.unmanagedAppMasterMap.containsKey(uamId);
}
+ /**
+ * Return the rmProxy relayer of an UAM.
+ *
+ * @param uamId uam Id
+ * @return the rmProxy relayer
+ * @throws YarnException if fails
+ */
+ public AMRMClientRelayer getAMRMClientRelayer(String uamId)
+ throws YarnException {
+ if (!this.unmanagedAppMasterMap.containsKey(uamId)) {
+ throw new YarnException("UAM " + uamId + " does not exist");
+ }
+ return this.unmanagedAppMasterMap.get(uamId).getAMRMClientRelayer();
+ }
+
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea9b6082/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java
index 73795dc..856a818 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java
@@ -63,6 +63,7 @@ import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
import org.apache.hadoop.yarn.factories.RecordFactory;
import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;
import org.apache.hadoop.yarn.security.AMRMTokenIdentifier;
+import org.apache.hadoop.yarn.server.AMRMClientRelayer;
import org.apache.hadoop.yarn.server.utils.BuilderUtils;
import org.apache.hadoop.yarn.server.utils.YarnServerSecurityUtils;
import org.apache.hadoop.yarn.util.AsyncCallback;
@@ -90,7 +91,7 @@ public class UnmanagedApplicationManager {
private BlockingQueue<AsyncAllocateRequestInfo> requestQueue;
private AMRequestHandlerThread handlerThread;
- private ApplicationMasterProtocol rmProxy;
+ private AMRMClientRelayer rmProxyRelayer;
private ApplicationId applicationId;
private String submitter;
private String appNameSuffix;
@@ -138,7 +139,7 @@ public class UnmanagedApplicationManager {
this.appNameSuffix = appNameSuffix;
this.handlerThread = new AMRequestHandlerThread();
this.requestQueue = new LinkedBlockingQueue<>();
- this.rmProxy = null;
+ this.rmProxyRelayer = null;
this.connectionInitiated = false;
this.registerRequest = null;
this.recordFactory = RecordFactoryProvider.getRecordFactory(conf);
@@ -190,8 +191,9 @@ public class UnmanagedApplicationManager {
throws IOException {
this.userUgi = UserGroupInformation.createProxyUser(
this.applicationId.toString(), UserGroupInformation.getCurrentUser());
- this.rmProxy = createRMProxy(ApplicationMasterProtocol.class, this.conf,
- this.userUgi, amrmToken);
+ this.rmProxyRelayer =
+ new AMRMClientRelayer(createRMProxy(ApplicationMasterProtocol.class,
+ this.conf, this.userUgi, amrmToken));
}
/**
@@ -209,19 +211,18 @@ public class UnmanagedApplicationManager {
// Save the register request for re-register later
this.registerRequest = request;
- // Since we have setKeepContainersAcrossApplicationAttempts = true for UAM.
- // We do not expect application already registered exception here
LOG.info("Registering the Unmanaged application master {}",
this.applicationId);
RegisterApplicationMasterResponse response =
- this.rmProxy.registerApplicationMaster(this.registerRequest);
+ this.rmProxyRelayer.registerApplicationMaster(this.registerRequest);
+ this.lastResponseId = 0;
for (Container container : response.getContainersFromPreviousAttempts()) {
- LOG.info("RegisterUAM returned existing running container "
+ LOG.debug("RegisterUAM returned existing running container "
+ container.getId());
}
for (NMToken nmToken : response.getNMTokensFromPreviousAttempts()) {
- LOG.info("RegisterUAM returned existing NM token for node "
+ LOG.debug("RegisterUAM returned existing NM token for node "
+ nmToken.getNodeId());
}
@@ -249,7 +250,7 @@ public class UnmanagedApplicationManager {
this.handlerThread.shutdown();
- if (this.rmProxy == null) {
+ if (this.rmProxyRelayer == null) {
if (this.connectionInitiated) {
// This is possible if the async launchUAM is still
// blocked and retrying. Return a dummy response in this case.
@@ -261,8 +262,7 @@ public class UnmanagedApplicationManager {
+ "be called before createAndRegister");
}
}
- return AMRMClientUtils.finishAMWithReRegister(request, this.rmProxy,
- this.registerRequest, this.applicationId);
+ return this.rmProxyRelayer.finishApplicationMaster(request);
}
/**
@@ -308,7 +308,7 @@ public class UnmanagedApplicationManager {
//
// In case 2, we have already save the allocate request above, so if the
// registration succeed later, no request is lost.
- if (this.rmProxy == null) {
+ if (this.rmProxyRelayer == null) {
if (this.connectionInitiated) {
LOG.info("Unmanaged AM still not successfully launched/registered yet."
+ " Saving the allocate request and send later.");
@@ -329,6 +329,15 @@ public class UnmanagedApplicationManager {
}
/**
+ * Returns the rmProxy relayer of this UAM.
+ *
+ * @return rmProxy relayer of the UAM
+ */
+ public AMRMClientRelayer getAMRMClientRelayer() {
+ return this.rmProxyRelayer;
+ }
+
+ /**
* Returns RM proxy for the specified protocol type. Unit test cases can
* override this method and return mock proxy instances.
*
@@ -592,10 +601,7 @@ public class UnmanagedApplicationManager {
}
request.setResponseId(lastResponseId);
-
- AllocateResponse response = AMRMClientUtils.allocateWithReRegister(
- request, rmProxy, registerRequest, applicationId);
-
+ AllocateResponse response = rmProxyRelayer.allocate(request);
if (response == null) {
throw new YarnException("Null allocateResponse from allocate");
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea9b6082/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/MockResourceManagerFacade.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/MockResourceManagerFacade.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/MockResourceManagerFacade.java
index 23cd3e2..9b4d91d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/MockResourceManagerFacade.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/MockResourceManagerFacade.java
@@ -251,8 +251,6 @@ public class MockResourceManagerFacade implements ApplicationClientProtocol,
ApplicationAttemptId attemptId = getAppIdentifier();
LOG.info("Registering application attempt: " + attemptId);
- shouldReRegisterNext = false;
-
List<Container> containersFromPreviousAttempt = null;
synchronized (applicationContainerIdMap) {
@@ -266,7 +264,7 @@ public class MockResourceManagerFacade implements ApplicationClientProtocol,
containersFromPreviousAttempt.add(Container.newInstance(containerId,
null, null, null, null, null));
}
- } else {
+ } else if (!shouldReRegisterNext) {
throw new InvalidApplicationMasterRequestException(
AMRMClientUtils.APP_ALREADY_REGISTERED_MESSAGE);
}
@@ -276,6 +274,8 @@ public class MockResourceManagerFacade implements ApplicationClientProtocol,
}
}
+ shouldReRegisterNext = false;
+
// Make sure we wait for certain test cases last in the method
synchronized (syncObj) {
syncObj.notifyAll();
@@ -339,13 +339,6 @@ public class MockResourceManagerFacade implements ApplicationClientProtocol,
validateRunning();
- if (request.getAskList() != null && request.getAskList().size() > 0
- && request.getReleaseList() != null
- && request.getReleaseList().size() > 0) {
- Assert.fail("The mock RM implementation does not support receiving "
- + "askList and releaseList in the same heartbeat");
- }
-
ApplicationAttemptId attemptId = getAppIdentifier();
LOG.info("Allocate from application attempt: " + attemptId);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea9b6082/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/FederationInterceptor.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/FederationInterceptor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/FederationInterceptor.java
index 5740749..645e47e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/FederationInterceptor.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/FederationInterceptor.java
@@ -62,14 +62,15 @@ import org.apache.hadoop.yarn.api.records.ResourceBlacklistRequest;
import org.apache.hadoop.yarn.api.records.ResourceRequest;
import org.apache.hadoop.yarn.api.records.StrictPreemptionContract;
import org.apache.hadoop.yarn.api.records.UpdateContainerRequest;
-import org.apache.hadoop.yarn.client.AMRMClientUtils;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.ApplicationMasterNotRegisteredException;
import org.apache.hadoop.yarn.exceptions.InvalidApplicationMasterRequestException;
import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
import org.apache.hadoop.yarn.proto.YarnServiceProtos.RegisterApplicationMasterRequestProto;
import org.apache.hadoop.yarn.proto.YarnServiceProtos.RegisterApplicationMasterResponseProto;
import org.apache.hadoop.yarn.security.AMRMTokenIdentifier;
+import org.apache.hadoop.yarn.server.AMRMClientRelayer;
import org.apache.hadoop.yarn.server.federation.failover.FederationProxyProviderUtil;
import org.apache.hadoop.yarn.server.federation.policies.FederationPolicyUtils;
import org.apache.hadoop.yarn.server.federation.policies.amrmproxy.FederationAMRMProxyPolicy;
@@ -106,9 +107,9 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
public static final String NMSS_REG_RESPONSE_KEY =
NMSS_CLASS_PREFIX + "registerResponse";
- /*
+ /**
* When AMRMProxy HA is enabled, secondary AMRMTokens will be stored in Yarn
- * Registry. Otherwise if NM recovery is enabled, the UAM token are store in
+ * Registry. Otherwise if NM recovery is enabled, the UAM token are stored in
* local NMSS instead under this directory name.
*/
public static final String NMSS_SECONDARY_SC_PREFIX =
@@ -119,8 +120,23 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
* The home sub-cluster is the sub-cluster where the AM container is running
* in.
*/
- private ApplicationMasterProtocol homeRM;
+ private AMRMClientRelayer homeRMRelayer;
private SubClusterId homeSubClusterId;
+ private volatile int lastHomeResponseId;
+
+ /**
+ * A flag for work preserving NM restart. If we just recovered, we need to
+ * generate an {@link ApplicationMasterNotRegisteredException} exception back
+ * to AM (similar to what RM will do after its restart/fail-over) in its next
+ * allocate to trigger AM re-register (which we will shield from RM and just
+ * return our saved register response) and a full pending requests re-send, so
+ * that all the {@link AMRMClientRelayer} will be re-populated with all
+ * pending requests.
+ *
+ * TODO: When split-merge is not idempotent, this can lead to some
+ * over-allocation without a full cancel to RM.
+ */
+ private volatile boolean justRecovered;
/**
* UAM pool for secondary sub-clusters (ones other than home sub-cluster),
@@ -134,6 +150,12 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
*/
private UnmanagedAMPoolManager uamPool;
+ /**
+ * The rmProxy relayers for secondary sub-clusters that keep track of all
+ * pending requests.
+ */
+ private Map<String, AMRMClientRelayer> secondaryRelayers;
+
/** Thread pool used for asynchronous operations. */
private ExecutorService threadpool;
@@ -186,8 +208,11 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
this.asyncResponseSink = new ConcurrentHashMap<>();
this.threadpool = Executors.newCachedThreadPool();
this.uamPool = createUnmanagedAMPoolManager(this.threadpool);
+ this.secondaryRelayers = new ConcurrentHashMap<>();
this.amRegistrationRequest = null;
this.amRegistrationResponse = null;
+ this.lastHomeResponseId = Integer.MAX_VALUE;
+ this.justRecovered = false;
}
/**
@@ -224,8 +249,8 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
this.homeSubClusterId =
SubClusterId.newInstance(YarnConfiguration.getClusterId(conf));
- this.homeRM = createHomeRMProxy(appContext, ApplicationMasterProtocol.class,
- this.appOwner);
+ this.homeRMRelayer = new AMRMClientRelayer(createHomeRMProxy(appContext,
+ ApplicationMasterProtocol.class, this.appOwner));
this.federationFacade = FederationStateStoreFacade.getInstance();
this.subClusterResolver = this.federationFacade.getSubClusterResolver();
@@ -240,13 +265,12 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
@Override
public void recover(Map<String, byte[]> recoveredDataMap) {
super.recover(recoveredDataMap);
- LOG.info("Recovering data for FederationInterceptor");
+ ApplicationAttemptId attemptId =
+ getApplicationContext().getApplicationAttemptId();
+ LOG.info("Recovering data for FederationInterceptor for {}", attemptId);
if (recoveredDataMap == null) {
return;
}
-
- ApplicationAttemptId attemptId =
- getApplicationContext().getApplicationAttemptId();
try {
if (recoveredDataMap.containsKey(NMSS_REG_REQUEST_KEY)) {
RegisterApplicationMasterRequestProto pb =
@@ -255,6 +279,9 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
this.amRegistrationRequest =
new RegisterApplicationMasterRequestPBImpl(pb);
LOG.info("amRegistrationRequest recovered for {}", attemptId);
+
+ // Give the register request to homeRMRelayer for future re-registration
+ this.homeRMRelayer.setAMRegistrationRequest(this.amRegistrationRequest);
}
if (recoveredDataMap.containsKey(NMSS_REG_RESPONSE_KEY)) {
RegisterApplicationMasterResponseProto pb =
@@ -263,6 +290,9 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
this.amRegistrationResponse =
new RegisterApplicationMasterResponsePBImpl(pb);
LOG.info("amRegistrationResponse recovered for {}", attemptId);
+ // Trigger re-register and full pending re-send only if we have a
+ // saved register response. This should always be true though.
+ this.justRecovered = true;
}
// Recover UAM amrmTokens from registry or NMSS
@@ -309,6 +339,9 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
getApplicationContext().getUser(), this.homeSubClusterId.getId(),
entry.getValue());
+ this.secondaryRelayers.put(subClusterId.getId(),
+ this.uamPool.getAMRMClientRelayer(subClusterId.getId()));
+
RegisterApplicationMasterResponse response =
this.uamPool.registerApplicationMaster(subClusterId.getId(),
this.amRegistrationRequest);
@@ -436,7 +469,7 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
* the other sub-cluster RM will be done lazily as needed later.
*/
this.amRegistrationResponse =
- this.homeRM.registerApplicationMaster(request);
+ this.homeRMRelayer.registerApplicationMaster(request);
if (this.amRegistrationResponse
.getContainersFromPreviousAttempts() != null) {
cacheAllocatedContainers(
@@ -495,6 +528,34 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
Preconditions.checkArgument(this.policyInterpreter != null,
"Allocate should be called after registerApplicationMaster");
+ if (this.justRecovered && this.lastHomeResponseId == Integer.MAX_VALUE) {
+ // Save the responseId home RM is expecting
+ this.lastHomeResponseId = request.getResponseId();
+
+ throw new ApplicationMasterNotRegisteredException(
+ "AMRMProxy just restarted and recovered for "
+ + getApplicationContext().getApplicationAttemptId()
+ + ". AM should re-register and full re-send pending requests.");
+ }
+
+ // Override responseId in the request in two cases:
+ //
+ // 1. After we just recovered after an NM restart and AM's responseId is
+ // reset due to the exception we generate. We need to override the
+ // responseId to the one homeRM expects.
+ //
+ // 2. After homeRM fail-over, the allocate response with reseted responseId
+ // might not be returned successfully back to AM because of RPC connection
+ // timeout between AM and AMRMProxy. In this case, we remember and reset the
+ // responseId for AM.
+ if (this.justRecovered
+ || request.getResponseId() > this.lastHomeResponseId) {
+ LOG.warn("Setting allocate responseId for {} from {} to {}",
+ getApplicationContext().getApplicationAttemptId(),
+ request.getResponseId(), this.lastHomeResponseId);
+ request.setResponseId(this.lastHomeResponseId);
+ }
+
try {
// Split the heart beat request into multiple requests, one for each
// sub-cluster RM that is used by this application.
@@ -509,10 +570,18 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
sendRequestsToSecondaryResourceManagers(requests);
// Send the request to the home RM and get the response
- AllocateResponse homeResponse = AMRMClientUtils.allocateWithReRegister(
- requests.get(this.homeSubClusterId), this.homeRM,
- this.amRegistrationRequest,
- getApplicationContext().getApplicationAttemptId().getApplicationId());
+ AllocateRequest homeRequest = requests.get(this.homeSubClusterId);
+ LOG.info("{} heartbeating to home RM with responseId {}",
+ getApplicationContext().getApplicationAttemptId(),
+ homeRequest.getResponseId());
+
+ AllocateResponse homeResponse = this.homeRMRelayer.allocate(homeRequest);
+
+ // Reset the flag after the first successful homeRM allocate response,
+ // otherwise keep overriding the responseId of new allocate request
+ if (this.justRecovered) {
+ this.justRecovered = false;
+ }
// Notify policy of home response
try {
@@ -540,6 +609,22 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
newRegistrations.getSuccessfulRegistrations());
}
+ LOG.info("{} heartbeat response from home RM with responseId {}",
+ getApplicationContext().getApplicationAttemptId(),
+ homeResponse.getResponseId());
+
+ // Update lastHomeResponseId in three cases:
+ // 1. The normal responseId increments
+ // 2. homeResponse.getResponseId() == 1. This happens when homeRM fails
+ // over, AMRMClientRelayer auto re-register and full re-send for homeRM.
+ // 3. lastHomeResponseId == MAX_INT. This is the initial case or
+ // responseId about to overflow and wrap around
+ if (homeResponse.getResponseId() == this.lastHomeResponseId + 1
+ || homeResponse.getResponseId() == 1
+ || this.lastHomeResponseId == Integer.MAX_VALUE) {
+ this.lastHomeResponseId = homeResponse.getResponseId();
+ }
+
// return the final response to the application master.
return homeResponse;
} catch (IOException ex) {
@@ -584,6 +669,16 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
try {
uamResponse =
uamPool.finishApplicationMaster(subClusterId, finishRequest);
+
+ if (uamResponse.getIsUnregistered()) {
+ secondaryRelayers.remove(subClusterId);
+
+ if (getNMStateStore() != null) {
+ getNMStateStore().removeAMRMProxyAppContextEntry(
+ getApplicationContext().getApplicationAttemptId(),
+ NMSS_SECONDARY_SC_PREFIX + subClusterId);
+ }
+ }
} catch (Throwable e) {
LOG.warn("Failed to finish unmanaged application master: "
+ "RM address: " + subClusterId + " ApplicationId: "
@@ -600,9 +695,7 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
// asynchronously by other sub-cluster resource managers, send the same
// request to the home resource manager on this thread.
FinishApplicationMasterResponse homeResponse =
- AMRMClientUtils.finishAMWithReRegister(request, this.homeRM,
- this.amRegistrationRequest, getApplicationContext()
- .getApplicationAttemptId().getApplicationId());
+ this.homeRMRelayer.finishApplicationMaster(request);
if (subClusterIds.size() > 0) {
// Wait for other sub-cluster resource managers to return the
@@ -621,10 +714,6 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
if (uamResponse.getResponse() == null
|| !uamResponse.getResponse().getIsUnregistered()) {
failedToUnRegister = true;
- } else if (getNMStateStore() != null) {
- getNMStateStore().removeAMRMProxyAppContextEntry(
- getApplicationContext().getApplicationAttemptId(),
- NMSS_SECONDARY_SC_PREFIX + uamResponse.getSubClusterId());
}
} catch (Throwable e) {
failedToUnRegister = true;
@@ -689,6 +778,11 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
return this.registryClient;
}
+ @VisibleForTesting
+ protected int getLastHomeResponseId() {
+ return this.lastHomeResponseId;
+ }
+
/**
* Create the UAM pool manager for secondary sub-clsuters. For unit test to
* override.
@@ -800,6 +894,9 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
getApplicationContext().getUser(), homeSubClusterId.getId(),
amrmToken);
+ secondaryRelayers.put(subClusterId.getId(),
+ uamPool.getAMRMClientRelayer(subClusterId.getId()));
+
response = uamPool.registerApplicationMaster(
subClusterId.getId(), amRegistrationRequest);
@@ -1098,7 +1195,10 @@ public class FederationInterceptor extends AbstractRequestInterceptor {
token = uamPool.launchUAM(subClusterId, config,
appContext.getApplicationAttemptId().getApplicationId(),
amRegistrationResponse.getQueue(), appContext.getUser(),
- homeSubClusterId.toString(), registryClient != null);
+ homeSubClusterId.toString(), true);
+
+ secondaryRelayers.put(subClusterId,
+ uamPool.getAMRMClientRelayer(subClusterId));
uamResponse = uamPool.registerApplicationMaster(subClusterId,
registerRequest);
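The recovery protocol spelled out in the comments above condenses to a small piece of bookkeeping: on the first allocate after an NM restart, remember the responseId the home RM expects and throw ApplicationMasterNotRegisteredException so the AM re-registers and re-sends all pending requests; afterwards, override any responseId the home RM would reject. A simplified sketch (field names mirror the patch; the wrapper class is hypothetical):

    import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
    import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
    import org.apache.hadoop.yarn.exceptions.ApplicationMasterNotRegisteredException;

    public class ResponseIdTrackerSketch {
      private volatile int lastHomeResponseId = Integer.MAX_VALUE;
      private volatile boolean justRecovered = true; // set during recover()

      void beforeHomeAllocate(AllocateRequest request)
          throws ApplicationMasterNotRegisteredException {
        if (justRecovered && lastHomeResponseId == Integer.MAX_VALUE) {
          // First allocate after recovery: remember what home RM expects,
          // then force the AM to re-register and re-send pending requests.
          lastHomeResponseId = request.getResponseId();
          throw new ApplicationMasterNotRegisteredException(
              "AMRMProxy just restarted and recovered; AM should re-register");
        }
        if (justRecovered || request.getResponseId() > lastHomeResponseId) {
          // Override the responseId to the one home RM expects.
          request.setResponseId(lastHomeResponseId);
        }
      }

      void afterHomeAllocate(AllocateResponse response) {
        justRecovered = false; // cleared by the first successful heartbeat
        int id = response.getResponseId();
        if (id == lastHomeResponseId + 1        // normal increment
            || id == 1                          // home RM failover reset
            || lastHomeResponseId == Integer.MAX_VALUE) { // initial/overflow
          lastHomeResponseId = id;
        }
      }
    }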
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea9b6082/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/BaseAMRMProxyTest.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/BaseAMRMProxyTest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/BaseAMRMProxyTest.java
index 677732d..2794857 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/BaseAMRMProxyTest.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/BaseAMRMProxyTest.java
@@ -32,6 +32,7 @@ import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerStatus;
+import org.apache.hadoop.yarn.api.records.ExecutionTypeRequest;
import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
import org.apache.hadoop.yarn.api.records.NodeId;
import org.apache.hadoop.yarn.api.records.Priority;
@@ -536,6 +537,7 @@ public abstract class BaseAMRMProxyTest {
capability.setMemorySize(memory);
capability.setVirtualCores(vCores);
req.setCapability(capability);
+ req.setExecutionTypeRequest(ExecutionTypeRequest.newInstance());
if (labelExpression != null) {
req.setNodeLabelExpression(labelExpression);
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ea9b6082/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java
index eefaba1..a837eed 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java
@@ -52,6 +52,7 @@ import org.apache.hadoop.yarn.api.records.ResourceRequest;
import org.apache.hadoop.yarn.api.records.UpdateContainerError;
import org.apache.hadoop.yarn.api.records.UpdatedContainer;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.ApplicationMasterNotRegisteredException;
import org.apache.hadoop.yarn.exceptions.InvalidApplicationMasterRequestException;
import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.server.MockResourceManagerFacade;
@@ -516,6 +517,22 @@ public class TestFederationInterceptor extends BaseAMRMProxyTest {
interceptor.recover(recoveredDataMap);
Assert.assertEquals(1, interceptor.getUnmanagedAMPoolSize());
+ Assert.assertEquals(Integer.MAX_VALUE,
+ interceptor.getLastHomeResponseId());
+
+ // The first allocate call expects a fail-over exception and re-register
+ int responseId = 10;
+ AllocateRequest allocateRequest =
+ Records.newRecord(AllocateRequest.class);
+ allocateRequest.setResponseId(responseId);
+ try {
+ interceptor.allocate(allocateRequest);
+ Assert.fail("Expecting an ApplicationMasterNotRegisteredException "
+ + " after FederationInterceptor restarts and recovers");
+ } catch (ApplicationMasterNotRegisteredException e) {
+ }
+ interceptor.registerApplicationMaster(registerReq);
+ Assert.assertEquals(responseId, interceptor.getLastHomeResponseId());
// Release all containers
releaseContainersAndAssert(containers);
---------------------------------------------------------------------
[41/50] [abbrv] hadoop git commit: HDDS-232. Parallel unit test execution for HDDS/Ozone. Contributed by Arpit Agarwal.
Posted by bo...@apache.org.
HDDS-232. Parallel unit test execution for HDDS/Ozone. Contributed by Arpit Agarwal.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d1850720
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d1850720
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d1850720
Branch: refs/heads/YARN-7402
Commit: d18507209e268aa5be0d3e56cec23de24107e7d9
Parents: 1fe5b93
Author: Nanda kumar <na...@apache.org>
Authored: Fri Jul 13 19:50:52 2018 +0530
Committer: Nanda kumar <na...@apache.org>
Committed: Fri Jul 13 19:50:52 2018 +0530
----------------------------------------------------------------------
.../common/report/TestReportPublisher.java | 2 +-
hadoop-hdds/pom.xml | 49 ++++++++++++++++++++
hadoop-ozone/pom.xml | 49 ++++++++++++++++++++
3 files changed, 99 insertions(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1850720/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
index 026e7aa..d4db55b 100644
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
@@ -111,7 +111,7 @@ public class TestReportPublisher {
publisher.init(dummyContext, executorService);
Thread.sleep(150);
Assert.assertEquals(1, ((DummyReportPublisher) publisher).getReportCount);
- Thread.sleep(150);
+ Thread.sleep(100);
Assert.assertEquals(2, ((DummyReportPublisher) publisher).getReportCount);
executorService.shutdown();
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1850720/hadoop-hdds/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdds/pom.xml b/hadoop-hdds/pom.xml
index 573803b..09fac33 100644
--- a/hadoop-hdds/pom.xml
+++ b/hadoop-hdds/pom.xml
@@ -116,4 +116,53 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
</plugin>
</plugins>
</build>
+
+ <profiles>
+ <profile>
+ <id>parallel-tests</id>
+ <build>
+ <plugins>
+ <plugin>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-maven-plugins</artifactId>
+ <executions>
+ <execution>
+ <id>parallel-tests-createdir</id>
+ <goals>
+ <goal>parallel-tests-createdir</goal>
+ </goals>
+ </execution>
+ </executions>
+ </plugin>
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-surefire-plugin</artifactId>
+ <configuration>
+ <forkCount>${testsThreadCount}</forkCount>
+ <reuseForks>false</reuseForks>
+ <argLine>${maven-surefire-plugin.argLine} -DminiClusterDedicatedDirs=true</argLine>
+ <systemPropertyVariables>
+ <testsThreadCount>${testsThreadCount}</testsThreadCount>
+ <test.build.data>${test.build.data}/${surefire.forkNumber}</test.build.data>
+ <test.build.dir>${test.build.dir}/${surefire.forkNumber}</test.build.dir>
+ <hadoop.tmp.dir>${hadoop.tmp.dir}/${surefire.forkNumber}</hadoop.tmp.dir>
+
+ <!-- This is intentionally the same directory for all JUnit -->
+ <!-- forks, for use in the very rare situation that -->
+ <!-- concurrent tests need to coordinate, such as using lock -->
+ <!-- files. -->
+ <test.build.shared.data>${test.build.data}</test.build.shared.data>
+
+ <!-- Due to a Maven quirk, setting this to just -->
+ <!-- surefire.forkNumber won't do the parameter substitution. -->
+ <!-- Putting a prefix in front of it like "fork-" makes it -->
+ <!-- work. -->
+ <test.unique.fork.id>fork-${surefire.forkNumber}</test.unique.fork.id>
+ </systemPropertyVariables>
+ </configuration>
+ </plugin>
+ </plugins>
+ </build>
+ </profile>
+ </profiles>
</project>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1850720/hadoop-ozone/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-ozone/pom.xml b/hadoop-ozone/pom.xml
index b655088..e82a3d8 100644
--- a/hadoop-ozone/pom.xml
+++ b/hadoop-ozone/pom.xml
@@ -178,4 +178,53 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
</plugin>
</plugins>
</build>
+
+ <profiles>
+ <profile>
+ <id>parallel-tests</id>
+ <build>
+ <plugins>
+ <plugin>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-maven-plugins</artifactId>
+ <executions>
+ <execution>
+ <id>parallel-tests-createdir</id>
+ <goals>
+ <goal>parallel-tests-createdir</goal>
+ </goals>
+ </execution>
+ </executions>
+ </plugin>
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-surefire-plugin</artifactId>
+ <configuration>
+ <forkCount>${testsThreadCount}</forkCount>
+ <reuseForks>false</reuseForks>
+ <argLine>${maven-surefire-plugin.argLine} -DminiClusterDedicatedDirs=true</argLine>
+ <systemPropertyVariables>
+ <testsThreadCount>${testsThreadCount}</testsThreadCount>
+ <test.build.data>${test.build.data}/${surefire.forkNumber}</test.build.data>
+ <test.build.dir>${test.build.dir}/${surefire.forkNumber}</test.build.dir>
+ <hadoop.tmp.dir>${hadoop.tmp.dir}/${surefire.forkNumber}</hadoop.tmp.dir>
+
+ <!-- This is intentionally the same directory for all JUnit -->
+ <!-- forks, for use in the very rare situation that -->
+ <!-- concurrent tests need to coordinate, such as using lock -->
+ <!-- files. -->
+ <test.build.shared.data>${test.build.data}</test.build.shared.data>
+
+ <!-- Due to a Maven quirk, setting this to just -->
+ <!-- surefire.forkNumber won't do the parameter substitution. -->
+ <!-- Putting a prefix in front of it like "fork-" makes it -->
+ <!-- work. -->
+ <test.unique.fork.id>fork-${surefire.forkNumber}</test.unique.fork.id>
+ </systemPropertyVariables>
+ </configuration>
+ </plugin>
+ </plugins>
+ </build>
+ </profile>
+ </profiles>
</project>
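With these profiles in place, the HDDS and Ozone suites can be forked in parallel the same way the existing parallel-tests profiles elsewhere in Hadoop are driven, for example: mvn test -Pparallel-tests -DtestsThreadCount=8 (the thread count is illustrative). Each fork gets its own test.build.data, test.build.dir and hadoop.tmp.dir keyed by surefire.forkNumber, so concurrently running tests do not trample each other's scratch space.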
---------------------------------------------------------------------
[35/50] [abbrv] hadoop git commit: HDDS-234. Add SCM node report handler. Contributed by Ajay Kumar.
Posted by bo...@apache.org.
HDDS-234. Add SCM node report handler. Contributed by Ajay Kumar.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/556d9b36
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/556d9b36
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/556d9b36
Branch: refs/heads/YARN-7402
Commit: 556d9b36be4b0b759646b8f6030c9e693b97bdb8
Parents: 5ee90ef
Author: Anu Engineer <ae...@apache.org>
Authored: Thu Jul 12 12:09:31 2018 -0700
Committer: Anu Engineer <ae...@apache.org>
Committed: Thu Jul 12 12:09:31 2018 -0700
----------------------------------------------------------------------
.../hadoop/hdds/scm/node/NodeManager.java | 9 ++
.../hadoop/hdds/scm/node/NodeReportHandler.java | 19 +++-
.../hadoop/hdds/scm/node/SCMNodeManager.java | 11 +++
.../hdds/scm/container/MockNodeManager.java | 11 +++
.../hdds/scm/node/TestNodeReportHandler.java | 95 ++++++++++++++++++++
.../testutils/ReplicationNodeManagerMock.java | 10 +++
6 files changed, 152 insertions(+), 3 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556d9b36/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
index 5e2969d..deb1628 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
@@ -17,6 +17,7 @@
*/
package org.apache.hadoop.hdds.scm.node;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.NodeReportProto;
import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeStat;
import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
@@ -138,4 +139,12 @@ public interface NodeManager extends StorageContainerNodeProtocol,
* @param command
*/
void addDatanodeCommand(UUID dnId, SCMCommand command);
+
+ /**
+ * Process node report.
+ *
+ * @param dnUuid
+ * @param nodeReport
+ */
+ void processNodeReport(UUID dnUuid, NodeReportProto nodeReport);
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556d9b36/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
index aa78d53..331bfed 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
@@ -7,7 +7,7 @@
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
- * http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -18,25 +18,38 @@
package org.apache.hadoop.hdds.scm.node;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
.NodeReportFromDatanode;
import org.apache.hadoop.hdds.server.events.EventHandler;
import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
/**
* Handles Node Reports from datanode.
*/
public class NodeReportHandler implements EventHandler<NodeReportFromDatanode> {
+ private static final Logger LOGGER = LoggerFactory
+ .getLogger(NodeReportHandler.class);
private final NodeManager nodeManager;
public NodeReportHandler(NodeManager nodeManager) {
+ Preconditions.checkNotNull(nodeManager);
this.nodeManager = nodeManager;
}
@Override
public void onMessage(NodeReportFromDatanode nodeReportFromDatanode,
- EventPublisher publisher) {
- //TODO: process node report.
+ EventPublisher publisher) {
+ Preconditions.checkNotNull(nodeReportFromDatanode);
+ DatanodeDetails dn = nodeReportFromDatanode.getDatanodeDetails();
+ Preconditions.checkNotNull(dn, "NodeReport is "
+ + "missing DatanodeDetails.");
+ LOGGER.trace("Processing node report for dn: {}", dn);
+ nodeManager
+ .processNodeReport(dn.getUuid(), nodeReportFromDatanode.getReport());
}
}
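For context, the handler plugs into SCM's event framework. A minimal sketch of dispatching one report through it (the EventQueue wiring and the SCMEvents.NODE_REPORT typed event are assumptions here, not part of this patch):

    package org.apache.hadoop.hdds.scm.node;

    import org.apache.hadoop.hdds.protocol.DatanodeDetails;
    import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.NodeReportProto;
    import org.apache.hadoop.hdds.scm.events.SCMEvents;
    import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.NodeReportFromDatanode;
    import org.apache.hadoop.hdds.server.events.EventQueue;

    public final class NodeReportDispatchSketch {
      static void dispatch(NodeManager nodeManager, DatanodeDetails dn,
          NodeReportProto report) {
        EventQueue queue = new EventQueue();
        // Route NODE_REPORT events to the handler, which then calls
        // nodeManager.processNodeReport(dn.getUuid(), report).
        queue.addHandler(SCMEvents.NODE_REPORT,
            new NodeReportHandler(nodeManager));
        queue.fireEvent(SCMEvents.NODE_REPORT,
            new NodeReportFromDatanode(dn, report));
      }
    }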
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556d9b36/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
index 2ba8067..7370b07 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
@@ -423,6 +423,17 @@ public class SCMNodeManager
}
/**
+ * Process node report.
+ *
+ * @param dnUuid
+ * @param nodeReport
+ */
+ @Override
+ public void processNodeReport(UUID dnUuid, NodeReportProto nodeReport) {
+ this.updateNodeStat(dnUuid, nodeReport);
+ }
+
+ /**
* Returns the aggregated node stats.
* @return the aggregated node stats.
*/
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556d9b36/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
index 5e83c28..593b780 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
@@ -295,6 +295,17 @@ public class MockNodeManager implements NodeManager {
}
}
+ /**
+ * Empty implementation for processNodeReport.
+ *
+ * @param dnUuid
+ * @param nodeReport
+ */
+ @Override
+ public void processNodeReport(UUID dnUuid, NodeReportProto nodeReport) {
+ // do nothing
+ }
+
// Returns the number of commands that is queued to this node manager.
public int getCommandCount(DatanodeDetails dd) {
List<SCMCommand> list = commandMap.get(dd.getUuid());
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556d9b36/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeReportHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeReportHandler.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeReportHandler.java
new file mode 100644
index 0000000..3cbde4b
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeReportHandler.java
@@ -0,0 +1,95 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hdds.scm.node;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.UUID;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.NodeReportProto;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.StorageReportProto;
+import org.apache.hadoop.hdds.scm.TestUtils;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.NodeReportFromDatanode;
+import org.apache.hadoop.hdds.server.events.Event;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.hdds.server.events.EventQueue;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class TestNodeReportHandler implements EventPublisher {
+
+ private static Logger LOG = LoggerFactory
+ .getLogger(TestNodeReportHandler.class);
+ private NodeReportHandler nodeReportHandler;
+ private SCMNodeManager nodeManager;
+ private String storagePath = GenericTestUtils.getRandomizedTempPath()
+ .concat("/" + UUID.randomUUID().toString());
+
+ @Before
+ public void resetEventCollector() throws IOException {
+ OzoneConfiguration conf = new OzoneConfiguration();
+ nodeManager = new SCMNodeManager(conf, "cluster1", null, new EventQueue());
+ nodeReportHandler = new NodeReportHandler(nodeManager);
+ }
+
+ @Test
+ public void testNodeReport() throws IOException {
+ DatanodeDetails dn = TestUtils.getDatanodeDetails();
+ List<StorageReportProto> reports =
+ TestUtils.createStorageReport(100, 10, 90, storagePath, null,
+ dn.getUuid().toString(), 1);
+
+ nodeReportHandler.onMessage(
+ getNodeReport(dn, reports), this);
+ SCMNodeMetric nodeMetric = nodeManager.getNodeStat(dn);
+
+ Assert.assertTrue(nodeMetric.get().getCapacity().get() == 100);
+ Assert.assertTrue(nodeMetric.get().getRemaining().get() == 90);
+ Assert.assertTrue(nodeMetric.get().getScmUsed().get() == 10);
+
+ reports =
+ TestUtils.createStorageReport(100, 10, 90, storagePath, null,
+ dn.getUuid().toString(), 2);
+ nodeReportHandler.onMessage(
+ getNodeReport(dn, reports), this);
+ nodeMetric = nodeManager.getNodeStat(dn);
+
+ Assert.assertTrue(nodeMetric.get().getCapacity().get() == 200);
+ Assert.assertTrue(nodeMetric.get().getRemaining().get() == 180);
+ Assert.assertTrue(nodeMetric.get().getScmUsed().get() == 20);
+
+ }
+
+ private NodeReportFromDatanode getNodeReport(DatanodeDetails dn,
+ List<StorageReportProto> reports) {
+ NodeReportProto nodeReportProto = TestUtils.createNodeReport(reports);
+ return new NodeReportFromDatanode(dn, nodeReportProto);
+ }
+
+ @Override
+ public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void fireEvent(
+ EVENT_TYPE event, PAYLOAD payload) {
+ LOG.info("Event is published: {}", payload);
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556d9b36/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
index 2d27d71..a0249aa 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
@@ -289,6 +289,16 @@ public class ReplicationNodeManagerMock implements NodeManager {
this.commandQueue.addCommand(dnId, command);
}
+ /**
+ * Empty implementation for processNodeReport.
+ * @param dnUuid
+ * @param nodeReport
+ */
+ @Override
+ public void processNodeReport(UUID dnUuid, NodeReportProto nodeReport) {
+ // do nothing.
+ }
+
@Override
public void onMessage(CommandForDatanode commandForDatanode,
EventPublisher publisher) {
[25/50] [abbrv] hadoop git commit: HDFS-13726. RBF: Fix RBF configuration links. Contributed by Takanobu Asanuma.
Posted by bo...@apache.org.
HDFS-13726. RBF: Fix RBF configuration links. Contributed by Takanobu Asanuma.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2ae13d41
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2ae13d41
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2ae13d41
Branch: refs/heads/YARN-7402
Commit: 2ae13d41dcd4f49e6b4ebc099e5f8bb8280b9872
Parents: 52e1bc8
Author: Yiqun Lin <yq...@apache.org>
Authored: Wed Jul 11 22:11:59 2018 +0800
Committer: Yiqun Lin <yq...@apache.org>
Committed: Wed Jul 11 22:11:59 2018 +0800
----------------------------------------------------------------------
.../hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2ae13d41/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
index 70c6226..73e0f4a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
@@ -175,7 +175,7 @@ Deployment
By default, the Router is ready to take requests and monitor the NameNode in the local machine.
It needs to know the State Store endpoint by setting `dfs.federation.router.store.driver.class`.
-The rest of the options are documented in [hdfs-default.xml](../hadoop-hdfs/hdfs-default.xml).
+The rest of the options are documented in [hdfs-rbf-default.xml](../hadoop-hdfs-rbf/hdfs-rbf-default.xml).
Once the Router is configured, it can be started:
@@ -290,7 +290,7 @@ Router configuration
--------------------
One can add the configurations for Router-based federation to **hdfs-site.xml**.
-The main options are documented in [hdfs-default.xml](../hadoop-hdfs/hdfs-default.xml).
+The main options are documented in [hdfs-rbf-default.xml](../hadoop-hdfs-rbf/hdfs-rbf-default.xml).
The configuration values are described in this section.
### RPC server
[02/50] [abbrv] hadoop git commit: HDDS-213. Single lock to synchronize KeyValueContainer#update.
Posted by bo...@apache.org.
HDDS-213. Single lock to synchronize KeyValueContainer#update.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/44e19fc7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/44e19fc7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/44e19fc7
Branch: refs/heads/YARN-7402
Commit: 44e19fc7f70b5c19f2b626fe247aea5d51ada51c
Parents: cb9574a
Author: Hanisha Koneru <ha...@apache.org>
Authored: Mon Jul 9 09:33:09 2018 -0700
Committer: Hanisha Koneru <ha...@apache.org>
Committed: Mon Jul 9 09:33:09 2018 -0700
----------------------------------------------------------------------
.../container/common/impl/ContainerData.java | 28 +++--
.../common/impl/ContainerDataYaml.java | 10 +-
.../container/keyvalue/KeyValueContainer.java | 124 +++++++------------
.../container/ozoneimpl/ContainerReader.java | 37 +++---
4 files changed, 87 insertions(+), 112 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/44e19fc7/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
index 0d217e4..54b186b 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java
@@ -182,12 +182,14 @@ public class ContainerData {
}
/**
- * Adds metadata.
+ * Add/Update metadata.
+ * We should hold the container lock before updating the metadata as this
+ * will be persisted on disk, unless we are reconstructing ContainerData
+ * from protoBuf or from an on-disk .container file, in which case the
+ * lock is not required.
*/
- public void addMetadata(String key, String value) throws IOException {
- synchronized (this.metadata) {
- metadata.put(key, value);
- }
+ public void addMetadata(String key, String value) {
+ metadata.put(key, value);
}
/**
@@ -195,9 +197,19 @@ public class ContainerData {
* @return metadata
*/
public Map<String, String> getMetadata() {
- synchronized (this.metadata) {
- return Collections.unmodifiableMap(this.metadata);
- }
+ return Collections.unmodifiableMap(this.metadata);
+ }
+
+ /**
+ * Set metadata.
+ * We should hold the container lock before updating the metadata as this
+ * will be persisted on disk, unless we are reconstructing ContainerData
+ * from protoBuf or from an on-disk .container file, in which case the
+ * lock is not required.
+ */
+ public void setMetadata(Map<String, String> metadataMap) {
+ metadata.clear();
+ metadata.putAll(metadataMap);
}
/**
http://git-wip-us.apache.org/repos/asf/hadoop/blob/44e19fc7/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerDataYaml.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerDataYaml.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerDataYaml.java
index 70d1615..90af24f 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerDataYaml.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerDataYaml.java
@@ -200,15 +200,7 @@ public final class ContainerDataYaml {
OzoneConsts.METADATA_PATH));
kvData.setChunksPath((String) nodes.get(OzoneConsts.CHUNKS_PATH));
Map<String, String> meta = (Map) nodes.get(OzoneConsts.METADATA);
- meta.forEach((key, val) -> {
- try {
- kvData.addMetadata(key, val);
- } catch (IOException e) {
- throw new IllegalStateException("Unexpected " +
- "Key Value Pair " + "(" + key + "," + val +")in the metadata " +
- "for containerId " + (long) nodes.get("containerId"));
- }
- });
+ kvData.setMetadata(meta);
String state = (String) nodes.get(OzoneConsts.STATE);
switch (state) {
case "OPEN":
http://git-wip-us.apache.org/repos/asf/hadoop/blob/44e19fc7/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
index b07b053..155a988 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainer.java
@@ -19,6 +19,8 @@
package org.apache.hadoop.ozone.container.keyvalue;
import com.google.common.base.Preconditions;
+import java.nio.file.Files;
+import java.nio.file.StandardCopyOption;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileAlreadyExistsException;
import org.apache.hadoop.fs.FileUtil;
@@ -32,7 +34,6 @@ import org.apache.hadoop.io.nativeio.NativeIO;
import org.apache.hadoop.ozone.OzoneConfigKeys;
import org.apache.hadoop.ozone.OzoneConsts;
import org.apache.hadoop.ozone.container.common.helpers.ContainerUtils;
-import org.apache.hadoop.ozone.container.common.impl.ContainerData;
import org.apache.hadoop.ozone.container.common.impl.ContainerDataYaml;
import org.apache.hadoop.ozone.container.common.volume.VolumeSet;
import org.apache.hadoop.ozone.container.common.volume.HddsVolume;
@@ -59,8 +60,6 @@ import java.util.concurrent.locks.ReentrantReadWriteLock;
import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
.Result.CONTAINER_ALREADY_EXISTS;
import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
- .Result.CONTAINER_METADATA_ERROR;
-import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
.Result.CONTAINER_INTERNAL_ERROR;
import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
.Result.CONTAINER_FILES_CREATE_ERROR;
@@ -146,7 +145,7 @@ public class KeyValueContainer implements Container {
containerData.setVolume(containerVolume);
// Create .container file and .chksm file
- createContainerFile(containerFile, containerCheckSumFile);
+ writeToContainerFile(containerFile, containerCheckSumFile, true);
} catch (StorageContainerException ex) {
@@ -177,36 +176,50 @@ public class KeyValueContainer implements Container {
* Creates .container file and checksum file.
*
* @param containerFile
- * @param containerCheckSumFile
+ * @param checksumFile
+ * @param isCreate true if we are creating a new container file and false if
+ * we are updating an existing container file.
* @throws StorageContainerException
*/
- private void createContainerFile(File containerFile, File
- containerCheckSumFile) throws StorageContainerException {
+ private void writeToContainerFile(File containerFile, File
+ checksumFile, boolean isCreate)
+ throws StorageContainerException {
File tempContainerFile = null;
- File tempCheckSumFile = null;
+ File tempChecksumFile = null;
FileOutputStream containerCheckSumStream = null;
Writer writer = null;
long containerId = containerData.getContainerID();
try {
tempContainerFile = createTempFile(containerFile);
- tempCheckSumFile = createTempFile(containerCheckSumFile);
+ tempChecksumFile = createTempFile(checksumFile);
ContainerDataYaml.createContainerFile(ContainerProtos.ContainerType
.KeyValueContainer, tempContainerFile, containerData);
//Compute Checksum for container file
String checksum = KeyValueContainerUtil.computeCheckSum(containerId,
tempContainerFile);
- containerCheckSumStream = new FileOutputStream(tempCheckSumFile);
+ containerCheckSumStream = new FileOutputStream(tempChecksumFile);
writer = new OutputStreamWriter(containerCheckSumStream, "UTF-8");
writer.write(checksum);
writer.flush();
- NativeIO.renameTo(tempContainerFile, containerFile);
- NativeIO.renameTo(tempCheckSumFile, containerCheckSumFile);
+ if (isCreate) {
+ // When creating a new container, .container file should not exist
+ // already.
+ NativeIO.renameTo(tempContainerFile, containerFile);
+ NativeIO.renameTo(tempChecksumFile, checksumFile);
+ } else {
+ // When updating a container, the .container file should exist. If
+ // not, the container is in an inconsistent state.
+ Files.move(tempContainerFile.toPath(), containerFile.toPath(),
+ StandardCopyOption.REPLACE_EXISTING);
+ Files.move(tempChecksumFile.toPath(), checksumFile.toPath(),
+ StandardCopyOption.REPLACE_EXISTING);
+ }
} catch (IOException ex) {
throw new StorageContainerException("Error during creation of " +
- "required files(.container, .chksm) for container. Container Name: "
+ "required files(.container, .chksm) for container. ContainerID: "
+ containerId, ex, CONTAINER_FILES_CREATE_ERROR);
} finally {
IOUtils.closeStream(containerCheckSumStream);
@@ -216,8 +229,8 @@ public class KeyValueContainer implements Container {
tempContainerFile.getAbsolutePath());
}
}
- if (tempCheckSumFile != null && tempCheckSumFile.exists()) {
- if (!tempCheckSumFile.delete()) {
+ if (tempChecksumFile != null && tempChecksumFile.exists()) {
+ if (!tempChecksumFile.delete()) {
LOG.warn("Unable to delete container temporary checksum file: {}.",
tempContainerFile.getAbsolutePath());
}
@@ -236,68 +249,24 @@ public class KeyValueContainer implements Container {
private void updateContainerFile(File containerFile, File
- containerCheckSumFile) throws StorageContainerException {
+ checksumFile) throws StorageContainerException {
- File containerBkpFile = null;
- File checkSumBkpFile = null;
long containerId = containerData.getContainerID();
- try {
- if (containerFile.exists() && containerCheckSumFile.exists()) {
- //Take backup of original files (.container and .chksm files)
- containerBkpFile = new File(containerFile + ".bkp");
- checkSumBkpFile = new File(containerCheckSumFile + ".bkp");
- NativeIO.renameTo(containerFile, containerBkpFile);
- NativeIO.renameTo(containerCheckSumFile, checkSumBkpFile);
- createContainerFile(containerFile, containerCheckSumFile);
- } else {
- containerData.setState(ContainerProtos.ContainerLifeCycleState.INVALID);
- throw new StorageContainerException("Container is an Inconsistent " +
- "state, missing required files(.container, .chksm). ContainerID: " +
- containerId, INVALID_CONTAINER_STATE);
- }
- } catch (StorageContainerException ex) {
- throw ex;
- } catch (IOException ex) {
- // Restore from back up files.
+ if (containerFile.exists() && checksumFile.exists()) {
try {
- if (containerBkpFile != null && containerBkpFile
- .exists() && containerFile.delete()) {
- LOG.info("update failed for container Name: {}, restoring container" +
- " file", containerId);
- NativeIO.renameTo(containerBkpFile, containerFile);
- }
- if (checkSumBkpFile != null && checkSumBkpFile.exists() &&
- containerCheckSumFile.delete()) {
- LOG.info("update failed for container Name: {}, restoring checksum" +
- " file", containerId);
- NativeIO.renameTo(checkSumBkpFile, containerCheckSumFile);
- }
- throw new StorageContainerException("Error during updating of " +
- "required files(.container, .chksm) for container. Container Name: "
- + containerId, ex, CONTAINER_FILES_CREATE_ERROR);
+ writeToContainerFile(containerFile, checksumFile, false);
} catch (IOException e) {
- containerData.setState(ContainerProtos.ContainerLifeCycleState.INVALID);
- LOG.error("During restore failed for container Name: " +
- containerId);
- throw new StorageContainerException(
- "Failed to restore container data from the backup. ID: "
- + containerId, CONTAINER_FILES_CREATE_ERROR);
- }
- } finally {
- if (containerBkpFile != null && containerBkpFile
- .exists()) {
- if(!containerBkpFile.delete()) {
- LOG.warn("Unable to delete container backup file: {}",
- containerBkpFile);
- }
- }
- if (checkSumBkpFile != null && checkSumBkpFile.exists()) {
- if(!checkSumBkpFile.delete()) {
- LOG.warn("Unable to delete container checksum backup file: {}",
- checkSumBkpFile);
- }
+ //TODO : Container update failure is not handled currently. Might
+ // lead to loss of .container file. When Update container feature
+ // support is added, this failure should also be handled.
+ throw new StorageContainerException("Container update failed. " +
+ "ContainerID: " + containerId, CONTAINER_FILES_CREATE_ERROR);
}
+ } else {
+ throw new StorageContainerException("Container is an Inconsistent " +
+ "state, missing required files(.container, .chksm). ContainerID: " +
+ containerId, INVALID_CONTAINER_STATE);
}
}
@@ -393,22 +362,21 @@ public class KeyValueContainer implements Container {
"Updating a closed container without force option is not allowed. " +
"ContainerID: " + containerId, UNSUPPORTED_REQUEST);
}
+
+ Map<String, String> oldMetadata = containerData.getMetadata();
try {
+ writeLock();
for (Map.Entry<String, String> entry : metadata.entrySet()) {
containerData.addMetadata(entry.getKey(), entry.getValue());
}
- } catch (IOException ex) {
- throw new StorageContainerException("Container Metadata update error" +
- ". Container Name:" + containerId, ex, CONTAINER_METADATA_ERROR);
- }
- try {
- writeLock();
- String containerName = String.valueOf(containerId);
File containerFile = getContainerFile();
File containerCheckSumFile = getContainerCheckSumFile();
// update the new container data to .container File
updateContainerFile(containerFile, containerCheckSumFile);
} catch (StorageContainerException ex) {
+ // TODO:
+ // On error, reset the metadata.
+ containerData.setMetadata(oldMetadata);
throw ex;
} finally {
writeUnlock();
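The core technique in this patch is the write-temp-then-move update in writeToContainerFile: new contents land in a sibling temp file first, and Files.move with REPLACE_EXISTING then swaps it over the existing .container file, so a crash mid-write cannot corrupt the original and the old backup-and-restore logic becomes unnecessary. A minimal, self-contained sketch of the pattern, with a plain text payload and illustrative names:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SafeFileUpdateSketch {

  // Write new contents to a sibling temp file, then move it over the
  // target. The target is therefore always either the old version or the
  // complete new version, never a partially written file.
  static void writeSafely(Path target, String contents) throws IOException {
    Path temp = target.resolveSibling(target.getFileName() + ".tmp");
    Files.write(temp, contents.getBytes(StandardCharsets.UTF_8));
    try {
      Files.move(temp, target, StandardCopyOption.REPLACE_EXISTING);
    } finally {
      Files.deleteIfExists(temp);  // cleans up only if the move failed
    }
  }

  public static void main(String[] args) throws IOException {
    Path container = Files.createTempFile("demo", ".container");
    writeSafely(container, "state: OPEN\n");
    writeSafely(container, "state: CLOSED\n");  // update replaces in place
    System.out.println(new String(Files.readAllBytes(container),
        StandardCharsets.UTF_8));
  }
}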
http://git-wip-us.apache.org/repos/asf/hadoop/blob/44e19fc7/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerReader.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerReader.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerReader.java
index b90efdc..06e49f0 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerReader.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerReader.java
@@ -109,25 +109,28 @@ public class ContainerReader implements Runnable {
for (File containerTopDir : containerTopDirs) {
if (containerTopDir.isDirectory()) {
File[] containerDirs = containerTopDir.listFiles();
- for (File containerDir : containerDirs) {
- File metadataPath = new File(containerDir + File.separator +
- OzoneConsts.CONTAINER_META_PATH);
- String containerName = containerDir.getName();
- if (metadataPath.exists()) {
- File containerFile = KeyValueContainerLocationUtil
- .getContainerFile(metadataPath, containerName);
- File checksumFile = KeyValueContainerLocationUtil
- .getContainerCheckSumFile(metadataPath, containerName);
- if (containerFile.exists() && checksumFile.exists()) {
- verifyContainerFile(containerName, containerFile,
- checksumFile);
+ if (containerDirs != null) {
+ for (File containerDir : containerDirs) {
+ File metadataPath = new File(containerDir + File.separator +
+ OzoneConsts.CONTAINER_META_PATH);
+ String containerName = containerDir.getName();
+ if (metadataPath.exists()) {
+ File containerFile = KeyValueContainerLocationUtil
+ .getContainerFile(metadataPath, containerName);
+ File checksumFile = KeyValueContainerLocationUtil
+ .getContainerCheckSumFile(metadataPath, containerName);
+ if (containerFile.exists() && checksumFile.exists()) {
+ verifyContainerFile(containerName, containerFile,
+ checksumFile);
+ } else {
+ LOG.error(
+ "Missing container metadata files for Container: " +
+ "{}", containerName);
+ }
} else {
- LOG.error("Missing container metadata files for Container: " +
- "{}", containerName);
+ LOG.error("Missing container metadata directory for " +
+ "Container: {}", containerName);
}
- } else {
- LOG.error("Missing container metadata directory for " +
- "Container: {}", containerName);
}
}
}
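The ContainerReader change above is a null guard: File.listFiles() returns null rather than an empty array when the path is not a directory or an I/O error occurs, so iterating over the result without a check risks a NullPointerException. A small self-contained sketch of the defensive pattern:

import java.io.File;

public class ListFilesGuardSketch {
  public static void main(String[] args) {
    File dir = new File(args.length > 0 ? args[0] : ".");
    // listFiles() returns null on I/O error or when 'dir' is not a
    // directory, so the result must be null-checked before iterating.
    File[] children = dir.listFiles();
    if (children == null) {
      System.err.println("Could not list: " + dir);
      return;
    }
    for (File child : children) {
      System.out.println(child.getName());
    }
  }
}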
[31/50] [abbrv] hadoop git commit: HDFS-12837. Intermittent failure in TestReencryptionWithKMS.
Posted by bo...@apache.org.
HDFS-12837. Intermittent failure in TestReencryptionWithKMS.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b37074be
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b37074be
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b37074be
Branch: refs/heads/YARN-7402
Commit: b37074be5ab35c238e18bb9c3b89db6d7f8d0986
Parents: 632aca5
Author: Xiao Chen <xi...@apache.org>
Authored: Wed Jul 11 20:54:37 2018 -0700
Committer: Xiao Chen <xi...@apache.org>
Committed: Wed Jul 11 21:03:19 2018 -0700
----------------------------------------------------------------------
.../server/namenode/ReencryptionHandler.java | 4 +-
.../hdfs/server/namenode/TestReencryption.java | 61 +++++++++++---------
2 files changed, 37 insertions(+), 28 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b37074be/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
index 5b52c82..b92fe9f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
@@ -616,7 +616,9 @@ public class ReencryptionHandler implements Runnable {
while (shouldPauseForTesting) {
LOG.info("Sleeping in the re-encrypt handler for unit test.");
synchronized (reencryptionHandler) {
- reencryptionHandler.wait(30000);
+ if (shouldPauseForTesting) {
+ reencryptionHandler.wait(30000);
+ }
}
LOG.info("Continuing re-encrypt handler after pausing.");
}
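The one-line fix above re-checks shouldPauseForTesting inside the synchronized block before calling wait(). Without the re-check, a notify arriving between the outer loop test and the wait() call is simply missed, and the handler sleeps out the full 30-second timeout, the likely cause of the intermittent test failures. A minimal, self-contained sketch of this guarded-wait idiom, with illustrative names:

public class GuardedWaitSketch {
  private final Object monitor = new Object();
  private volatile boolean shouldPause = true;

  void pauseLoop() throws InterruptedException {
    while (shouldPause) {
      synchronized (monitor) {
        // Re-check under the lock: if resume() already flipped the flag
        // and called notifyAll(), waiting now would miss that signal and
        // block for the whole timeout.
        if (shouldPause) {
          monitor.wait(30000);
        }
      }
    }
  }

  void resume() {
    synchronized (monitor) {
      shouldPause = false;
      monitor.notifyAll();  // wake the pausing thread immediately
    }
  }

  public static void main(String[] args) throws InterruptedException {
    GuardedWaitSketch s = new GuardedWaitSketch();
    Thread t = new Thread(() -> {
      try {
        s.pauseLoop();
      } catch (InterruptedException ignored) {
        Thread.currentThread().interrupt();
      }
    });
    t.start();
    Thread.sleep(100);
    s.resume();
    t.join();
    System.out.println("resumed without waiting out the timeout");
  }
}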
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b37074be/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReencryption.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReencryption.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReencryption.java
index 5409f0d..5d34d3c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReencryption.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReencryption.java
@@ -68,6 +68,7 @@ import static org.apache.hadoop.test.GenericTestUtils.assertExceptionContains;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
@@ -207,8 +208,7 @@ public class TestReencryption {
ZoneReencryptionStatus zs = it.next();
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertNotEquals(fei0.getEzKeyVersionName(), zs.getEzKeyVersionName());
assertEquals(fei1.getEzKeyVersionName(), zs.getEzKeyVersionName());
assertEquals(10, zs.getFilesReencrypted());
@@ -600,14 +600,27 @@ public class TestReencryption {
final ZoneReencryptionStatus zs = it.next();
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
if (fei != null) {
assertNotEquals(fei.getEzKeyVersionName(), zs.getEzKeyVersionName());
}
assertEquals(expectedFiles, zs.getFilesReencrypted());
}
+ /**
+ * Verify that the zone status's completion time is positive and no less
+ * than its submission time.
+ */
+ private void verifyZoneCompletionTime(final ZoneReencryptionStatus zs) {
+ assertNotNull(zs);
+ assertTrue("Completion time should be positive. " + zs.getCompletionTime(),
+ zs.getCompletionTime() > 0);
+ assertTrue("Completion time " + zs.getCompletionTime()
+ + " should be no less than submission time "
+ + zs.getSubmissionTime(),
+ zs.getCompletionTime() >= zs.getSubmissionTime());
+ }
+
@Test
public void testReencryptLoadedFromFsimage() throws Exception {
/*
@@ -1476,7 +1489,7 @@ public class TestReencryption {
}
@Override
- public void reencryptEncryptedKeys() throws IOException {
+ public synchronized void reencryptEncryptedKeys() throws IOException {
if (exceptionCount > 0) {
exceptionCount--;
try {
@@ -1537,8 +1550,7 @@ public class TestReencryption {
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
assertTrue(zs.isCanceled());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(0, zs.getFilesReencrypted());
assertTrue(getUpdater().isRunning());
@@ -1560,8 +1572,7 @@ public class TestReencryption {
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
assertFalse(zs.isCanceled());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(10, zs.getFilesReencrypted());
}
@@ -1579,8 +1590,7 @@ public class TestReencryption {
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
assertTrue(zs.isCanceled());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(0, zs.getFilesReencrypted());
// verify re-encryption works after restart.
@@ -1592,8 +1602,7 @@ public class TestReencryption {
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
assertFalse(zs.isCanceled());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(10, zs.getFilesReencrypted());
}
@@ -1679,8 +1688,7 @@ public class TestReencryption {
ZoneReencryptionStatus zs = it.next();
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(10, zs.getFilesReencrypted());
}
@@ -1736,7 +1744,7 @@ public class TestReencryption {
}
@Override
- public void reencryptEncryptedKeys() throws IOException {
+ public synchronized void reencryptEncryptedKeys() throws IOException {
if (exceptionCount > 0) {
--exceptionCount;
throw new IOException("Injected KMS failure");
@@ -1772,8 +1780,7 @@ public class TestReencryption {
ZoneReencryptionStatus zs = it.next();
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(5, zs.getFilesReencrypted());
assertEquals(5, zs.getNumReencryptionFailures());
}
@@ -1788,7 +1795,8 @@ public class TestReencryption {
}
@Override
- public void reencryptUpdaterProcessOneTask() throws IOException {
+ public synchronized void reencryptUpdaterProcessOneTask()
+ throws IOException {
if (exceptionCount > 0) {
--exceptionCount;
throw new IOException("Injected process task failure");
@@ -1824,8 +1832,7 @@ public class TestReencryption {
ZoneReencryptionStatus zs = it.next();
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(5, zs.getFilesReencrypted());
assertEquals(1, zs.getNumReencryptionFailures());
}
@@ -1841,7 +1848,8 @@ public class TestReencryption {
}
@Override
- public void reencryptUpdaterProcessCheckpoint() throws IOException {
+ public synchronized void reencryptUpdaterProcessCheckpoint()
+ throws IOException {
if (exceptionCount > 0) {
--exceptionCount;
throw new IOException("Injected process checkpoint failure");
@@ -1877,8 +1885,7 @@ public class TestReencryption {
ZoneReencryptionStatus zs = it.next();
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(10, zs.getFilesReencrypted());
assertEquals(1, zs.getNumReencryptionFailures());
}
@@ -1893,7 +1900,8 @@ public class TestReencryption {
}
@Override
- public void reencryptUpdaterProcessOneTask() throws IOException {
+ public synchronized void reencryptUpdaterProcessOneTask()
+ throws IOException {
if (exceptionCount > 0) {
--exceptionCount;
throw new RetriableException("Injected process task failure");
@@ -1930,8 +1938,7 @@ public class TestReencryption {
ZoneReencryptionStatus zs = it.next();
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(10, zs.getFilesReencrypted());
assertEquals(0, zs.getNumReencryptionFailures());
}
[34/50] [abbrv] hadoop git commit: HDDS-228. Add the ReplicaMaps to ContainerStateManager. Contributed by Ajay Kumar.
Posted by bo...@apache.org.
HDDS-228. Add the ReplicaMaps to ContainerStateManager. Contributed by Ajay Kumar.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5ee90efe
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5ee90efe
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5ee90efe
Branch: refs/heads/YARN-7402
Commit: 5ee90efed385db4bf235816145b30a0f691fc91b
Parents: a08812a
Author: Anu Engineer <ae...@apache.org>
Authored: Thu Jul 12 10:43:24 2018 -0700
Committer: Anu Engineer <ae...@apache.org>
Committed: Thu Jul 12 10:43:24 2018 -0700
----------------------------------------------------------------------
.../scm/container/ContainerStateManager.java | 34 ++++++++
.../scm/container/states/ContainerStateMap.java | 86 ++++++++++++++++++++
.../container/TestContainerStateManager.java | 79 ++++++++++++++++++
3 files changed, 199 insertions(+)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ee90efe/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
index 870ab1d..223deac 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
@@ -19,6 +19,7 @@ package org.apache.hadoop.hdds.scm.container;
import com.google.common.base.Preconditions;
import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.ScmConfigKeys;
import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline;
import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
@@ -488,4 +489,37 @@ public class ContainerStateManager implements Closeable {
public void close() throws IOException {
}
+ /**
+ * Returns the latest list of DataNodes where replicas for the given
+ * containerId exist. Throws an SCMException if no entry is found for the
+ * given containerId.
+ *
+ * @param containerID
+ * @return Set<DatanodeDetails>
+ */
+ public Set<DatanodeDetails> getContainerReplicas(ContainerID containerID)
+ throws SCMException {
+ return containers.getContainerReplicas(containerID);
+ }
+
+ /**
+ * Add a container Replica for given DataNode.
+ *
+ * @param containerID
+ * @param dn
+ */
+ public void addContainerReplica(ContainerID containerID, DatanodeDetails dn) {
+ containers.addContainerReplica(containerID, dn);
+ }
+
+ /**
+ * Remove a container Replica for given DataNode.
+ *
+ * @param containerID
+ * @param dn
+ * @return True if the dataNode is removed successfully, else false.
+ */
+ public boolean removeContainerReplica(ContainerID containerID,
+ DatanodeDetails dn) throws SCMException {
+ return containers.removeContainerReplica(containerID, dn);
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ee90efe/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
index c23b1fd..1c92861 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
@@ -18,13 +18,18 @@
package org.apache.hadoop.hdds.scm.container.states;
+import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Preconditions;
+import java.util.HashSet;
+import java.util.Set;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.container.ContainerID;
import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
import org.apache.hadoop.hdds.scm.exceptions.SCMException;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException.ResultCodes;
import org.apache.hadoop.util.AutoCloseableLock;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -83,6 +88,8 @@ public class ContainerStateMap {
private final ContainerAttribute<ReplicationType> typeMap;
private final Map<ContainerID, ContainerInfo> containerMap;
+ // Map to hold replicas of given container.
+ private final Map<ContainerID, Set<DatanodeDetails>> contReplicaMap;
private final static NavigableSet<ContainerID> EMPTY_SET =
Collections.unmodifiableNavigableSet(new TreeSet<>());
@@ -101,6 +108,7 @@ public class ContainerStateMap {
typeMap = new ContainerAttribute<>();
containerMap = new HashMap<>();
autoLock = new AutoCloseableLock();
+ contReplicaMap = new HashMap<>();
// new InstrumentedLock(getClass().getName(), LOG,
// new ReentrantLock(),
// 1000,
@@ -158,6 +166,84 @@ public class ContainerStateMap {
}
/**
+ * Returns the latest list of DataNodes where replicas for the given
+ * containerId exist. Throws an SCMException if no entry is found for the
+ * given containerId.
+ *
+ * @param containerID
+ * @return Set<DatanodeDetails>
+ */
+ public Set<DatanodeDetails> getContainerReplicas(ContainerID containerID)
+ throws SCMException {
+ Preconditions.checkNotNull(containerID);
+ try (AutoCloseableLock lock = autoLock.acquire()) {
+ if (contReplicaMap.containsKey(containerID)) {
+ return Collections
+ .unmodifiableSet(contReplicaMap.get(containerID));
+ }
+ }
+ throw new SCMException(
+ "No entry exist for containerId: " + containerID + " in replica map.",
+ ResultCodes.FAILED_TO_FIND_CONTAINER);
+ }
+
+ /**
+ * Adds given datanodes as nodes where replica for given containerId exist.
+ * Logs a debug entry if a datanode is already added as replica for given
+ * ContainerId.
+ *
+ * @param containerID
+ * @param dnList
+ */
+ public void addContainerReplica(ContainerID containerID,
+ DatanodeDetails... dnList) {
+ Preconditions.checkNotNull(containerID);
+ // Take lock to avoid race condition around insertion.
+ try (AutoCloseableLock lock = autoLock.acquire()) {
+ for (DatanodeDetails dn : dnList) {
+ Preconditions.checkNotNull(dn);
+ if (contReplicaMap.containsKey(containerID)) {
+ if(!contReplicaMap.get(containerID).add(dn)) {
+ LOG.debug("ReplicaMap already contains entry for container Id: "
+ + "{},DataNode: {}", containerID, dn);
+ }
+ } else {
+ Set<DatanodeDetails> dnSet = new HashSet<>();
+ dnSet.add(dn);
+ contReplicaMap.put(containerID, dnSet);
+ }
+ }
+ }
+ }
+
+ /**
+ * Remove a container Replica for given DataNode.
+ *
+ * @param containerID
+ * @param dn
+ * @return True if the dataNode is removed successfully, else false.
+ */
+ public boolean removeContainerReplica(ContainerID containerID,
+ DatanodeDetails dn) throws SCMException {
+ Preconditions.checkNotNull(containerID);
+ Preconditions.checkNotNull(dn);
+
+ // Take lock to avoid race condition.
+ try (AutoCloseableLock lock = autoLock.acquire()) {
+ if (contReplicaMap.containsKey(containerID)) {
+ return contReplicaMap.get(containerID).remove(dn);
+ }
+ }
+ throw new SCMException(
+ "No entry exist for containerId: " + containerID + " in replica map.",
+ ResultCodes.FAILED_TO_FIND_CONTAINER);
+ }
+
+ @VisibleForTesting
+ public static Logger getLOG() {
+ return LOG;
+ }
+
+ /**
* Returns the full container Map.
*
* @return - Map
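The replica map introduced above pairs a HashMap from container id to a set of datanodes with a lock acquired in try-with-resources, and hands out unmodifiable views so callers cannot mutate SCM's internal state. A minimal, self-contained sketch of the same bookkeeping, using a tiny stand-in for Hadoop's AutoCloseableLock and illustrative types (long ids and string node names instead of the HDDS classes):

import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.locks.ReentrantLock;

public class ReplicaMapSketch {
  // Tiny stand-in for org.apache.hadoop.util.AutoCloseableLock.
  static final class CloseableLock implements AutoCloseable {
    private final ReentrantLock lock = new ReentrantLock();
    CloseableLock acquire() { lock.lock(); return this; }
    @Override public void close() { lock.unlock(); }
  }

  private final CloseableLock lock = new CloseableLock();
  private final Map<Long, Set<String>> replicas = new HashMap<>();

  void addReplica(long containerId, String datanode) {
    try (CloseableLock l = lock.acquire()) {
      replicas.computeIfAbsent(containerId, k -> new HashSet<>())
          .add(datanode);
    }
  }

  boolean removeReplica(long containerId, String datanode) {
    try (CloseableLock l = lock.acquire()) {
      Set<String> set = replicas.get(containerId);
      return set != null && set.remove(datanode);
    }
  }

  Set<String> getReplicas(long containerId) {
    try (CloseableLock l = lock.acquire()) {
      // Unmodifiable view, so callers cannot mutate internal state.
      return Collections.unmodifiableSet(
          replicas.getOrDefault(containerId, Collections.emptySet()));
    }
  }

  public static void main(String[] args) {
    ReplicaMapSketch map = new ReplicaMapSketch();
    map.addReplica(1L, "dn1");
    map.addReplica(1L, "dn2");
    map.removeReplica(1L, "dn1");
    System.out.println(map.getReplicas(1L));  // prints [dn2]
  }
}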
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ee90efe/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerStateManager.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerStateManager.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerStateManager.java
index bb85650..9e209af 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerStateManager.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerStateManager.java
@@ -17,14 +17,22 @@
package org.apache.hadoop.hdds.scm.container;
import com.google.common.primitives.Longs;
+import java.util.Set;
+import java.util.UUID;
+import org.apache.commons.lang3.RandomUtils;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline;
+import org.apache.hadoop.hdds.scm.container.states.ContainerStateMap;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
import org.apache.hadoop.ozone.MiniOzoneCluster;
import org.apache.hadoop.ozone.OzoneConsts;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
import org.apache.hadoop.hdds.scm.XceiverClientManager;
import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.test.LambdaTestUtils;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
@@ -35,6 +43,7 @@ import java.util.ArrayList;
import java.util.List;
import java.util.NavigableSet;
import java.util.Random;
+import org.slf4j.event.Level;
/**
* Tests for ContainerStateManager.
@@ -333,4 +342,74 @@ public class TestContainerStateManager {
Assert.assertEquals(allocatedSize, currentInfo.getAllocatedBytes());
}
}
+
+ @Test
+ public void testReplicaMap() throws Exception {
+ GenericTestUtils.setLogLevel(ContainerStateMap.getLOG(), Level.DEBUG);
+ GenericTestUtils.LogCapturer logCapturer = GenericTestUtils.LogCapturer
+ .captureLogs(ContainerStateMap.getLOG());
+ DatanodeDetails dn1 = DatanodeDetails.newBuilder().setHostName("host1")
+ .setIpAddress("1.1.1.1")
+ .setUuid(UUID.randomUUID().toString()).build();
+ DatanodeDetails dn2 = DatanodeDetails.newBuilder().setHostName("host2")
+ .setIpAddress("2.2.2.2")
+ .setUuid(UUID.randomUUID().toString()).build();
+
+ // Test 1: no replicas exist
+ ContainerID containerID = ContainerID.valueof(RandomUtils.nextLong());
+ Set<DatanodeDetails> replicaSet;
+ LambdaTestUtils.intercept(SCMException.class, "", () -> {
+ containerStateManager.getContainerReplicas(containerID);
+ });
+
+ // Test 2: Add replica nodes and then test
+ containerStateManager.addContainerReplica(containerID, dn1);
+ containerStateManager.addContainerReplica(containerID, dn2);
+ replicaSet = containerStateManager.getContainerReplicas(containerID);
+ Assert.assertEquals(2, replicaSet.size());
+ Assert.assertTrue(replicaSet.contains(dn1));
+ Assert.assertTrue(replicaSet.contains(dn2));
+
+ // Test 3: Remove one replica node and then test
+ containerStateManager.removeContainerReplica(containerID, dn1);
+ replicaSet = containerStateManager.getContainerReplicas(containerID);
+ Assert.assertEquals(1, replicaSet.size());
+ Assert.assertFalse(replicaSet.contains(dn1));
+ Assert.assertTrue(replicaSet.contains(dn2));
+
+ // Test 4: Remove the second replica node and then test
+ containerStateManager.removeContainerReplica(containerID, dn2);
+ replicaSet = containerStateManager.getContainerReplicas(containerID);
+ Assert.assertEquals(0, replicaSet.size());
+ Assert.assertFalse(replicaSet.contains(dn1));
+ Assert.assertFalse(replicaSet.contains(dn2));
+
+ // Test 5: Re-insert dn1
+ containerStateManager.addContainerReplica(containerID, dn1);
+ replicaSet = containerStateManager.getContainerReplicas(containerID);
+ Assert.assertEquals(1, replicaSet.size());
+ Assert.assertTrue(replicaSet.contains(dn1));
+ Assert.assertFalse(replicaSet.contains(dn2));
+
+ // Re-insert dn2
+ containerStateManager.addContainerReplica(containerID, dn2);
+ replicaSet = containerStateManager.getContainerReplicas(containerID);
+ Assert.assertEquals(2, replicaSet.size());
+ Assert.assertTrue(replicaSet.contains(dn1));
+ Assert.assertTrue(replicaSet.contains(dn2));
+
+ Assert.assertFalse(logCapturer.getOutput().contains(
+ "ReplicaMap already contains entry for container Id: " + containerID
+ .toString() + ",DataNode: " + dn1.toString()));
+ // Re-insert dn1
+ containerStateManager.addContainerReplica(containerID, dn1);
+ replicaSet = containerStateManager.getContainerReplicas(containerID);
+ Assert.assertEquals(2, replicaSet.size());
+ Assert.assertTrue(replicaSet.contains(dn1));
+ Assert.assertTrue(replicaSet.contains(dn2));
+ Assert.assertTrue(logCapturer.getOutput().contains(
+ "ReplicaMap already contains entry for container Id: " + containerID
+ .toString() + ",DataNode: " + dn1.toString()));
+ }
+
}
[39/50] [abbrv] hadoop git commit: HDDS-238. Add Node2Pipeline Map in SCM to track ratis/standalone pipelines. Contributed by Mukul Kumar Singh.
Posted by bo...@apache.org.
HDDS-238. Add Node2Pipeline Map in SCM to track ratis/standalone pipelines. Contributed by Mukul Kumar Singh.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3f3f7222
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3f3f7222
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3f3f7222
Branch: refs/heads/YARN-7402
Commit: 3f3f72221ffd11cc6bfa0e010e3c5b0e14911102
Parents: f89e265
Author: Xiaoyu Yao <xy...@apache.org>
Authored: Thu Jul 12 22:02:57 2018 -0700
Committer: Xiaoyu Yao <xy...@apache.org>
Committed: Thu Jul 12 22:14:03 2018 -0700
----------------------------------------------------------------------
.../container/common/helpers/ContainerInfo.java | 11 ++
.../hdds/scm/container/ContainerMapping.java | 11 +-
.../scm/container/ContainerStateManager.java | 6 +
.../scm/container/states/ContainerStateMap.java | 36 +++++-
.../hdds/scm/pipelines/Node2PipelineMap.java | 121 +++++++++++++++++++
.../hdds/scm/pipelines/PipelineManager.java | 22 ++--
.../hdds/scm/pipelines/PipelineSelector.java | 24 +++-
.../scm/pipelines/ratis/RatisManagerImpl.java | 11 +-
.../standalone/StandaloneManagerImpl.java | 7 +-
.../hdds/scm/pipeline/TestNode2PipelineMap.java | 117 ++++++++++++++++++
10 files changed, 343 insertions(+), 23 deletions(-)
----------------------------------------------------------------------
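Before the per-file diffs, the idea in rough form: a node-to-pipeline map lets SCM look up every ratis or standalone pipeline that a datanode participates in, so that a dead-node event can find the affected pipelines (and, through the open-pipeline attribute added below, the open containers that must be closed). A minimal, self-contained sketch with illustrative names, not the actual Node2PipelineMap API:

import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class Node2PipelineSketch {
  // Datanode UUID -> names of the pipelines the datanode belongs to.
  private final Map<UUID, Set<String>> node2Pipelines =
      new ConcurrentHashMap<>();

  void addPipeline(UUID datanode, String pipelineName) {
    node2Pipelines
        .computeIfAbsent(datanode, k -> ConcurrentHashMap.newKeySet())
        .add(pipelineName);
  }

  // Pipelines to tear down (and whose open containers to close) when a
  // datanode is declared dead.
  Set<String> getPipelines(UUID datanode) {
    return Collections.unmodifiableSet(
        node2Pipelines.getOrDefault(datanode, Collections.emptySet()));
  }

  public static void main(String[] args) {
    Node2PipelineSketch map = new Node2PipelineSketch();
    UUID dn = UUID.randomUUID();
    map.addPipeline(dn, "ratis-pipeline-1");
    map.addPipeline(dn, "standalone-pipeline-7");
    System.out.println("pipelines on dead node: " + map.getPipelines(dn));
  }
}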
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
index 9593717..4074b21 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
@@ -456,4 +456,15 @@ public class ContainerInfo implements Comparator<ContainerInfo>,
replicationFactor, replicationType);
}
}
+
+ /**
+ * Check if a container is in the open state; this returns true if the
+ * container is open, allocated or creating. Any container in these
+ * states is managed as an open container by SCM.
+ */
+ public boolean isContainerOpen() {
+ return state == HddsProtos.LifeCycleState.ALLOCATED ||
+ state == HddsProtos.LifeCycleState.CREATING ||
+ state == HddsProtos.LifeCycleState.OPEN;
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
index abad32c..26f4d86 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
@@ -477,7 +477,7 @@ public class ContainerMapping implements Mapping {
List<StorageContainerDatanodeProtocolProtos.ContainerInfo>
containerInfos = reports.getReportsList();
- for (StorageContainerDatanodeProtocolProtos.ContainerInfo datanodeState :
+ for (StorageContainerDatanodeProtocolProtos.ContainerInfo datanodeState :
containerInfos) {
byte[] dbKey = Longs.toByteArray(datanodeState.getContainerID());
lock.lock();
@@ -498,7 +498,9 @@ public class ContainerMapping implements Mapping {
containerStore.put(dbKey, newState.toByteArray());
// If the container is closed, then state is already written to SCM
- Pipeline pipeline = pipelineSelector.getPipeline(newState.getPipelineName(), newState.getReplicationType());
+ Pipeline pipeline =
+ pipelineSelector.getPipeline(newState.getPipelineName(),
+ newState.getReplicationType());
if(pipeline == null) {
pipeline = pipelineSelector
.getReplicationPipeline(newState.getReplicationType(),
@@ -713,4 +715,9 @@ public class ContainerMapping implements Mapping {
public MetadataStore getContainerStore() {
return containerStore;
}
+
+ @VisibleForTesting
+ public PipelineSelector getPipelineSelector() {
+ return pipelineSelector;
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
index 223deac..b2431dc 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
@@ -17,6 +17,7 @@
package org.apache.hadoop.hdds.scm.container;
+import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Preconditions;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
@@ -522,4 +523,9 @@ public class ContainerStateManager implements Closeable {
DatanodeDetails dn) throws SCMException {
return containers.removeContainerReplica(containerID, dn);
}
+
+ @VisibleForTesting
+ public ContainerStateMap getContainerStateMap() {
+ return containers;
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
index 1c92861..46fe2ab 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
@@ -51,7 +51,7 @@ import static org.apache.hadoop.hdds.scm.exceptions.SCMException.ResultCodes
* Container State Map acts like a unified map for various attributes that are
* used to select containers when we need allocated blocks.
* <p>
- * This class provides the ability to query 4 classes of attributes. They are
+ * This class provides the ability to query 5 classes of attributes. They are
* <p>
* 1. LifeCycleStates - LifeCycle States of container describe in which state
* a container is. For example, a container needs to be in Open State for a
@@ -72,6 +72,9 @@ import static org.apache.hadoop.hdds.scm.exceptions.SCMException.ResultCodes
* Replica and THREE Replica. User can specify how many copies should be made
* for a ozone key.
* <p>
+ * 5. Pipeline - The pipeline constitutes the set of Datanodes on which the
+ * open container resides physically.
+ * <p>
* The most common access pattern of this class is to select a container based
* on all these parameters, for example, when allocating a block we will
* select a container that belongs to user1, with Ratis replication which can
@@ -86,6 +89,14 @@ public class ContainerStateMap {
private final ContainerAttribute<String> ownerMap;
private final ContainerAttribute<ReplicationFactor> factorMap;
private final ContainerAttribute<ReplicationType> typeMap;
+ // This map holds the pipeline-to-open-container mappings. It will be
+ // queried for the list of open containers on a particular pipeline so
+ // that a close can be issued on the corresponding containers when any
+ // of the following events occurs:
+ // 1. Dead datanode.
+ // 2. Datanode out of space.
+ // 3. Volume loss or volume out of space.
+ private final ContainerAttribute<String> openPipelineMap;
private final Map<ContainerID, ContainerInfo> containerMap;
// Map to hold replicas of given container.
@@ -106,6 +117,7 @@ public class ContainerStateMap {
ownerMap = new ContainerAttribute<>();
factorMap = new ContainerAttribute<>();
typeMap = new ContainerAttribute<>();
+ openPipelineMap = new ContainerAttribute<>();
containerMap = new HashMap<>();
autoLock = new AutoCloseableLock();
contReplicaMap = new HashMap<>();
@@ -140,6 +152,9 @@ public class ContainerStateMap {
ownerMap.insert(info.getOwner(), id);
factorMap.insert(info.getReplicationFactor(), id);
typeMap.insert(info.getReplicationType(), id);
+ if (info.isContainerOpen()) {
+ openPipelineMap.insert(info.getPipelineName(), id);
+ }
LOG.trace("Created container with {} successfully.", id);
}
}
@@ -329,6 +344,11 @@ public class ContainerStateMap {
throw new SCMException("Updating the container map failed.", ex,
FAILED_TO_CHANGE_CONTAINER_STATE);
}
+ // In case the container is set to closed state, it needs to be removed from
+ // the pipeline Map.
+ if (newState == LifeCycleState.CLOSED) {
+ openPipelineMap.remove(info.getPipelineName(), id);
+ }
}
/**
@@ -360,6 +380,20 @@ public class ContainerStateMap {
}
/**
+ * Returns open containers in the SCM for the given pipeline.
+ *
+ * @param pipeline - Pipeline name.
+ * @return NavigableSet<ContainerID>
+ */
+ public NavigableSet<ContainerID> getOpenContainerIDsByPipeline(String pipeline) {
+ Preconditions.checkNotNull(pipeline);
+
+ try (AutoCloseableLock lock = autoLock.acquire()) {
+ return openPipelineMap.getCollection(pipeline);
+ }
+ }
+
+ /**
* Returns Containers by replication factor.
*
* @param factor - Replication Factor.
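
For context, a JDK-only sketch of the reverse index the new openPipelineMap provides (ContainerAttribute is an SCM helper; this stand-in class and its names are hypothetical):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.NavigableSet;
    import java.util.TreeSet;

    public class OpenPipelineIndexSketch {
      private final Map<String, NavigableSet<Long>> openPipelineMap =
          new HashMap<>();

      // Mirrors the insert done on container creation for open containers.
      public void insert(String pipelineName, long containerId) {
        openPipelineMap.computeIfAbsent(pipelineName, k -> new TreeSet<>())
            .add(containerId);
      }

      // Mirrors the CLOSED-state cleanup: drop the container from the index.
      public void remove(String pipelineName, long containerId) {
        NavigableSet<Long> ids = openPipelineMap.get(pipelineName);
        if (ids != null) {
          ids.remove(containerId);
        }
      }

      // Mirrors getOpenContainerIDsByPipeline; this sketch returns an empty
      // set for unknown pipelines.
      public NavigableSet<Long> getOpenContainerIDsByPipeline(String pipeline) {
        return openPipelineMap.getOrDefault(pipeline, new TreeSet<>());
      }
    }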
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/Node2PipelineMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/Node2PipelineMap.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/Node2PipelineMap.java
new file mode 100644
index 0000000..2e89616
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/Node2PipelineMap.java
@@ -0,0 +1,121 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ *
+ */
+
+package org.apache.hadoop.hdds.scm.pipelines;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+
+import java.util.Set;
+import java.util.UUID;
+import java.util.Map;
+import java.util.HashSet;
+import java.util.Collections;
+
+import java.util.concurrent.ConcurrentHashMap;
+
+import static org.apache.hadoop.hdds.scm.exceptions.SCMException.ResultCodes
+ .DUPLICATE_DATANODE;
+
+
+/**
+ * This data structure maintains the list of pipelines which the given datanode
+ * is a part of.
+ * This information will be added whenever a new pipeline allocation happens.
+ *
+ * TODO: this information needs to be regenerated from pipeline reports on
+ * SCM restart
+ */
+public class Node2PipelineMap {
+ private final Map<UUID, Set<Pipeline>> dn2PipelineMap;
+
+ /**
+ * Constructs a Node2PipelineMap Object.
+ */
+ public Node2PipelineMap() {
+ dn2PipelineMap = new ConcurrentHashMap<>();
+ }
+
+ /**
+ * Returns true if this is a datanode that is already tracked by
+ * Node2PipelineMap.
+ *
+ * @param datanodeID - UUID of the Datanode.
+ * @return True if this is tracked, false if this map does not know about it.
+ */
+ private boolean isKnownDatanode(UUID datanodeID) {
+ Preconditions.checkNotNull(datanodeID);
+ return dn2PipelineMap.containsKey(datanodeID);
+ }
+
+ /**
+ * Insert a new datanode into Node2Pipeline Map.
+ *
+ * @param datanodeID - Datanode UUID.
+ * @param pipelines - set of pipelines.
+ */
+ private void insertNewDatanode(UUID datanodeID, Set<Pipeline> pipelines)
+ throws SCMException {
+ Preconditions.checkNotNull(pipelines);
+ Preconditions.checkNotNull(datanodeID);
+ if(dn2PipelineMap.putIfAbsent(datanodeID, pipelines) != null) {
+ throw new SCMException("Node already exists in the map",
+ DUPLICATE_DATANODE);
+ }
+ }
+
+ /**
+ * Removes datanode Entry from the map.
+ * @param datanodeID - Datanode ID.
+ */
+ public synchronized void removeDatanode(UUID datanodeID) {
+ Preconditions.checkNotNull(datanodeID);
+ dn2PipelineMap.computeIfPresent(datanodeID, (k, v) -> null);
+ }
+
+ /**
+ * Returns null if there are no pipelines associated with this datanode ID.
+ *
+ * @param datanode - UUID
+ * @return Set of pipelines or Null.
+ */
+ public Set<Pipeline> getPipelines(UUID datanode) {
+ Preconditions.checkNotNull(datanode);
+ return dn2PipelineMap.computeIfPresent(datanode, (k, v) ->
+ Collections.unmodifiableSet(v));
+ }
+
+ /**
+ * Adds a pipeline entry to a given datanode in the map.
+ * @param pipeline Pipeline to be added
+ */
+ public synchronized void addPipeline(Pipeline pipeline) throws SCMException {
+ for (DatanodeDetails details : pipeline.getDatanodes().values()) {
+ UUID dnId = details.getUuid();
+ dn2PipelineMap
+ .computeIfAbsent(dnId, k -> Collections.synchronizedSet(new HashSet<>()))
+ .add(pipeline);
+ }
+ }
+
+ public Map<UUID, Set<Pipeline>> getDn2PipelineMap() {
+ return Collections.unmodifiableMap(dn2PipelineMap);
+ }
+}
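
A usage sketch of the concurrent multimap idiom Node2PipelineMap is built on (JDK-only; the class below is illustrative). One design wrinkle worth noting: ConcurrentHashMap.computeIfPresent replaces the stored value with the function's result, so this sketch reads with get() and wraps, which keeps the original mutable set in place:

    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    public class Node2PipelineSketch {
      private final Map<UUID, Set<String>> dn2Pipelines =
          new ConcurrentHashMap<>();

      // Same idiom as addPipeline: create the value set atomically on first use.
      public void addPipeline(String pipelineName, UUID... datanodes) {
        for (UUID dn : datanodes) {
          dn2Pipelines
              .computeIfAbsent(dn,
                  k -> Collections.synchronizedSet(new HashSet<>()))
              .add(pipelineName);
        }
      }

      // Read-only view of a node's pipelines, or null for an unknown node.
      public Set<String> getPipelines(UUID datanode) {
        Set<String> v = dn2Pipelines.get(datanode);
        return v == null ? null : Collections.unmodifiableSet(v);
      }

      // Same idiom as removeDatanode: a null result removes the entry.
      public void removeDatanode(UUID datanode) {
        dn2Pipelines.computeIfPresent(datanode, (k, v) -> null);
      }

      public static void main(String[] args) {
        Node2PipelineSketch map = new Node2PipelineSketch();
        UUID dn = UUID.randomUUID();
        map.addPipeline("RatisPipelineAbc", dn);
        System.out.println(map.getPipelines(dn)); // [RatisPipelineAbc]
        map.removeDatanode(dn);
        System.out.println(map.getPipelines(dn)); // null
      }
    }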
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineManager.java
index a1fbce6..a041973 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineManager.java
@@ -40,11 +40,13 @@ public abstract class PipelineManager {
private final List<Pipeline> activePipelines;
private final Map<String, Pipeline> activePipelineMap;
private final AtomicInteger pipelineIndex;
+ private final Node2PipelineMap node2PipelineMap;
- public PipelineManager() {
+ public PipelineManager(Node2PipelineMap map) {
activePipelines = new LinkedList<>();
pipelineIndex = new AtomicInteger(0);
activePipelineMap = new WeakHashMap<>();
+ node2PipelineMap = map;
}
/**
@@ -66,24 +68,23 @@ public abstract class PipelineManager {
*
* 2. This allows all nodes to part of a pipeline quickly.
*
- * 3. if there are not enough free nodes, return conduits in a
+ * 3. If there are not enough free nodes, return pipelines in a
* round-robin fashion.
*
* TODO: Might have to come up with a better algorithm than this.
- * Create a new placement policy that returns conduits in round robin
+ * Create a new placement policy that returns pipelines in round robin
* fashion.
*/
- Pipeline pipeline =
- allocatePipeline(replicationFactor);
+ Pipeline pipeline = allocatePipeline(replicationFactor);
if (pipeline != null) {
LOG.debug("created new pipeline:{} for container with " +
"replicationType:{} replicationFactor:{}",
pipeline.getPipelineName(), replicationType, replicationFactor);
activePipelines.add(pipeline);
activePipelineMap.put(pipeline.getPipelineName(), pipeline);
+ node2PipelineMap.addPipeline(pipeline);
} else {
- pipeline =
- findOpenPipeline(replicationType, replicationFactor);
+ pipeline = findOpenPipeline(replicationType, replicationFactor);
if (pipeline != null) {
LOG.debug("re-used pipeline:{} for container with " +
"replicationType:{} replicationFactor:{}",
@@ -133,6 +134,11 @@ public abstract class PipelineManager {
public abstract Pipeline allocatePipeline(
ReplicationFactor replicationFactor) throws IOException;
+ public void removePipeline(Pipeline pipeline) {
+ activePipelines.remove(pipeline);
+ activePipelineMap.remove(pipeline.getPipelineName());
+ }
+
/**
* Find a Pipeline that is operational.
*
@@ -143,7 +149,7 @@ public abstract class PipelineManager {
Pipeline pipeline = null;
final int sentinal = -1;
if (activePipelines.size() == 0) {
- LOG.error("No Operational conduits found. Returning null.");
+ LOG.error("No Operational pipelines found. Returning null.");
return null;
}
int startIndex = getNextIndex();
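
The round-robin fallback described in the comment can be sketched with an AtomicInteger cursor (JDK-only; names hypothetical). Math.floorMod keeps the index valid even after the counter wraps:

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    public class RoundRobinSketch {
      private final AtomicInteger pipelineIndex = new AtomicInteger(0);

      // Pick the next active pipeline in round-robin order; null when none
      // exist, mirroring the "No operational pipelines found" branch above.
      public <T> T next(List<T> activePipelines) {
        if (activePipelines.isEmpty()) {
          return null;
        }
        int idx = Math.floorMod(pipelineIndex.getAndIncrement(),
            activePipelines.size());
        return activePipelines.get(idx);
      }
    }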
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineSelector.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineSelector.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineSelector.java
index 3846a84..2955af5 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineSelector.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineSelector.java
@@ -19,7 +19,6 @@ package org.apache.hadoop.hdds.scm.pipelines;
import com.google.common.base.Preconditions;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.scm.ScmConfigKeys;
-import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
import org.apache.hadoop.hdds.scm.container.placement.algorithms
.ContainerPlacementPolicy;
@@ -41,6 +40,8 @@ import java.io.IOException;
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;
import java.util.List;
+import java.util.Set;
+import java.util.UUID;
import java.util.stream.Collectors;
/**
@@ -55,7 +56,7 @@ public class PipelineSelector {
private final RatisManagerImpl ratisManager;
private final StandaloneManagerImpl standaloneManager;
private final long containerSize;
-
+ private final Node2PipelineMap node2PipelineMap;
/**
* Constructs a pipeline Selector.
*
@@ -69,12 +70,13 @@ public class PipelineSelector {
this.containerSize = OzoneConsts.GB * this.conf.getInt(
ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_GB,
ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_DEFAULT);
+ node2PipelineMap = new Node2PipelineMap();
this.standaloneManager =
new StandaloneManagerImpl(this.nodeManager, placementPolicy,
- containerSize);
+ containerSize, node2PipelineMap);
this.ratisManager =
new RatisManagerImpl(this.nodeManager, placementPolicy, containerSize,
- conf);
+ conf, node2PipelineMap);
}
/**
@@ -243,4 +245,18 @@ public class PipelineSelector {
.collect(Collectors.joining(",")));
manager.updatePipeline(pipelineID, newDatanodes);
}
+
+ public Node2PipelineMap getNode2PipelineMap() {
+ return node2PipelineMap;
+ }
+
+ public void removePipeline(UUID dnId) {
+ Set<Pipeline> pipelineChannelSet =
+ node2PipelineMap.getPipelines(dnId);
+ for (Pipeline pipelineChannel : pipelineChannelSet) {
+ getPipelineManager(pipelineChannel.getType())
+ .removePipeline(pipelineChannel);
+ }
+ node2PipelineMap.removeDatanode(dnId);
+ }
}
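
The new removePipeline(UUID) teardown reads as: look up every pipeline the dead datanode served, detach each from its manager, then forget the node. A schematic sketch with stand-in interfaces (hypothetical, JDK-only):

    import java.util.Set;
    import java.util.UUID;

    public class DeadNodeCleanupSketch {
      // Stand-ins for Node2PipelineMap and PipelineManager.
      interface NodeMap {
        Set<String> getPipelines(UUID dn);
        void removeDatanode(UUID dn);
      }

      interface Manager {
        void removePipeline(String pipelineName);
      }

      static void onDeadDatanode(UUID dnId, NodeMap map, Manager manager) {
        Set<String> pipelines = map.getPipelines(dnId);
        if (pipelines != null) {       // guard for nodes with no pipelines
          for (String p : pipelines) {
            manager.removePipeline(p); // drop from active lists, as above
          }
        }
        map.removeDatanode(dnId);      // finally forget the node itself
      }
    }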
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/ratis/RatisManagerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/ratis/RatisManagerImpl.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/ratis/RatisManagerImpl.java
index 189060e..a8f8b20 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/ratis/RatisManagerImpl.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/ratis/RatisManagerImpl.java
@@ -19,11 +19,11 @@ package org.apache.hadoop.hdds.scm.pipelines.ratis;
import com.google.common.base.Preconditions;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.scm.XceiverClientRatis;
-import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
import org.apache.hadoop.hdds.scm.container.placement.algorithms
.ContainerPlacementPolicy;
import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.pipelines.Node2PipelineMap;
import org.apache.hadoop.hdds.scm.pipelines.PipelineManager;
import org.apache.hadoop.hdds.scm.pipelines.PipelineSelector;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
@@ -60,8 +60,9 @@ public class RatisManagerImpl extends PipelineManager {
* @param nodeManager
*/
public RatisManagerImpl(NodeManager nodeManager,
- ContainerPlacementPolicy placementPolicy, long size, Configuration conf) {
- super();
+ ContainerPlacementPolicy placementPolicy, long size, Configuration conf,
+ Node2PipelineMap map) {
+ super(map);
this.conf = conf;
this.nodeManager = nodeManager;
ratisMembers = new HashSet<>();
@@ -89,11 +90,11 @@ public class RatisManagerImpl extends PipelineManager {
ratisMembers.addAll(newNodesList);
LOG.info("Allocating a new ratis pipeline of size: {}", count);
// Start all channel names with "Ratis", easy to grep the logs.
- String conduitName = PREFIX +
+ String pipelineName = PREFIX +
UUID.randomUUID().toString().substring(PREFIX.length());
Pipeline pipeline=
PipelineSelector.newPipelineFromNodes(newNodesList,
- LifeCycleState.OPEN, ReplicationType.RATIS, factor, conduitName);
+ LifeCycleState.OPEN, ReplicationType.RATIS, factor, pipelineName);
try (XceiverClientRatis client =
XceiverClientRatis.newXceiverClientRatis(pipeline, conf)) {
client.createPipeline(pipeline.getPipelineName(), newNodesList);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/standalone/StandaloneManagerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/standalone/StandaloneManagerImpl.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/standalone/StandaloneManagerImpl.java
index 579a3a2..cf691bf 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/standalone/StandaloneManagerImpl.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/standalone/StandaloneManagerImpl.java
@@ -17,11 +17,11 @@
package org.apache.hadoop.hdds.scm.pipelines.standalone;
import com.google.common.base.Preconditions;
-import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
import org.apache.hadoop.hdds.scm.container.placement.algorithms
.ContainerPlacementPolicy;
import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.pipelines.Node2PipelineMap;
import org.apache.hadoop.hdds.scm.pipelines.PipelineManager;
import org.apache.hadoop.hdds.scm.pipelines.PipelineSelector;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
@@ -58,8 +58,9 @@ public class StandaloneManagerImpl extends PipelineManager {
* @param containerSize - Container Size.
*/
public StandaloneManagerImpl(NodeManager nodeManager,
- ContainerPlacementPolicy placementPolicy, long containerSize) {
- super();
+ ContainerPlacementPolicy placementPolicy, long containerSize,
+ Node2PipelineMap map) {
+ super(map);
this.nodeManager = nodeManager;
this.placementPolicy = placementPolicy;
this.containerSize = containerSize;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestNode2PipelineMap.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestNode2PipelineMap.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestNode2PipelineMap.java
new file mode 100644
index 0000000..bc3505f
--- /dev/null
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestNode2PipelineMap.java
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ *
+ */
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.container.ContainerMapping;
+import org.apache.hadoop.hdds.scm.container.common.helpers
+ .ContainerWithPipeline;
+import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
+import org.apache.hadoop.hdds.scm.container.states.ContainerStateMap;
+import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.NavigableSet;
+import java.util.Set;
+
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos
+ .ReplicationType.RATIS;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos
+ .ReplicationFactor.THREE;
+
+public class TestNode2PipelineMap {
+
+ private static MiniOzoneCluster cluster;
+ private static OzoneConfiguration conf;
+ private static StorageContainerManager scm;
+ private static ContainerWithPipeline ratisContainer;
+ private static ContainerStateMap stateMap;
+ private static ContainerMapping mapping;
+
+ /**
+ * Create a MiniOzoneCluster for testing.
+ *
+ * @throws IOException
+ */
+ @BeforeClass
+ public static void init() throws Exception {
+ conf = new OzoneConfiguration();
+ cluster = MiniOzoneCluster.newBuilder(conf).setNumDatanodes(5).build();
+ cluster.waitForClusterToBeReady();
+ scm = cluster.getStorageContainerManager();
+ mapping = (ContainerMapping) scm.getScmContainerManager();
+ stateMap = mapping.getStateManager().getContainerStateMap();
+ ratisContainer = mapping.allocateContainer(RATIS, THREE, "testOwner");
+ }
+
+ /**
+ * Shutdown the MiniOzoneCluster.
+ */
+ @AfterClass
+ public static void shutdown() {
+ if (cluster != null) {
+ cluster.shutdown();
+ }
+ }
+
+
+ @Test
+ public void testPipelineMap() throws IOException {
+
+ NavigableSet<ContainerID> set = stateMap.getOpenContainerIDsByPipeline(
+ ratisContainer.getPipeline().getPipelineName());
+
+ long cId = ratisContainer.getContainerInfo().getContainerID();
+ Assert.assertEquals(1, set.size());
+ Assert.assertEquals(cId, set.first().getId());
+
+ List<DatanodeDetails> dns = ratisContainer.getPipeline().getMachines();
+ Assert.assertEquals(3, dns.size());
+
+ // get pipeline details by dnid
+ Set<Pipeline> pipelines = mapping.getPipelineSelector()
+ .getNode2PipelineMap().getPipelines(dns.get(0).getUuid());
+ Assert.assertEquals(1, pipelines.size());
+ pipelines.forEach(p -> Assert.assertEquals(p.getPipelineName(),
+ ratisContainer.getPipeline().getPipelineName()));
+
+
+ // Now close the container and it should not show up while fetching
+ // containers by pipeline
+ mapping
+ .updateContainerState(cId, HddsProtos.LifeCycleEvent.CREATE);
+ mapping
+ .updateContainerState(cId, HddsProtos.LifeCycleEvent.CREATED);
+ mapping
+ .updateContainerState(cId, HddsProtos.LifeCycleEvent.FINALIZE);
+ mapping
+ .updateContainerState(cId, HddsProtos.LifeCycleEvent.CLOSE);
+ NavigableSet<ContainerID> set2 = stateMap.getOpenContainerIDsByPipeline(
+ ratisContainer.getPipeline().getPipelineName());
+ Assert.assertEquals(0, set2.size());
+ }
+}
[33/50] [abbrv] hadoop git commit: HADOOP-15316. GenericTestUtils can
exceed maxSleepTime. Contributed by Adam Antal.
Posted by bo...@apache.org.
HADOOP-15316. GenericTestUtils can exceed maxSleepTime. Contributed by Adam Antal.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4f3f9391
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4f3f9391
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4f3f9391
Branch: refs/heads/YARN-7402
Commit: 4f3f9391b035d7f7e285c332770c6c1ede9a5a85
Parents: b37074b
Author: Sean Mackrory <ma...@apache.org>
Authored: Thu Jul 12 16:45:07 2018 +0200
Committer: Sean Mackrory <ma...@apache.org>
Committed: Thu Jul 12 17:24:01 2018 +0200
----------------------------------------------------------------------
.../src/test/java/org/apache/hadoop/test/GenericTestUtils.java | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4f3f9391/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
index 3e9da1b..0112894 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
@@ -661,7 +661,7 @@ public abstract class GenericTestUtils {
public Object answer(InvocationOnMock invocation) throws Throwable {
boolean interrupted = false;
try {
- Thread.sleep(r.nextInt(maxSleepTime) + minSleepTime);
+ Thread.sleep(r.nextInt(maxSleepTime - minSleepTime) + minSleepTime);
} catch (InterruptedException ie) {
interrupted = true;
}
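
The fix keeps the randomized delay inside [minSleepTime, maxSleepTime): nextInt(max - min) yields 0..(max - min - 1), so adding min can no longer exceed the configured maximum (this assumes maxSleepTime > minSleepTime, since nextInt requires a positive bound). A quick self-contained check:

    import java.util.Random;

    public class SleepBoundSketch {
      public static void main(String[] args) {
        Random r = new Random();
        int minSleepTime = 100, maxSleepTime = 500;
        for (int i = 0; i < 5; i++) {
          // Before the fix, r.nextInt(maxSleepTime) + minSleepTime could
          // reach 599 with these values.
          int sleep = r.nextInt(maxSleepTime - minSleepTime) + minSleepTime;
          System.out.println(sleep); // always within [100, 500)
        }
      }
    }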
[04/50] [abbrv] hadoop git commit: YARN-8506. Make
GetApplicationsRequestPBImpl thread safe. (wangda)
Posted by bo...@apache.org.
YARN-8506. Make GetApplicationsRequestPBImpl thread safe. (wangda)
Change-Id: If304567abb77a01b686d82c769bdf50728484163
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/83cd84b7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/83cd84b7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/83cd84b7
Branch: refs/heads/YARN-7402
Commit: 83cd84b70bac7b613eb4b2901d5ffe40098692eb
Parents: 0838fe8
Author: Wangda Tan <wa...@apache.org>
Authored: Mon Jul 9 11:30:08 2018 -0700
Committer: Wangda Tan <wa...@apache.org>
Committed: Mon Jul 9 11:30:08 2018 -0700
----------------------------------------------------------------------
.../impl/pb/GetApplicationsRequestPBImpl.java | 44 ++++++++++----------
1 file changed, 22 insertions(+), 22 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/83cd84b7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequestPBImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequestPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequestPBImpl.java
index a6abb99..4c5fee0 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequestPBImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequestPBImpl.java
@@ -65,7 +65,7 @@ public class GetApplicationsRequestPBImpl extends GetApplicationsRequest {
viaProto = true;
}
- public GetApplicationsRequestProto getProto() {
+ public synchronized GetApplicationsRequestProto getProto() {
mergeLocalToProto();
proto = viaProto ? proto : builder.build();
viaProto = true;
@@ -175,13 +175,13 @@ public class GetApplicationsRequestPBImpl extends GetApplicationsRequest {
}
@Override
- public Set<String> getApplicationTypes() {
+ public synchronized Set<String> getApplicationTypes() {
initApplicationTypes();
return this.applicationTypes;
}
@Override
- public void setApplicationTypes(Set<String> applicationTypes) {
+ public synchronized void setApplicationTypes(Set<String> applicationTypes) {
maybeInitBuilder();
if (applicationTypes == null)
builder.clearApplicationTypes();
@@ -198,13 +198,13 @@ public class GetApplicationsRequestPBImpl extends GetApplicationsRequest {
}
@Override
- public Set<String> getApplicationTags() {
+ public synchronized Set<String> getApplicationTags() {
initApplicationTags();
return this.applicationTags;
}
@Override
- public void setApplicationTags(Set<String> tags) {
+ public synchronized void setApplicationTags(Set<String> tags) {
maybeInitBuilder();
if (tags == null || tags.isEmpty()) {
builder.clearApplicationTags();
@@ -219,7 +219,7 @@ public class GetApplicationsRequestPBImpl extends GetApplicationsRequest {
}
@Override
- public EnumSet<YarnApplicationState> getApplicationStates() {
+ public synchronized EnumSet<YarnApplicationState> getApplicationStates() {
initApplicationStates();
return this.applicationStates;
}
@@ -233,12 +233,12 @@ public class GetApplicationsRequestPBImpl extends GetApplicationsRequest {
}
@Override
- public ApplicationsRequestScope getScope() {
+ public synchronized ApplicationsRequestScope getScope() {
initScope();
return this.scope;
}
- public void setScope(ApplicationsRequestScope scope) {
+ public synchronized void setScope(ApplicationsRequestScope scope) {
maybeInitBuilder();
if (scope == null) {
builder.clearScope();
@@ -247,7 +247,7 @@ public class GetApplicationsRequestPBImpl extends GetApplicationsRequest {
}
@Override
- public void setApplicationStates(EnumSet<YarnApplicationState> applicationStates) {
+ public synchronized void setApplicationStates(EnumSet<YarnApplicationState> applicationStates) {
maybeInitBuilder();
if (applicationStates == null) {
builder.clearApplicationStates();
@@ -256,7 +256,7 @@ public class GetApplicationsRequestPBImpl extends GetApplicationsRequest {
}
@Override
- public void setApplicationStates(Set<String> applicationStates) {
+ public synchronized void setApplicationStates(Set<String> applicationStates) {
EnumSet<YarnApplicationState> appStates = null;
for (YarnApplicationState state : YarnApplicationState.values()) {
if (applicationStates.contains(
@@ -272,12 +272,12 @@ public class GetApplicationsRequestPBImpl extends GetApplicationsRequest {
}
@Override
- public Set<String> getUsers() {
+ public synchronized Set<String> getUsers() {
initUsers();
return this.users;
}
- public void setUsers(Set<String> users) {
+ public synchronized void setUsers(Set<String> users) {
maybeInitBuilder();
if (users == null) {
builder.clearUsers();
@@ -286,13 +286,13 @@ public class GetApplicationsRequestPBImpl extends GetApplicationsRequest {
}
@Override
- public Set<String> getQueues() {
+ public synchronized Set<String> getQueues() {
initQueues();
return this.queues;
}
@Override
- public void setQueues(Set<String> queues) {
+ public synchronized void setQueues(Set<String> queues) {
maybeInitBuilder();
if (queues == null) {
builder.clearQueues();
@@ -301,7 +301,7 @@ public class GetApplicationsRequestPBImpl extends GetApplicationsRequest {
}
@Override
- public long getLimit() {
+ public synchronized long getLimit() {
if (this.limit == Long.MAX_VALUE) {
GetApplicationsRequestProtoOrBuilder p = viaProto ? proto : builder;
this.limit = p.hasLimit() ? p.getLimit() : Long.MAX_VALUE;
@@ -310,13 +310,13 @@ public class GetApplicationsRequestPBImpl extends GetApplicationsRequest {
}
@Override
- public void setLimit(long limit) {
+ public synchronized void setLimit(long limit) {
maybeInitBuilder();
this.limit = limit;
}
@Override
- public Range<Long> getStartRange() {
+ public synchronized Range<Long> getStartRange() {
if (this.start == null) {
GetApplicationsRequestProtoOrBuilder p = viaProto ? proto: builder;
if (p.hasStartBegin() || p.hasStartEnd()) {
@@ -329,12 +329,12 @@ public class GetApplicationsRequestPBImpl extends GetApplicationsRequest {
}
@Override
- public void setStartRange(Range<Long> range) {
+ public synchronized void setStartRange(Range<Long> range) {
this.start = range;
}
@Override
- public void setStartRange(long begin, long end)
+ public synchronized void setStartRange(long begin, long end)
throws IllegalArgumentException {
if (begin > end) {
throw new IllegalArgumentException("begin > end in range (begin, " +
@@ -344,7 +344,7 @@ public class GetApplicationsRequestPBImpl extends GetApplicationsRequest {
}
@Override
- public Range<Long> getFinishRange() {
+ public synchronized Range<Long> getFinishRange() {
if (this.finish == null) {
GetApplicationsRequestProtoOrBuilder p = viaProto ? proto: builder;
if (p.hasFinishBegin() || p.hasFinishEnd()) {
@@ -357,12 +357,12 @@ public class GetApplicationsRequestPBImpl extends GetApplicationsRequest {
}
@Override
- public void setFinishRange(Range<Long> range) {
+ public synchronized void setFinishRange(Range<Long> range) {
this.finish = range;
}
@Override
- public void setFinishRange(long begin, long end) {
+ public synchronized void setFinishRange(long begin, long end) {
if (begin > end) {
throw new IllegalArgumentException("begin > end in range (begin, " +
"end): (" + begin + ", " + end + ")");
[13/50] [abbrv] hadoop git commit: HDDS-240. Implement metrics for
EventQueue. Contributed by Elek, Marton.
Posted by bo...@apache.org.
HDDS-240. Implement metrics for EventQueue.
Contributed by Elek, Marton.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2403231c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2403231c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2403231c
Branch: refs/heads/YARN-7402
Commit: 2403231c8c3685ba08cd6bdf715d281cae611e45
Parents: 3c0a66a
Author: Anu Engineer <ae...@apache.org>
Authored: Mon Jul 9 13:04:44 2018 -0700
Committer: Anu Engineer <ae...@apache.org>
Committed: Mon Jul 9 13:04:44 2018 -0700
----------------------------------------------------------------------
.../hadoop/hdds/server/events/EventQueue.java | 108 +++++++++++--------
.../server/events/SingleThreadExecutor.java | 35 ++++--
.../hdds/server/events/TestEventQueue.java | 35 +-----
3 files changed, 91 insertions(+), 87 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2403231c/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java
index 44d85f5..7e29223 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java
@@ -18,7 +18,11 @@
package org.apache.hadoop.hdds.server.events;
import com.google.common.annotations.VisibleForTesting;
+
+import org.apache.hadoop.util.StringUtils;
import org.apache.hadoop.util.Time;
+
+import com.google.common.base.Preconditions;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -42,6 +46,8 @@ public class EventQueue implements EventPublisher, AutoCloseable {
private static final Logger LOG =
LoggerFactory.getLogger(EventQueue.class);
+ private static final String EXECUTOR_NAME_SEPARATOR = "For";
+
private final Map<Event, Map<EventExecutor, List<EventHandler>>> executors =
new HashMap<>();
@@ -51,38 +57,74 @@ public class EventQueue implements EventPublisher, AutoCloseable {
public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void addHandler(
EVENT_TYPE event, EventHandler<PAYLOAD> handler) {
-
- this.addHandler(event, new SingleThreadExecutor<>(
- event.getName()), handler);
+ this.addHandler(event, handler, generateHandlerName(handler));
}
+ /**
+ * Add new handler to the event queue.
+ * <p>
+ * By default a separate single-threaded executor will be dedicated to
+ * deliver the events to the registered event handler.
+ *
+ * @param event Triggering event.
+ * @param handler Handler of the event (will be called from a separate
+ * thread)
+ * @param handlerName The name of the handler (should be unique together
+ * with the event name)
+ * @param <PAYLOAD> The type of the event payload.
+ * @param <EVENT_TYPE> The type of the event identifier.
+ */
public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void addHandler(
- EVENT_TYPE event,
- EventExecutor<PAYLOAD> executor,
- EventHandler<PAYLOAD> handler) {
+ EVENT_TYPE event, EventHandler<PAYLOAD> handler, String handlerName) {
+ validateEvent(event);
+ Preconditions.checkNotNull(handler, "Handler should not be null.");
+ String executorName =
+ StringUtils.camelize(event.getName()) + EXECUTOR_NAME_SEPARATOR
+ + handlerName;
+ this.addHandler(event, new SingleThreadExecutor<>(executorName), handler);
+ }
- executors.putIfAbsent(event, new HashMap<>());
- executors.get(event).putIfAbsent(executor, new ArrayList<>());
+ private <EVENT_TYPE extends Event<?>> void validateEvent(EVENT_TYPE event) {
+ Preconditions
+ .checkArgument(!event.getName().contains(EXECUTOR_NAME_SEPARATOR),
+ "Event name should not contain " + EXECUTOR_NAME_SEPARATOR
+ + " string.");
- executors.get(event)
- .get(executor)
- .add(handler);
+ }
+
+ private <PAYLOAD> String generateHandlerName(EventHandler<PAYLOAD> handler) {
+ if (!"".equals(handler.getClass().getSimpleName())) {
+ return handler.getClass().getSimpleName();
+ } else {
+ return handler.getClass().getName();
+ }
}
/**
- * Creates one executor with multiple event handlers.
+ * Add event handler with custom executor.
+ *
+ * @param event Triggering event.
+ * @param executor The executor implementation that delivers events on a
+ * separate thread. Please keep in mind that
+ * registering metrics is the responsibility of the
+ * caller.
+ * @param handler Handler of the event (will be called from a separate
+ * thread)
+ * @param <PAYLOAD> The type of the event payload.
+ * @param <EVENT_TYPE> The type of the event identifier.
*/
- public void addHandlerGroup(String name, HandlerForEvent<?>...
- eventsAndHandlers) {
- SingleThreadExecutor sharedExecutor =
- new SingleThreadExecutor(name);
- for (HandlerForEvent handlerForEvent : eventsAndHandlers) {
- addHandler(handlerForEvent.event, sharedExecutor,
- handlerForEvent.handler);
- }
+ public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void addHandler(
+ EVENT_TYPE event, EventExecutor<PAYLOAD> executor,
+ EventHandler<PAYLOAD> handler) {
+ validateEvent(event);
+ executors.putIfAbsent(event, new HashMap<>());
+ executors.get(event).putIfAbsent(executor, new ArrayList<>());
+ executors.get(event).get(executor).add(handler);
}
+
+
/**
* Route an event with payload to the right listener(s).
*
@@ -183,31 +225,5 @@ public class EventQueue implements EventPublisher, AutoCloseable {
});
}
- /**
- * Event identifier together with the handler.
- *
- * @param <PAYLOAD>
- */
- public static class HandlerForEvent<PAYLOAD> {
-
- private final Event<PAYLOAD> event;
-
- private final EventHandler<PAYLOAD> handler;
-
- public HandlerForEvent(
- Event<PAYLOAD> event,
- EventHandler<PAYLOAD> handler) {
- this.event = event;
- this.handler = handler;
- }
-
- public Event<PAYLOAD> getEvent() {
- return event;
- }
-
- public EventHandler<PAYLOAD> getHandler() {
- return handler;
- }
- }
}
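
Executor names are derived as camelize(eventName) + "For" + handlerName, and validateEvent rejects event names containing the separator so the derived names stay unambiguous. A JDK-only sketch of the naming rule (the camelize helper below is a local stand-in for Hadoop's StringUtils.camelize, and the example names are hypothetical):

    public class ExecutorNameSketch {
      private static final String SEPARATOR = "For";

      static String executorName(String eventName, String handlerName) {
        if (eventName.contains(SEPARATOR)) {
          throw new IllegalArgumentException(
              "Event name should not contain " + SEPARATOR + " string.");
        }
        return camelize(eventName) + SEPARATOR + handlerName;
      }

      // Local stand-in: capitalize each underscore/space-separated word.
      static String camelize(String s) {
        StringBuilder sb = new StringBuilder();
        for (String word : s.split("[\\s_]+")) {
          if (!word.isEmpty()) {
            sb.append(Character.toUpperCase(word.charAt(0)))
                .append(word.substring(1).toLowerCase());
          }
        }
        return sb.toString();
      }

      public static void main(String[] args) {
        System.out.println(executorName("node_report", "NodeReportHandler"));
        // -> NodeReportForNodeReportHandler
      }
    }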
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2403231c/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/SingleThreadExecutor.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/SingleThreadExecutor.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/SingleThreadExecutor.java
index a64e3d7..3253f2d 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/SingleThreadExecutor.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/SingleThreadExecutor.java
@@ -23,13 +23,18 @@ import org.slf4j.LoggerFactory;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
-import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
/**
* Simple EventExecutor to call all the event handler one-by-one.
*
* @param <T>
*/
+@Metrics(context = "EventQueue")
public class SingleThreadExecutor<T> implements EventExecutor<T> {
public static final String THREAD_NAME_PREFIX = "EventQueue";
@@ -41,14 +46,24 @@ public class SingleThreadExecutor<T> implements EventExecutor<T> {
private final ThreadPoolExecutor executor;
- private final AtomicLong queuedCount = new AtomicLong(0);
+ @Metric
+ private MutableCounterLong queued;
- private final AtomicLong successfulCount = new AtomicLong(0);
+ @Metric
+ private MutableCounterLong done;
- private final AtomicLong failedCount = new AtomicLong(0);
+ @Metric
+ private MutableCounterLong failed;
+ /**
+ * Create SingleThreadExecutor.
+ *
+ * @param name Unique name used in monitoring and metrics.
+ */
public SingleThreadExecutor(String name) {
this.name = name;
+ DefaultMetricsSystem.instance()
+ .register("EventQueue" + name, "Event Executor metrics ", this);
LinkedBlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<>();
executor =
@@ -64,31 +79,31 @@ public class SingleThreadExecutor<T> implements EventExecutor<T> {
@Override
public void onMessage(EventHandler<T> handler, T message, EventPublisher
publisher) {
- queuedCount.incrementAndGet();
+ queued.incr();
executor.execute(() -> {
try {
handler.onMessage(message, publisher);
- successfulCount.incrementAndGet();
+ done.incr();
} catch (Exception ex) {
LOG.error("Error on execution message {}", message, ex);
- failedCount.incrementAndGet();
+ failed.incr();
}
});
}
@Override
public long failedEvents() {
- return failedCount.get();
+ return failed.value();
}
@Override
public long successfulEvents() {
- return successfulCount.get();
+ return done.value();
}
@Override
public long queuedEvents() {
- return queuedCount.get();
+ return queued.value();
}
@Override
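
The counters are now published through the metrics2 system rather than plain AtomicLongs, so each executor's queued/done/failed counts surface automatically once registered. A minimal metrics source in the same style as the registration above (a sketch, not part of the patch; assumes hadoop-common on the classpath):

    import org.apache.hadoop.metrics2.annotation.Metric;
    import org.apache.hadoop.metrics2.annotation.Metrics;
    import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
    import org.apache.hadoop.metrics2.lib.MutableCounterLong;

    @Metrics(context = "EventQueue")
    public class CounterSketch {
      @Metric private MutableCounterLong queued;
      @Metric private MutableCounterLong done;
      @Metric private MutableCounterLong failed;

      public CounterSketch(String name) {
        // Same registration idiom as SingleThreadExecutor's constructor.
        DefaultMetricsSystem.instance()
            .register("EventQueue" + name, "Event Executor metrics", this);
      }

      void onQueued() { queued.incr(); }
      void onDone()   { done.incr(); }
      void onFailed() { failed.incr(); }

      long failedEvents() { return failed.value(); }
    }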
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2403231c/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueue.java b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueue.java
index 3944409..2bdf705 100644
--- a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueue.java
+++ b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueue.java
@@ -25,6 +25,8 @@ import org.junit.Test;
import java.util.Set;
import java.util.stream.Collectors;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+
/**
* Testing the basic functionality of the event queue.
*/
@@ -44,11 +46,13 @@ public class TestEventQueue {
@Before
public void startEventQueue() {
+ DefaultMetricsSystem.initialize(getClass().getSimpleName());
queue = new EventQueue();
}
@After
public void stopEventQueue() {
+ DefaultMetricsSystem.shutdown();
queue.close();
}
@@ -79,35 +83,4 @@ public class TestEventQueue {
}
- @Test
- public void handlerGroup() {
- final long[] result = new long[2];
- queue.addHandlerGroup(
- "group",
- new EventQueue.HandlerForEvent<>(EVENT3, (payload, publisher) ->
- result[0] = payload),
- new EventQueue.HandlerForEvent<>(EVENT4, (payload, publisher) ->
- result[1] = payload)
- );
-
- queue.fireEvent(EVENT3, 23L);
- queue.fireEvent(EVENT4, 42L);
-
- queue.processAll(1000);
-
- Assert.assertEquals(23, result[0]);
- Assert.assertEquals(42, result[1]);
-
- Set<String> eventQueueThreadNames =
- Thread.getAllStackTraces().keySet()
- .stream()
- .filter(t -> t.getName().startsWith(SingleThreadExecutor
- .THREAD_NAME_PREFIX))
- .map(Thread::getName)
- .collect(Collectors.toSet());
- System.out.println(eventQueueThreadNames);
- Assert.assertEquals(1, eventQueueThreadNames.size());
-
- }
-
}
\ No newline at end of file
[18/50] [abbrv] hadoop git commit: HADOOP-15541. [s3a] Shouldn't try
to drain stream before aborting connection in case of timeout.
Posted by bo...@apache.org.
HADOOP-15541. [s3a] Shouldn't try to drain stream before aborting
connection in case of timeout.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d503f65b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d503f65b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d503f65b
Branch: refs/heads/YARN-7402
Commit: d503f65b6689b19278ec2a0cf9da5a8762539de8
Parents: 705e2c1
Author: Sean Mackrory <ma...@apache.org>
Authored: Thu Jul 5 13:52:00 2018 -0600
Committer: Sean Mackrory <ma...@apache.org>
Committed: Tue Jul 10 17:52:57 2018 +0200
----------------------------------------------------------------------
.../apache/hadoop/fs/s3a/S3AInputStream.java | 24 +++++++++++++-------
1 file changed, 16 insertions(+), 8 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d503f65b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
index 440739d..68f98e4 100644
--- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
+++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
@@ -36,6 +36,7 @@ import org.slf4j.LoggerFactory;
import java.io.EOFException;
import java.io.IOException;
+import java.net.SocketTimeoutException;
import static org.apache.commons.lang3.StringUtils.isNotEmpty;
@@ -155,11 +156,11 @@ public class S3AInputStream extends FSInputStream implements CanSetReadahead {
* @throws IOException on any failure to open the object
*/
@Retries.OnceTranslated
- private synchronized void reopen(String reason, long targetPos, long length)
- throws IOException {
+ private synchronized void reopen(String reason, long targetPos, long length,
+ boolean forceAbort) throws IOException {
if (wrappedStream != null) {
- closeStream("reopen(" + reason + ")", contentRangeFinish, false);
+ closeStream("reopen(" + reason + ")", contentRangeFinish, forceAbort);
}
contentRangeFinish = calculateRequestLimit(inputPolicy, targetPos,
@@ -324,7 +325,7 @@ public class S3AInputStream extends FSInputStream implements CanSetReadahead {
//re-open at specific location if needed
if (wrappedStream == null) {
- reopen("read from new offset", targetPos, len);
+ reopen("read from new offset", targetPos, len, false);
}
});
}
@@ -367,8 +368,11 @@ public class S3AInputStream extends FSInputStream implements CanSetReadahead {
b = wrappedStream.read();
} catch (EOFException e) {
return -1;
+ } catch (SocketTimeoutException e) {
+ onReadFailure(e, 1, true);
+ b = wrappedStream.read();
} catch (IOException e) {
- onReadFailure(e, 1);
+ onReadFailure(e, 1, false);
b = wrappedStream.read();
}
return b;
@@ -393,12 +397,13 @@ public class S3AInputStream extends FSInputStream implements CanSetReadahead {
* @throws IOException any exception thrown on the re-open attempt.
*/
@Retries.OnceTranslated
- private void onReadFailure(IOException ioe, int length) throws IOException {
+ private void onReadFailure(IOException ioe, int length, boolean forceAbort)
+ throws IOException {
LOG.info("Got exception while trying to read from stream {}" +
" trying to recover: " + ioe, uri);
streamStatistics.readException();
- reopen("failure recovery", pos, length);
+ reopen("failure recovery", pos, length, forceAbort);
}
/**
@@ -446,8 +451,11 @@ public class S3AInputStream extends FSInputStream implements CanSetReadahead {
} catch (EOFException e) {
// the base implementation swallows EOFs.
return -1;
+ } catch (SocketTimeoutException e) {
+ onReadFailure(e, len, true);
+ bytes = wrappedStream.read(buf, off, len);
} catch (IOException e) {
- onReadFailure(e, len);
+ onReadFailure(e, len, false);
bytes= wrappedStream.read(buf, off, len);
}
return bytes;
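
The behavior change in brief: on SocketTimeoutException the stream is reopened with forceAbort=true, aborting the HTTP connection instead of draining the remaining bytes (a drain on a stuck connection would itself block); all other IOExceptions keep the drain-then-close path. A schematic sketch of that retry shape (plain Java; reopen here is a stand-in for the method above):

    import java.io.EOFException;
    import java.io.IOException;
    import java.io.InputStream;
    import java.net.SocketTimeoutException;

    abstract class TimeoutAwareReadSketch {
      protected InputStream wrappedStream;

      abstract void reopen(String reason, boolean forceAbort)
          throws IOException;

      int readOnce() throws IOException {
        try {
          return wrappedStream.read();
        } catch (EOFException e) {
          return -1; // end of object, as in the real read path
        } catch (SocketTimeoutException e) {
          // Connection is stuck: abort outright, do not try to drain first.
          reopen("failure recovery", true);
          return wrappedStream.read();
        } catch (IOException e) {
          // Other failures: normal close (drain) before reopening.
          reopen("failure recovery", false);
          return wrappedStream.read();
        }
      }
    }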
[23/50] [abbrv] hadoop git commit: YARN-8512. ATSv2 entities are not
published to HBase from second attempt onwards. Contributed by Rohith Sharma
K S.
Posted by bo...@apache.org.
YARN-8512. ATSv2 entities are not published to HBase from second attempt onwards. Contributed by Rohith Sharma K S.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7f1d3d0e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7f1d3d0e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7f1d3d0e
Branch: refs/heads/YARN-7402
Commit: 7f1d3d0e9dbe328fae0d43421665e0b6907b33fe
Parents: a47ec5d
Author: Sunil G <su...@apache.org>
Authored: Wed Jul 11 12:26:32 2018 +0530
Committer: Sunil G <su...@apache.org>
Committed: Wed Jul 11 12:26:32 2018 +0530
----------------------------------------------------------------------
.../containermanager/ContainerManagerImpl.java | 69 ++++++++----
.../application/ApplicationImpl.java | 7 +-
.../BaseContainerManagerTest.java | 25 +++++
.../TestContainerManagerRecovery.java | 106 +++++++++++++++++--
4 files changed, 180 insertions(+), 27 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7f1d3d0e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
index 3470910..ad63720 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
@@ -1102,24 +1102,8 @@ public class ContainerManagerImpl extends CompositeService implements
// Create the application
// populate the flow context from the launch context if the timeline
// service v.2 is enabled
- FlowContext flowContext = null;
- if (YarnConfiguration.timelineServiceV2Enabled(getConfig())) {
- String flowName = launchContext.getEnvironment()
- .get(TimelineUtils.FLOW_NAME_TAG_PREFIX);
- String flowVersion = launchContext.getEnvironment()
- .get(TimelineUtils.FLOW_VERSION_TAG_PREFIX);
- String flowRunIdStr = launchContext.getEnvironment()
- .get(TimelineUtils.FLOW_RUN_ID_TAG_PREFIX);
- long flowRunId = 0L;
- if (flowRunIdStr != null && !flowRunIdStr.isEmpty()) {
- flowRunId = Long.parseLong(flowRunIdStr);
- }
- flowContext = new FlowContext(flowName, flowVersion, flowRunId);
- if (LOG.isDebugEnabled()) {
- LOG.debug("Flow context: " + flowContext
- + " created for an application " + applicationID);
- }
- }
+ FlowContext flowContext =
+ getFlowContext(launchContext, applicationID);
Application application =
new ApplicationImpl(dispatcher, user, flowContext,
@@ -1138,6 +1122,31 @@ public class ContainerManagerImpl extends CompositeService implements
dispatcher.getEventHandler().handle(new ApplicationInitEvent(
applicationID, appAcls, logAggregationContext));
}
+ } else if (containerTokenIdentifier.getContainerType()
+ == ContainerType.APPLICATION_MASTER) {
+ FlowContext flowContext =
+ getFlowContext(launchContext, applicationID);
+ if (flowContext != null) {
+ ApplicationImpl application =
+ (ApplicationImpl) context.getApplications().get(applicationID);
+
+ // update flowContext reference in ApplicationImpl
+ application.setFlowContext(flowContext);
+
+ // Required to update state store for recovery.
+ context.getNMStateStore().storeApplication(applicationID,
+ buildAppProto(applicationID, user, credentials,
+ container.getLaunchContext().getApplicationACLs(),
+ containerTokenIdentifier.getLogAggregationContext(),
+ flowContext));
+
+ LOG.info(
+ "Updated application reference with flowContext " + flowContext
+ + " for app " + applicationID);
+ } else {
+ LOG.info("TimelineService V2.0 is not enabled. Skipping updating "
+ + "flowContext for application " + applicationID);
+ }
}
this.context.getNMStateStore().storeContainer(containerId,
@@ -1163,6 +1172,30 @@ public class ContainerManagerImpl extends CompositeService implements
}
}
+ private FlowContext getFlowContext(ContainerLaunchContext launchContext,
+ ApplicationId applicationID) {
+ FlowContext flowContext = null;
+ if (YarnConfiguration.timelineServiceV2Enabled(getConfig())) {
+ String flowName = launchContext.getEnvironment()
+ .get(TimelineUtils.FLOW_NAME_TAG_PREFIX);
+ String flowVersion = launchContext.getEnvironment()
+ .get(TimelineUtils.FLOW_VERSION_TAG_PREFIX);
+ String flowRunIdStr = launchContext.getEnvironment()
+ .get(TimelineUtils.FLOW_RUN_ID_TAG_PREFIX);
+ long flowRunId = 0L;
+ if (flowRunIdStr != null && !flowRunIdStr.isEmpty()) {
+ flowRunId = Long.parseLong(flowRunIdStr);
+ }
+ flowContext = new FlowContext(flowName, flowVersion, flowRunId);
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(
+ "Flow context: " + flowContext + " created for an application "
+ + applicationID);
+ }
+ }
+ return flowContext;
+ }
+
protected ContainerTokenIdentifier verifyAndGetContainerTokenIdentifier(
org.apache.hadoop.yarn.api.records.Token token,
ContainerTokenIdentifier containerTokenIdentifier) throws YarnException,
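
The extracted getFlowContext pulls the three flow tags out of the container launch environment, defaults the run id to 0 when the tag is absent, and returns null when timeline service v2 is off. A self-contained sketch of that parsing (the env key strings below are placeholders for the TimelineUtils.FLOW_*_TAG_PREFIX constants):

    import java.util.Map;

    public class FlowContextSketch {
      // Placeholder keys; the real ones are TimelineUtils.FLOW_*_TAG_PREFIX.
      static final String FLOW_NAME = "FLOW_NAME";
      static final String FLOW_VERSION = "FLOW_VERSION";
      static final String FLOW_RUN_ID = "FLOW_RUN_ID";

      static final class FlowContext {
        final String name;
        final String version;
        final long runId;
        FlowContext(String n, String v, long r) {
          name = n; version = v; runId = r;
        }
      }

      // Mirrors getFlowContext: null unless timeline service v2 is enabled.
      static FlowContext fromEnv(Map<String, String> env, boolean tsV2Enabled) {
        if (!tsV2Enabled) {
          return null;
        }
        String runIdStr = env.get(FLOW_RUN_ID);
        long runId = (runIdStr != null && !runIdStr.isEmpty())
            ? Long.parseLong(runIdStr) : 0L; // default when tag is absent
        return new FlowContext(env.get(FLOW_NAME), env.get(FLOW_VERSION),
            runId);
      }
    }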
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7f1d3d0e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
index 6d84fb2..ad995fb 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
@@ -25,6 +25,8 @@ import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock.ReadLock;
import java.util.concurrent.locks.ReentrantReadWriteLock.WriteLock;
+
+import com.google.common.annotations.VisibleForTesting;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -66,7 +68,6 @@ import org.apache.hadoop.yarn.state.MultipleArcTransition;
import org.apache.hadoop.yarn.state.SingleArcTransition;
import org.apache.hadoop.yarn.state.StateMachine;
import org.apache.hadoop.yarn.state.StateMachineFactory;
-import com.google.common.annotations.VisibleForTesting;
/**
* The state machine for the representation of an Application
@@ -688,4 +689,8 @@ public class ApplicationImpl implements Application {
public long getFlowRunId() {
return flowContext == null ? 0L : flowContext.getFlowRunId();
}
+
+ public void setFlowContext(FlowContext fc) {
+ this.flowContext = fc;
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7f1d3d0e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java
index 93d0afb..b31601c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java
@@ -429,6 +429,16 @@ public abstract class BaseContainerManagerTest {
}
public static Token createContainerToken(ContainerId cId, long rmIdentifier,
+ NodeId nodeId, String user,
+ NMContainerTokenSecretManager containerTokenSecretManager,
+ LogAggregationContext logAggregationContext, ContainerType containerType)
+ throws IOException {
+ Resource r = BuilderUtils.newResource(1024, 1);
+ return createContainerToken(cId, rmIdentifier, nodeId, user, r,
+ containerTokenSecretManager, logAggregationContext, containerType);
+ }
+
+ public static Token createContainerToken(ContainerId cId, long rmIdentifier,
NodeId nodeId, String user, Resource resource,
NMContainerTokenSecretManager containerTokenSecretManager,
LogAggregationContext logAggregationContext)
@@ -442,6 +452,21 @@ public abstract class BaseContainerManagerTest {
containerTokenIdentifier);
}
+ public static Token createContainerToken(ContainerId cId, long rmIdentifier,
+ NodeId nodeId, String user, Resource resource,
+ NMContainerTokenSecretManager containerTokenSecretManager,
+ LogAggregationContext logAggregationContext, ContainerType containerType)
+ throws IOException {
+ ContainerTokenIdentifier containerTokenIdentifier =
+ new ContainerTokenIdentifier(cId, nodeId.toString(), user, resource,
+ System.currentTimeMillis() + 100000L, 123, rmIdentifier,
+ Priority.newInstance(0), 0, logAggregationContext, null,
+ containerType);
+ return BuilderUtils.newContainerToken(nodeId,
+ containerTokenSecretManager.retrievePassword(containerTokenIdentifier),
+ containerTokenIdentifier);
+ }
+
public static Token createContainerToken(ContainerId cId, int version,
long rmIdentifier, NodeId nodeId, String user, Resource resource,
NMContainerTokenSecretManager containerTokenSecretManager,
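The new overloads above thread a ContainerType through token creation so a
test can start a container that the NodeManager treats as an AM container.
A hedged usage sketch (the resource and identifiers are illustrative; the
secret manager comes from the test's NM context as elsewhere in this file):

    Token amToken = createContainerToken(
        ContainerId.newContainerId(attemptId, 1), 0, context.getNodeId(),
        user.getShortUserName(), BuilderUtils.newResource(1024, 1),
        context.getContainerTokenSecretManager(), null,
        ContainerType.APPLICATION_MASTER);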
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7f1d3d0e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManagerRecovery.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManagerRecovery.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManagerRecovery.java
index bf8b500..0a834af 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManagerRecovery.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManagerRecovery.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.yarn.server.nodemanager.containermanager;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertTrue;
import static org.mockito.Matchers.isA;
import static org.mockito.Mockito.mock;
@@ -74,6 +75,7 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;
import org.apache.hadoop.yarn.security.NMTokenIdentifier;
+import org.apache.hadoop.yarn.server.api.ContainerType;
import org.apache.hadoop.yarn.server.api.records.MasterKey;
import org.apache.hadoop.yarn.server.api.records.impl.pb.MasterKeyPBImpl;
import org.apache.hadoop.yarn.server.nodemanager.CMgrCompletedAppsEvent;
@@ -205,7 +207,7 @@ public class TestContainerManagerRecovery extends BaseContainerManagerTest {
"includePatternInRollingAggregation",
"excludePatternInRollingAggregation");
StartContainersResponse startResponse = startContainer(context, cm, cid,
- clc, logAggregationContext);
+ clc, logAggregationContext, ContainerType.TASK);
assertTrue(startResponse.getFailedRequests().isEmpty());
assertEquals(1, context.getApplications().size());
Application app = context.getApplications().get(appId);
@@ -342,7 +344,7 @@ public class TestContainerManagerRecovery extends BaseContainerManagerTest {
null, null);
StartContainersResponse startResponse = startContainer(context, cm, cid,
- clc, null);
+ clc, null, ContainerType.TASK);
assertTrue(startResponse.getFailedRequests().isEmpty());
assertEquals(1, context.getApplications().size());
Application app = context.getApplications().get(appId);
@@ -579,7 +581,7 @@ public class TestContainerManagerRecovery extends BaseContainerManagerTest {
cm.init(conf);
cm.start();
StartContainersResponse startResponse = startContainer(context, cm, cid,
- clc, logAggregationContext);
+ clc, logAggregationContext, ContainerType.TASK);
assertEquals(1, startResponse.getSuccessfullyStartedContainers().size());
cm.stop();
verify(cm).handle(isA(CMgrCompletedAppsEvent.class));
@@ -595,7 +597,7 @@ public class TestContainerManagerRecovery extends BaseContainerManagerTest {
cm.init(conf);
cm.start();
startResponse = startContainer(context, cm, cid,
- clc, logAggregationContext);
+ clc, logAggregationContext, ContainerType.TASK);
assertEquals(1, startResponse.getSuccessfullyStartedContainers().size());
cm.stop();
memStore.close();
@@ -612,7 +614,7 @@ public class TestContainerManagerRecovery extends BaseContainerManagerTest {
cm.init(conf);
cm.start();
startResponse = startContainer(context, cm, cid,
- clc, logAggregationContext);
+ clc, logAggregationContext, ContainerType.TASK);
assertEquals(1, startResponse.getSuccessfullyStartedContainers().size());
cm.stop();
memStore.close();
@@ -661,7 +663,7 @@ public class TestContainerManagerRecovery extends BaseContainerManagerTest {
localResources, containerEnv, commands, serviceData,
containerTokens, acls);
StartContainersResponse startResponse = startContainer(
- context, cm, cid, clc, null);
+ context, cm, cid, clc, null, ContainerType.TASK);
assertTrue(startResponse.getFailedRequests().isEmpty());
assertEquals(1, context.getApplications().size());
// make sure the container reaches RUNNING state
@@ -736,14 +738,15 @@ public class TestContainerManagerRecovery extends BaseContainerManagerTest {
private StartContainersResponse startContainer(Context context,
final ContainerManagerImpl cm, ContainerId cid,
- ContainerLaunchContext clc, LogAggregationContext logAggregationContext)
+ ContainerLaunchContext clc, LogAggregationContext logAggregationContext,
+ ContainerType containerType)
throws Exception {
UserGroupInformation user = UserGroupInformation.createRemoteUser(
cid.getApplicationAttemptId().toString());
StartContainerRequest scReq = StartContainerRequest.newInstance(
clc, TestContainerManager.createContainerToken(cid, 0,
context.getNodeId(), user.getShortUserName(),
- context.getContainerTokenSecretManager(), logAggregationContext));
+ context.getContainerTokenSecretManager(), logAggregationContext, containerType));
final List<StartContainerRequest> scReqList =
new ArrayList<StartContainerRequest>();
scReqList.add(scReq);
@@ -910,4 +913,91 @@ public class TestContainerManagerRecovery extends BaseContainerManagerTest {
}
}
+ @Test
+ public void testApplicationRecoveryAfterFlowContextUpdated()
+ throws Exception {
+ conf.setBoolean(YarnConfiguration.NM_RECOVERY_ENABLED, true);
+ conf.setBoolean(YarnConfiguration.NM_RECOVERY_SUPERVISED, true);
+ conf.setBoolean(YarnConfiguration.YARN_ACL_ENABLE, true);
+ conf.set(YarnConfiguration.YARN_ADMIN_ACL, "yarn_admin_user");
+ NMStateStoreService stateStore = new NMMemoryStateStoreService();
+ stateStore.init(conf);
+ stateStore.start();
+ Context context = createContext(conf, stateStore);
+ ContainerManagerImpl cm = createContainerManager(context);
+ cm.init(conf);
+ cm.start();
+
+ // add an application by starting a container
+ String appName = "app_name1";
+ ApplicationId appId = ApplicationId.newInstance(0, 1);
+ ApplicationAttemptId attemptId = ApplicationAttemptId.newInstance(appId, 1);
+
+ // create 1st attempt container with containerId 2
+ ContainerId cid = ContainerId.newContainerId(attemptId, 2);
+ Map<String, LocalResource> localResources = Collections.emptyMap();
+ Map<String, String> containerEnv = new HashMap<>();
+
+ List<String> containerCmds = Collections.emptyList();
+ Map<String, ByteBuffer> serviceData = Collections.emptyMap();
+ Credentials containerCreds = new Credentials();
+ DataOutputBuffer dob = new DataOutputBuffer();
+ containerCreds.writeTokenStorageToStream(dob);
+ ByteBuffer containerTokens =
+ ByteBuffer.wrap(dob.getData(), 0, dob.getLength());
+ Map<ApplicationAccessType, String> acls =
+ new HashMap<ApplicationAccessType, String>();
+ ContainerLaunchContext clc = ContainerLaunchContext
+ .newInstance(localResources, containerEnv, containerCmds, serviceData,
+ containerTokens, acls);
+ // create the logAggregationContext
+ LogAggregationContext logAggregationContext = LogAggregationContext
+ .newInstance("includePattern", "excludePattern",
+ "includePatternInRollingAggregation",
+ "excludePatternInRollingAggregation");
+
+ StartContainersResponse startResponse =
+ startContainer(context, cm, cid, clc, logAggregationContext,
+ ContainerType.TASK);
+ assertTrue(startResponse.getFailedRequests().isEmpty());
+ assertEquals(1, context.getApplications().size());
+ ApplicationImpl app =
+ (ApplicationImpl) context.getApplications().get(appId);
+ assertNotNull(app);
+ waitForAppState(app, ApplicationState.INITING);
+ assertNull(app.getFlowName());
+
+ // 2nd attempt
+ ApplicationAttemptId attemptId2 =
+ ApplicationAttemptId.newInstance(appId, 2);
+ // create 2nd attempt master container
+ ContainerId cid2 = ContainerId.newContainerId(attemptId2, 1);
+ setFlowContext(containerEnv, appName, appId);
+ // recreate the launch context so it carries the updated environment
+ clc = ContainerLaunchContext
+ .newInstance(localResources, containerEnv, containerCmds, serviceData,
+ containerTokens, acls);
+ // start container with container type AM.
+ startResponse =
+ startContainer(context, cm, cid2, clc, logAggregationContext,
+ ContainerType.APPLICATION_MASTER);
+ assertTrue(startResponse.getFailedRequests().isEmpty());
+ assertEquals(1, context.getApplications().size());
+ waitForAppState(app, ApplicationState.INITING);
+ assertEquals(appName, app.getFlowName());
+
+ // reset container manager and verify flow context information
+ cm.stop();
+ context = createContext(conf, stateStore);
+ cm = createContainerManager(context);
+ cm.init(conf);
+ cm.start();
+ assertEquals(1, context.getApplications().size());
+ app = (ApplicationImpl) context.getApplications().get(appId);
+ assertNotNull(app);
+ assertEquals(appName, app.getFlowName());
+ waitForAppState(app, ApplicationState.INITING);
+
+ cm.stop();
+ }
}
[14/50] [abbrv] hadoop git commit: HDDS-48. Fix branch after merging from trunk.
Posted by bo...@apache.org.
HDDS-48. Fix branch after merging from trunk.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3584baf2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3584baf2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3584baf2
Branch: refs/heads/YARN-7402
Commit: 3584baf2642816a453402a717a05d16754a6ac52
Parents: c275a9a
Author: Bharat Viswanadham <bh...@apache.org>
Authored: Mon Jul 9 12:30:59 2018 -0700
Committer: Arpit Agarwal <ar...@apache.org>
Committed: Mon Jul 9 13:22:30 2018 -0700
----------------------------------------------------------------------
.../commandhandler/TestBlockDeletion.java | 32 +++++++++++---------
.../org/apache/hadoop/ozone/scm/TestSCMCli.java | 4 +--
2 files changed, 19 insertions(+), 17 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3584baf2/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
index 62059ec..c60c6c4 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
@@ -34,9 +34,10 @@ import org.apache.hadoop.ozone.client.OzoneBucket;
import org.apache.hadoop.ozone.client.OzoneClientFactory;
import org.apache.hadoop.ozone.client.OzoneVolume;
import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
-import org.apache.hadoop.ozone.container.common.helpers.ContainerData;
-import org.apache.hadoop.ozone.container.common.helpers.KeyUtils;
-import org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl;
+import org.apache.hadoop.ozone.container.common.impl.ContainerData;
+import org.apache.hadoop.ozone.container.common.impl.ContainerSet;
+import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData;
+import org.apache.hadoop.ozone.container.keyvalue.helpers.KeyUtils;
import org.apache.hadoop.ozone.om.OzoneManager;
import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
@@ -46,6 +47,7 @@ import org.apache.hadoop.test.GenericTestUtils;
import org.apache.hadoop.utils.MetadataStore;
import org.junit.Assert;
import org.junit.BeforeClass;
+import org.junit.Ignore;
import org.junit.Test;
import java.io.File;
@@ -56,10 +58,11 @@ import java.util.function.Consumer;
import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_BLOCK_DELETING_SERVICE_INTERVAL;
+@Ignore("Need to be fixed according to ContainerIO")
public class TestBlockDeletion {
private static OzoneConfiguration conf = null;
private static ObjectStore store;
- private static ContainerManagerImpl dnContainerManager = null;
+ private static ContainerSet dnContainerManager = null;
private static StorageContainerManager scm = null;
private static OzoneManager om = null;
private static Set<Long> containerIdsWithDeletedBlocks;
@@ -85,9 +88,8 @@ public class TestBlockDeletion {
MiniOzoneCluster.newBuilder(conf).setNumDatanodes(1).build();
cluster.waitForClusterToBeReady();
store = OzoneClientFactory.getRpcClient(conf).getObjectStore();
- dnContainerManager =
- (ContainerManagerImpl) cluster.getHddsDatanodes().get(0)
- .getDatanodeStateMachine().getContainer().getContainerManager();
+ dnContainerManager = cluster.getHddsDatanodes().get(0)
+ .getDatanodeStateMachine().getContainer().getContainerSet();
om = cluster.getOzoneManager();
scm = cluster.getStorageContainerManager();
containerIdsWithDeletedBlocks = new HashSet<>();
@@ -148,8 +150,8 @@ public class TestBlockDeletion {
Assert.assertEquals(
scm.getContainerInfo(containerId).getDeleteTransactionId(), 0);
}
- Assert.assertEquals(dnContainerManager.readContainer(containerId)
- .getDeleteTransactionId(),
+ Assert.assertEquals(dnContainerManager.getContainer(containerId)
+ .getContainerData().getDeleteTransactionId(),
scm.getContainerInfo(containerId).getDeleteTransactionId());
}
}
@@ -159,9 +161,9 @@ public class TestBlockDeletion {
throws IOException {
return performOperationOnKeyContainers((blockID) -> {
try {
- MetadataStore db = KeyUtils.getDB(
- dnContainerManager.getContainerMap().get(blockID.getContainerID()),
- conf);
+ MetadataStore db = KeyUtils.getDB((KeyValueContainerData)
+ dnContainerManager.getContainer(blockID.getContainerID())
+ .getContainerData(), conf);
Assert.assertNotNull(db.get(Longs.toByteArray(blockID.getLocalID())));
} catch (IOException e) {
e.printStackTrace();
@@ -174,9 +176,9 @@ public class TestBlockDeletion {
throws IOException {
return performOperationOnKeyContainers((blockID) -> {
try {
- MetadataStore db = KeyUtils.getDB(
- dnContainerManager.getContainerMap().get(blockID.getContainerID()),
- conf);
+ MetadataStore db = KeyUtils.getDB((KeyValueContainerData)
+ dnContainerManager.getContainer(blockID.getContainerID())
+ .getContainerData(), conf);
Assert.assertNull(db.get(Longs.toByteArray(blockID.getLocalID())));
Assert.assertNull(db.get(DFSUtil.string2Bytes(
OzoneConsts.DELETING_KEY_PREFIX + blockID.getLocalID())));
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3584baf2/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java
index cc11feb..722c1a5 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java
@@ -338,8 +338,8 @@ public class TestSCMCli {
openStatus = data.isOpen() ? "OPEN" : "CLOSED";
expected = String
- .format(formatStr, container.getContainerID(), openStatus,
- data.getDbFile().getPath(), data.getContainerPath(), "",
+ .format(formatStr, container.getContainerInfo().getContainerID(),
+ openStatus, data.getDbFile().getPath(), data.getContainerPath(), "",
datanodeDetails.getHostName(), datanodeDetails.getHostName());
assertEquals(expected, out.toString());
}
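Taken together, the two test fixes track the same API move: per-container
state is now reached through the ContainerSet instead of
ContainerManagerImpl. A short sketch of the post-patch access pattern
(identifiers exactly as used in the hunks above; not a complete test):

    ContainerSet containerSet = cluster.getHddsDatanodes().get(0)
        .getDatanodeStateMachine().getContainer().getContainerSet();
    long deleteTxId = containerSet.getContainer(containerID)
        .getContainerData().getDeleteTransactionId();
    MetadataStore db = KeyUtils.getDB(
        (KeyValueContainerData) containerSet.getContainer(containerID)
            .getContainerData(), conf);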
[08/50] [abbrv] hadoop git commit: Merge trunk into HDDS-48
Posted by bo...@apache.org.
Merge trunk into HDDS-48
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c275a9a6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c275a9a6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c275a9a6
Branch: refs/heads/YARN-7402
Commit: c275a9a6a07b2bd889bdba4d05b420027f430b34
Parents: 44e19fc 83cd84b
Author: Bharat Viswanadham <bh...@apache.org>
Authored: Mon Jul 9 12:13:03 2018 -0700
Committer: Bharat Viswanadham <bh...@apache.org>
Committed: Mon Jul 9 12:13:03 2018 -0700
----------------------------------------------------------------------
.gitignore | 4 +
dev-support/bin/ozone-dist-layout-stitching | 2 +-
...ExcludePrivateAnnotationsStandardDoclet.java | 6 +-
.../hadoop-common/src/main/conf/hadoop-env.sh | 6 +-
.../org/apache/hadoop/conf/Configuration.java | 458 +++---
.../java/org/apache/hadoop/fs/FileContext.java | 9 +-
.../org/apache/hadoop/fs/LocalDirAllocator.java | 7 +-
.../hadoop-common/src/site/markdown/Metrics.md | 39 +-
.../org/apache/hadoop/fs/TestFileContext.java | 44 +-
.../apache/hadoop/fs/TestLocalDirAllocator.java | 59 +
.../src/main/compose/ozone/docker-compose.yaml | 6 +-
.../src/main/compose/ozone/docker-config | 2 +-
.../src/main/compose/ozoneperf/README.md | 4 +-
.../main/compose/ozoneperf/docker-compose.yaml | 6 +-
.../src/main/compose/ozoneperf/docker-config | 2 +-
.../scm/client/ContainerOperationClient.java | 117 +-
hadoop-hdds/common/pom.xml | 18 +
.../hadoop/hdds/protocol/DatanodeDetails.java | 13 +-
.../apache/hadoop/hdds/scm/ScmConfigKeys.java | 6 +-
.../hadoop/hdds/scm/client/ScmClient.java | 43 +-
.../container/common/helpers/ContainerInfo.java | 167 ++-
.../common/helpers/ContainerWithPipeline.java | 131 ++
.../StorageContainerLocationProtocol.java | 18 +-
...rLocationProtocolClientSideTranslatorPB.java | 34 +-
.../org/apache/hadoop/ozone/OzoneConsts.java | 22 +-
.../apache/hadoop/ozone/audit/AuditAction.java | 30 +
.../hadoop/ozone/audit/AuditEventStatus.java | 36 +
.../apache/hadoop/ozone/audit/AuditLogger.java | 128 ++
.../hadoop/ozone/audit/AuditLoggerType.java | 37 +
.../apache/hadoop/ozone/audit/AuditMarker.java | 38 +
.../apache/hadoop/ozone/audit/Auditable.java | 32 +
.../apache/hadoop/ozone/audit/package-info.java | 123 ++
.../org/apache/hadoop/ozone/common/Storage.java | 6 +-
...rLocationProtocolServerSideTranslatorPB.java | 33 +-
.../main/proto/ScmBlockLocationProtocol.proto | 10 +-
.../StorageContainerLocationProtocol.proto | 34 +-
hadoop-hdds/common/src/main/proto/hdds.proto | 28 +-
.../common/src/main/resources/ozone-default.xml | 131 +-
.../apache/hadoop/ozone/audit/DummyAction.java | 51 +
.../apache/hadoop/ozone/audit/DummyEntity.java | 57 +
.../ozone/audit/TestOzoneAuditLogger.java | 147 ++
.../apache/hadoop/ozone/audit/package-info.java | 23 +
.../common/src/test/resources/log4j2.properties | 76 +
.../apache/hadoop/hdds/scm/HddsServerUtil.java | 11 -
.../DeleteBlocksCommandHandler.java | 30 +-
.../protocol/StorageContainerNodeProtocol.java | 4 +-
.../src/main/resources/webapps/static/ozone.js | 4 +-
.../webapps/static/templates/config.html | 4 +-
.../hadoop/hdds/scm/block/BlockManagerImpl.java | 80 +-
.../block/DatanodeDeletedBlockTransactions.java | 11 +-
.../hadoop/hdds/scm/block/DeletedBlockLog.java | 2 +-
.../container/CloseContainerEventHandler.java | 35 +-
.../hdds/scm/container/ContainerMapping.java | 128 +-
.../scm/container/ContainerStateManager.java | 30 +-
.../hadoop/hdds/scm/container/Mapping.java | 26 +-
.../scm/container/closer/ContainerCloser.java | 15 +-
.../scm/container/states/ContainerStateMap.java | 13 +-
.../hadoop/hdds/scm/events/SCMEvents.java | 80 ++
.../hadoop/hdds/scm/events/package-info.java | 23 +
.../hadoop/hdds/scm/node/CommandQueue.java | 2 +-
.../hadoop/hdds/scm/node/DatanodeInfo.java | 109 ++
.../hdds/scm/node/HeartbeatQueueItem.java | 98 --
.../hadoop/hdds/scm/node/NodeManager.java | 16 +-
.../hadoop/hdds/scm/node/NodeStateManager.java | 575 ++++++++
.../hadoop/hdds/scm/node/SCMNodeManager.java | 511 +------
.../node/states/NodeAlreadyExistsException.java | 45 +
.../hdds/scm/node/states/NodeException.java | 44 +
.../scm/node/states/NodeNotFoundException.java | 49 +
.../hdds/scm/node/states/NodeStateMap.java | 281 ++++
.../hdds/scm/pipelines/PipelineManager.java | 27 +-
.../hdds/scm/pipelines/PipelineSelector.java | 16 +
.../scm/pipelines/ratis/RatisManagerImpl.java | 1 +
.../standalone/StandaloneManagerImpl.java | 1 +
.../hdds/scm/server/SCMBlockProtocolServer.java | 2 +-
.../scm/server/SCMClientProtocolServer.java | 74 +-
.../server/SCMDatanodeHeartbeatDispatcher.java | 13 +-
.../scm/server/SCMDatanodeProtocolServer.java | 2 +-
.../scm/server/StorageContainerManager.java | 7 +-
.../hdds/scm/block/TestDeletedBlockLog.java | 15 +-
.../hdds/scm/container/MockNodeManager.java | 58 +-
.../TestCloseContainerEventHandler.java | 54 +-
.../scm/container/TestContainerMapping.java | 27 +-
.../container/closer/TestContainerCloser.java | 18 +-
.../hdds/scm/node/TestContainerPlacement.java | 16 +-
.../hadoop/hdds/scm/node/TestNodeManager.java | 186 +--
.../TestSCMDatanodeHeartbeatDispatcher.java | 20 +-
.../testutils/ReplicationNodeManagerMock.java | 37 +-
.../hadoop/hdds/scm/cli/OzoneBaseCLI.java | 2 +-
.../cli/container/CloseContainerHandler.java | 10 +-
.../cli/container/DeleteContainerHandler.java | 9 +-
.../scm/cli/container/InfoContainerHandler.java | 11 +-
.../java/org/apache/hadoop/hdfs/DFSClient.java | 19 -
.../org/apache/hadoop/hdfs/DFSInputStream.java | 46 +-
.../org/apache/hadoop/hdfs/DFSUtilClient.java | 15 +-
.../hdfs/client/HdfsClientConfigKeys.java | 3 +
.../hdfs/client/impl/BlockReaderFactory.java | 21 +-
.../hdfs/client/impl/BlockReaderLocal.java | 93 +-
.../client/impl/BlockReaderLocalLegacy.java | 44 +-
.../hdfs/client/impl/BlockReaderRemote.java | 33 +-
.../datanode/ReplicaNotFoundException.java | 2 +-
.../ha/ConfiguredFailoverProxyProvider.java | 9 +-
.../InMemoryAliasMapFailoverProxyProvider.java | 38 +
.../hdfs/server/federation/router/Quota.java | 10 +-
.../router/RouterQuotaUpdateService.java | 43 +-
.../federation/router/RouterRpcServer.java | 1 -
.../router/TestDisableRouterQuota.java | 94 ++
.../federation/router/TestRouterQuota.java | 212 ++-
.../org/apache/hadoop/hdfs/DFSConfigKeys.java | 5 +-
.../java/org/apache/hadoop/hdfs/DFSUtil.java | 37 +-
.../org/apache/hadoop/hdfs/NameNodeProxies.java | 15 +-
...yAliasMapProtocolClientSideTranslatorPB.java | 95 +-
.../aliasmap/InMemoryAliasMapProtocol.java | 5 +
.../aliasmap/InMemoryLevelDBAliasMapServer.java | 19 +-
.../impl/InMemoryLevelDBAliasMapClient.java | 80 +-
.../impl/TextFileRegionAliasMap.java | 5 +-
.../hadoop/hdfs/server/datanode/DataNode.java | 21 +-
.../hdfs/server/datanode/DiskBalancer.java | 29 +-
.../erasurecode/StripedBlockReader.java | 2 +-
.../datanode/fsdataset/impl/FsDatasetImpl.java | 8 +
.../hdfs/server/namenode/NamenodeFsck.java | 1 -
.../src/main/resources/hdfs-default.xml | 35 +-
.../org/apache/hadoop/hdfs/MiniDFSCluster.java | 13 +-
.../apache/hadoop/hdfs/MiniDFSNNTopology.java | 2 +-
.../hdfs/client/impl/BlockReaderTestUtil.java | 2 -
.../hdfs/client/impl/TestBlockReaderLocal.java | 2 -
.../blockmanagement/TestBlockTokenWithDFS.java | 2 -
.../TestNameNodePrunesMissingStorages.java | 5 +-
.../impl/TestInMemoryLevelDBAliasMapClient.java | 7 +
.../datanode/TestDataNodeVolumeFailure.java | 2 -
.../server/diskbalancer/TestDiskBalancer.java | 80 +-
.../shortcircuit/TestShortCircuitCache.java | 89 ++
.../src/test/acceptance/basic/basic.robot | 6 +-
.../test/acceptance/basic/docker-compose.yaml | 8 +-
.../src/test/acceptance/basic/docker-config | 4 +-
.../src/test/acceptance/basic/ozone-shell.robot | 18 +-
.../src/test/acceptance/commonlib.robot | 4 +-
.../test/acceptance/ozonefs/docker-compose.yaml | 8 +-
.../src/test/acceptance/ozonefs/docker-config | 4 +-
.../src/test/acceptance/ozonefs/ozonefs.robot | 6 +-
.../apache/hadoop/ozone/client/BucketArgs.java | 4 +-
.../hadoop/ozone/client/OzoneClientFactory.java | 89 +-
.../apache/hadoop/ozone/client/OzoneKey.java | 2 +-
.../apache/hadoop/ozone/client/VolumeArgs.java | 4 +-
.../ozone/client/io/ChunkGroupInputStream.java | 33 +-
.../ozone/client/io/ChunkGroupOutputStream.java | 63 +-
.../client/rest/DefaultRestServerSelector.java | 2 +-
.../hadoop/ozone/client/rest/RestClient.java | 15 +-
.../ozone/client/rest/RestServerSelector.java | 2 +-
.../hadoop/ozone/client/rpc/RpcClient.java | 142 +-
.../ozone/client/TestHddsClientUtils.java | 24 +-
hadoop-ozone/common/pom.xml | 2 +-
hadoop-ozone/common/src/main/bin/ozone | 9 +-
hadoop-ozone/common/src/main/bin/start-ozone.sh | 16 +-
hadoop-ozone/common/src/main/bin/stop-ozone.sh | 16 +-
.../java/org/apache/hadoop/ozone/KsmUtils.java | 87 --
.../java/org/apache/hadoop/ozone/OmUtils.java | 94 ++
.../org/apache/hadoop/ozone/audit/OMAction.java | 51 +
.../apache/hadoop/ozone/audit/package-info.java | 22 +
.../apache/hadoop/ozone/freon/OzoneGetConf.java | 16 +-
.../apache/hadoop/ozone/ksm/KSMConfigKeys.java | 81 --
.../hadoop/ozone/ksm/helpers/KsmBucketArgs.java | 233 ---
.../hadoop/ozone/ksm/helpers/KsmBucketInfo.java | 235 ---
.../hadoop/ozone/ksm/helpers/KsmKeyArgs.java | 119 --
.../hadoop/ozone/ksm/helpers/KsmKeyInfo.java | 277 ----
.../ozone/ksm/helpers/KsmKeyLocationInfo.java | 129 --
.../ksm/helpers/KsmKeyLocationInfoGroup.java | 118 --
.../ozone/ksm/helpers/KsmOzoneAclMap.java | 110 --
.../hadoop/ozone/ksm/helpers/KsmVolumeArgs.java | 223 ---
.../ozone/ksm/helpers/OpenKeySession.java | 50 -
.../hadoop/ozone/ksm/helpers/ServiceInfo.java | 237 ---
.../hadoop/ozone/ksm/helpers/VolumeArgs.java | 140 --
.../hadoop/ozone/ksm/helpers/package-info.java | 18 -
.../apache/hadoop/ozone/ksm/package-info.java | 21 -
.../ksm/protocol/KeySpaceManagerProtocol.java | 252 ----
.../hadoop/ozone/ksm/protocol/package-info.java | 19 -
...ceManagerProtocolClientSideTranslatorPB.java | 769 ----------
.../protocolPB/KeySpaceManagerProtocolPB.java | 34 -
.../ozone/ksm/protocolPB/package-info.java | 19 -
.../apache/hadoop/ozone/om/OMConfigKeys.java | 81 ++
.../hadoop/ozone/om/helpers/OmBucketArgs.java | 233 +++
.../hadoop/ozone/om/helpers/OmBucketInfo.java | 235 +++
.../hadoop/ozone/om/helpers/OmKeyArgs.java | 119 ++
.../hadoop/ozone/om/helpers/OmKeyInfo.java | 277 ++++
.../ozone/om/helpers/OmKeyLocationInfo.java | 129 ++
.../om/helpers/OmKeyLocationInfoGroup.java | 118 ++
.../hadoop/ozone/om/helpers/OmOzoneAclMap.java | 110 ++
.../hadoop/ozone/om/helpers/OmVolumeArgs.java | 223 +++
.../hadoop/ozone/om/helpers/OpenKeySession.java | 50 +
.../hadoop/ozone/om/helpers/ServiceInfo.java | 237 +++
.../hadoop/ozone/om/helpers/VolumeArgs.java | 140 ++
.../hadoop/ozone/om/helpers/package-info.java | 18 +
.../apache/hadoop/ozone/om/package-info.java | 21 +
.../ozone/om/protocol/OzoneManagerProtocol.java | 252 ++++
.../hadoop/ozone/om/protocol/package-info.java | 19 +
...neManagerProtocolClientSideTranslatorPB.java | 769 ++++++++++
.../om/protocolPB/OzoneManagerProtocolPB.java | 34 +
.../ozone/om/protocolPB/package-info.java | 19 +
.../hadoop/ozone/protocolPB/KSMPBHelper.java | 113 --
.../hadoop/ozone/protocolPB/OMPBHelper.java | 113 ++
.../hadoop/ozone/protocolPB/OzonePBHelper.java | 30 +
.../main/proto/KeySpaceManagerProtocol.proto | 474 ------
.../src/main/proto/OzoneManagerProtocol.proto | 480 +++++++
hadoop-ozone/docs/content/GettingStarted.md | 18 +-
hadoop-ozone/docs/content/Metrics.md | 10 +-
hadoop-ozone/docs/content/_index.md | 12 +-
hadoop-ozone/docs/static/OzoneOverview.svg | 2 +-
.../container/TestContainerStateManager.java | 161 ++-
.../apache/hadoop/ozone/MiniOzoneCluster.java | 24 +-
.../hadoop/ozone/MiniOzoneClusterImpl.java | 66 +-
.../hadoop/ozone/TestContainerOperations.java | 11 +-
.../ozone/TestOzoneConfigurationFields.java | 4 +-
.../ozone/TestStorageContainerManager.java | 28 +-
.../TestStorageContainerManagerHelper.java | 22 +-
.../ozone/client/rest/TestOzoneRestClient.java | 6 +-
.../ozone/client/rpc/TestOzoneRpcClient.java | 22 +-
.../commandhandler/TestBlockDeletion.java | 212 +++
.../TestCloseContainerByPipeline.java | 97 +-
.../TestCloseContainerHandler.java | 14 +-
.../ozone/ksm/TestContainerReportWithKeys.java | 143 --
.../apache/hadoop/ozone/ksm/TestKSMMetrcis.java | 306 ----
.../apache/hadoop/ozone/ksm/TestKSMSQLCli.java | 284 ----
.../hadoop/ozone/ksm/TestKeySpaceManager.java | 1350 ------------------
.../ksm/TestKeySpaceManagerRestInterface.java | 135 --
.../ozone/ksm/TestKsmBlockVersioning.java | 253 ----
.../ksm/TestMultipleContainerReadWrite.java | 215 ---
.../ozone/om/TestContainerReportWithKeys.java | 143 ++
.../om/TestMultipleContainerReadWrite.java | 215 +++
.../hadoop/ozone/om/TestOmBlockVersioning.java | 253 ++++
.../apache/hadoop/ozone/om/TestOmMetrics.java | 313 ++++
.../apache/hadoop/ozone/om/TestOmSQLCli.java | 284 ++++
.../hadoop/ozone/om/TestOzoneManager.java | 1349 +++++++++++++++++
.../ozone/om/TestOzoneManagerRestInterface.java | 135 ++
.../hadoop/ozone/ozShell/TestOzoneShell.java | 14 +-
.../hadoop/ozone/scm/TestAllocateContainer.java | 6 +-
.../hadoop/ozone/scm/TestContainerSQLCli.java | 3 +-
.../ozone/scm/TestContainerSmallFile.java | 36 +-
.../org/apache/hadoop/ozone/scm/TestSCMCli.java | 127 +-
.../ozone/scm/TestXceiverClientManager.java | 62 +-
.../ozone/scm/TestXceiverClientMetrics.java | 14 +-
.../hadoop/ozone/scm/node/TestQueryNode.java | 19 +-
.../ozone/web/TestDistributedOzoneVolumes.java | 12 +-
.../hadoop/ozone/web/client/TestKeys.java | 58 +-
.../src/test/resources/webapps/ksm/.gitkeep | 15 -
.../resources/webapps/ozoneManager/.gitkeep | 15 +
.../server/datanode/ObjectStoreHandler.java | 33 +-
.../ozone/web/handlers/KeyProcessTemplate.java | 4 +-
.../web/handlers/VolumeProcessTemplate.java | 4 +-
.../web/storage/DistributedStorageHandler.java | 153 +-
.../apache/hadoop/ozone/ksm/BucketManager.java | 79 -
.../hadoop/ozone/ksm/BucketManagerImpl.java | 315 ----
.../org/apache/hadoop/ozone/ksm/KSMMXBean.java | 31 -
.../hadoop/ozone/ksm/KSMMetadataManager.java | 253 ----
.../ozone/ksm/KSMMetadataManagerImpl.java | 526 -------
.../org/apache/hadoop/ozone/ksm/KSMMetrics.java | 459 ------
.../org/apache/hadoop/ozone/ksm/KSMStorage.java | 90 --
.../hadoop/ozone/ksm/KeyDeletingService.java | 142 --
.../org/apache/hadoop/ozone/ksm/KeyManager.java | 175 ---
.../apache/hadoop/ozone/ksm/KeyManagerImpl.java | 566 --------
.../hadoop/ozone/ksm/KeySpaceManager.java | 914 ------------
.../ozone/ksm/KeySpaceManagerHttpServer.java | 78 -
.../hadoop/ozone/ksm/OpenKeyCleanupService.java | 117 --
.../ozone/ksm/ServiceListJSONServlet.java | 103 --
.../apache/hadoop/ozone/ksm/VolumeManager.java | 100 --
.../hadoop/ozone/ksm/VolumeManagerImpl.java | 391 -----
.../ozone/ksm/exceptions/KSMException.java | 118 --
.../ozone/ksm/exceptions/package-info.java | 19 -
.../apache/hadoop/ozone/ksm/package-info.java | 21 -
.../apache/hadoop/ozone/om/BucketManager.java | 79 +
.../hadoop/ozone/om/BucketManagerImpl.java | 315 ++++
.../hadoop/ozone/om/KeyDeletingService.java | 142 ++
.../org/apache/hadoop/ozone/om/KeyManager.java | 175 +++
.../apache/hadoop/ozone/om/KeyManagerImpl.java | 566 ++++++++
.../org/apache/hadoop/ozone/om/OMMXBean.java | 31 +
.../hadoop/ozone/om/OMMetadataManager.java | 253 ++++
.../org/apache/hadoop/ozone/om/OMMetrics.java | 459 ++++++
.../org/apache/hadoop/ozone/om/OMStorage.java | 90 ++
.../hadoop/ozone/om/OmMetadataManagerImpl.java | 526 +++++++
.../hadoop/ozone/om/OpenKeyCleanupService.java | 117 ++
.../apache/hadoop/ozone/om/OzoneManager.java | 911 ++++++++++++
.../hadoop/ozone/om/OzoneManagerHttpServer.java | 78 +
.../hadoop/ozone/om/ServiceListJSONServlet.java | 103 ++
.../apache/hadoop/ozone/om/VolumeManager.java | 100 ++
.../hadoop/ozone/om/VolumeManagerImpl.java | 390 +++++
.../hadoop/ozone/om/exceptions/OMException.java | 118 ++
.../ozone/om/exceptions/package-info.java | 19 +
.../apache/hadoop/ozone/om/package-info.java | 21 +
...ceManagerProtocolServerSideTranslatorPB.java | 559 --------
...neManagerProtocolServerSideTranslatorPB.java | 571 ++++++++
.../hadoop/ozone/protocolPB/package-info.java | 2 +-
.../src/main/webapps/ksm/index.html | 70 -
.../src/main/webapps/ksm/ksm-metrics.html | 44 -
.../ozone-manager/src/main/webapps/ksm/ksm.js | 110 --
.../ozone-manager/src/main/webapps/ksm/main.css | 23 -
.../src/main/webapps/ksm/main.html | 18 -
.../src/main/webapps/ozoneManager/index.html | 70 +
.../src/main/webapps/ozoneManager/main.css | 23 +
.../src/main/webapps/ozoneManager/main.html | 18 +
.../main/webapps/ozoneManager/om-metrics.html | 44 +
.../main/webapps/ozoneManager/ozoneManager.js | 110 ++
.../hadoop/ozone/ksm/TestBucketManagerImpl.java | 395 -----
.../hadoop/ozone/ksm/TestChunkStreams.java | 234 ---
.../ksm/TestKeySpaceManagerHttpServer.java | 141 --
.../apache/hadoop/ozone/ksm/package-info.java | 21 -
.../hadoop/ozone/om/TestBucketManagerImpl.java | 394 +++++
.../hadoop/ozone/om/TestChunkStreams.java | 234 +++
.../ozone/om/TestOzoneManagerHttpServer.java | 141 ++
.../apache/hadoop/ozone/om/package-info.java | 21 +
.../hadoop/fs/ozone/contract/OzoneContract.java | 4 +-
.../genesis/BenchMarkContainerStateMap.java | 16 +-
.../org/apache/hadoop/ozone/scm/cli/SQLCLI.java | 111 +-
.../hadoop/fs/s3a/s3guard/S3GuardTool.java | 10 +
.../s3guard/AbstractS3GuardToolTestBase.java | 18 +
.../namenode/ITestProvidedImplementation.java | 373 ++++-
.../dev-support/findbugs-exclude.xml | 17 +-
.../hadoop/yarn/api/records/Resource.java | 13 +
.../api/records/impl/LightWeightResource.java | 23 +-
.../hadoop/yarn/conf/YarnConfiguration.java | 7 +
.../impl/pb/GetApplicationsRequestPBImpl.java | 44 +-
.../logaggregation/AggregatedLogFormat.java | 6 +-
.../timeline/RollingLevelDBTimelineStore.java | 6 +
.../server/timeline/TimelineDataManager.java | 7 +-
.../timeline/webapp/TimelineWebServices.java | 4 +
.../webapp/TestTimelineWebServices.java | 2 +-
.../amrmproxy/BroadcastAMRMProxyPolicy.java | 11 -
.../amrmproxy/RejectAMRMProxyPolicy.java | 4 -
.../TestBroadcastAMRMProxyFederationPolicy.java | 11 +-
.../yarn/server/nodemanager/NodeManager.java | 66 +-
.../runtime/DockerLinuxContainerRuntime.java | 4 +-
.../runtime/ContainerExecutionException.java | 6 +
.../impl/container-executor.c | 30 +-
.../container-executor/impl/utils/docker-util.c | 2 +-
.../test/test-container-executor.c | 20 +
.../nodemanager/TestNodeManagerResync.java | 56 +
.../runtime/TestDockerContainerRuntime.java | 10 +-
.../conf/capacity-scheduler.xml | 10 +
.../scheduler/capacity/CapacityScheduler.java | 45 +-
.../CapacitySchedulerConfiguration.java | 10 +
.../scheduler/capacity/ParentQueue.java | 36 +-
.../allocator/AbstractContainerAllocator.java | 13 +-
.../scheduler/common/fica/FiCaSchedulerApp.java | 5 +
.../scheduler/fair/ConfigurableResource.java | 69 +-
.../fair/FairSchedulerConfiguration.java | 174 ++-
.../allocation/AllocationFileQueueParser.java | 2 +-
.../resourcemanager/webapp/dao/AppInfo.java | 2 +-
.../webapp/dao/SchedulerInfo.java | 8 +-
.../TestWorkPreservingRMRestart.java | 2 +
.../fair/TestFairSchedulerConfiguration.java | 160 ++-
.../webapp/TestRMWebServices.java | 31 +-
.../webapp/TestRMWebServicesApps.java | 14 +-
...estRMWebServicesAppsCustomResourceTypes.java | 242 ++++
.../webapp/TestRMWebServicesCapacitySched.java | 30 +-
.../TestRMWebServicesConfigurationMutation.java | 5 +
.../webapp/TestRMWebServicesFairScheduler.java | 95 +-
.../TestRMWebServicesSchedulerActivities.java | 2 +-
...ustomResourceTypesConfigurationProvider.java | 138 ++
.../FairSchedulerJsonVerifications.java | 139 ++
.../FairSchedulerXmlVerifications.java | 153 ++
...ervicesFairSchedulerCustomResourceTypes.java | 271 ++++
.../webapp/helper/AppInfoJsonVerifications.java | 123 ++
.../webapp/helper/AppInfoXmlVerifications.java | 132 ++
.../webapp/helper/BufferedClientResponse.java | 57 +
.../helper/JsonCustomResourceTypeTestcase.java | 77 +
.../ResourceRequestsJsonVerifications.java | 252 ++++
.../ResourceRequestsXmlVerifications.java | 215 +++
.../helper/XmlCustomResourceTypeTestCase.java | 112 ++
.../router/clientrm/RouterClientRMService.java | 53 +-
.../router/rmadmin/RouterRMAdminService.java | 51 +-
.../server/router/webapp/RouterWebServices.java | 48 +-
.../clientrm/TestRouterClientRMService.java | 60 +
.../rmadmin/TestRouterRMAdminService.java | 60 +
.../router/webapp/TestRouterWebServices.java | 65 +
.../pom.xml | 10 +
.../storage/TestTimelineReaderHBaseDown.java | 220 +++
.../storage/HBaseTimelineReaderImpl.java | 93 ++
.../reader/TimelineFromIdConverter.java | 93 ++
.../reader/TimelineReaderWebServices.java | 198 ++-
.../TestTimelineReaderWebServicesBasicAcl.java | 154 ++
.../src/site/markdown/FairScheduler.md | 6 +-
.../src/main/webapp/app/initializers/loader.js | 10 +-
379 files changed, 22363 insertions(+), 15606 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c275a9a6/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c275a9a6/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
----------------------------------------------------------------------
diff --cc hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
index 82d67b7,4fad5d8..0db5993
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
@@@ -98,11 -93,8 +98,11 @@@ public final class OzoneConsts
public static final String BLOCK_DB = "block.db";
public static final String OPEN_CONTAINERS_DB = "openContainers.db";
public static final String DELETED_BLOCK_DB = "deletedBlock.db";
- public static final String KSM_DB_NAME = "ksm.db";
+ public static final String OM_DB_NAME = "om.db";
+ public static final String STORAGE_DIR_CHUNKS = "chunks";
+ public static final String CONTAINER_FILE_CHECKSUM_EXTENSION = ".chksm";
+
/**
* Supports Bucket Versioning.
*/
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c275a9a6/hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/Storage.java
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c275a9a6/hadoop-hdds/common/src/main/resources/ozone-default.xml
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c275a9a6/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
----------------------------------------------------------------------
diff --cc hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
index 4fc1cd9,d215da9..c3d1596
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
@@@ -31,13 -29,11 +31,12 @@@ import org.apache.hadoop.hdds.protocol.
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction;
import org.apache.hadoop.ozone.OzoneConsts;
-import org.apache.hadoop.ozone.container.common.helpers.ContainerData;
import org.apache.hadoop.ozone.container.common.helpers
.DeletedContainerBlocksSummary;
-import org.apache.hadoop.ozone.container.common.helpers.KeyUtils;
-import org.apache.hadoop.ozone.container.common.interfaces.ContainerManager;
+import org.apache.hadoop.ozone.container.common.interfaces.Container;
- import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer;
+import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData;
+import org.apache.hadoop.ozone.container.keyvalue.helpers.KeyUtils;
+import org.apache.hadoop.ozone.container.common.impl.ContainerSet;
import org.apache.hadoop.ozone.container.common.statemachine
.EndpointStateMachine;
import org.apache.hadoop.ozone.container.common.statemachine
@@@ -167,21 -145,28 +166,28 @@@ public class DeleteBlocksCommandHandle
* Move a bunch of blocks from a container to deleting state.
* This is a meta update, the actual deletes happen in async mode.
*
+ * @param containerData - KeyValueContainerData
* @param delTX a block deletion transaction.
- * @param config configuration.
* @throws IOException if I/O error occurs.
*/
- private void deleteContainerBlocks(DeletedBlocksTransaction delTX,
- Configuration config) throws IOException {
+ private void deleteKeyValueContainerBlocks(
+ KeyValueContainerData containerData, DeletedBlocksTransaction delTX)
+ throws IOException {
long containerId = delTX.getContainerID();
- ContainerData containerInfo = containerManager.readContainer(containerId);
if (LOG.isDebugEnabled()) {
LOG.debug("Processing Container : {}, DB path : {}", containerId,
- containerInfo.getDBPath());
+ containerData.getMetadataPath());
}
- if (delTX.getTxID() < containerInfo.getDeleteTransactionId()) {
++ if (delTX.getTxID() < containerData.getDeleteTransactionId()) {
+ LOG.debug(String.format("Ignoring delete blocks for containerId: %d."
+ + " Outdated delete transactionId %d < %d", containerId,
- delTX.getTxID(), containerInfo.getDeleteTransactionId()));
++ delTX.getTxID(), containerData.getDeleteTransactionId()));
+ return;
+ }
+
int newDeletionBlocks = 0;
- MetadataStore containerDB = KeyUtils.getDB(containerInfo, config);
+ MetadataStore containerDB = KeyUtils.getDB(containerData, conf);
for (Long blk : delTX.getLocalIDList()) {
BatchOperation batch = new BatchOperation();
byte[] blkBytes = Longs.toByteArray(blk);
@@@ -208,13 -203,15 +224,15 @@@
LOG.debug("Block {} not found or already under deletion in"
+ " container {}, skip deleting it.", blk, containerId);
}
- containerDB.put(DFSUtil.string2Bytes(
- OzoneConsts.DELETE_TRANSACTION_KEY_PREFIX + containerId),
- Longs.toByteArray(delTX.getTxID()));
}
+ containerDB.put(DFSUtil.string2Bytes(
+ OzoneConsts.DELETE_TRANSACTION_KEY_PREFIX + delTX.getContainerID()),
+ Longs.toByteArray(delTX.getTxID()));
- containerManager
- .updateDeleteTransactionId(delTX.getContainerID(), delTX.getTxID());
++ containerData
++ .updateDeleteTransactionId(delTX.getTxID());
// update pending deletion blocks count in in-memory container status
- containerManager.incrPendingDeletionBlocks(newDeletionBlocks, containerId);
+ containerData.incrPendingDeletionBlocks(newDeletionBlocks);
}
@Override
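One behavioural addition in this merged hunk is worth calling out: the
handler now drops replayed delete transactions rather than re-applying
them. A minimal sketch of the guard (the condition is as in the hunk
above; the log wording is illustrative):

    if (delTX.getTxID() < containerData.getDeleteTransactionId()) {
      // A newer delete transaction was already recorded for this
      // container, so this one is stale; skip it.
      LOG.debug("Ignoring outdated delete transaction {} for container {}",
          delTX.getTxID(), delTX.getContainerID());
      return;
    }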
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c275a9a6/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c275a9a6/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupInputStream.java
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c275a9a6/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java
----------------------------------------------------------------------
diff --cc hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java
index ad1e706,a30c6f4..fff8611
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java
@@@ -27,13 -27,11 +27,13 @@@ import org.apache.hadoop.hdds.scm.conta
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdfs.server.datanode.ObjectStoreHandler;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.ozone.container.common.helpers.ContainerData;
-import org.apache.hadoop.ozone.container.common.helpers.KeyUtils;
+import org.apache.hadoop.ozone.container.common.impl.ContainerData;
+import org.apache.hadoop.ozone.container.keyvalue.helpers.KeyUtils;
+import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer;
+import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData;
import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
- import org.apache.hadoop.ozone.ksm.helpers.KsmKeyArgs;
- import org.apache.hadoop.ozone.ksm.helpers.KsmKeyInfo;
+ import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
+ import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
import org.apache.hadoop.ozone.web.handlers.BucketArgs;
import org.apache.hadoop.ozone.web.handlers.KeyArgs;
import org.apache.hadoop.ozone.web.handlers.UserArgs;
@@@ -160,14 -158,15 +160,16 @@@ public class TestStorageContainerManage
private MetadataStore getContainerMetadata(Long containerID)
throws IOException {
- ContainerInfo container = cluster.getStorageContainerManager()
- .getClientProtocolServer().getContainer(containerID);
- DatanodeDetails leadDN = container.getPipeline().getLeader();
+ ContainerWithPipeline containerWithPipeline = cluster
+ .getStorageContainerManager().getClientProtocolServer()
+ .getContainerWithPipeline(containerID);
+
+ DatanodeDetails leadDN = containerWithPipeline.getPipeline().getLeader();
OzoneContainer containerServer =
getContainerServerByDatanodeUuid(leadDN.getUuidString());
- ContainerData containerData = containerServer.getContainerManager()
- .readContainer(containerID);
+ KeyValueContainerData containerData = (KeyValueContainerData) containerServer
+ .getContainerSet()
+ .getContainer(containerID).getContainerData();
return KeyUtils.getDB(containerData, conf);
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c275a9a6/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java
----------------------------------------------------------------------
diff --cc hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java
index b832dd2,58b831b..30b18c2
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java
@@@ -32,10 -32,10 +32,10 @@@ import org.apache.hadoop.ozone.client.O
import org.apache.hadoop.ozone.client.OzoneClient;
import org.apache.hadoop.ozone.client.OzoneClientFactory;
import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
-import org.apache.hadoop.ozone.container.common.helpers.ContainerData;
+import org.apache.hadoop.ozone.container.common.impl.ContainerData;
import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
- import org.apache.hadoop.ozone.ksm.helpers.KsmKeyArgs;
- import org.apache.hadoop.ozone.ksm.helpers.KsmKeyLocationInfo;
+ import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
+ import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
import org.apache.hadoop.test.GenericTestUtils;
import org.junit.AfterClass;
@@@ -204,14 -257,8 +257,8 @@@ public class TestCloseContainerByPipeli
if (datanode.equals(datanodeService.getDatanodeDetails())) {
containerData =
datanodeService.getDatanodeStateMachine().getContainer()
- .getContainerManager().readContainer(containerID);
+ .getContainerSet().getContainer(containerID).getContainerData();
- if (!containerData.isOpen()) {
- // make sure the closeContainerHandler on the Datanode is invoked
- Assert.assertTrue(
- datanodeService.getDatanodeStateMachine().getCommandDispatcher()
- .getCloseContainerHandler().getInvocationCount() > 0);
- return true;
- }
+ return !containerData.isOpen();
}
} catch (StorageContainerException e) {
throw new AssertionError(e);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c275a9a6/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java
----------------------------------------------------------------------
diff --cc hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java
index 114bd04,58a5154..682bd63
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java
@@@ -27,9 -27,9 +27,9 @@@ import org.apache.hadoop.hdds.client.Re
import org.apache.hadoop.hdds.client.ReplicationType;
import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
import org.apache.hadoop.ozone.client.rest.OzoneException;
-import org.apache.hadoop.ozone.container.common.helpers.ContainerData;
+import org.apache.hadoop.ozone.container.common.impl.ContainerData;
- import org.apache.hadoop.ozone.ksm.helpers.KsmKeyArgs;
- import org.apache.hadoop.ozone.ksm.helpers.KsmKeyLocationInfo;
+ import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
+ import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_GB;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c275a9a6/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestContainerReportWithKeys.java
----------------------------------------------------------------------
diff --cc hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestContainerReportWithKeys.java
index 0000000,5481506..c25b00e
mode 000000,100644..100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestContainerReportWithKeys.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestContainerReportWithKeys.java
@@@ -1,0 -1,143 +1,143 @@@
+ /**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+ package org.apache.hadoop.ozone.om;
+
+ import org.apache.commons.lang3.RandomStringUtils;
+
+ import org.apache.hadoop.hdds.client.ReplicationFactor;
+ import org.apache.hadoop.hdds.client.ReplicationType;
+ import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+ import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+ import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
+ import org.apache.hadoop.ozone.MiniOzoneCluster;
+ import org.apache.hadoop.ozone.OzoneConfigKeys;
+ import org.apache.hadoop.ozone.OzoneConsts;
+ import org.apache.hadoop.ozone.client.*;
+ import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
-import org.apache.hadoop.ozone.container.common.helpers.ContainerData;
-import org.apache.hadoop.ozone.container.common.interfaces.ContainerManager;
++import org.apache.hadoop.ozone.container.common.impl.ContainerData;
++import org.apache.hadoop.ozone.container.common.impl.ContainerSet;
+ import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
+ import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+ import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
+ import org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+ import org.junit.AfterClass;
+ import org.junit.BeforeClass;
+ import org.junit.Rule;
+ import org.junit.Test;
+ import org.junit.rules.ExpectedException;
+ import org.slf4j.Logger;
+ import org.slf4j.LoggerFactory;
+
+ import java.io.IOException;
+
+ /**
+ * This class tests container report with DN container state info.
+ */
+ public class TestContainerReportWithKeys {
+ private static final Logger LOG = LoggerFactory.getLogger(
+ TestContainerReportWithKeys.class);
+ private static MiniOzoneCluster cluster = null;
+ private static OzoneConfiguration conf;
+ private static StorageContainerManager scm;
+
+ @Rule
+ public ExpectedException exception = ExpectedException.none();
+
+ /**
+ * Create a MiniOzoneCluster for testing.
+ * <p>
+ * Ozone is made active by setting OZONE_ENABLED = true and
+ * OZONE_HANDLER_TYPE_KEY = "distributed"
+ *
+ * @throws IOException
+ */
+ @BeforeClass
+ public static void init() throws Exception {
+ conf = new OzoneConfiguration();
+ conf.set(OzoneConfigKeys.OZONE_HANDLER_TYPE_KEY,
+ OzoneConsts.OZONE_HANDLER_DISTRIBUTED);
+ cluster = MiniOzoneCluster.newBuilder(conf).build();
+ cluster.waitForClusterToBeReady();
+ scm = cluster.getStorageContainerManager();
+ }
+
+ /**
+ * Shutdown MiniOzoneCluster.
+ */
+ @AfterClass
+ public static void shutdown() {
+ if (cluster != null) {
+ cluster.shutdown();
+ }
+ }
+
+ @Test
+ public void testContainerReportKeyWrite() throws Exception {
+ final String volumeName = "volume" + RandomStringUtils.randomNumeric(5);
+ final String bucketName = "bucket" + RandomStringUtils.randomNumeric(5);
+ final String keyName = "key" + RandomStringUtils.randomNumeric(5);
+ final int keySize = 100;
+
+ OzoneClient client = OzoneClientFactory.getClient(conf);
+ ObjectStore objectStore = client.getObjectStore();
+ objectStore.createVolume(volumeName);
+ objectStore.getVolume(volumeName).createBucket(bucketName);
+ OzoneOutputStream key =
+ objectStore.getVolume(volumeName).getBucket(bucketName)
+ .createKey(keyName, keySize, ReplicationType.STAND_ALONE,
+ ReplicationFactor.ONE);
+ String dataString = RandomStringUtils.randomAlphabetic(keySize);
+ key.write(dataString.getBytes());
+ key.close();
+
+ OmKeyArgs keyArgs = new OmKeyArgs.Builder()
+ .setVolumeName(volumeName)
+ .setBucketName(bucketName)
+ .setKeyName(keyName)
+ .setType(HddsProtos.ReplicationType.STAND_ALONE)
+ .setFactor(HddsProtos.ReplicationFactor.ONE).setDataSize(keySize)
+ .build();
+
+
+ OmKeyLocationInfo keyInfo =
+ cluster.getOzoneManager().lookupKey(keyArgs).getKeyLocationVersions()
+ .get(0).getBlocksLatestVersionOnly().get(0);
+
+ ContainerData cd = getContainerData(keyInfo.getContainerID());
+
- LOG.info("DN Container Data: keyCount: {} used: {} ",
- cd.getKeyCount(), cd.getBytesUsed());
++/* LOG.info("DN Container Data: keyCount: {} used: {} ",
++ cd.getKeyCount(), cd.getBytesUsed());*/
+
+ ContainerInfo cinfo = scm.getContainerInfo(keyInfo.getContainerID());
+
+ LOG.info("SCM Container Info keyCount: {} usedBytes: {}",
+ cinfo.getNumberOfKeys(), cinfo.getUsedBytes());
+ }
+
+
+ private static ContainerData getContainerData(long containerID) {
+ ContainerData containerData;
+ try {
- ContainerManager containerManager = cluster.getHddsDatanodes().get(0)
- .getDatanodeStateMachine().getContainer().getContainerManager();
- containerData = containerManager.readContainer(containerID);
++ ContainerSet containerManager = cluster.getHddsDatanodes().get(0)
++ .getDatanodeStateMachine().getContainer().getContainerSet();
++ containerData = containerManager.getContainer(containerID).getContainerData();
+ } catch (StorageContainerException e) {
+ throw new AssertionError(e);
+ }
+ return containerData;
+ }
+ }
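For readers following the refactor above: container metadata is now resolved through the new ContainerSet API rather than the removed ContainerManager. A minimal sketch of the new lookup path, assuming a started MiniOzoneCluster named cluster and an existing containerID (both stand-ins for the test fixtures above):

    // Sketch only: mirrors getContainerData() in the test above.
    // `cluster` and `containerID` are assumed from the test setup.
    ContainerSet containerSet = cluster.getHddsDatanodes().get(0)
        .getDatanodeStateMachine().getContainer().getContainerSet();
    ContainerData data =
        containerSet.getContainer(containerID).getContainerData();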
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c275a9a6/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSmallFile.java
----------------------------------------------------------------------
diff --cc hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSmallFile.java
index 5c62803,42bb936..a2d95e8
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSmallFile.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSmallFile.java
@@@ -141,9 -144,8 +144,8 @@@ public class TestContainerSmallFile
ContainerProtocolCalls.writeSmallFile(client, blockID,
"data123".getBytes(), traceID);
-
thrown.expect(StorageContainerException.class);
- thrown.expectMessage("Unable to find the container");
+ thrown.expectMessage("ContainerID 8888 does not exist");
// Try to read a invalid key
ContainerProtos.GetSmallFileResponseProto response =
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c275a9a6/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java
----------------------------------------------------------------------
diff --cc hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java
index 12d444a,a6bb586..cc11feb
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestSCMCli.java
@@@ -162,21 -158,22 +163,22 @@@ public class TestSCMCli
// 1. Test to delete a non-empty container.
// ****************************************
// Create an non-empty container
- ContainerInfo container = containerOperationClient
+ ContainerWithPipeline container = containerOperationClient
.createContainer(xceiverClientManager.getType(),
HddsProtos.ReplicationFactor.ONE, containerOwner);
--
- ContainerData cdata = ContainerData
- .getFromProtBuf(containerOperationClient.readContainer(
- container.getContainerInfo().getContainerID()), conf);
- KeyUtils.getDB(cdata, conf)
+ KeyValueContainerData kvData = KeyValueContainerData
+ .getFromProtoBuf(containerOperationClient.readContainer(
- container.getContainerID(), container.getPipeline()));
++ container.getContainerInfo().getContainerID(), container
++ .getPipeline()));
+ KeyUtils.getDB(kvData, conf)
- .put(Longs.toByteArray(container.getContainerID()),
+ .put(Longs.toByteArray(container.getContainerInfo().getContainerID()),
"someKey".getBytes());
- Assert.assertTrue(containerExist(container.getContainerID()));
- Assert.assertTrue(
- containerExist(container.getContainerInfo().getContainerID()));
++ Assert.assertTrue(containerExist(container.getContainerInfo()
++ .getContainerID()));
// Gracefully delete a container should fail because it is open.
- delCmd = new String[] {"-container", "-delete", "-c",
- Long.toString(container.getContainerID())};
+ delCmd = new String[]{"-container", "-delete", "-c",
+ Long.toString(container.getContainerInfo().getContainerID())};
testErr = new ByteArrayOutputStream();
ByteArrayOutputStream out = new ByteArrayOutputStream();
exitCode = runCommandAndGetOutput(delCmd, out, testErr);
@@@ -275,26 -267,24 +272,27 @@@
EXECUTION_ERROR, exitCode);
// Create an empty container.
- ContainerInfo container = containerOperationClient
+ ContainerWithPipeline container = containerOperationClient
.createContainer(xceiverClientManager.getType(),
HddsProtos.ReplicationFactor.ONE, containerOwner);
- ContainerData data = ContainerData.getFromProtBuf(containerOperationClient
- .readContainer(container.getContainerInfo().getContainerID()), conf);
-
+ KeyValueContainerData data = KeyValueContainerData
+ .getFromProtoBuf(containerOperationClient.
- readContainer(container.getContainerID(),
++ readContainer(container.getContainerInfo().getContainerID(),
+ container.getPipeline()));
-
- info = new String[] { "-container", "-info", "-c",
- Long.toString(container.getContainerID()) };
+ info = new String[]{"-container", "-info", "-c",
+ Long.toString(container.getContainerInfo().getContainerID())};
ByteArrayOutputStream out = new ByteArrayOutputStream();
exitCode = runCommandAndGetOutput(info, out, null);
assertEquals("Expected Success, did not find it.", ResultCode.SUCCESS,
- exitCode);
+ exitCode);
String openStatus = data.isOpen() ? "OPEN" : "CLOSED";
- String expected = String.format(formatStr, container.getContainerInfo()
- .getContainerID(), openStatus, data.getDBPath(),
- data.getContainerPath(), "", datanodeDetails.getHostName(),
- datanodeDetails.getHostName());
+ String expected =
- String.format(formatStr, container.getContainerID(), openStatus,
- data.getDbFile().getPath(), data.getContainerPath(), "",
- datanodeDetails.getHostName(), datanodeDetails.getHostName());
++ String.format(formatStr, container.getContainerInfo().getContainerID
++ (), openStatus, data.getDbFile().getPath(), data
++ .getContainerPath(), "", datanodeDetails.getHostName(),
++ datanodeDetails.getHostName());
++
assertEquals(expected, out.toString());
out.reset();
@@@ -303,9 -293,9 +301,10 @@@
container = containerOperationClient
.createContainer(xceiverClientManager.getType(),
HddsProtos.ReplicationFactor.ONE, containerOwner);
- data = ContainerData
- .getFromProtBuf(containerOperationClient.readContainer(
- container.getContainerInfo().getContainerID()), conf);
+ data = KeyValueContainerData
+ .getFromProtoBuf(containerOperationClient.readContainer(
- container.getContainerID(), container.getPipeline()));
++ container.getContainerInfo().getContainerID(), container
++ .getPipeline()));
KeyUtils.getDB(data, conf)
.put(containerID.getBytes(), "someKey".getBytes());
@@@ -315,25 -305,24 +314,27 @@@
assertEquals(ResultCode.SUCCESS, exitCode);
openStatus = data.isOpen() ? "OPEN" : "CLOSED";
- expected = String.format(formatStr, container.getContainerID(), openStatus,
- data.getDbFile().getPath(), data.getContainerPath(), "",
- datanodeDetails.getHostName(), datanodeDetails.getHostName());
- expected = String.format(formatStr, container.getContainerInfo().
- getContainerID(), openStatus, data.getDBPath(),
- data.getContainerPath(), "", datanodeDetails.getHostName(),
++
++ expected = String.format(formatStr, container.getContainerInfo()
++ .getContainerID(), openStatus, data.getDbFile().getPath(), data
++ .getContainerPath(), "", datanodeDetails.getHostName(),
+ datanodeDetails.getHostName());
assertEquals(expected, out.toString());
out.reset();
-
// Close last container and test info again.
- containerOperationClient.closeContainer(
- container.getContainerID(), container.getPipeline());
+ containerOperationClient
+ .closeContainer(container.getContainerInfo().getContainerID());
- info = new String[] { "-container", "-info", "-c",
- Long.toString(container.getContainerID()) };
+ info = new String[]{"-container", "-info", "-c",
+ Long.toString(container.getContainerInfo().getContainerID())};
exitCode = runCommandAndGetOutput(info, out, null);
assertEquals(ResultCode.SUCCESS, exitCode);
- data = ContainerData.getFromProtBuf(containerOperationClient
- .readContainer(container.getContainerInfo().getContainerID()), conf);
+ data = KeyValueContainerData
+ .getFromProtoBuf(containerOperationClient.readContainer(
- container.getContainerID(), container.getPipeline()));
++ container.getContainerInfo().getContainerID(), container
++ .getPipeline()));
openStatus = data.isOpen() ? "OPEN" : "CLOSED";
expected = String
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c275a9a6/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
----------------------------------------------------------------------
diff --cc hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
index 2f592c1,a95bd0e..c144db2
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
@@@ -44,18 -44,17 +44,18 @@@ import org.apache.hadoop.ozone.client.i
import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
import org.apache.hadoop.ozone.client.protocol.ClientProtocol;
import org.apache.hadoop.ozone.client.rpc.RpcClient;
-import org.apache.hadoop.ozone.container.common.helpers.ContainerData;
-import org.apache.hadoop.ozone.container.common.helpers.ContainerUtils;
import org.apache.hadoop.ozone.container.common.helpers.KeyData;
+import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer;
+import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData;
+import org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler;
import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
- import org.apache.hadoop.ozone.ksm.KeySpaceManager;
- import org.apache.hadoop.ozone.ksm.helpers.KsmKeyArgs;
- import org.apache.hadoop.ozone.ksm.helpers.KsmKeyInfo;
- import org.apache.hadoop.ozone.ksm.helpers.KsmVolumeArgs;
- import org.apache.hadoop.ozone.ksm.helpers.KsmBucketInfo;
- import org.apache.hadoop.ozone.ksm.helpers.KsmKeyLocationInfo;
- import org.apache.hadoop.ozone.protocol.proto.KeySpaceManagerProtocolProtos
+ import org.apache.hadoop.ozone.om.OzoneManager;
+ import org.apache.hadoop.ozone.om.helpers.OmKeyArgs;
+ import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+ import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+ import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+ import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+ import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
.Status;
import org.apache.hadoop.ozone.client.rest.OzoneException;
import org.apache.hadoop.ozone.web.utils.OzoneUtils;
@@@ -663,12 -661,11 +663,12 @@@ public class TestKeys
}
@Test
+ @Ignore("Needs to be fixed for new SCM and Storage design")
public void testDeleteKey() throws Exception {
- KeySpaceManager ksm = ozoneCluster.getKeySpaceManager();
+ OzoneManager ozoneManager = ozoneCluster.getOzoneManager();
// To avoid interference from other test cases,
// we collect number of existing keys at the beginning
- int numOfExistedKeys = countKsmKeys(ksm);
+ int numOfExistedKeys = countOmKeys(ozoneManager);
// Keep tracking bucket keys info while creating them
PutHelper helper = new PutHelper(client, path);
@@@ -697,20 -694,17 +697,20 @@@
// Memorize chunks that has been created,
// so we can verify actual deletions at DN side later.
- for (KsmKeyInfo keyInfo : createdKeys) {
- List<KsmKeyLocationInfo> locations =
+ for (OmKeyInfo keyInfo : createdKeys) {
+ List<OmKeyLocationInfo> locations =
keyInfo.getLatestVersionLocations().getLocationList();
- for (KsmKeyLocationInfo location : locations) {
+ for (OmKeyLocationInfo location : locations) {
- KeyData keyData = new KeyData(location.getBlockID());
- KeyData blockInfo = cm.getContainerManager()
- .getKeyManager().getKey(keyData);
- ContainerData containerData = cm.getContainerManager()
- .readContainer(keyData.getContainerID());
- File dataDir = ContainerUtils
- .getDataDirectory(containerData).toFile();
+ KeyValueHandler keyValueHandler = (KeyValueHandler) cm
+ .getDispatcher().getHandler(ContainerProtos.ContainerType
+ .KeyValueContainer);
+ KeyValueContainer container = (KeyValueContainer) cm.getContainerSet()
+ .getContainer(location.getBlockID().getContainerID());
+ KeyData blockInfo = keyValueHandler
+ .getKeyManager().getKey(container, location.getBlockID());
+ KeyValueContainerData containerData = (KeyValueContainerData) container
+ .getContainerData();
+ File dataDir = new File(containerData.getChunksPath());
for (ContainerProtos.ChunkInfo chunkInfo : blockInfo.getChunks()) {
File chunkFile = dataDir.toPath()
.resolve(chunkInfo.getChunkName()).toFile();
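The rewritten lookup above replaces the old ContainerManager/ContainerUtils path with the KeyValueHandler and KeyValueContainer APIs. A condensed sketch of the new flow, assuming cm is the datanode's OzoneContainer and location an OmKeyLocationInfo, as in the test:

    // Sketch of the new DN-side block/chunk resolution used above.
    KeyValueHandler handler = (KeyValueHandler) cm.getDispatcher()
        .getHandler(ContainerProtos.ContainerType.KeyValueContainer);
    KeyValueContainer container = (KeyValueContainer) cm.getContainerSet()
        .getContainer(location.getBlockID().getContainerID());
    KeyData blockInfo = handler.getKeyManager()
        .getKey(container, location.getBlockID());
    File dataDir = new File(((KeyValueContainerData) container
        .getContainerData()).getChunksPath());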
[37/50] [abbrv] hadoop git commit: HDFS-13663. Should throw exception
when incorrect block size is set. Contributed by Shweta.
Posted by bo...@apache.org.
HDFS-13663. Should throw exception when incorrect block size is set. Contributed by Shweta.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/87eeb26e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/87eeb26e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/87eeb26e
Branch: refs/heads/YARN-7402
Commit: 87eeb26e7200fa3be0ca62ebf163985b58ad309e
Parents: 1bc106a
Author: Xiao Chen <xi...@apache.org>
Authored: Thu Jul 12 20:19:14 2018 -0700
Committer: Xiao Chen <xi...@apache.org>
Committed: Thu Jul 12 20:24:11 2018 -0700
----------------------------------------------------------------------
.../apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/87eeb26e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
index 94835e2..34f6c33 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
@@ -275,7 +275,9 @@ public class BlockRecoveryWorker {
}
// recover() guarantees syncList will have at least one replica with RWR
// or better state.
- assert minLength != Long.MAX_VALUE : "wrong minLength";
+ if (minLength == Long.MAX_VALUE) {
+ throw new IOException("Incorrect block size");
+ }
newBlock.setNumBytes(minLength);
break;
case RUR:
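The reason for the change: a JVM assert is a no-op unless the process runs with -ea, so the old check never fired in production. An explicit guard surfaces the failure everywhere. The pattern, as a sketch:

    // Guard clause instead of `assert`: always enforced, not only
    // when assertions are enabled with -ea.
    if (minLength == Long.MAX_VALUE) {
      throw new IOException("Incorrect block size");
    }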
[24/50] [abbrv] hadoop git commit: YARN-8491.
TestServiceCLI#testEnableFastLaunch fail when umask is 077. Contributed by K
G Bakthavachalam.
Posted by bo...@apache.org.
YARN-8491. TestServiceCLI#testEnableFastLaunch fail when umask is 077. Contributed by K G Bakthavachalam.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/52e1bc85
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/52e1bc85
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/52e1bc85
Branch: refs/heads/YARN-7402
Commit: 52e1bc8539ce769f47743d8b2d318a54c3887ba0
Parents: 7f1d3d0
Author: bibinchundatt <bi...@apache.org>
Authored: Wed Jul 11 16:19:51 2018 +0530
Committer: bibinchundatt <bi...@apache.org>
Committed: Wed Jul 11 16:20:29 2018 +0530
----------------------------------------------------------------------
.../org/apache/hadoop/yarn/service/client/TestServiceCLI.java | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/52e1bc85/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceCLI.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceCLI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceCLI.java
index 78a8198..363fe91 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceCLI.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceCLI.java
@@ -121,12 +121,16 @@ public class TestServiceCLI {
basedir = new File("target", "apps");
basedirProp = YARN_SERVICE_BASE_PATH + "=" + basedir.getAbsolutePath();
conf.set(YARN_SERVICE_BASE_PATH, basedir.getAbsolutePath());
+ fs = new SliderFileSystem(conf);
dependencyTarGzBaseDir = tmpFolder.getRoot();
+ fs.getFileSystem()
+ .setPermission(new Path(dependencyTarGzBaseDir.getAbsolutePath()),
+ new FsPermission("755"));
dependencyTarGz = getDependencyTarGz(dependencyTarGzBaseDir);
dependencyTarGzProp = DEPENDENCY_TARBALL_PATH + "=" + dependencyTarGz
.toString();
conf.set(DEPENDENCY_TARBALL_PATH, dependencyTarGz.toString());
- fs = new SliderFileSystem(conf);
+
if (basedir.exists()) {
FileUtils.deleteDirectory(basedir);
} else {
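Why this fixes the test: JUnit's TemporaryFolder creates the directory with the process umask applied, so under umask 077 the dependency directory loses its group/other read bits and fast-launch staging fails. Setting an explicit permission removes the umask dependence. A minimal sketch of the pattern (the path is hypothetical):

    // Sketch: make a test directory's permissions umask-independent.
    // `fs` is a SliderFileSystem as in setUp(); the path is made up.
    fs.getFileSystem().setPermission(
        new Path("/tmp/service-dep"),  // hypothetical test path
        new FsPermission("755"));      // rwxr-xr-x regardless of umask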
[46/50] [abbrv] hadoop git commit: YARN-7707. [GPG] Policy generator
framework. Contributed by Young Chen
Posted by bo...@apache.org.
YARN-7707. [GPG] Policy generator framework. Contributed by Young Chen
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0bbe70ce
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0bbe70ce
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0bbe70ce
Branch: refs/heads/YARN-7402
Commit: 0bbe70ced1a3a895473436e5f7d328e373b1d4ca
Parents: fa3ee34
Author: Botong Huang <bo...@apache.org>
Authored: Fri Mar 23 17:07:10 2018 -0700
Committer: Botong Huang <bo...@apache.org>
Committed: Fri Jul 13 17:42:58 2018 -0700
----------------------------------------------------------------------
.../hadoop/yarn/conf/YarnConfiguration.java | 36 +-
.../src/main/resources/yarn-default.xml | 40 +++
.../utils/FederationStateStoreFacade.java | 13 +
.../pom.xml | 18 +
.../globalpolicygenerator/GPGContext.java | 4 +
.../globalpolicygenerator/GPGContextImpl.java | 10 +
.../globalpolicygenerator/GPGPolicyFacade.java | 220 ++++++++++++
.../server/globalpolicygenerator/GPGUtils.java | 80 +++++
.../GlobalPolicyGenerator.java | 17 +
.../policygenerator/GlobalPolicy.java | 76 +++++
.../policygenerator/NoOpGlobalPolicy.java | 36 ++
.../policygenerator/PolicyGenerator.java | 261 ++++++++++++++
.../UniformWeightedLocalityGlobalPolicy.java | 71 ++++
.../policygenerator/package-info.java | 24 ++
.../TestGPGPolicyFacade.java | 202 +++++++++++
.../policygenerator/TestPolicyGenerator.java | 338 +++++++++++++++++++
.../src/test/resources/schedulerInfo1.json | 134 ++++++++
.../src/test/resources/schedulerInfo2.json | 196 +++++++++++
18 files changed, 1775 insertions(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index b3a4ccb..fe7cb8f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -3335,7 +3335,7 @@ public class YarnConfiguration extends Configuration {
public static final boolean DEFAULT_ROUTER_WEBAPP_PARTIAL_RESULTS_ENABLED =
false;
- private static final String FEDERATION_GPG_PREFIX =
+ public static final String FEDERATION_GPG_PREFIX =
FEDERATION_PREFIX + "gpg.";
// The number of threads to use for the GPG scheduled executor service
@@ -3353,6 +3353,40 @@ public class YarnConfiguration extends Configuration {
FEDERATION_GPG_PREFIX + "subcluster.heartbeat.expiration-ms";
public static final long DEFAULT_GPG_SUBCLUSTER_EXPIRATION_MS = 1800000;
+ public static final String FEDERATION_GPG_POLICY_PREFIX =
+ FEDERATION_GPG_PREFIX + "policy.generator.";
+
+ /** The interval at which the policy generator runs. The code default of -1 disables it; yarn-default.xml sets one hour. */
+ public static final String GPG_POLICY_GENERATOR_INTERVAL_MS =
+ FEDERATION_GPG_POLICY_PREFIX + "interval-ms";
+ public static final long DEFAULT_GPG_POLICY_GENERATOR_INTERVAL_MS = -1;
+
+ /**
+ * The configured policy generator class; runs NoOpGlobalPolicy by
+ * default.
+ */
+ public static final String GPG_GLOBAL_POLICY_CLASS =
+ FEDERATION_GPG_POLICY_PREFIX + "class";
+ public static final String DEFAULT_GPG_GLOBAL_POLICY_CLASS =
+ "org.apache.hadoop.yarn.server.globalpolicygenerator.policygenerator."
+ + "NoOpGlobalPolicy";
+
+ /**
+ * Whether or not the policy generator is running in read only (won't modify
+ * policies), default is false.
+ */
+ public static final String GPG_POLICY_GENERATOR_READONLY =
+ FEDERATION_GPG_POLICY_PREFIX + "readonly";
+ public static final boolean DEFAULT_GPG_POLICY_GENERATOR_READONLY =
+ false;
+
+ /**
+ * Which sub-clusters the policy generator should blacklist.
+ */
+ public static final String GPG_POLICY_GENERATOR_BLACKLIST =
+ FEDERATION_GPG_POLICY_PREFIX + "blacklist";
+
+
////////////////////////////////
// Other Configs
////////////////////////////////
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index 66493f3..755f3e5 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -3557,6 +3557,38 @@
<property>
<description>
+ The interval at which the policy generator runs; default is one hour
+ </description>
+ <name>yarn.federation.gpg.policy.generator.interval-ms</name>
+ <value>3600000</value>
+ </property>
+
+ <property>
+ <description>
+ The configured policy generator class; runs NoOpGlobalPolicy by default
+ </description>
+ <name>yarn.federation.gpg.policy.generator.class</name>
+ <value>org.apache.hadoop.yarn.server.globalpolicygenerator.policygenerator.NoOpGlobalPolicy</value>
+ </property>
+
+ <property>
+ <description>
+ Whether or not the policy generator is running in read only (won't modify policies), default is false
+ </description>
+ <name>yarn.federation.gpg.policy.generator.readonly</name>
+ <value>false</value>
+ </property>
+
+ <property>
+ <description>
+ Which sub-clusters the GPG should blacklist; default is none
+ </description>
+ <name>yarn.federation.gpg.policy.generator.blacklist</name>
+ <value></value>
+ </property>
+
+ <property>
+ <description>
It is TimelineClient 1.5 configuration whether to store active
application’s timeline data with in user directory i.e
${yarn.timeline-service.entity-group-fs-store.active-dir}/${user.name}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java
index 4c3bed0..25a9e52 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java
@@ -62,6 +62,7 @@ import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPolic
import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPolicyConfigurationResponse;
import org.apache.hadoop.yarn.server.federation.store.records.GetSubClustersInfoRequest;
import org.apache.hadoop.yarn.server.federation.store.records.GetSubClustersInfoResponse;
+import org.apache.hadoop.yarn.server.federation.store.records.SetSubClusterPolicyConfigurationRequest;
import org.apache.hadoop.yarn.server.federation.store.records.SubClusterDeregisterRequest;
import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
@@ -373,6 +374,18 @@ public final class FederationStateStoreFacade {
}
/**
+ * Set a policy configuration into the state store.
+ *
+ * @param policyConf the policy configuration to set
+ * @throws YarnException if the request is invalid/fails
+ */
+ public void setPolicyConfiguration(SubClusterPolicyConfiguration policyConf)
+ throws YarnException {
+ stateStore.setPolicyConfiguration(
+ SetSubClusterPolicyConfigurationRequest.newInstance(policyConf));
+ }
+
+ /**
* Adds the home {@link SubClusterId} for the specified {@link ApplicationId}.
*
* @param appHomeSubCluster the mapping of the application to it's home
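A hedged usage sketch for the new facade method (the queue name and manager type are illustrative, not from the patch; it assumes the manager is fully populated before serializeConf() is called):

    // Sketch: persist a queue policy through the facade.
    FederationStateStoreFacade facade =
        FederationStateStoreFacade.getInstance();
    FederationPolicyManager manager =
        new WeightedLocalityPolicyManager();  // hypothetical choice
    manager.setQueue("root.default");         // hypothetical queue
    facade.setPolicyConfiguration(manager.serializeConf());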
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml
index 9bbb936..9398b0b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml
@@ -63,6 +63,12 @@
<dependency>
<groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-yarn-server-timelineservice</artifactId>
+ <scope>provided</scope>
+ </dependency>
+
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-yarn-server-resourcemanager</artifactId>
</dependency>
@@ -73,6 +79,12 @@
</dependency>
<dependency>
+ <groupId>org.mockito</groupId>
+ <artifactId>mockito-all</artifactId>
+ <scope>test</scope>
+ </dependency>
+
+ <dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-yarn-server-common</artifactId>
<type>test-jar</type>
@@ -92,6 +104,12 @@
<plugin>
<groupId>org.apache.rat</groupId>
<artifactId>apache-rat-plugin</artifactId>
+ <configuration>
+ <excludes>
+ <exclude>src/test/resources/schedulerInfo1.json</exclude>
+ <exclude>src/test/resources/schedulerInfo2.json</exclude>
+ </excludes>
+ </configuration>
</plugin>
</plugins>
</build>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContext.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContext.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContext.java
index da8a383..6b0a5a4 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContext.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContext.java
@@ -28,4 +28,8 @@ public interface GPGContext {
FederationStateStoreFacade getStateStoreFacade();
void setStateStoreFacade(FederationStateStoreFacade facade);
+
+ GPGPolicyFacade getPolicyFacade();
+
+ void setPolicyFacade(GPGPolicyFacade facade);
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContextImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContextImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContextImpl.java
index 3884ace..bb49844 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContextImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContextImpl.java
@@ -26,6 +26,7 @@ import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade
public class GPGContextImpl implements GPGContext {
private FederationStateStoreFacade facade;
+ private GPGPolicyFacade policyFacade;
@Override
public FederationStateStoreFacade getStateStoreFacade() {
@@ -38,4 +39,13 @@ public class GPGContextImpl implements GPGContext {
this.facade = federationStateStoreFacade;
}
+ @Override
+ public GPGPolicyFacade getPolicyFacade() {
+ return policyFacade;
+ }
+
+ @Override
+ public void setPolicyFacade(GPGPolicyFacade gpgPolicyFacade) {
+ policyFacade = gpgPolicyFacade;
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGPolicyFacade.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGPolicyFacade.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGPolicyFacade.java
new file mode 100644
index 0000000..4c61a14
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGPolicyFacade.java
@@ -0,0 +1,220 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.yarn.server.globalpolicygenerator;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.federation.policies.FederationPolicyUtils;
+import org.apache.hadoop.yarn.server.federation.policies.dao.WeightedPolicyInfo;
+import org.apache.hadoop.yarn.server.federation.policies.manager.WeightedLocalityPolicyManager;
+import org.apache.hadoop.yarn.server.federation.policies.router.FederationRouterPolicy;
+import org.apache.hadoop.yarn.server.federation.policies.amrmproxy.FederationAMRMProxyPolicy;
+import org.apache.hadoop.yarn.server.federation.policies.exceptions.FederationPolicyInitializationException;
+import org.apache.hadoop.yarn.server.federation.policies.manager.FederationPolicyManager;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterPolicyConfiguration;
+import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * A utility class for the GPG Policy Generator to read and write policies
+ * into the FederationStateStore. Policy specific logic is abstracted away in
+ * this class, so the PolicyGenerator can avoid dealing with policy
+ * construction, reinitialization, and serialization.
+ *
+ * There are only two exposed methods:
+ *
+ * {@link #getPolicyManager(String)}
+ * Gets the PolicyManager via queue name. Null if there is no policy
+ * configured for the specified queue. The PolicyManager can be used to
+ * extract the {@link FederationRouterPolicy} and
+ * {@link FederationAMRMProxyPolicy}, as well as any policy specific parameters
+ *
+ * {@link #setPolicyManager(FederationPolicyManager)}
+ * Sets the PolicyManager. If the policy configuration is the same, no change
+ * occurs. Otherwise, the internal cache is updated and the new configuration
+ * is written into the FederationStateStore
+ *
+ * This class assumes that the GPG is the only service
+ * writing policies. Thus, the only FederationStateStore reads occur the first
+ * time a queue policy is retrieved - after that, the GPG only writes to the
+ * FederationStateStore.
+ *
+ * The class uses a PolicyManager cache and a SubClusterPolicyConfiguration
+ * cache. The primary use for these caches is to serve reads, and to
+ * identify when the PolicyGenerator has actually changed the policy
+ * so unnecessary FederationStateStore policy writes can be avoided.
+ */
+
+public class GPGPolicyFacade {
+
+ private static final Logger LOG =
+ LoggerFactory.getLogger(GPGPolicyFacade.class);
+
+ private FederationStateStoreFacade stateStore;
+
+ private Map<String, FederationPolicyManager> policyManagerMap;
+ private Map<String, SubClusterPolicyConfiguration> policyConfMap;
+
+ private boolean readOnly;
+
+ public GPGPolicyFacade(FederationStateStoreFacade stateStore,
+ Configuration conf) {
+ this.stateStore = stateStore;
+ this.policyManagerMap = new HashMap<>();
+ this.policyConfMap = new HashMap<>();
+ this.readOnly =
+ conf.getBoolean(YarnConfiguration.GPG_POLICY_GENERATOR_READONLY,
+ YarnConfiguration.DEFAULT_GPG_POLICY_GENERATOR_READONLY);
+ }
+
+ /**
+ * Provides a utility for the policy generator to read the policy manager
+ * from the FederationStateStore. Because the policy generator should be the
+ * only component updating the policy, this implementation does not use the
+ * reinitialization feature.
+ *
+ * @param queueName the name of the queue we want the policy manager for.
+ * @return the policy manager responsible for the queue policy.
+ */
+ public FederationPolicyManager getPolicyManager(String queueName)
+ throws YarnException {
+ FederationPolicyManager policyManager = policyManagerMap.get(queueName);
+ // If we don't have the policy manager cached, pull configuration
+ // from the FederationStateStore to create and cache it
+ if (policyManager == null) {
+ try {
+ // If we don't have the configuration cached, pull it
+ // from the stateStore
+ SubClusterPolicyConfiguration conf = policyConfMap.get(queueName);
+ if (conf == null) {
+ conf = stateStore.getPolicyConfiguration(queueName);
+ }
+ // If configuration is still null, it does not exist in the
+ // FederationStateStore
+ if (conf == null) {
+ LOG.info("Read null policy for queue {}", queueName);
+ return null;
+ }
+ policyManager =
+ FederationPolicyUtils.instantiatePolicyManager(conf.getType());
+ policyManager.setQueue(queueName);
+
+ // TODO there is currently no way to cleanly deserialize a policy
+ // manager sub type from just the configuration
+ if (policyManager instanceof WeightedLocalityPolicyManager) {
+ WeightedPolicyInfo wpinfo =
+ WeightedPolicyInfo.fromByteBuffer(conf.getParams());
+ WeightedLocalityPolicyManager wlpmanager =
+ (WeightedLocalityPolicyManager) policyManager;
+ LOG.info("Updating policy for queue {} to configured weights router: "
+ + "{}, amrmproxy: {}", queueName,
+ wpinfo.getRouterPolicyWeights(),
+ wpinfo.getAMRMPolicyWeights());
+ wlpmanager.setWeightedPolicyInfo(wpinfo);
+ } else {
+ LOG.warn("Warning: FederationPolicyManager of unsupported type {}, "
+ + "initialization may be incomplete ", policyManager.getClass());
+ }
+
+ policyManagerMap.put(queueName, policyManager);
+ policyConfMap.put(queueName, conf);
+ } catch (YarnException e) {
+ LOG.error("Error reading SubClusterPolicyConfiguration from state "
+ + "store for queue: {}", queueName);
+ throw e;
+ }
+ }
+ return policyManager;
+ }
+
+ /**
+ * Provides a utility for the policy generator to write a policy manager
+ * into the FederationStateStore. The facade keeps a cache and will only write
+ * into the FederationStateStore if the policy configuration has changed.
+ *
+ * @param policyManager The policy manager we want to update into the state
+ * store. It contains policy information as well as
+ * the queue name we will update for.
+ */
+ public void setPolicyManager(FederationPolicyManager policyManager)
+ throws YarnException {
+ if (policyManager == null) {
+ LOG.warn("Attempting to set null policy manager");
+ return;
+ }
+ // Extract the configuration from the policy manager
+ String queue = policyManager.getQueue();
+ SubClusterPolicyConfiguration conf;
+ try {
+ conf = policyManager.serializeConf();
+ } catch (FederationPolicyInitializationException e) {
+ LOG.warn("Error serializing policy for queue {}", queue);
+ throw e;
+ }
+ if (conf == null) {
+ // State store does not currently support setting a policy back to null
+ // because it reads the queue name to set from the policy!
+ LOG.warn("Skip setting policy to null for queue {} into state store",
+ queue);
+ return;
+ }
+ // Compare with configuration cache, if different, write the conf into
+ // store and update our conf and manager cache
+ if (!confCacheEqual(queue, conf)) {
+ try {
+ if (readOnly) {
+ LOG.info("[read-only] Skipping policy update for queue {}", queue);
+ return;
+ }
+ LOG.info("Updating policy for queue {} into state store", queue);
+ stateStore.setPolicyConfiguration(conf);
+ policyConfMap.put(queue, conf);
+ policyManagerMap.put(queue, policyManager);
+ } catch (YarnException e) {
+ LOG.warn("Error writing SubClusterPolicyConfiguration to state "
+ + "store for queue: {}", queue);
+ throw e;
+ }
+ } else {
+ LOG.info("Setting unchanged policy - state store write skipped");
+ }
+ }
+
+ /**
+ * @param queue the queue to check the cached policy configuration for
+ * @param conf the new policy configuration
+ * @return whether or not the conf is equal to the cached conf
+ */
+ private boolean confCacheEqual(String queue,
+ SubClusterPolicyConfiguration conf) {
+ SubClusterPolicyConfiguration cachedConf = policyConfMap.get(queue);
+ if (conf == null && cachedConf == null) {
+ return true;
+ } else if (conf != null && cachedConf != null) {
+ if (conf.equals(cachedConf)) {
+ return true;
+ }
+ }
+ return false;
+ }
+}
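Putting the two exposed methods together, a GPG-side caller performs a read-modify-write per queue; the facade's caches make the final write a no-op when nothing changed. A minimal sketch (the queue name and the mutation are hypothetical):

    // Sketch: read a queue's policy, adjust it, write it back.
    GPGPolicyFacade policyFacade = new GPGPolicyFacade(
        FederationStateStoreFacade.getInstance(), conf);
    FederationPolicyManager manager =
        policyFacade.getPolicyManager("root.default");  // hypothetical
    if (manager instanceof WeightedLocalityPolicyManager) {
      // ... recompute the weighted policy info from cluster metrics ...
      policyFacade.setPolicyManager(manager);  // skipped if unchanged
    }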
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGUtils.java
new file mode 100644
index 0000000..429bec4
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGUtils.java
@@ -0,0 +1,80 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.globalpolicygenerator;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Set;
+
+import javax.servlet.http.HttpServletResponse;
+import javax.ws.rs.core.MediaType;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
+
+import com.sun.jersey.api.client.Client;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.WebResource;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterIdInfo;
+
+/**
+ * GPGUtils contains utility functions for the GPG.
+ *
+ */
+public final class GPGUtils {
+
+ // hide constructor
+ private GPGUtils() {
+ }
+
+ /**
+ * Performs an invocation of the remote RMWebService.
+ */
+ public static <T> T invokeRMWebService(Configuration conf, String webAddr,
+ String path, final Class<T> returnType) {
+ Client client = Client.create();
+ T obj = null;
+
+ WebResource webResource = client.resource(webAddr);
+ ClientResponse response = webResource.path("ws/v1/cluster").path(path)
+ .accept(MediaType.APPLICATION_XML).get(ClientResponse.class);
+ if (response.getStatus() == HttpServletResponse.SC_OK) {
+ obj = response.getEntity(returnType);
+ } else {
+ throw new YarnRuntimeException("Bad response from remote web service: "
+ + response.getStatus());
+ }
+ return obj;
+ }
+
+ /**
+ * Creates a uniform weighting of 1.0 for each sub-cluster.
+ */
+ public static Map<SubClusterIdInfo, Float> createUniformWeights(
+ Set<SubClusterId> ids) {
+ Map<SubClusterIdInfo, Float> weights =
+ new HashMap<>();
+ for (SubClusterId id : ids) {
+ weights.put(new SubClusterIdInfo(id), 1.0f);
+ }
+ return weights;
+ }
+
+}
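A short sketch of how the two helpers compose inside the GPG (the RM web address and the sub-cluster set are assumptions; RMWSConsts.SCHEDULER is the RM's scheduler endpoint constant used by the PolicyGenerator below):

    // Sketch: query one RM's scheduler info, then seed uniform weights.
    SchedulerTypeInfo schedInfo = GPGUtils.invokeRMWebService(conf,
        "http://rm-host:8088",  // hypothetical RM web address
        RMWSConsts.SCHEDULER, SchedulerTypeInfo.class);
    Map<SubClusterIdInfo, Float> weights =
        GPGUtils.createUniformWeights(activeSubClusterIds);  // assumed set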
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GlobalPolicyGenerator.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GlobalPolicyGenerator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GlobalPolicyGenerator.java
index f6cfba0..88b9f2b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GlobalPolicyGenerator.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GlobalPolicyGenerator.java
@@ -31,6 +31,7 @@ import org.apache.hadoop.util.StringUtils;
import org.apache.hadoop.yarn.YarnUncaughtExceptionHandler;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade;
+import org.apache.hadoop.yarn.server.globalpolicygenerator.policygenerator.PolicyGenerator;
import org.apache.hadoop.yarn.server.globalpolicygenerator.subclustercleaner.SubClusterCleaner;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -62,6 +63,7 @@ public class GlobalPolicyGenerator extends CompositeService {
// Scheduler service that runs tasks periodically
private ScheduledThreadPoolExecutor scheduledExecutorService;
private SubClusterCleaner subClusterCleaner;
+ private PolicyGenerator policyGenerator;
public GlobalPolicyGenerator() {
super(GlobalPolicyGenerator.class.getName());
@@ -73,11 +75,15 @@ public class GlobalPolicyGenerator extends CompositeService {
// Set up the context
this.gpgContext
.setStateStoreFacade(FederationStateStoreFacade.getInstance());
+ this.gpgContext
+ .setPolicyFacade(new GPGPolicyFacade(
+ this.gpgContext.getStateStoreFacade(), conf));
this.scheduledExecutorService = new ScheduledThreadPoolExecutor(
conf.getInt(YarnConfiguration.GPG_SCHEDULED_EXECUTOR_THREADS,
YarnConfiguration.DEFAULT_GPG_SCHEDULED_EXECUTOR_THREADS));
this.subClusterCleaner = new SubClusterCleaner(conf, this.gpgContext);
+ this.policyGenerator = new PolicyGenerator(conf, this.gpgContext);
DefaultMetricsSystem.initialize(METRICS_NAME);
@@ -99,6 +105,17 @@ public class GlobalPolicyGenerator extends CompositeService {
LOG.info("Scheduled sub-cluster cleaner with interval: {}",
DurationFormatUtils.formatDurationISO(scCleanerIntervalMs));
}
+
+ // Schedule PolicyGenerator
+ long policyGeneratorIntervalMillis = getConfig().getLong(
+ YarnConfiguration.GPG_POLICY_GENERATOR_INTERVAL_MS,
+ YarnConfiguration.DEFAULT_GPG_POLICY_GENERATOR_INTERVAL_MS);
+ if (policyGeneratorIntervalMillis > 0) {
+ this.scheduledExecutorService.scheduleAtFixedRate(this.policyGenerator,
+ 0, policyGeneratorIntervalMillis, TimeUnit.MILLISECONDS);
+ LOG.info("Scheduled policygenerator with interval: {}",
+ DurationFormatUtils.formatDurationISO(policyGeneratorIntervalMillis));
+ }
}
@Override
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/GlobalPolicy.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/GlobalPolicy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/GlobalPolicy.java
new file mode 100644
index 0000000..38d762d
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/GlobalPolicy.java
@@ -0,0 +1,76 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.globalpolicygenerator.policygenerator;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.server.federation.policies.manager.FederationPolicyManager;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+
+import java.util.Collections;
+import java.util.Map;
+
+/**
+ * This interface defines the plug-able policy that the PolicyGenerator uses
+ * to update policies into the state store.
+ */
+
+public abstract class GlobalPolicy implements Configurable {
+
+ private Configuration conf;
+
+ @Override
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ }
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ /**
+ * Return a map of the object type and RM path to request it from - the
+ * framework will query these paths and provide the objects to the policy.
+ * Delegating this responsibility to the PolicyGenerator enables us to avoid
+ * duplicate calls to the same endpoints, as the GlobalPolicy is invoked
+ * once per queue.
+ */
+ protected Map<Class, String> registerPaths() {
+ // Default register nothing
+ return Collections.emptyMap();
+ }
+
+ /**
+ * Given a queue, cluster metrics, and policy manager, update the policy
+ * to account for the cluster status. This method defines the policy generator
+ * behavior.
+ *
+ * @param queueName name of the queue
+ * @param clusterInfo map from SubClusterId to the cluster information
+ * used to make policy decisions
+ * @param manager the FederationPolicyManager for the queue's existing
+ * policy; it may be null, in which case the policy
+ * will need to be created
+ * @return policy manager that handles the updated (or created) policy
+ */
+ protected abstract FederationPolicyManager updatePolicy(String queueName,
+ Map<SubClusterId, Map<Class, Object>> clusterInfo,
+ FederationPolicyManager manager);
+
+}
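To illustrate the plug-in contract, a hedged sketch of a trivial subclass (the class is invented for illustration; NoOpGlobalPolicy below is the real default):

    // Hypothetical policy: inspects nothing, returns the manager as-is.
    public class PassThroughGlobalPolicy extends GlobalPolicy {
      @Override
      protected FederationPolicyManager updatePolicy(String queueName,
          Map<SubClusterId, Map<Class, Object>> clusterInfo,
          FederationPolicyManager manager) {
        // A real policy would read clusterInfo (per registerPaths())
        // and adjust or create the manager here.
        return manager;
      }
    }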
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/NoOpGlobalPolicy.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/NoOpGlobalPolicy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/NoOpGlobalPolicy.java
new file mode 100644
index 0000000..c2d578f
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/NoOpGlobalPolicy.java
@@ -0,0 +1,36 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.globalpolicygenerator.policygenerator;
+
+import org.apache.hadoop.yarn.server.federation.policies.manager.FederationPolicyManager;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+
+import java.util.Map;
+
+/**
+ * Default policy that does not update any policy configurations.
+ */
+public class NoOpGlobalPolicy extends GlobalPolicy {
+
+ @Override
+ public FederationPolicyManager updatePolicy(String queueName,
+ Map<SubClusterId, Map<Class, Object>> clusterInfo,
+ FederationPolicyManager manager) {
+ return null;
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/PolicyGenerator.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/PolicyGenerator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/PolicyGenerator.java
new file mode 100644
index 0000000..5681ff0
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/PolicyGenerator.java
@@ -0,0 +1,261 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.globalpolicygenerator.policygenerator;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.federation.policies.manager.FederationPolicyManager;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
+import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade;
+import org.apache.hadoop.yarn.server.globalpolicygenerator.GPGContext;
+import org.apache.hadoop.yarn.server.globalpolicygenerator.GPGUtils;
+import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWSConsts;
+import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.CapacitySchedulerInfo;
+import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.CapacitySchedulerQueueInfo;
+import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerInfo;
+import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerTypeInfo;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * The PolicyGenerator runs periodically and writes the updated policy
+ * configuration for each queue into the FederationStateStore. The policy
+ * update behavior is defined by the GlobalPolicy instance that is used.
+ */
+
+public class PolicyGenerator implements Runnable, Configurable {
+
+ private static final Logger LOG =
+ LoggerFactory.getLogger(PolicyGenerator.class);
+
+ private GPGContext gpgContext;
+ private Configuration conf;
+
+ // Information request map
+ private Map<Class, String> pathMap = new HashMap<>();
+
+ // Global policy instance
+ @VisibleForTesting
+ protected GlobalPolicy policy;
+
+ /**
+ * The PolicyGenerator periodically reads SubCluster load and writes
+ * updated policies into the FederationStateStore.
+ *
+ * @param conf the service configuration
+ * @param context the GPG context
+ */
+ public PolicyGenerator(Configuration conf, GPGContext context) {
+ setConf(conf);
+ init(context);
+ }
+
+ private void init(GPGContext context) {
+ this.gpgContext = context;
+ LOG.info("Initialized PolicyGenerator");
+ }
+
+ @Override
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ this.policy = FederationStateStoreFacade
+ .createInstance(conf, YarnConfiguration.GPG_GLOBAL_POLICY_CLASS,
+ YarnConfiguration.DEFAULT_GPG_GLOBAL_POLICY_CLASS,
+ GlobalPolicy.class);
+ policy.setConf(conf);
+ pathMap.putAll(policy.registerPaths());
+ }
+
+ @Override
+ public Configuration getConf() {
+ return this.conf;
+ }
+
+ @Override
+ public final void run() {
+ Map<SubClusterId, SubClusterInfo> activeSubClusters;
+ try {
+ activeSubClusters = gpgContext.getStateStoreFacade().getSubClusters(true);
+ } catch (YarnException e) {
+ LOG.error("Error retrieving active sub-clusters", e);
+ return;
+ }
+
+ // Parse the scheduler information from all the SCs
+ Map<SubClusterId, SchedulerInfo> schedInfo =
+ getSchedulerInfo(activeSubClusters);
+
+ // Extract the union of queue names across all capacity schedulers;
+ // SubClusters with other scheduler types are skipped with a warning
+ Set<String> queueNames = extractQueues(schedInfo);
+
+ // Remove blacklisted SubClusters
+ activeSubClusters.keySet().removeAll(getBlackList());
+ LOG.info("Active non-blacklist sub-clusters: {}",
+ activeSubClusters.keySet());
+
+ // Get cluster metrics information from the non-blacklisted RMs - used
+ // later to evaluate SubCluster load
+ Map<SubClusterId, Map<Class, Object>> clusterInfo =
+ getInfos(activeSubClusters);
+
+ // Update into the FederationStateStore
+ for (String queueName : queueNames) {
+ // Retrieve the manager from the policy facade
+ FederationPolicyManager manager;
+ try {
+ manager = this.gpgContext.getPolicyFacade().getPolicyManager(queueName);
+ } catch (YarnException e) {
+ LOG.error("GetPolicy for queue {} failed", queueName, e);
+ continue;
+ }
+ LOG.info("Updating policy for queue {}", queueName);
+ manager = policy.updatePolicy(queueName, clusterInfo, manager);
+ try {
+ this.gpgContext.getPolicyFacade().setPolicyManager(manager);
+ } catch (YarnException e) {
+ LOG.error("SetPolicy for queue {} failed", queueName, e);
+ }
+ }
+ }
+
+ /**
+ * Helper to retrieve metrics from the RM REST endpoints.
+ *
+ * @param activeSubClusters a map of active SubCluster IDs to info
+ * @return a map from SubClusterId to the retrieved objects, keyed by the
+ * class registered for each path
+ */
+ @VisibleForTesting
+ protected Map<SubClusterId, Map<Class, Object>> getInfos(
+ Map<SubClusterId, SubClusterInfo> activeSubClusters) {
+
+ Map<SubClusterId, Map<Class, Object>> clusterInfo = new HashMap<>();
+ for (SubClusterInfo sci : activeSubClusters.values()) {
+ for (Map.Entry<Class, String> e : this.pathMap.entrySet()) {
+ if (!clusterInfo.containsKey(sci.getSubClusterId())) {
+ clusterInfo.put(sci.getSubClusterId(), new HashMap<Class, Object>());
+ }
+ Object ret = GPGUtils
+ .invokeRMWebService(conf, sci.getRMWebServiceAddress(),
+ e.getValue(), e.getKey());
+ clusterInfo.get(sci.getSubClusterId()).put(e.getKey(), ret);
+ }
+ }
+
+ return clusterInfo;
+ }
+
+ /**
+ * Helper to retrieve SchedulerInfos.
+ *
+ * @param activeSubClusters a map of active SubCluster IDs to info
+ * @return a map from SubClusterId to that SubCluster's SchedulerInfo
+ */
+ @VisibleForTesting
+ protected Map<SubClusterId, SchedulerInfo> getSchedulerInfo(
+ Map<SubClusterId, SubClusterInfo> activeSubClusters) {
+ Map<SubClusterId, SchedulerInfo> schedInfo =
+ new HashMap<>();
+ for (SubClusterInfo sci : activeSubClusters.values()) {
+ SchedulerTypeInfo sti = GPGUtils
+ .invokeRMWebService(conf, sci.getRMWebServiceAddress(),
+ RMWSConsts.SCHEDULER, SchedulerTypeInfo.class);
+ if (sti != null) {
+ schedInfo.put(sci.getSubClusterId(), sti.getSchedulerInfo());
+ } else {
+ LOG.warn("Skipped null scheduler info from SubCluster {}",
+ sci.getSubClusterId());
+ }
+ }
+ return schedInfo;
+ }
+
+ /**
+ * Helper to get a set of blacklisted SubCluster Ids from configuration.
+ */
+ private Set<SubClusterId> getBlackList() {
+ String blackListParam =
+ conf.get(YarnConfiguration.GPG_POLICY_GENERATOR_BLACKLIST);
+ if (blackListParam == null) {
+ return Collections.emptySet();
+ }
+ Set<SubClusterId> blackList = new HashSet<>();
+ for (String id : blackListParam.split(",")) {
+ blackList.add(SubClusterId.newInstance(id));
+ }
+ return blackList;
+ }
+
+ /**
+ * Given the scheduler information for all RMs, extract the union of
+ * queue names; currently only capacity scheduler instances are considered.
+ *
+ * @param schedInfo the scheduler information
+ * @return a set of queue names
+ */
+ private Set<String> extractQueues(
+ Map<SubClusterId, SchedulerInfo> schedInfo) {
+ Set<String> queueNames = new HashSet<String>();
+ for (Map.Entry<SubClusterId, SchedulerInfo> entry : schedInfo.entrySet()) {
+ if (entry.getValue() instanceof CapacitySchedulerInfo) {
+ // Flatten the queue structure and get only non leaf queues
+ queueNames.addAll(flattenQueue((CapacitySchedulerInfo) entry.getValue())
+ .get(CapacitySchedulerQueueInfo.class));
+ } else {
+ LOG.warn("Skipping SubCluster {}, not configured with capacity "
+ + "scheduler", entry.getKey());
+ }
+ }
+ return queueNames;
+ }
+
+ // Helpers to flatten the queue structure into a multimap of
+ // queue type to set of queue names
+ private Map<Class, Set<String>> flattenQueue(CapacitySchedulerInfo csi) {
+ Map<Class, Set<String>> flattened = new HashMap<Class, Set<String>>();
+ addOrAppend(flattened, csi.getClass(), csi.getQueueName());
+ for (CapacitySchedulerQueueInfo csqi : csi.getQueues().getQueueInfoList()) {
+ flattenQueue(csqi, flattened);
+ }
+ return flattened;
+ }
+
+ private void flattenQueue(CapacitySchedulerQueueInfo csi,
+ Map<Class, Set<String>> flattened) {
+ addOrAppend(flattened, csi.getClass(), csi.getQueueName());
+ if (csi.getQueues() != null) {
+ for (CapacitySchedulerQueueInfo csqi : csi.getQueues()
+ .getQueueInfoList()) {
+ flattenQueue(csqi, flattened);
+ }
+ }
+ }
+
+ private <K, V> void addOrAppend(Map<K, Set<V>> multimap, K key, V value) {
+ if (!multimap.containsKey(key)) {
+ multimap.put(key, new HashSet<V>());
+ }
+ multimap.get(key).add(value);
+ }
+
+}
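
As a configuration sketch, the blacklist read by getBlackList above is a
comma-separated list of SubCluster ids, the same form the tests below use;
the yarn-site key behind YarnConfiguration.GPG_POLICY_GENERATOR_BLACKLIST is
not shown in this excerpt:

  // Exclude two SubClusters from load evaluation and policy updates.
  Configuration conf = new YarnConfiguration();
  conf.set(YarnConfiguration.GPG_POLICY_GENERATOR_BLACKLIST, "sc0,sc1");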
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/UniformWeightedLocalityGlobalPolicy.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/UniformWeightedLocalityGlobalPolicy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/UniformWeightedLocalityGlobalPolicy.java
new file mode 100644
index 0000000..826cb02
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/UniformWeightedLocalityGlobalPolicy.java
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.yarn.server.globalpolicygenerator.policygenerator;
+
+import org.apache.hadoop.yarn.server.federation.policies.manager.FederationPolicyManager;
+import org.apache.hadoop.yarn.server.federation.policies.manager.WeightedLocalityPolicyManager;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterIdInfo;
+import org.apache.hadoop.yarn.server.globalpolicygenerator.GPGUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Map;
+
+/**
+ * Simple policy that generates and updates uniform weighted locality
+ * policies.
+ */
+public class UniformWeightedLocalityGlobalPolicy extends GlobalPolicy {
+
+ private static final Logger LOG =
+ LoggerFactory.getLogger(UniformWeightedLocalityGlobalPolicy.class);
+
+ @Override
+ protected FederationPolicyManager updatePolicy(String queueName,
+ Map<SubClusterId, Map<Class, Object>> clusterInfo,
+ FederationPolicyManager currentManager) {
+ if (currentManager == null) {
+ // Set uniform weights for all SubClusters
+ LOG.info("Creating uniform weighted policy queue {}", queueName);
+ WeightedLocalityPolicyManager manager =
+ new WeightedLocalityPolicyManager();
+ manager.setQueue(queueName);
+ Map<SubClusterIdInfo, Float> policyWeights =
+ GPGUtils.createUniformWeights(clusterInfo.keySet());
+ manager.getWeightedPolicyInfo().setAMRMPolicyWeights(policyWeights);
+ manager.getWeightedPolicyInfo().setRouterPolicyWeights(policyWeights);
+ currentManager = manager;
+ }
+ if (currentManager instanceof WeightedLocalityPolicyManager) {
+ LOG.info("Updating policy for queue {} to default weights", queueName);
+ WeightedLocalityPolicyManager wlpmanager =
+ (WeightedLocalityPolicyManager) currentManager;
+ wlpmanager.getWeightedPolicyInfo().setAMRMPolicyWeights(
+ GPGUtils.createUniformWeights(clusterInfo.keySet()));
+ wlpmanager.getWeightedPolicyInfo().setRouterPolicyWeights(
+ GPGUtils.createUniformWeights(clusterInfo.keySet()));
+ } else {
+ LOG.warn("Policy for queue {} is of type {}, expected {}",
+ queueName, currentManager.getClass(),
+ WeightedLocalityPolicyManager.class);
+ }
+ return currentManager;
+ }
+
+}
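
GPGUtils.createUniformWeights is not included in this excerpt; a plausible
sketch of the uniform weighting it is expected to produce (an equal share per
active SubCluster, summing to 1.0) is:

  // Sketch only - the real GPGUtils.createUniformWeights may differ.
  public static Map<SubClusterIdInfo, Float> createUniformWeights(
      Set<SubClusterId> ids) {
    Map<SubClusterIdInfo, Float> weights = new HashMap<>();
    for (SubClusterId id : ids) {
      weights.put(new SubClusterIdInfo(id), 1.0f / ids.size());
    }
    return weights;
  }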
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/package-info.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/package-info.java
new file mode 100644
index 0000000..e8ff436
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/package-info.java
@@ -0,0 +1,24 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * Classes comprising the policy generator for the GPG. Responsibilities include
+ * generating and updating policies based on the cluster status.
+ */
+
+package org.apache.hadoop.yarn.server.globalpolicygenerator.policygenerator;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/TestGPGPolicyFacade.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/TestGPGPolicyFacade.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/TestGPGPolicyFacade.java
new file mode 100644
index 0000000..d78c11f
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/TestGPGPolicyFacade.java
@@ -0,0 +1,202 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.globalpolicygenerator;
+
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.federation.policies.manager.FederationPolicyManager;
+import org.apache.hadoop.yarn.server.federation.policies.manager.WeightedLocalityPolicyManager;
+import org.apache.hadoop.yarn.server.federation.store.FederationStateStore;
+import org.apache.hadoop.yarn.server.federation.store.impl.MemoryFederationStateStore;
+import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPolicyConfigurationRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPolicyConfigurationResponse;
+import org.apache.hadoop.yarn.server.federation.store.records.SetSubClusterPolicyConfigurationRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterPolicyConfiguration;
+import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Matchers;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+
+/**
+ * Unit test for GPG Policy Facade.
+ */
+public class TestGPGPolicyFacade {
+
+ private Configuration conf;
+ private FederationStateStore stateStore;
+ private FederationStateStoreFacade facade =
+ FederationStateStoreFacade.getInstance();
+ private GPGPolicyFacade policyFacade;
+
+ private Set<SubClusterId> subClusterIds;
+
+ private SubClusterPolicyConfiguration testConf;
+
+ private static final String TEST_QUEUE = "test-queue";
+
+ public TestGPGPolicyFacade() {
+ conf = new Configuration();
+ conf.setInt(YarnConfiguration.FEDERATION_CACHE_TIME_TO_LIVE_SECS, 0);
+ subClusterIds = new HashSet<>();
+ subClusterIds.add(SubClusterId.newInstance("sc0"));
+ subClusterIds.add(SubClusterId.newInstance("sc1"));
+ subClusterIds.add(SubClusterId.newInstance("sc2"));
+ }
+
+ @Before
+ public void setUp() throws IOException, YarnException {
+ stateStore = new MemoryFederationStateStore();
+ stateStore.init(conf);
+ facade.reinitialize(stateStore, conf);
+ policyFacade = new GPGPolicyFacade(facade, conf);
+ WeightedLocalityPolicyManager manager =
+ new WeightedLocalityPolicyManager();
+ // Add a test policy for test queue
+ manager.setQueue(TEST_QUEUE);
+ manager.getWeightedPolicyInfo().setAMRMPolicyWeights(
+ GPGUtils.createUniformWeights(subClusterIds));
+ manager.getWeightedPolicyInfo().setRouterPolicyWeights(
+ GPGUtils.createUniformWeights(subClusterIds));
+ testConf = manager.serializeConf();
+ stateStore.setPolicyConfiguration(SetSubClusterPolicyConfigurationRequest
+ .newInstance(testConf));
+ }
+
+ @After
+ public void tearDown() throws Exception {
+ stateStore.close();
+ stateStore = null;
+ }
+
+ @Test
+ public void testGetPolicy() throws YarnException {
+ WeightedLocalityPolicyManager manager =
+ (WeightedLocalityPolicyManager) policyFacade
+ .getPolicyManager(TEST_QUEUE);
+ Assert.assertEquals(testConf, manager.serializeConf());
+ }
+
+ /**
+ * Test that new policies are written into the state store.
+ */
+ @Test
+ public void testSetNewPolicy() throws YarnException {
+ WeightedLocalityPolicyManager manager =
+ new WeightedLocalityPolicyManager();
+ manager.setQueue(TEST_QUEUE + 0);
+ manager.getWeightedPolicyInfo().setAMRMPolicyWeights(
+ GPGUtils.createUniformWeights(subClusterIds));
+ manager.getWeightedPolicyInfo().setRouterPolicyWeights(
+ GPGUtils.createUniformWeights(subClusterIds));
+ SubClusterPolicyConfiguration policyConf = manager.serializeConf();
+ policyFacade.setPolicyManager(manager);
+
+ manager =
+ (WeightedLocalityPolicyManager) policyFacade
+ .getPolicyManager(TEST_QUEUE + 0);
+ Assert.assertEquals(policyConf, manager.serializeConf());
+ }
+
+ /**
+ * Test that overwriting policies are updated in the state store.
+ */
+ @Test
+ public void testOverwritePolicy() throws YarnException {
+ subClusterIds.add(SubClusterId.newInstance("sc3"));
+ WeightedLocalityPolicyManager manager =
+ new WeightedLocalityPolicyManager();
+ manager.setQueue(TEST_QUEUE);
+ manager.getWeightedPolicyInfo().setAMRMPolicyWeights(
+ GPGUtils.createUniformWeights(subClusterIds));
+ manager.getWeightedPolicyInfo().setRouterPolicyWeights(
+ GPGUtils.createUniformWeights(subClusterIds));
+ SubClusterPolicyConfiguration policyConf = manager.serializeConf();
+ policyFacade.setPolicyManager(manager);
+
+ manager =
+ (WeightedLocalityPolicyManager) policyFacade
+ .getPolicyManager(TEST_QUEUE);
+ Assert.assertEquals(policyConf, manager.serializeConf());
+ }
+
+ /**
+ * Test that the write-through cache works.
+ */
+ @Test
+ public void testWriteCache() throws YarnException {
+ stateStore = mock(MemoryFederationStateStore.class);
+ facade.reinitialize(stateStore, conf);
+ when(stateStore.getPolicyConfiguration(Matchers.any(
+ GetSubClusterPolicyConfigurationRequest.class))).thenReturn(
+ GetSubClusterPolicyConfigurationResponse.newInstance(testConf));
+ policyFacade = new GPGPolicyFacade(facade, conf);
+
+ // Query once to fill the cache
+ FederationPolicyManager manager = policyFacade.getPolicyManager(TEST_QUEUE);
+ // State store should be contacted once
+ verify(stateStore, times(1)).getPolicyConfiguration(
+ Matchers.any(GetSubClusterPolicyConfigurationRequest.class));
+
+ // If we set the same policy, the state store should be untouched
+ policyFacade.setPolicyManager(manager);
+ verify(stateStore, times(0)).setPolicyConfiguration(
+ Matchers.any(SetSubClusterPolicyConfigurationRequest.class));
+ }
+
+ /**
+ * Test that when read-only is enabled, the state store is not changed.
+ */
+ @Test
+ public void testReadOnly() throws YarnException {
+ conf.setBoolean(YarnConfiguration.GPG_POLICY_GENERATOR_READONLY, true);
+ stateStore = mock(MemoryFederationStateStore.class);
+ facade.reinitialize(stateStore, conf);
+ when(stateStore.getPolicyConfiguration(Matchers.any(
+ GetSubClusterPolicyConfigurationRequest.class))).thenReturn(
+ GetSubClusterPolicyConfigurationResponse.newInstance(testConf));
+ policyFacade = new GPGPolicyFacade(facade, conf);
+
+ // If we set a policy, the state store should be untouched
+ WeightedLocalityPolicyManager manager =
+ new WeightedLocalityPolicyManager();
+ // Add a test policy for test queue
+ manager.setQueue(TEST_QUEUE);
+ manager.getWeightedPolicyInfo().setAMRMPolicyWeights(
+ GPGUtils.createUniformWeights(subClusterIds));
+ manager.getWeightedPolicyInfo().setRouterPolicyWeights(
+ GPGUtils.createUniformWeights(subClusterIds));
+ policyFacade.setPolicyManager(manager);
+ verify(stateStore, times(0)).setPolicyConfiguration(
+ Matchers.any(SetSubClusterPolicyConfigurationRequest.class));
+ }
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/TestPolicyGenerator.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/TestPolicyGenerator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/TestPolicyGenerator.java
new file mode 100644
index 0000000..9d27b3b
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/TestPolicyGenerator.java
@@ -0,0 +1,338 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.globalpolicygenerator.policygenerator;
+
+import com.sun.jersey.api.json.JSONConfiguration;
+import com.sun.jersey.api.json.JSONJAXBContext;
+import com.sun.jersey.api.json.JSONUnmarshaller;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.federation.policies.manager.FederationPolicyManager;
+import org.apache.hadoop.yarn.server.federation.policies.manager.WeightedLocalityPolicyManager;
+import org.apache.hadoop.yarn.server.federation.store.FederationStateStore;
+import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPolicyConfigurationRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPolicyConfigurationResponse;
+import org.apache.hadoop.yarn.server.federation.store.records.GetSubClustersInfoRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.GetSubClustersInfoResponse;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterPolicyConfiguration;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterState;
+import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade;
+import org.apache.hadoop.yarn.server.globalpolicygenerator.GPGContext;
+import org.apache.hadoop.yarn.server.globalpolicygenerator.GPGContextImpl;
+import org.apache.hadoop.yarn.server.globalpolicygenerator.GPGPolicyFacade;
+import org.apache.hadoop.yarn.server.globalpolicygenerator.GPGUtils;
+import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration;
+import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWSConsts;
+import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.CapacitySchedulerInfo;
+import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterMetricsInfo;
+import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerInfo;
+import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerTypeInfo;
+import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.ArgumentCaptor;
+
+import javax.xml.bind.JAXBException;
+import java.io.IOException;
+import java.io.StringReader;
+import java.nio.file.Files;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+
+import static org.junit.Assert.assertEquals;
+import static org.mockito.Matchers.any;
+import static org.mockito.Matchers.eq;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+/**
+ * Unit test for GPG Policy Generator.
+ */
+public class TestPolicyGenerator {
+
+ private static final int NUM_SC = 3;
+
+ private Configuration conf;
+ private FederationStateStore stateStore;
+ private FederationStateStoreFacade facade =
+ FederationStateStoreFacade.getInstance();
+
+ private List<SubClusterId> subClusterIds;
+ private Map<SubClusterId, SubClusterInfo> subClusterInfos;
+ private Map<SubClusterId, Map<Class, Object>> clusterInfos;
+ private Map<SubClusterId, SchedulerInfo> schedulerInfos;
+
+ private GPGContext gpgContext;
+
+ private PolicyGenerator policyGenerator;
+
+ public TestPolicyGenerator() {
+ conf = new Configuration();
+ conf.setInt(YarnConfiguration.FEDERATION_CACHE_TIME_TO_LIVE_SECS, 0);
+
+ gpgContext = new GPGContextImpl();
+ gpgContext.setPolicyFacade(new GPGPolicyFacade(facade, conf));
+ gpgContext.setStateStoreFacade(facade);
+ }
+
+ @Before
+ public void setUp() throws IOException, YarnException, JAXBException {
+ subClusterIds = new ArrayList<>();
+ subClusterInfos = new HashMap<>();
+ clusterInfos = new HashMap<>();
+ schedulerInfos = new HashMap<>();
+
+ CapacitySchedulerInfo sti1 =
+ readJSON("src/test/resources/schedulerInfo1.json",
+ CapacitySchedulerInfo.class);
+ CapacitySchedulerInfo sti2 =
+ readJSON("src/test/resources/schedulerInfo2.json",
+ CapacitySchedulerInfo.class);
+
+ // Set up sub clusters
+ for (int i = 0; i < NUM_SC; ++i) {
+ // Sub cluster Id
+ SubClusterId id = SubClusterId.newInstance("sc" + i);
+ subClusterIds.add(id);
+
+ // Sub cluster info
+ SubClusterInfo cluster = SubClusterInfo
+ .newInstance(id, "amrm:" + i, "clientrm:" + i, "rmadmin:" + i,
+ "rmweb:" + i, SubClusterState.SC_RUNNING, 0, "");
+ subClusterInfos.put(id, cluster);
+
+ // Cluster metrics info
+ ClusterMetricsInfo metricsInfo = new ClusterMetricsInfo();
+ metricsInfo.setAppsPending(2000);
+ if (!clusterInfos.containsKey(id)) {
+ clusterInfos.put(id, new HashMap<Class, Object>());
+ }
+ clusterInfos.get(id).put(ClusterMetricsInfo.class, metricsInfo);
+
+ schedulerInfos.put(id, sti1);
+ }
+
+ // Change one of the sub cluster schedulers
+ schedulerInfos.put(subClusterIds.get(0), sti2);
+
+ stateStore = mock(FederationStateStore.class);
+ when(stateStore.getSubClusters((GetSubClustersInfoRequest) any()))
+ .thenReturn(GetSubClustersInfoResponse.newInstance(
+ new ArrayList<SubClusterInfo>(subClusterInfos.values())));
+ facade.reinitialize(stateStore, conf);
+ }
+
+ @After
+ public void tearDown() throws Exception {
+ stateStore.close();
+ stateStore = null;
+ }
+
+ private <T> T readJSON(String pathname, Class<T> classy)
+ throws IOException, JAXBException {
+
+ JSONJAXBContext jc =
+ new JSONJAXBContext(JSONConfiguration.mapped().build(), classy);
+ JSONUnmarshaller unmarshaller = jc.createJSONUnmarshaller();
+ String contents = new String(Files.readAllBytes(Paths.get(pathname)));
+ return unmarshaller.unmarshalFromJSON(new StringReader(contents), classy);
+
+ }
+
+ @Test
+ public void testPolicyGenerator() throws YarnException {
+ policyGenerator = new TestablePolicyGenerator();
+ policyGenerator.policy = mock(GlobalPolicy.class);
+ policyGenerator.run();
+ verify(policyGenerator.policy, times(1))
+ .updatePolicy("default", clusterInfos, null);
+ verify(policyGenerator.policy, times(1))
+ .updatePolicy("default2", clusterInfos, null);
+ }
+
+ @Test
+ public void testBlacklist() throws YarnException {
+ conf.set(YarnConfiguration.GPG_POLICY_GENERATOR_BLACKLIST,
+ subClusterIds.get(0).toString());
+ Map<SubClusterId, Map<Class, Object>> blacklistedCMI =
+ new HashMap<>(clusterInfos);
+ blacklistedCMI.remove(subClusterIds.get(0));
+ policyGenerator = new TestablePolicyGenerator();
+ policyGenerator.policy = mock(GlobalPolicy.class);
+ policyGenerator.run();
+ verify(policyGenerator.policy, times(1))
+ .updatePolicy("default", blacklistedCMI, null);
+ verify(policyGenerator.policy, times(0))
+ .updatePolicy("default", clusterInfos, null);
+ }
+
+ @Test
+ public void testBlacklistTwo() throws YarnException {
+ conf.set(YarnConfiguration.GPG_POLICY_GENERATOR_BLACKLIST,
+ subClusterIds.get(0).toString() + "," + subClusterIds.get(1)
+ .toString());
+ Map<SubClusterId, Map<Class, Object>> blacklistedCMI =
+ new HashMap<>(clusterInfos);
+ blacklistedCMI.remove(subClusterIds.get(0));
+ blacklistedCMI.remove(subClusterIds.get(1));
+ policyGenerator = new TestablePolicyGenerator();
+ policyGenerator.policy = mock(GlobalPolicy.class);
+ policyGenerator.run();
+ verify(policyGenerator.policy, times(1))
+ .updatePolicy("default", blacklistedCMI, null);
+ verify(policyGenerator.policy, times(0))
+ .updatePolicy("default", clusterInfos, null);
+ }
+
+ @Test
+ public void testExistingPolicy() throws YarnException {
+ WeightedLocalityPolicyManager manager = new WeightedLocalityPolicyManager();
+ // Add a test policy for test queue
+ manager.setQueue("default");
+ manager.getWeightedPolicyInfo().setAMRMPolicyWeights(GPGUtils
+ .createUniformWeights(new HashSet<SubClusterId>(subClusterIds)));
+ manager.getWeightedPolicyInfo().setRouterPolicyWeights(GPGUtils
+ .createUniformWeights(new HashSet<SubClusterId>(subClusterIds)));
+ SubClusterPolicyConfiguration testConf = manager.serializeConf();
+ when(stateStore.getPolicyConfiguration(
+ GetSubClusterPolicyConfigurationRequest.newInstance("default")))
+ .thenReturn(
+ GetSubClusterPolicyConfigurationResponse.newInstance(testConf));
+
+ policyGenerator = new TestablePolicyGenerator();
+ policyGenerator.policy = mock(GlobalPolicy.class);
+ policyGenerator.run();
+
+ ArgumentCaptor<FederationPolicyManager> argCaptor =
+ ArgumentCaptor.forClass(FederationPolicyManager.class);
+ verify(policyGenerator.policy, times(1))
+ .updatePolicy(eq("default"), eq(clusterInfos), argCaptor.capture());
+ assertEquals(argCaptor.getValue().getClass(), manager.getClass());
+ assertEquals(argCaptor.getValue().serializeConf(), manager.serializeConf());
+ }
+
+ @Test
+ public void testCallRM() {
+
+ CapacitySchedulerConfiguration csConf =
+ new CapacitySchedulerConfiguration();
+
+ final String a = CapacitySchedulerConfiguration.ROOT + ".a";
+ final String b = CapacitySchedulerConfiguration.ROOT + ".b";
+ final String a1 = a + ".a1";
+ final String a2 = a + ".a2";
+ final String b1 = b + ".b1";
+ final String b2 = b + ".b2";
+ final String b3 = b + ".b3";
+ float aCapacity = 10.5f;
+ float bCapacity = 89.5f;
+ float a1Capacity = 30;
+ float a2Capacity = 70;
+ float b1Capacity = 79.2f;
+ float b2Capacity = 0.8f;
+ float b3Capacity = 20;
+
+ // Define top-level queues
+ csConf.setQueues(CapacitySchedulerConfiguration.ROOT,
+ new String[] {"a", "b"});
+
+ csConf.setCapacity(a, aCapacity);
+ csConf.setCapacity(b, bCapacity);
+
+ // Define 2nd-level queues
+ csConf.setQueues(a, new String[] {"a1", "a2"});
+ csConf.setCapacity(a1, a1Capacity);
+ csConf.setUserLimitFactor(a1, 100.0f);
+ csConf.setCapacity(a2, a2Capacity);
+ csConf.setUserLimitFactor(a2, 100.0f);
+
+ csConf.setQueues(b, new String[] {"b1", "b2", "b3"});
+ csConf.setCapacity(b1, b1Capacity);
+ csConf.setUserLimitFactor(b1, 100.0f);
+ csConf.setCapacity(b2, b2Capacity);
+ csConf.setUserLimitFactor(b2, 100.0f);
+ csConf.setCapacity(b3, b3Capacity);
+ csConf.setUserLimitFactor(b3, 100.0f);
+
+ YarnConfiguration rmConf = new YarnConfiguration(csConf);
+
+ ResourceManager resourceManager = new ResourceManager();
+ rmConf.setClass(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class,
+ ResourceScheduler.class);
+ resourceManager.init(rmConf);
+ resourceManager.start();
+
+ String rmAddress = WebAppUtils.getRMWebAppURLWithScheme(this.conf);
+ SchedulerTypeInfo sti = GPGUtils
+ .invokeRMWebService(conf, rmAddress, RMWSConsts.SCHEDULER,
+ SchedulerTypeInfo.class);
+
+ Assert.assertNotNull(sti);
+ }
+
+ /**
+ * Testable policy generator overrides the methods that communicate
+ * with the RM REST endpoint, allowing us to inject faked responses.
+ */
+ class TestablePolicyGenerator extends PolicyGenerator {
+
+ TestablePolicyGenerator() {
+ super(conf, gpgContext);
+ }
+
+ @Override
+ protected Map<SubClusterId, Map<Class, Object>> getInfos(
+ Map<SubClusterId, SubClusterInfo> activeSubClusters) {
+ Map<SubClusterId, Map<Class, Object>> ret = new HashMap<>();
+ for (SubClusterId id : activeSubClusters.keySet()) {
+ if (!ret.containsKey(id)) {
+ ret.put(id, new HashMap<Class, Object>());
+ }
+ ret.get(id).put(ClusterMetricsInfo.class,
+ clusterInfos.get(id).get(ClusterMetricsInfo.class));
+ }
+ return ret;
+ }
+
+ @Override
+ protected Map<SubClusterId, SchedulerInfo> getSchedulerInfo(
+ Map<SubClusterId, SubClusterInfo> activeSubClusters) {
+ Map<SubClusterId, SchedulerInfo> ret =
+ new HashMap<SubClusterId, SchedulerInfo>();
+ for (SubClusterId id : activeSubClusters.keySet()) {
+ ret.put(id, schedulerInfos.get(id));
+ }
+ return ret;
+ }
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0bbe70ce/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/resources/schedulerInfo1.json
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/resources/schedulerInfo1.json b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/resources/schedulerInfo1.json
new file mode 100644
index 0000000..3ad4594
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/resources/schedulerInfo1.json
@@ -0,0 +1,134 @@
+{
+ "capacity": 100.0,
+ "usedCapacity": 0.0,
+ "maxCapacity": 100.0,
+ "queueName": "root",
+ "queues": {
+ "queue": [
+ {
+ "type": "capacitySchedulerLeafQueueInfo",
+ "capacity": 100.0,
+ "usedCapacity": 0.0,
+ "maxCapacity": 100.0,
+ "absoluteCapacity": 100.0,
+ "absoluteMaxCapacity": 100.0,
+ "absoluteUsedCapacity": 0.0,
+ "numApplications": 484,
+ "queueName": "default",
+ "state": "RUNNING",
+ "resourcesUsed": {
+ "memory": 0,
+ "vCores": 0
+ },
+ "hideReservationQueues": false,
+ "nodeLabels": [
+ "*"
+ ],
+ "numActiveApplications": 484,
+ "numPendingApplications": 0,
+ "numContainers": 0,
+ "maxApplications": 10000,
+ "maxApplicationsPerUser": 10000,
+ "userLimit": 100,
+ "users": {
+ "user": [
+ {
+ "username": "Default",
+ "resourcesUsed": {
+ "memory": 0,
+ "vCores": 0
+ },
+ "numPendingApplications": 0,
+ "numActiveApplications": 468,
+ "AMResourceUsed": {
+ "memory": 30191616,
+ "vCores": 468
+ },
+ "userResourceLimit": {
+ "memory": 31490048,
+ "vCores": 7612
+ }
+ }
+ ]
+ },
+ "userLimitFactor": 1.0,
+ "AMResourceLimit": {
+ "memory": 31490048,
+ "vCores": 7612
+ },
+ "usedAMResource": {
+ "memory": 30388224,
+ "vCores": 532
+ },
+ "userAMResourceLimit": {
+ "memory": 31490048,
+ "vCores": 7612
+ },
+ "preemptionDisabled": true
+ }
+ ]
+ },
+ "health": {
+ "lastrun": 1517951638085,
+ "operationsInfo": {
+ "entry": {
+ "key": "last-allocation",
+ "value": {
+ "nodeId": "node0:0",
+ "containerId": "container_e61477_1517922128312_0340_01_000001",
+ "queue": "root.default"
+ }
+ },
+ "entry": {
+ "key": "last-reservation",
+ "value": {
+ "nodeId": "node0:1",
+ "containerId": "container_e61477_1517879828320_0249_01_000001",
+ "queue": "root.default"
+ }
+ },
+ "entry": {
+ "key": "last-release",
+ "value": {
+ "nodeId": "node0:2",
+ "containerId": "container_e61477_1517922128312_0340_01_000001",
+ "queue": "root.default"
+ }
+ },
+ "entry": {
+ "key": "last-preemption",
+ "value": {
+ "nodeId": "N/A",
+ "containerId": "N/A",
+ "queue": "N/A"
+ }
+ }
+ },
+ "lastRunDetails": [
+ {
+ "operation": "releases",
+ "count": 0,
+ "resources": {
+ "memory": 0,
+ "vCores": 0
+ }
+ },
+ {
+ "operation": "allocations",
+ "count": 0,
+ "resources": {
+ "memory": 0,
+ "vCores": 0
+ }
+ },
+ {
+ "operation": "reservations",
+ "count": 0,
+ "resources": {
+ "memory": 0,
+ "vCores": 0
+ }
+ }
+ ]
+ }
+}
\ No newline at end of file
[30/50] [abbrv] hadoop git commit: HDDS-242. Introduce NEW_NODE,
STALE_NODE and DEAD_NODE event and corresponding event handlers in
SCM. Contributed by Nanda Kumar.
Posted by bo...@apache.org.
HDDS-242. Introduce NEW_NODE, STALE_NODE and DEAD_NODE event
and corresponding event handlers in SCM.
Contributed by Nanda Kumar.
Recommitting after making sure that patch is clean.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/632aca57
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/632aca57
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/632aca57
Branch: refs/heads/YARN-7402
Commit: 632aca5793d391c741c0bce3d2e70ae6e03fe306
Parents: b567858
Author: Anu Engineer <ae...@apache.org>
Authored: Wed Jul 11 12:08:50 2018 -0700
Committer: Anu Engineer <ae...@apache.org>
Committed: Wed Jul 11 12:08:50 2018 -0700
----------------------------------------------------------------------
.../container/CloseContainerEventHandler.java | 7 ++-
.../hdds/scm/container/ContainerMapping.java | 5 --
.../scm/container/ContainerReportHandler.java | 47 ++++++++++++++++++
.../hadoop/hdds/scm/container/Mapping.java | 9 +---
.../scm/container/closer/ContainerCloser.java | 1 -
.../hadoop/hdds/scm/events/SCMEvents.java | 22 +++++++++
.../hadoop/hdds/scm/node/DatanodeInfo.java | 11 +++++
.../hadoop/hdds/scm/node/DeadNodeHandler.java | 42 ++++++++++++++++
.../hadoop/hdds/scm/node/NewNodeHandler.java | 50 +++++++++++++++++++
.../hadoop/hdds/scm/node/NodeManager.java | 4 +-
.../hadoop/hdds/scm/node/NodeReportHandler.java | 42 ++++++++++++++++
.../hadoop/hdds/scm/node/NodeStateManager.java | 32 +++++++++++-
.../hadoop/hdds/scm/node/SCMNodeManager.java | 24 ++++++---
.../hadoop/hdds/scm/node/StaleNodeHandler.java | 42 ++++++++++++++++
.../server/SCMDatanodeHeartbeatDispatcher.java | 20 ++++++--
.../scm/server/SCMDatanodeProtocolServer.java | 18 ++-----
.../scm/server/StorageContainerManager.java | 51 +++++++++++++++-----
.../hdds/scm/container/MockNodeManager.java | 9 ++++
.../TestCloseContainerEventHandler.java | 2 +
.../hdds/scm/node/TestContainerPlacement.java | 12 ++++-
.../hadoop/hdds/scm/node/TestNodeManager.java | 11 ++++-
.../TestSCMDatanodeHeartbeatDispatcher.java | 8 ++-
.../testutils/ReplicationNodeManagerMock.java | 7 +++
23 files changed, 417 insertions(+), 59 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/CloseContainerEventHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/CloseContainerEventHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/CloseContainerEventHandler.java
index f1053d5..859e5d5 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/CloseContainerEventHandler.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/CloseContainerEventHandler.java
@@ -25,9 +25,12 @@ import org.apache.hadoop.hdds.scm.exceptions.SCMException;
import org.apache.hadoop.hdds.server.events.EventHandler;
import org.apache.hadoop.hdds.server.events.EventPublisher;
import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
+import static org.apache.hadoop.hdds.scm.events.SCMEvents.DATANODE_COMMAND;
+
/**
* In case of a node failure, volume failure, volume out of space, node
* out of space etc., CLOSE_CONTAINER will be triggered.
@@ -73,9 +76,11 @@ public class CloseContainerEventHandler implements EventHandler<ContainerID> {
if (info.getState() == HddsProtos.LifeCycleState.OPEN) {
for (DatanodeDetails datanode :
containerWithPipeline.getPipeline().getMachines()) {
- containerManager.getNodeManager().addDatanodeCommand(datanode.getUuid(),
+ CommandForDatanode closeContainerCommand = new CommandForDatanode<>(
+ datanode.getUuid(),
new CloseContainerCommand(containerID.getId(),
info.getReplicationType()));
+ publisher.fireEvent(DATANODE_COMMAND, closeContainerCommand);
}
try {
// Finalize event will make sure the state of the container transitions
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
index e25c5b4..abad32c 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
@@ -709,11 +709,6 @@ public class ContainerMapping implements Mapping {
}
}
- @Override
- public NodeManager getNodeManager() {
- return nodeManager;
- }
-
@VisibleForTesting
public MetadataStore getContainerStore() {
return containerStore;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
new file mode 100644
index 0000000..486162e
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
@@ -0,0 +1,47 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.container;
+
+import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
+ .ContainerReportFromDatanode;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+/**
+ * Handles container reports from datanode.
+ */
+public class ContainerReportHandler implements
+ EventHandler<ContainerReportFromDatanode> {
+
+ private final Mapping containerMapping;
+ private final Node2ContainerMap node2ContainerMap;
+
+ public ContainerReportHandler(Mapping containerMapping,
+ Node2ContainerMap node2ContainerMap) {
+ this.containerMapping = containerMapping;
+ this.node2ContainerMap = node2ContainerMap;
+ }
+
+ @Override
+ public void onMessage(ContainerReportFromDatanode containerReportFromDatanode,
+ EventPublisher publisher) {
+ // TODO: process container report.
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java
index f52eb05..ac84be4 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java
@@ -25,7 +25,6 @@ import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
-import org.apache.hadoop.hdds.scm.node.NodeManager;
import java.io.Closeable;
import java.io.IOException;
@@ -130,16 +129,10 @@ public interface Mapping extends Closeable {
throws IOException;
/**
- * Returns the nodeManager.
- * @return NodeManager
- */
- NodeManager getNodeManager();
-
- /**
* Returns the ContainerWithPipeline.
* @return NodeManager
*/
- public ContainerWithPipeline getMatchingContainerWithPipeline(final long size,
+ ContainerWithPipeline getMatchingContainerWithPipeline(long size,
String owner, ReplicationType type, ReplicationFactor factor,
LifeCycleState state) throws IOException;
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/closer/ContainerCloser.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/closer/ContainerCloser.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/closer/ContainerCloser.java
index 3ca8ba9..eb591be 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/closer/ContainerCloser.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/closer/ContainerCloser.java
@@ -26,7 +26,6 @@ import org.apache.hadoop.hdds.protocol.proto.HddsProtos.SCMContainerInfo;
import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
import org.apache.hadoop.hdds.scm.node.NodeManager;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
import org.apache.hadoop.util.Time;
import org.slf4j.Logger;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
index 2c9c431..0afd675 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
@@ -19,6 +19,7 @@
package org.apache.hadoop.hdds.scm.events;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.container.ContainerID;
import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.ContainerReportFromDatanode;
import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.NodeReportFromDatanode;
@@ -72,6 +73,27 @@ public final class SCMEvents {
new TypedEvent<>(ContainerID.class, "Close_Container");
/**
+ * This event will be triggered whenever a new datanode is
+ * registered with SCM.
+ */
+ public static final TypedEvent<DatanodeDetails> NEW_NODE =
+ new TypedEvent<>(DatanodeDetails.class, "New_Node");
+
+ /**
+ * This event will be triggered whenever a datanode is moved from healthy to
+ * stale state.
+ */
+ public static final TypedEvent<DatanodeDetails> STALE_NODE =
+ new TypedEvent<>(DatanodeDetails.class, "Stale_Node");
+
+ /**
+ * This event will be triggered whenever a datanode is moved from stale to
+ * dead state.
+ */
+ public static final TypedEvent<DatanodeDetails> DEAD_NODE =
+ new TypedEvent<>(DatanodeDetails.class, "Dead_Node");
+
+ /**
* Private Ctor. Never Constructed.
*/
private SCMEvents() {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
index 51465ee..6d5575b 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
@@ -106,4 +106,15 @@ public class DatanodeInfo extends DatanodeDetails {
lock.readLock().unlock();
}
}
+
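+ // Identity semantics are intentionally those of DatanodeDetails:
+ // DatanodeInfo only adds mutable liveness state, which must not
+ // affect equality or hashing.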
+ @Override
+ public int hashCode() {
+ return super.hashCode();
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ return super.equals(obj);
+ }
+
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
new file mode 100644
index 0000000..427aef8
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
@@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+/**
+ * Handles Dead Node event.
+ */
+public class DeadNodeHandler implements EventHandler<DatanodeDetails> {
+
+ private final Node2ContainerMap node2ContainerMap;
+
+ public DeadNodeHandler(Node2ContainerMap node2ContainerMap) {
+ this.node2ContainerMap = node2ContainerMap;
+ }
+
+ @Override
+ public void onMessage(DatanodeDetails datanodeDetails,
+ EventPublisher publisher) {
+ //TODO: add logic to handle dead node.
+ }
+}
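Wiring-wise, the handler above plugs into the queue as in this minimal sketch, which uses only calls visible in this patch series (addHandler, fireEvent, and the processAll drain the tests rely on); the actual dead-node logic is still the TODO above.

import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.events.SCMEvents;
import org.apache.hadoop.hdds.scm.node.DeadNodeHandler;
import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
import org.apache.hadoop.hdds.server.events.EventQueue;

public final class DeadNodeWiringSketch {

  public static void demo(DatanodeDetails deadDatanode) {
    EventQueue queue = new EventQueue();
    // Subscribe the handler; the queue delivers onMessage calls from a
    // dedicated single-thread executor.
    queue.addHandler(SCMEvents.DEAD_NODE,
        new DeadNodeHandler(new Node2ContainerMap()));
    queue.fireEvent(SCMEvents.DEAD_NODE, deadDatanode);
    // Drain pending events synchronously, as the unit tests do.
    queue.processAll(1000);
  }

  private DeadNodeWiringSketch() {
  }
}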
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
new file mode 100644
index 0000000..79b75a5
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
@@ -0,0 +1,50 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+import java.util.Collections;
+
+/**
+ * Handles New Node event.
+ */
+public class NewNodeHandler implements EventHandler<DatanodeDetails> {
+
+ private final Node2ContainerMap node2ContainerMap;
+
+ public NewNodeHandler(Node2ContainerMap node2ContainerMap) {
+ this.node2ContainerMap = node2ContainerMap;
+ }
+
+ @Override
+ public void onMessage(DatanodeDetails datanodeDetails,
+ EventPublisher publisher) {
+ try {
+ node2ContainerMap.insertNewDatanode(datanodeDetails.getUuid(),
+ Collections.emptySet());
+ } catch (SCMException e) {
+ // TODO: log exception message.
+ }
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
index c13c37c..5e2969d 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
@@ -22,7 +22,9 @@ import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeStat;
import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
+import org.apache.hadoop.hdds.server.events.EventHandler;
import org.apache.hadoop.ozone.protocol.StorageContainerNodeProtocol;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
import java.io.Closeable;
@@ -53,7 +55,7 @@ import java.util.UUID;
* list, by calling removeNode. We will throw away this node's info soon.
*/
public interface NodeManager extends StorageContainerNodeProtocol,
- NodeManagerMXBean, Closeable {
+ EventHandler<CommandForDatanode>, NodeManagerMXBean, Closeable {
/**
* Removes a data node from the management of this Node Manager.
*
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
new file mode 100644
index 0000000..aa78d53
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
@@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
+ .NodeReportFromDatanode;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+/**
+ * Handles Node Reports from datanode.
+ */
+public class NodeReportHandler implements EventHandler<NodeReportFromDatanode> {
+
+ private final NodeManager nodeManager;
+
+ public NodeReportHandler(NodeManager nodeManager) {
+ this.nodeManager = nodeManager;
+ }
+
+ @Override
+ public void onMessage(NodeReportFromDatanode nodeReportFromDatanode,
+ EventPublisher publisher) {
+ //TODO: process node report.
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
index 5543c04..77f939e 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
@@ -24,9 +24,12 @@ import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
import org.apache.hadoop.hdds.scm.HddsServerUtil;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
import org.apache.hadoop.hdds.scm.node.states.NodeAlreadyExistsException;
import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
import org.apache.hadoop.hdds.scm.node.states.NodeStateMap;
+import org.apache.hadoop.hdds.server.events.Event;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
import org.apache.hadoop.ozone.common.statemachine
.InvalidStateTransitionException;
import org.apache.hadoop.ozone.common.statemachine.StateMachine;
@@ -36,9 +39,11 @@ import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.Closeable;
+import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
+import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ScheduledExecutorService;
@@ -87,6 +92,14 @@ public class NodeStateManager implements Runnable, Closeable {
*/
private final NodeStateMap nodeStateMap;
/**
+ * Used for publishing node state change events.
+ */
+ private final EventPublisher eventPublisher;
+ /**
+ * Maps the event to be triggered when a node state is updated.
+ */
+ private final Map<NodeState, Event<DatanodeDetails>> state2EventMap;
+ /**
* ExecutorService used for scheduling heartbeat processing thread.
*/
private final ScheduledExecutorService executorService;
@@ -108,8 +121,11 @@ public class NodeStateManager implements Runnable, Closeable {
*
* @param conf Configuration
*/
- public NodeStateManager(Configuration conf) {
- nodeStateMap = new NodeStateMap();
+ public NodeStateManager(Configuration conf, EventPublisher eventPublisher) {
+ this.nodeStateMap = new NodeStateMap();
+ this.eventPublisher = eventPublisher;
+ this.state2EventMap = new HashMap<>();
+ initialiseState2EventMap();
Set<NodeState> finalStates = new HashSet<>();
finalStates.add(NodeState.DECOMMISSIONED);
this.stateMachine = new StateMachine<>(NodeState.HEALTHY, finalStates);
@@ -130,6 +146,14 @@ public class NodeStateManager implements Runnable, Closeable {
TimeUnit.MILLISECONDS);
}
+ /**
+ * Populates state2event map.
+ */
+ private void initialiseState2EventMap() {
+ state2EventMap.put(NodeState.STALE, SCMEvents.STALE_NODE);
+ state2EventMap.put(NodeState.DEAD, SCMEvents.DEAD_NODE);
+ }
+
/*
*
* Node and State Transition Mapping:
@@ -220,6 +244,7 @@ public class NodeStateManager implements Runnable, Closeable {
public void addNode(DatanodeDetails datanodeDetails)
throws NodeAlreadyExistsException {
nodeStateMap.addNode(datanodeDetails, stateMachine.getInitialState());
+ eventPublisher.fireEvent(SCMEvents.NEW_NODE, datanodeDetails);
}
/**
@@ -548,6 +573,9 @@ public class NodeStateManager implements Runnable, Closeable {
if (condition.test(node.getLastHeartbeatTime())) {
NodeState newState = stateMachine.getNextState(state, lifeCycleEvent);
nodeStateMap.updateNodeState(node.getUuid(), state, newState);
+ if (state2EventMap.containsKey(newState)) {
+ eventPublisher.fireEvent(state2EventMap.get(newState), node);
+ }
}
} catch (InvalidStateTransitionException e) {
LOG.warn("Invalid state transition of node {}." +
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
index d787d14..2ba8067 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
@@ -25,7 +25,6 @@ import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
import org.apache.hadoop.hdds.scm.VersionInfo;
import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeStat;
-import org.apache.hadoop.hdds.server.events.EventHandler;
import org.apache.hadoop.hdds.server.events.EventPublisher;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
@@ -78,8 +77,7 @@ import java.util.concurrent.atomic.AtomicBoolean;
* as soon as you read it.
*/
public class SCMNodeManager
- implements NodeManager, StorageContainerNodeProtocol,
- EventHandler<CommandForDatanode> {
+ implements NodeManager, StorageContainerNodeProtocol {
@VisibleForTesting
static final Logger LOG =
@@ -117,14 +115,13 @@ public class SCMNodeManager
// Node pool manager.
private final StorageContainerManager scmManager;
-
-
/**
* Constructs the SCM node manager.
*/
public SCMNodeManager(OzoneConfiguration conf, String clusterID,
- StorageContainerManager scmManager) throws IOException {
- this.nodeStateManager = new NodeStateManager(conf);
+ StorageContainerManager scmManager, EventPublisher eventPublisher)
+ throws IOException {
+ this.nodeStateManager = new NodeStateManager(conf, eventPublisher);
this.nodeStats = new ConcurrentHashMap<>();
this.scmStat = new SCMNodeStat();
this.clusterID = clusterID;
@@ -462,14 +459,25 @@ public class SCMNodeManager
return nodeCountMap;
}
+ // TODO:
+ // Since datanode commands are added through event queue, onMessage method
+ // should take care of adding commands to command queue.
+ // Refactor and remove all the usage of this method and delete this method.
@Override
public void addDatanodeCommand(UUID dnId, SCMCommand command) {
this.commandQueue.addCommand(dnId, command);
}
+ /**
+ * This method is called by EventQueue whenever someone adds a new
+ * DATANODE_COMMAND to the Queue.
+ *
+ * @param commandForDatanode DatanodeCommand
+ * @param ignored publisher
+ */
@Override
public void onMessage(CommandForDatanode commandForDatanode,
- EventPublisher publisher) {
+ EventPublisher ignored) {
addDatanodeCommand(commandForDatanode.getDatanodeId(),
commandForDatanode.getCommand());
}
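The command path this refactor sets up, in one sketch: commands are published as DATANODE_COMMAND events, and the node manager's onMessage moves them into its command queue. The CommandForDatanode constructor used below is assumed for illustration only; this patch shows just getDatanodeId() and getCommand().

import java.util.UUID;
import org.apache.hadoop.hdds.scm.events.SCMEvents;
import org.apache.hadoop.hdds.scm.node.NodeManager;
import org.apache.hadoop.hdds.server.events.EventQueue;
import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
import org.apache.hadoop.ozone.protocol.commands.SCMCommand;

public final class CommandPathSketch {

  public static void send(EventQueue queue, NodeManager nodeManager,
      UUID datanodeId, SCMCommand command) {
    // NodeManager now extends EventHandler<CommandForDatanode>, so it
    // subscribes directly.
    queue.addHandler(SCMEvents.DATANODE_COMMAND, nodeManager);
    // Assumed (UUID, SCMCommand) constructor, for illustration only.
    queue.fireEvent(SCMEvents.DATANODE_COMMAND,
        new CommandForDatanode(datanodeId, command));
  }

  private CommandPathSketch() {
  }
}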
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java
new file mode 100644
index 0000000..b37dd93
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java
@@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+/**
+ * Handles Stale node event.
+ */
+public class StaleNodeHandler implements EventHandler<DatanodeDetails> {
+
+ private final Node2ContainerMap node2ContainerMap;
+
+ public StaleNodeHandler(Node2ContainerMap node2ContainerMap) {
+ this.node2ContainerMap = node2ContainerMap;
+ }
+
+ @Override
+ public void onMessage(DatanodeDetails datanodeDetails,
+ EventPublisher publisher) {
+ //TODO: logic to handle stale node.
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
index a6354af..4cfa98f 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
@@ -17,6 +17,7 @@
package org.apache.hadoop.hdds.scm.server;
+import com.google.common.base.Preconditions;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
@@ -24,12 +25,16 @@ import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.NodeReportProto;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMHeartbeatRequestProto;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
import org.apache.hadoop.hdds.server.events.EventPublisher;
import com.google.protobuf.GeneratedMessage;
+import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
+import java.util.List;
+
import static org.apache.hadoop.hdds.scm.events.SCMEvents.CONTAINER_REPORT;
import static org.apache.hadoop.hdds.scm.events.SCMEvents.NODE_REPORT;
@@ -42,10 +47,15 @@ public final class SCMDatanodeHeartbeatDispatcher {
private static final Logger LOG =
LoggerFactory.getLogger(SCMDatanodeHeartbeatDispatcher.class);
- private EventPublisher eventPublisher;
+ private final NodeManager nodeManager;
+ private final EventPublisher eventPublisher;
- public SCMDatanodeHeartbeatDispatcher(EventPublisher eventPublisher) {
+ public SCMDatanodeHeartbeatDispatcher(NodeManager nodeManager,
+ EventPublisher eventPublisher) {
+ Preconditions.checkNotNull(nodeManager);
+ Preconditions.checkNotNull(eventPublisher);
+ this.nodeManager = nodeManager;
this.eventPublisher = eventPublisher;
}
@@ -54,11 +64,14 @@ public final class SCMDatanodeHeartbeatDispatcher {
* Dispatches heartbeat to registered event handlers.
*
* @param heartbeat heartbeat to be dispatched.
+ *
+ * @return list of SCMCommand
*/
- public void dispatch(SCMHeartbeatRequestProto heartbeat) {
+ public List<SCMCommand> dispatch(SCMHeartbeatRequestProto heartbeat) {
DatanodeDetails datanodeDetails =
DatanodeDetails.getFromProtoBuf(heartbeat.getDatanodeDetails());
// should we dispatch heartbeat through eventPublisher?
+ List<SCMCommand> commands = nodeManager.processHeartbeat(datanodeDetails);
if (heartbeat.hasNodeReport()) {
LOG.debug("Dispatching Node Report.");
eventPublisher.fireEvent(NODE_REPORT,
@@ -73,6 +86,7 @@ public final class SCMDatanodeHeartbeatDispatcher {
heartbeat.getContainerReport()));
}
+ return commands;
}
/**
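In miniature, the new dispatch contract: dispatch() fires NODE_REPORT and CONTAINER_REPORT events as a side effect and returns the commands processHeartbeat queued for the reporting datanode, so the RPC layer no longer reaches into the node manager itself. Proto construction is elided here; the heartbeat is assumed fully built.

import java.util.List;
import org.apache.hadoop.hdds.protocol.proto
    .StorageContainerDatanodeProtocolProtos.SCMHeartbeatRequestProto;
import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher;
import org.apache.hadoop.ozone.protocol.commands.SCMCommand;

public final class DispatchSketch {

  public static List<SCMCommand> handle(
      SCMDatanodeHeartbeatDispatcher dispatcher,
      SCMHeartbeatRequestProto heartbeat) {
    // Routes any node/container reports onto the event queue and hands
    // back the commands to ship with the heartbeat response.
    return dispatcher.dispatch(heartbeat);
  }

  private DispatchSketch() {
  }
}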
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
index aef5b03..aee64b9 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
@@ -133,7 +133,8 @@ public class SCMDatanodeProtocolServer implements
conf.getInt(OZONE_SCM_HANDLER_COUNT_KEY,
OZONE_SCM_HANDLER_COUNT_DEFAULT);
- heartbeatDispatcher = new SCMDatanodeHeartbeatDispatcher(eventPublisher);
+ heartbeatDispatcher = new SCMDatanodeHeartbeatDispatcher(
+ scm.getScmNodeManager(), eventPublisher);
RPC.setProtocolEngine(conf, StorageContainerDatanodeProtocolPB.class,
ProtobufRpcEngine.class);
@@ -214,22 +215,13 @@ public class SCMDatanodeProtocolServer implements
@Override
public SCMHeartbeatResponseProto sendHeartbeat(
- SCMHeartbeatRequestProto heartbeat)
- throws IOException {
- heartbeatDispatcher.dispatch(heartbeat);
-
- // TODO: Remove the below code after SCM refactoring.
- DatanodeDetails datanodeDetails = DatanodeDetails
- .getFromProtoBuf(heartbeat.getDatanodeDetails());
- NodeReportProto nodeReport = heartbeat.getNodeReport();
- List<SCMCommand> commands =
- scm.getScmNodeManager().processHeartbeat(datanodeDetails);
+ SCMHeartbeatRequestProto heartbeat) throws IOException {
List<SCMCommandProto> cmdResponses = new LinkedList<>();
- for (SCMCommand cmd : commands) {
+ for (SCMCommand cmd : heartbeatDispatcher.dispatch(heartbeat)) {
cmdResponses.add(getCommandResponse(cmd));
}
return SCMHeartbeatResponseProto.newBuilder()
- .setDatanodeUUID(datanodeDetails.getUuidString())
+ .setDatanodeUUID(heartbeat.getDatanodeDetails().getUuid())
.addAllCommands(cmdResponses).build();
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
index 49d3a40..5f511ee 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
@@ -33,15 +33,23 @@ import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
import org.apache.hadoop.hdds.scm.block.BlockManager;
import org.apache.hadoop.hdds.scm.block.BlockManagerImpl;
+import org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler;
import org.apache.hadoop.hdds.scm.container.ContainerMapping;
+import org.apache.hadoop.hdds.scm.container.ContainerReportHandler;
import org.apache.hadoop.hdds.scm.container.Mapping;
import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
import org.apache.hadoop.hdds.scm.container.placement.metrics.ContainerStat;
import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMMetrics;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
import org.apache.hadoop.hdds.scm.exceptions.SCMException;
import org.apache.hadoop.hdds.scm.exceptions.SCMException.ResultCodes;
+import org.apache.hadoop.hdds.scm.node.DeadNodeHandler;
+import org.apache.hadoop.hdds.scm.node.NewNodeHandler;
import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeReportHandler;
import org.apache.hadoop.hdds.scm.node.SCMNodeManager;
+import org.apache.hadoop.hdds.scm.node.StaleNodeHandler;
+import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
import org.apache.hadoop.hdds.server.ServiceRuntimeInfoImpl;
import org.apache.hadoop.hdds.server.events.EventQueue;
import org.apache.hadoop.hdfs.DFSUtil;
@@ -71,7 +79,6 @@ import java.util.concurrent.TimeUnit;
import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_DB_CACHE_SIZE_DEFAULT;
import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_DB_CACHE_SIZE_MB;
-import static org.apache.hadoop.hdds.scm.events.SCMEvents.DATANODE_COMMAND;
import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ENABLED;
import static org.apache.hadoop.util.ExitUtil.terminate;
@@ -126,6 +133,8 @@ public final class StorageContainerManager extends ServiceRuntimeInfoImpl
private final Mapping scmContainerManager;
private final BlockManager scmBlockManager;
private final SCMStorage scmStorage;
+
+ private final EventQueue eventQueue;
/*
* HTTP endpoint for JMX access.
*/
@@ -164,18 +173,35 @@ public final class StorageContainerManager extends ServiceRuntimeInfoImpl
throw new SCMException("SCM not initialized.", ResultCodes
.SCM_NOT_INITIALIZED);
}
- EventQueue eventQueue = new EventQueue();
-
- SCMNodeManager nm =
- new SCMNodeManager(conf, scmStorage.getClusterID(), this);
- scmNodeManager = nm;
- eventQueue.addHandler(DATANODE_COMMAND, nm);
- scmContainerManager = new ContainerMapping(conf, getScmNodeManager(),
- cacheSize);
-
- scmBlockManager =
- new BlockManagerImpl(conf, getScmNodeManager(), scmContainerManager);
+ eventQueue = new EventQueue();
+
+ scmNodeManager = new SCMNodeManager(
+ conf, scmStorage.getClusterID(), this, eventQueue);
+ scmContainerManager = new ContainerMapping(
+ conf, getScmNodeManager(), cacheSize);
+ scmBlockManager = new BlockManagerImpl(
+ conf, getScmNodeManager(), scmContainerManager);
+
+ Node2ContainerMap node2ContainerMap = new Node2ContainerMap();
+
+ CloseContainerEventHandler closeContainerHandler =
+ new CloseContainerEventHandler(scmContainerManager);
+ NodeReportHandler nodeReportHandler =
+ new NodeReportHandler(scmNodeManager);
+ ContainerReportHandler containerReportHandler =
+ new ContainerReportHandler(scmContainerManager, node2ContainerMap);
+ NewNodeHandler newNodeHandler = new NewNodeHandler(node2ContainerMap);
+ StaleNodeHandler staleNodeHandler = new StaleNodeHandler(node2ContainerMap);
+ DeadNodeHandler deadNodeHandler = new DeadNodeHandler(node2ContainerMap);
+
+ eventQueue.addHandler(SCMEvents.DATANODE_COMMAND, scmNodeManager);
+ eventQueue.addHandler(SCMEvents.NODE_REPORT, nodeReportHandler);
+ eventQueue.addHandler(SCMEvents.CONTAINER_REPORT, containerReportHandler);
+ eventQueue.addHandler(SCMEvents.CLOSE_CONTAINER, closeContainerHandler);
+ eventQueue.addHandler(SCMEvents.NEW_NODE, newNodeHandler);
+ eventQueue.addHandler(SCMEvents.STALE_NODE, staleNodeHandler);
+ eventQueue.addHandler(SCMEvents.DEAD_NODE, deadNodeHandler);
scmAdminUsernames = conf.getTrimmedStringCollection(OzoneConfigKeys
.OZONE_ADMINISTRATORS);
@@ -189,7 +215,6 @@ public final class StorageContainerManager extends ServiceRuntimeInfoImpl
blockProtocolServer = new SCMBlockProtocolServer(conf, this);
clientProtocolServer = new SCMClientProtocolServer(conf, this);
httpServer = new StorageContainerManagerHttpServer(conf);
-
registerMXBean();
}
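A point worth noting in the constructor above: a single Node2ContainerMap instance is shared by the node-lifecycle handlers (and by the container-report handler, which takes the same map), so all of them observe one consistent node-to-container view. Reduced to its essentials, and using only classes added in this patch:

import org.apache.hadoop.hdds.scm.events.SCMEvents;
import org.apache.hadoop.hdds.scm.node.DeadNodeHandler;
import org.apache.hadoop.hdds.scm.node.NewNodeHandler;
import org.apache.hadoop.hdds.scm.node.StaleNodeHandler;
import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
import org.apache.hadoop.hdds.server.events.EventQueue;

public final class ScmEventWiringSketch {

  public static EventQueue wire() {
    EventQueue eventQueue = new EventQueue();
    // One shared map: every lifecycle handler mutates the same view.
    Node2ContainerMap node2ContainerMap = new Node2ContainerMap();
    eventQueue.addHandler(SCMEvents.NEW_NODE,
        new NewNodeHandler(node2ContainerMap));
    eventQueue.addHandler(SCMEvents.STALE_NODE,
        new StaleNodeHandler(node2ContainerMap));
    eventQueue.addHandler(SCMEvents.DEAD_NODE,
        new DeadNodeHandler(node2ContainerMap));
    return eventQueue;
  }

  private ScmEventWiringSketch() {
  }
}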
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
index 3357992..5e83c28 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
@@ -26,8 +26,10 @@ import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.NodeReportProto;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMVersionRequestProto;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
import org.apache.hadoop.ozone.OzoneConsts;
import org.apache.hadoop.ozone.protocol.VersionResponse;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
import org.apache.hadoop.ozone.protocol.commands.RegisteredCommand;
import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
import org.assertj.core.util.Preconditions;
@@ -399,6 +401,13 @@ public class MockNodeManager implements NodeManager {
}
}
+ @Override
+ public void onMessage(CommandForDatanode commandForDatanode,
+ EventPublisher publisher) {
+ addDatanodeCommand(commandForDatanode.getDatanodeId(),
+ commandForDatanode.getCommand());
+ }
+
/**
* A class to declare some values for the nodes so that our tests
* won't fail.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestCloseContainerEventHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestCloseContainerEventHandler.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestCloseContainerEventHandler.java
index 0d46ffa..0764b12 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestCloseContainerEventHandler.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestCloseContainerEventHandler.java
@@ -41,6 +41,7 @@ import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleEvent.CR
import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_DEFAULT;
import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_GB;
import static org.apache.hadoop.hdds.scm.events.SCMEvents.CLOSE_CONTAINER;
+import static org.apache.hadoop.hdds.scm.events.SCMEvents.DATANODE_COMMAND;
/**
* Tests the closeContainerEventHandler class.
@@ -69,6 +70,7 @@ public class TestCloseContainerEventHandler {
eventQueue = new EventQueue();
eventQueue.addHandler(CLOSE_CONTAINER,
new CloseContainerEventHandler(mapping));
+ eventQueue.addHandler(DATANODE_COMMAND, nodeManager);
}
@AfterClass
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java
index c6ea2af..48567ee 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java
@@ -34,6 +34,8 @@ import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.StorageReportProto;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
+import org.apache.hadoop.hdds.server.events.EventQueue;
import org.apache.hadoop.ozone.OzoneConfigKeys;
import org.apache.hadoop.ozone.OzoneConsts;
import org.apache.hadoop.test.PathUtils;
@@ -41,6 +43,7 @@ import org.junit.Ignore;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;
+import org.mockito.Mockito;
import java.io.File;
import java.io.IOException;
@@ -86,8 +89,15 @@ public class TestContainerPlacement {
SCMNodeManager createNodeManager(OzoneConfiguration config)
throws IOException {
+ EventQueue eventQueue = new EventQueue();
+ eventQueue.addHandler(SCMEvents.NEW_NODE,
+ Mockito.mock(NewNodeHandler.class));
+ eventQueue.addHandler(SCMEvents.STALE_NODE,
+ Mockito.mock(StaleNodeHandler.class));
+ eventQueue.addHandler(SCMEvents.DEAD_NODE,
+ Mockito.mock(DeadNodeHandler.class));
SCMNodeManager nodeManager = new SCMNodeManager(config,
- UUID.randomUUID().toString(), null);
+ UUID.randomUUID().toString(), null, eventQueue);
assertFalse("Node manager should be in chill mode",
nodeManager.isOutOfChillMode());
return nodeManager;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java
index d72309e..cefd179 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java
@@ -30,6 +30,7 @@ import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.StorageReportProto;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
import org.apache.hadoop.hdds.server.events.EventQueue;
import org.apache.hadoop.ozone.OzoneConfigKeys;
import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
@@ -45,6 +46,7 @@ import org.junit.Ignore;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;
+import org.mockito.Mockito;
import java.io.File;
import java.io.IOException;
@@ -124,8 +126,15 @@ public class TestNodeManager {
SCMNodeManager createNodeManager(OzoneConfiguration config)
throws IOException {
+ EventQueue eventQueue = new EventQueue();
+ eventQueue.addHandler(SCMEvents.NEW_NODE,
+ Mockito.mock(NewNodeHandler.class));
+ eventQueue.addHandler(SCMEvents.STALE_NODE,
+ Mockito.mock(StaleNodeHandler.class));
+ eventQueue.addHandler(SCMEvents.DEAD_NODE,
+ Mockito.mock(DeadNodeHandler.class));
SCMNodeManager nodeManager = new SCMNodeManager(config,
- UUID.randomUUID().toString(), null);
+ UUID.randomUUID().toString(), null, eventQueue);
assertFalse("Node manager should be in chill mode",
nodeManager.isOutOfChillMode());
return nodeManager;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java
index a77ed04..042e3cc 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java
@@ -28,6 +28,7 @@ import org.apache.hadoop.hdds.protocol.proto
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMHeartbeatRequestProto;
import org.apache.hadoop.hdds.scm.TestUtils;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
.ContainerReportFromDatanode;
import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
@@ -37,6 +38,7 @@ import org.apache.hadoop.hdds.server.events.EventPublisher;
import org.junit.Assert;
import org.junit.Test;
+import org.mockito.Mockito;
import static org.apache.hadoop.hdds.scm.events.SCMEvents.CONTAINER_REPORT;
import static org.apache.hadoop.hdds.scm.events.SCMEvents.NODE_REPORT;
@@ -55,7 +57,8 @@ public class TestSCMDatanodeHeartbeatDispatcher {
NodeReportProto nodeReport = NodeReportProto.getDefaultInstance();
SCMDatanodeHeartbeatDispatcher dispatcher =
- new SCMDatanodeHeartbeatDispatcher(new EventPublisher() {
+ new SCMDatanodeHeartbeatDispatcher(Mockito.mock(NodeManager.class),
+ new EventPublisher() {
@Override
public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void fireEvent(
EVENT_TYPE event, PAYLOAD payload) {
@@ -90,7 +93,8 @@ public class TestSCMDatanodeHeartbeatDispatcher {
ContainerReportsProto.getDefaultInstance();
SCMDatanodeHeartbeatDispatcher dispatcher =
- new SCMDatanodeHeartbeatDispatcher(new EventPublisher() {
+ new SCMDatanodeHeartbeatDispatcher(Mockito.mock(NodeManager.class),
+ new EventPublisher() {
@Override
public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void fireEvent(
EVENT_TYPE event, PAYLOAD payload) {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
index e15e0fc..2d27d71 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
@@ -28,7 +28,9 @@ import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.NodeReportProto;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMVersionRequestProto;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
import org.apache.hadoop.ozone.protocol.VersionResponse;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
import org.apache.hadoop.ozone.protocol.commands.RegisteredCommand;
import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
@@ -287,4 +289,9 @@ public class ReplicationNodeManagerMock implements NodeManager {
this.commandQueue.addCommand(dnId, command);
}
+ @Override
+ public void onMessage(CommandForDatanode commandForDatanode,
+ EventPublisher publisher) {
+ // do nothing.
+ }
}
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[15/50] [abbrv] hadoop git commit: Merge remote-tracking branch
'apache/trunk' into HDDS-48
Posted by bo...@apache.org.
Merge remote-tracking branch 'apache/trunk' into HDDS-48
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9bd5bef2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9bd5bef2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9bd5bef2
Branch: refs/heads/YARN-7402
Commit: 9bd5bef297b036b19f7be0c42c5477808ef8c070
Parents: 3584baf 2403231
Author: Arpit Agarwal <ar...@apache.org>
Authored: Mon Jul 9 13:22:58 2018 -0700
Committer: Arpit Agarwal <ar...@apache.org>
Committed: Mon Jul 9 13:22:58 2018 -0700
----------------------------------------------------------------------
.../hadoop-common/src/main/conf/hadoop-env.sh | 6 +-
.../src/main/conf/hadoop-metrics2.properties | 2 +-
.../crypto/key/kms/KMSClientProvider.java | 4 +-
.../src/main/conf/kms-log4j.properties | 4 +-
.../src/test/resources/log4j.properties | 4 +-
hadoop-hdds/framework/pom.xml | 5 +
.../hadoop/hdds/server/events/EventQueue.java | 108 ++++++++------
.../hadoop/hdds/server/events/EventWatcher.java | 43 +++++-
.../hdds/server/events/EventWatcherMetrics.java | 79 ++++++++++
.../server/events/SingleThreadExecutor.java | 35 +++--
.../hdds/server/events/TestEventQueue.java | 35 +----
.../hdds/server/events/TestEventWatcher.java | 107 ++++++++++++--
.../hadoop/yarn/client/AMRMClientUtils.java | 91 ------------
.../hadoop/yarn/server/AMRMClientRelayer.java | 9 +-
.../yarn/server/uam/UnmanagedAMPoolManager.java | 16 ++
.../server/uam/UnmanagedApplicationManager.java | 40 ++---
.../yarn/server/MockResourceManagerFacade.java | 13 +-
.../amrmproxy/FederationInterceptor.java | 146 ++++++++++++++++---
.../amrmproxy/BaseAMRMProxyTest.java | 2 +
.../amrmproxy/TestFederationInterceptor.java | 17 +++
20 files changed, 515 insertions(+), 251 deletions(-)
----------------------------------------------------------------------
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[11/50] [abbrv] hadoop git commit: HDDS-224. Create metrics for Event
Watcher. Contributed by Elek, Marton.
Posted by bo...@apache.org.
HDDS-224. Create metrics for Event Watcher.
Contributed by Elek, Marton.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cb5e2258
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cb5e2258
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cb5e2258
Branch: refs/heads/YARN-7402
Commit: cb5e225868a069d6d16244b462ebada44465dce8
Parents: 4a08ddf
Author: Anu Engineer <ae...@apache.org>
Authored: Mon Jul 9 12:52:39 2018 -0700
Committer: Anu Engineer <ae...@apache.org>
Committed: Mon Jul 9 13:02:40 2018 -0700
----------------------------------------------------------------------
.../hadoop/hdds/server/events/EventQueue.java | 108 +++++++++++--------
.../server/events/SingleThreadExecutor.java | 35 ++++--
.../hdds/server/events/TestEventQueue.java | 35 +-----
3 files changed, 91 insertions(+), 87 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb5e2258/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java
index 44d85f5..7e29223 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java
@@ -18,7 +18,11 @@
package org.apache.hadoop.hdds.server.events;
import com.google.common.annotations.VisibleForTesting;
+
+import org.apache.hadoop.util.StringUtils;
import org.apache.hadoop.util.Time;
+
+import com.google.common.base.Preconditions;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -42,6 +46,8 @@ public class EventQueue implements EventPublisher, AutoCloseable {
private static final Logger LOG =
LoggerFactory.getLogger(EventQueue.class);
+ private static final String EXECUTOR_NAME_SEPARATOR = "For";
+
private final Map<Event, Map<EventExecutor, List<EventHandler>>> executors =
new HashMap<>();
@@ -51,38 +57,74 @@ public class EventQueue implements EventPublisher, AutoCloseable {
public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void addHandler(
EVENT_TYPE event, EventHandler<PAYLOAD> handler) {
-
- this.addHandler(event, new SingleThreadExecutor<>(
- event.getName()), handler);
+ this.addHandler(event, handler, generateHandlerName(handler));
}
+ /**
+ * Adds a new handler to the event queue.
+ * <p>
+ * By default a separate single-threaded executor is dedicated to
+ * delivering the events to the registered event handler.
+ *
+ * @param event Triggering event.
+ * @param handler Handler of the event (will be called from a separate
+ * thread)
+ * @param handlerName The name of the handler (should be unique together
+ * with the event name)
+ * @param <PAYLOAD> The type of the event payload.
+ * @param <EVENT_TYPE> The type of the event identifier.
+ */
public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void addHandler(
- EVENT_TYPE event,
- EventExecutor<PAYLOAD> executor,
- EventHandler<PAYLOAD> handler) {
+ EVENT_TYPE event, EventHandler<PAYLOAD> handler, String handlerName) {
+ validateEvent(event);
+ Preconditions.checkNotNull(handler, "Handler should not be null.");
+ String executorName =
+ StringUtils.camelize(event.getName()) + EXECUTOR_NAME_SEPARATOR
+ + handlerName;
+ this.addHandler(event, new SingleThreadExecutor<>(executorName), handler);
+ }
- executors.putIfAbsent(event, new HashMap<>());
- executors.get(event).putIfAbsent(executor, new ArrayList<>());
+ private <EVENT_TYPE extends Event<?>> void validateEvent(EVENT_TYPE event) {
+ Preconditions
+ .checkArgument(!event.getName().contains(EXECUTOR_NAME_SEPARATOR),
+ "Event name should not contain " + EXECUTOR_NAME_SEPARATOR
+ + " string.");
- executors.get(event)
- .get(executor)
- .add(handler);
+ }
+
+ private <PAYLOAD> String generateHandlerName(EventHandler<PAYLOAD> handler) {
+ if (!"".equals(handler.getClass().getSimpleName())) {
+ return handler.getClass().getSimpleName();
+ } else {
+ return handler.getClass().getName();
+ }
}
/**
- * Creates one executor with multiple event handlers.
+ * Adds an event handler with a custom executor.
+ *
+ * @param event Triggering event.
+ * @param executor The executor implementation that delivers events from
+ * separate threads. Keep in mind that registering
+ * metrics is the responsibility of the
+ * caller.
+ * @param handler Handler of the event (will be called from a separate
+ * thread)
+ * @param <PAYLOAD> The type of the event payload.
+ * @param <EVENT_TYPE> The type of the event identifier.
*/
- public void addHandlerGroup(String name, HandlerForEvent<?>...
- eventsAndHandlers) {
- SingleThreadExecutor sharedExecutor =
- new SingleThreadExecutor(name);
- for (HandlerForEvent handlerForEvent : eventsAndHandlers) {
- addHandler(handlerForEvent.event, sharedExecutor,
- handlerForEvent.handler);
- }
+ public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void addHandler(
+ EVENT_TYPE event, EventExecutor<PAYLOAD> executor,
+ EventHandler<PAYLOAD> handler) {
+ validateEvent(event);
+ executors.putIfAbsent(event, new HashMap<>());
+ executors.get(event).putIfAbsent(executor, new ArrayList<>());
+ executors.get(event).get(executor).add(handler);
}
+
+
/**
* Route an event with payload to the right listener(s).
*
@@ -183,31 +225,5 @@ public class EventQueue implements EventPublisher, AutoCloseable {
});
}
- /**
- * Event identifier together with the handler.
- *
- * @param <PAYLOAD>
- */
- public static class HandlerForEvent<PAYLOAD> {
-
- private final Event<PAYLOAD> event;
-
- private final EventHandler<PAYLOAD> handler;
-
- public HandlerForEvent(
- Event<PAYLOAD> event,
- EventHandler<PAYLOAD> handler) {
- this.event = event;
- this.handler = handler;
- }
-
- public Event<PAYLOAD> getEvent() {
- return event;
- }
-
- public EventHandler<PAYLOAD> getHandler() {
- return handler;
- }
- }
}
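The registration overloads after this change, sketched with the signatures above: the two-argument addHandler derives the handler name from the handler's class, the three-argument form lets the caller pick one, and each handler gets a single-thread executor whose metrics name joins the camelized event name to the handler name with "For" (hence validateEvent rejecting event names containing that separator).

import org.apache.hadoop.hdds.server.events.EventQueue;
import org.apache.hadoop.hdds.server.events.TypedEvent;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

public final class NamedHandlerSketch {

  private static final TypedEvent<Long> SAMPLE =
      new TypedEvent<>(Long.class, "Sample");

  public static void demo() {
    // Executors register metrics, so initialize the system first,
    // as the tests do.
    DefaultMetricsSystem.initialize("NamedHandlerSketch");
    EventQueue queue = new EventQueue();
    // Name derived from the handler class; executor "SampleFor...".
    queue.addHandler(SAMPLE, (payload, publisher) -> { });
    // Explicit handler name; executor "SampleForAuditLogger".
    queue.addHandler(SAMPLE, (payload, publisher) -> { }, "AuditLogger");
  }

  private NamedHandlerSketch() {
  }
}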
http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb5e2258/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/SingleThreadExecutor.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/SingleThreadExecutor.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/SingleThreadExecutor.java
index a64e3d7..3253f2d 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/SingleThreadExecutor.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/SingleThreadExecutor.java
@@ -23,13 +23,18 @@ import org.slf4j.LoggerFactory;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
-import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
/**
* Simple EventExecutor to call all the event handler one-by-one.
*
* @param <T>
*/
+@Metrics(context = "EventQueue")
public class SingleThreadExecutor<T> implements EventExecutor<T> {
public static final String THREAD_NAME_PREFIX = "EventQueue";
@@ -41,14 +46,24 @@ public class SingleThreadExecutor<T> implements EventExecutor<T> {
private final ThreadPoolExecutor executor;
- private final AtomicLong queuedCount = new AtomicLong(0);
+ @Metric
+ private MutableCounterLong queued;
- private final AtomicLong successfulCount = new AtomicLong(0);
+ @Metric
+ private MutableCounterLong done;
- private final AtomicLong failedCount = new AtomicLong(0);
+ @Metric
+ private MutableCounterLong failed;
+ /**
+ * Create SingleThreadExecutor.
+ *
+ * @param name Unique name used in monitoring and metrics.
+ */
public SingleThreadExecutor(String name) {
this.name = name;
+ DefaultMetricsSystem.instance()
+ .register("EventQueue" + name, "Event Executor metrics ", this);
LinkedBlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<>();
executor =
@@ -64,31 +79,31 @@ public class SingleThreadExecutor<T> implements EventExecutor<T> {
@Override
public void onMessage(EventHandler<T> handler, T message, EventPublisher
publisher) {
- queuedCount.incrementAndGet();
+ queued.incr();
executor.execute(() -> {
try {
handler.onMessage(message, publisher);
- successfulCount.incrementAndGet();
+ done.incr();
} catch (Exception ex) {
LOG.error("Error on execution message {}", message, ex);
- failedCount.incrementAndGet();
+ failed.incr();
}
});
}
@Override
public long failedEvents() {
- return failedCount.get();
+ return failed.value();
}
@Override
public long successfulEvents() {
- return successfulCount.get();
+ return done.value();
}
@Override
public long queuedEvents() {
- return queuedCount.get();
+ return queued.value();
}
@Override
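The metrics idiom adopted here, reduced to a standalone sketch: annotate mutable counters with @Metric, place the class in a metrics context with @Metrics, and register each instance with the default metrics system under a unique name. Class and metric names below are illustrative.

import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

@Metrics(context = "EventQueue")
public class ExecutorMetricsSketch {

  @Metric
  private MutableCounterLong queued;

  ExecutorMetricsSketch(String name) {
    // The name must be unique per registered source.
    DefaultMetricsSystem.instance()
        .register("EventQueue" + name, "Event executor metrics", this);
  }

  void onEnqueue() {
    queued.incr();      // counters are increment-only and thread-safe
  }

  long queuedEvents() {
    return queued.value();
  }
}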
http://git-wip-us.apache.org/repos/asf/hadoop/blob/cb5e2258/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueue.java b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueue.java
index 3944409..2bdf705 100644
--- a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueue.java
+++ b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueue.java
@@ -25,6 +25,8 @@ import org.junit.Test;
import java.util.Set;
import java.util.stream.Collectors;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+
/**
* Testing the basic functionality of the event queue.
*/
@@ -44,11 +46,13 @@ public class TestEventQueue {
@Before
public void startEventQueue() {
+ DefaultMetricsSystem.initialize(getClass().getSimpleName());
queue = new EventQueue();
}
@After
public void stopEventQueue() {
+ DefaultMetricsSystem.shutdown();
queue.close();
}
@@ -79,35 +83,4 @@ public class TestEventQueue {
}
- @Test
- public void handlerGroup() {
- final long[] result = new long[2];
- queue.addHandlerGroup(
- "group",
- new EventQueue.HandlerForEvent<>(EVENT3, (payload, publisher) ->
- result[0] = payload),
- new EventQueue.HandlerForEvent<>(EVENT4, (payload, publisher) ->
- result[1] = payload)
- );
-
- queue.fireEvent(EVENT3, 23L);
- queue.fireEvent(EVENT4, 42L);
-
- queue.processAll(1000);
-
- Assert.assertEquals(23, result[0]);
- Assert.assertEquals(42, result[1]);
-
- Set<String> eventQueueThreadNames =
- Thread.getAllStackTraces().keySet()
- .stream()
- .filter(t -> t.getName().startsWith(SingleThreadExecutor
- .THREAD_NAME_PREFIX))
- .map(Thread::getName)
- .collect(Collectors.toSet());
- System.out.println(eventQueueThreadNames);
- Assert.assertEquals(1, eventQueueThreadNames.size());
-
- }
-
}
\ No newline at end of file
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[20/50] [abbrv] hadoop git commit: HDFS-13722. HDFS Native Client
Fails Compilation on Ubuntu 18.04 (contributed by Jack Bearden)
Posted by bo...@apache.org.
HDFS-13722. HDFS Native Client Fails Compilation on Ubuntu 18.04 (contributed by Jack Bearden)
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5d0f01e1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5d0f01e1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5d0f01e1
Branch: refs/heads/YARN-7402
Commit: 5d0f01e1fe988616d53120bad0cb69a825a4dde0
Parents: 82ac3aa
Author: Allen Wittenauer <aw...@apache.org>
Authored: Tue Jul 10 12:17:44 2018 -0700
Committer: Allen Wittenauer <aw...@apache.org>
Committed: Tue Jul 10 12:17:44 2018 -0700
----------------------------------------------------------------------
.../src/main/native/libhdfspp/lib/rpc/request.cc | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5d0f01e1/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/request.cc
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/request.cc b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/request.cc
index 9157476..2de26fd 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/request.cc
+++ b/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/rpc/request.cc
@@ -16,7 +16,7 @@
* limitations under the License.
*/
-
+#include <functional>
#include "request.h"
#include "rpc_engine.h"
#include "sasl_protocol.h"
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[47/50] [abbrv] hadoop git commit: Updating GPG module pom version
post rebase.
Posted by bo...@apache.org.
Updating GPG module pom version post rebase.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9c24328b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9c24328b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9c24328b
Branch: refs/heads/YARN-7402
Commit: 9c24328bef19ab952f69733333251ec811eba2e8
Parents: 8a70835
Author: Subru Krishnan <su...@apache.org>
Authored: Wed May 30 12:59:22 2018 -0700
Committer: Botong Huang <bo...@apache.org>
Committed: Fri Jul 13 17:42:58 2018 -0700
----------------------------------------------------------------------
.../hadoop-yarn-server-globalpolicygenerator/pom.xml | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/9c24328b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml
index 9398b0b..c137c9e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml
@@ -19,12 +19,12 @@
<parent>
<artifactId>hadoop-yarn-server</artifactId>
<groupId>org.apache.hadoop</groupId>
- <version>3.1.0-SNAPSHOT</version>
+ <version>3.2.0-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-yarn-server-globalpolicygenerator</artifactId>
- <version>3.1.0-SNAPSHOT</version>
+ <version>3.2.0-SNAPSHOT</version>
<name>hadoop-yarn-server-globalpolicygenerator</name>
<properties>
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[22/50] [abbrv] hadoop git commit: HDDS-242. Introduce NEW_NODE,
STALE_NODE and DEAD_NODE event and corresponding event handlers in
SCM. Contributed by Nanda Kumar.
Posted by bo...@apache.org.
HDDS-242. Introduce NEW_NODE, STALE_NODE and DEAD_NODE event
and corresponding event handlers in SCM.
Contributed by Nanda Kumar.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a47ec5da
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a47ec5da
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a47ec5da
Branch: refs/heads/YARN-7402
Commit: a47ec5dac4a1cdfec788ce3296b4f610411911ea
Parents: 4e59b92
Author: Anu Engineer <ae...@apache.org>
Authored: Tue Jul 10 15:58:47 2018 -0700
Committer: Anu Engineer <ae...@apache.org>
Committed: Tue Jul 10 15:58:47 2018 -0700
----------------------------------------------------------------------
.../scm/container/ContainerReportHandler.java | 47 ++++++++++++++++++
.../hadoop/hdds/scm/node/DeadNodeHandler.java | 42 ++++++++++++++++
.../hadoop/hdds/scm/node/NewNodeHandler.java | 50 +++++++++++++++++++
.../hadoop/hdds/scm/node/NodeReportHandler.java | 42 ++++++++++++++++
.../hadoop/hdds/scm/node/StaleNodeHandler.java | 42 ++++++++++++++++
.../common/src/main/bin/ozone-config.sh | 51 ++++++++++++++++++++
6 files changed, 274 insertions(+)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a47ec5da/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
new file mode 100644
index 0000000..486162e
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
@@ -0,0 +1,47 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.container;
+
+import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
+ .ContainerReportFromDatanode;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+/**
+ * Handles container reports from datanode.
+ */
+public class ContainerReportHandler implements
+ EventHandler<ContainerReportFromDatanode> {
+
+ private final Mapping containerMapping;
+ private final Node2ContainerMap node2ContainerMap;
+
+ public ContainerReportHandler(Mapping containerMapping,
+ Node2ContainerMap node2ContainerMap) {
+ this.containerMapping = containerMapping;
+ this.node2ContainerMap = node2ContainerMap;
+ }
+
+ @Override
+ public void onMessage(ContainerReportFromDatanode containerReportFromDatanode,
+ EventPublisher publisher) {
+ // TODO: process container report.
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a47ec5da/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
new file mode 100644
index 0000000..427aef8
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
@@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+/**
+ * Handles Dead Node event.
+ */
+public class DeadNodeHandler implements EventHandler<DatanodeDetails> {
+
+ private final Node2ContainerMap node2ContainerMap;
+
+ public DeadNodeHandler(Node2ContainerMap node2ContainerMap) {
+ this.node2ContainerMap = node2ContainerMap;
+ }
+
+ @Override
+ public void onMessage(DatanodeDetails datanodeDetails,
+ EventPublisher publisher) {
+ //TODO: add logic to handle dead node.
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a47ec5da/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
new file mode 100644
index 0000000..79b75a5
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
@@ -0,0 +1,50 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+import java.util.Collections;
+
+/**
+ * Handles New Node event.
+ */
+public class NewNodeHandler implements EventHandler<DatanodeDetails> {
+
+ private final Node2ContainerMap node2ContainerMap;
+
+ public NewNodeHandler(Node2ContainerMap node2ContainerMap) {
+ this.node2ContainerMap = node2ContainerMap;
+ }
+
+ @Override
+ public void onMessage(DatanodeDetails datanodeDetails,
+ EventPublisher publisher) {
+ try {
+ node2ContainerMap.insertNewDatanode(datanodeDetails.getUuid(),
+ Collections.emptySet());
+ } catch (SCMException e) {
+ // TODO: log exception message.
+ }
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a47ec5da/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
new file mode 100644
index 0000000..aa78d53
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
@@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
+ .NodeReportFromDatanode;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+/**
+ * Handles Node Reports from datanode.
+ */
+public class NodeReportHandler implements EventHandler<NodeReportFromDatanode> {
+
+ private final NodeManager nodeManager;
+
+ public NodeReportHandler(NodeManager nodeManager) {
+ this.nodeManager = nodeManager;
+ }
+
+ @Override
+ public void onMessage(NodeReportFromDatanode nodeReportFromDatanode,
+ EventPublisher publisher) {
+ //TODO: process node report.
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a47ec5da/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java
new file mode 100644
index 0000000..b37dd93
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java
@@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+/**
+ * Handles Stale node event.
+ */
+public class StaleNodeHandler implements EventHandler<DatanodeDetails> {
+
+ private final Node2ContainerMap node2ContainerMap;
+
+ public StaleNodeHandler(Node2ContainerMap node2ContainerMap) {
+ this.node2ContainerMap = node2ContainerMap;
+ }
+
+ @Override
+ public void onMessage(DatanodeDetails datanodeDetails,
+ EventPublisher publisher) {
+ //TODO: logic to handle stale node.
+ }
+}
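All four handlers above implement the same EventHandler<PAYLOAD> callback from org.apache.hadoop.hdds.server.events. As a rough sketch of how SCM could wire them into a dispatcher: the EventQueue and TypedEvent types exist in that package, but the event names and the wiring below are illustrative assumptions, not part of this patch.

import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.node.DeadNodeHandler;
import org.apache.hadoop.hdds.scm.node.NewNodeHandler;
import org.apache.hadoop.hdds.scm.node.StaleNodeHandler;
import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
import org.apache.hadoop.hdds.server.events.EventQueue;
import org.apache.hadoop.hdds.server.events.TypedEvent;

public class ScmNodeEventWiringSketch {

  // Illustrative event definitions; the real event constants live elsewhere.
  private static final TypedEvent<DatanodeDetails> NEW_NODE =
      new TypedEvent<>(DatanodeDetails.class, "New_Node");
  private static final TypedEvent<DatanodeDetails> STALE_NODE =
      new TypedEvent<>(DatanodeDetails.class, "Stale_Node");
  private static final TypedEvent<DatanodeDetails> DEAD_NODE =
      new TypedEvent<>(DatanodeDetails.class, "Dead_Node");

  public static void main(String[] args) {
    Node2ContainerMap node2ContainerMap = new Node2ContainerMap();
    EventQueue eventQueue = new EventQueue();
    // Register the handlers introduced by this patch against their events.
    eventQueue.addHandler(NEW_NODE, new NewNodeHandler(node2ContainerMap));
    eventQueue.addHandler(STALE_NODE, new StaleNodeHandler(node2ContainerMap));
    eventQueue.addHandler(DEAD_NODE, new DeadNodeHandler(node2ContainerMap));
    // The node state machine would then publish, for example:
    // eventQueue.fireEvent(DEAD_NODE, datanodeDetails);
  }
}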
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a47ec5da/hadoop-ozone/common/src/main/bin/ozone-config.sh
----------------------------------------------------------------------
diff --git a/hadoop-ozone/common/src/main/bin/ozone-config.sh b/hadoop-ozone/common/src/main/bin/ozone-config.sh
new file mode 100755
index 0000000..83f30ce
--- /dev/null
+++ b/hadoop-ozone/common/src/main/bin/ozone-config.sh
@@ -0,0 +1,51 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# included in all the ozone scripts with source command
+# should not be executed directly
+
+function hadoop_subproject_init
+{
+ if [[ -z "${HADOOP_OZONE_ENV_PROCESSED}" ]]; then
+ if [[ -e "${HADOOP_CONF_DIR}/hdfs-env.sh" ]]; then
+ . "${HADOOP_CONF_DIR}/hdfs-env.sh"
+ export HADOOP_OZONE_ENV_PROCESSED=true
+ fi
+ fi
+ HADOOP_OZONE_HOME="${HADOOP_OZONE_HOME:-$HADOOP_HOME}"
+
+}
+
+if [[ -z "${HADOOP_LIBEXEC_DIR}" ]]; then
+ _hd_this="${BASH_SOURCE-$0}"
+ HADOOP_LIBEXEC_DIR=$(cd -P -- "$(dirname -- "${_hd_this}")" >/dev/null && pwd -P)
+fi
+
+# shellcheck source=./hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
+
+if [[ -n "${HADOOP_COMMON_HOME}" ]] &&
+ [[ -e "${HADOOP_COMMON_HOME}/libexec/hadoop-config.sh" ]]; then
+ . "${HADOOP_COMMON_HOME}/libexec/hadoop-config.sh"
+elif [[ -e "${HADOOP_LIBEXEC_DIR}/hadoop-config.sh" ]]; then
+ . "${HADOOP_LIBEXEC_DIR}/hadoop-config.sh"
+elif [ -e "${HADOOP_HOME}/libexec/hadoop-config.sh" ]; then
+ . "${HADOOP_HOME}/libexec/hadoop-config.sh"
+else
+ echo "ERROR: Hadoop common not found." 2>&1
+ exit 1
+fi
+
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[21/50] [abbrv] hadoop git commit: HDDS-208. ozone createVolume
command ignores the first character of the volume name argument. Contributed
by Lokesh Jain.
Posted by bo...@apache.org.
HDDS-208. ozone createVolume command ignores the first character of the volume name argument. Contributed by Lokesh Jain.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4e59b927
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4e59b927
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4e59b927
Branch: refs/heads/YARN-7402
Commit: 4e59b9278463e4f8ccce7100d4582e896154beb8
Parents: 5d0f01e
Author: Xiaoyu Yao <xy...@apache.org>
Authored: Tue Jul 10 14:07:23 2018 -0700
Committer: Xiaoyu Yao <xy...@apache.org>
Committed: Tue Jul 10 14:07:23 2018 -0700
----------------------------------------------------------------------
.../hadoop/ozone/ozShell/TestOzoneShell.java | 26 +++++++++++++++++---
.../web/ozShell/volume/CreateVolumeHandler.java | 10 ++++----
2 files changed, 28 insertions(+), 8 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4e59b927/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
index 5082870..a4b30f0 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
@@ -38,6 +38,7 @@ import java.util.Random;
import java.util.UUID;
import java.util.stream.Collectors;
+import com.google.common.base.Strings;
import org.apache.commons.lang3.RandomStringUtils;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.hdds.client.ReplicationFactor;
@@ -203,13 +204,32 @@ public class TestOzoneShell {
public void testCreateVolume() throws Exception {
LOG.info("Running testCreateVolume");
String volumeName = "volume" + RandomStringUtils.randomNumeric(5);
+ testCreateVolume(volumeName, "");
+ volumeName = "volume" + RandomStringUtils.randomNumeric(5);
+ testCreateVolume("/////" + volumeName, "");
+ testCreateVolume("/////", "Volume name is required to create a volume");
+ testCreateVolume("/////vol/123",
+ "Illegal argument: Bucket or Volume name has an unsupported character : /");
+ }
+
+ private void testCreateVolume(String volumeName, String errorMsg) throws Exception {
+ err.reset();
String userName = "bilbo";
String[] args = new String[] {"-createVolume", url + "/" + volumeName,
"-user", userName, "-root"};
- assertEquals(0, ToolRunner.run(shell, args));
- OzoneVolume volumeInfo = client.getVolumeDetails(volumeName);
- assertEquals(volumeName, volumeInfo.getName());
+ if (Strings.isNullOrEmpty(errorMsg)) {
+ assertEquals(0, ToolRunner.run(shell, args));
+ } else {
+ assertEquals(1, ToolRunner.run(shell, args));
+ assertTrue(err.toString().contains(errorMsg));
+ return;
+ }
+
+ String truncatedVolumeName =
+ volumeName.substring(volumeName.lastIndexOf('/') + 1);
+ OzoneVolume volumeInfo = client.getVolumeDetails(truncatedVolumeName);
+ assertEquals(truncatedVolumeName, volumeInfo.getName());
assertEquals(userName, volumeInfo.getOwner());
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4e59b927/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/CreateVolumeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/CreateVolumeHandler.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/CreateVolumeHandler.java
index 74fdbb0..0057282 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/CreateVolumeHandler.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/CreateVolumeHandler.java
@@ -60,15 +60,15 @@ public class CreateVolumeHandler extends Handler {
String ozoneURIString = cmd.getOptionValue(Shell.CREATE_VOLUME);
URI ozoneURI = verifyURI(ozoneURIString);
- if (ozoneURI.getPath().isEmpty()) {
+
+ // we need to skip the slash in the URI path;
+ // getPath returns "/volumeName", so remove any leading slashes.
+ volumeName = ozoneURI.getPath().replaceAll("^/+", "");
+ if (volumeName.isEmpty()) {
throw new OzoneClientException(
"Volume name is required to create a volume");
}
- // we need to skip the slash in the URI path
- // getPath returns /volumeName needs to remove the first slash.
- volumeName = ozoneURI.getPath().substring(1);
-
if (cmd.hasOption(Shell.VERBOSE)) {
System.out.printf("Volume name : %s%n", volumeName);
}
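To see why the old substring(1) logic could drop a character, here is a minimal standalone illustration. The exact input that triggered the bug is an assumption (a path without a leading slash); the string behaviour itself is plain java.lang.String.

public class LeadingSlashSketch {
  public static void main(String[] args) {
    // If getPath() happens to return the name without a leading slash,
    // the old substring(1) silently drops the first character:
    System.out.println("vol123".substring(1));                // ol123 (the bug)
    // The new regex strips only leading slashes, however many there are,
    // and leaves a slash-free name untouched:
    System.out.println("vol123".replaceAll("^/+", ""));       // vol123
    System.out.println("/////vol123".replaceAll("^/+", ""));  // vol123
  }
}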
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[03/50] [abbrv] hadoop git commit: Only mount non-empty directories
for cgroups (miklos.szegedi@cloudera.com via rkanter)
Posted by bo...@apache.org.
Only mount non-empty directories for cgroups (miklos.szegedi@cloudera.com via rkanter)
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0838fe83
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0838fe83
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0838fe83
Branch: refs/heads/YARN-7402
Commit: 0838fe833738e04f5e6f6408e97866d77bebbf30
Parents: eecb5ba
Author: Robert Kanter <rk...@apache.org>
Authored: Mon Jul 9 10:37:20 2018 -0700
Committer: Robert Kanter <rk...@apache.org>
Committed: Mon Jul 9 10:37:20 2018 -0700
----------------------------------------------------------------------
.../impl/container-executor.c | 30 +++++++++++++++++++-
.../test/test-container-executor.c | 20 +++++++++++++
2 files changed, 49 insertions(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0838fe83/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
index baf0e8b..effeeee 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
@@ -2379,6 +2379,28 @@ void chown_dir_contents(const char *dir_path, uid_t uid, gid_t gid) {
free(path_tmp);
}
+int is_empty(char *target_dir) {
+ DIR *dir = NULL;
+ struct dirent *entry = NULL;
+ dir = opendir(target_dir);
+ if (!dir) {
+ fprintf(LOGFILE, "Could not open directory %s - %s\n", target_dir,
+ strerror(errno));
+ return 0;
+ }
+ while ((entry = readdir(dir)) != NULL) {
+ if (strcmp(entry->d_name, ".") == 0) {
+ continue;
+ }
+ if (strcmp(entry->d_name, "..") == 0) {
+ continue;
+ }
+ fprintf(LOGFILE, "Directory is not empty %s\n", target_dir);
+ return 0;
+ }
+ return 1;
+}
+
/**
* Mount a cgroup controller at the requested mount point and create
* a hierarchy for the Hadoop NodeManager to manage.
@@ -2413,7 +2435,13 @@ int mount_cgroup(const char *pair, const char *hierarchy) {
result = -1;
} else {
if (strstr(mount_path, "..") != NULL) {
- fprintf(LOGFILE, "Unsupported cgroup mount path detected.\n");
+ fprintf(LOGFILE, "Unsupported cgroup mount path detected. %s\n",
+ mount_path);
+ result = INVALID_COMMAND_PROVIDED;
+ goto cleanup;
+ }
+ if (!is_empty(mount_path)) {
+ fprintf(LOGFILE, "cgroup mount path is not empty. %s\n", mount_path);
result = INVALID_COMMAND_PROVIDED;
goto cleanup;
}
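For readers more at home in Java, an equivalent emptiness check would look like the sketch below. This is not part of the patch; it only mirrors the C is_empty() semantics using standard java.nio.file calls.

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class IsEmptySketch {
  // Mirrors the C is_empty(): false if the directory cannot be opened or
  // contains any real entry (NIO already skips "." and ".."), else true.
  static boolean isEmpty(Path dir) {
    try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
      return !stream.iterator().hasNext();
    } catch (IOException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println(isEmpty(Paths.get("/")));  // false on any real system
  }
}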
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0838fe83/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
index 3d32883..a199d84 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
@@ -1203,6 +1203,23 @@ void test_trim_function() {
free(trimmed);
}
+void test_is_empty() {
+ printf("\nTesting is_empty function\n");
+ if (is_empty("/")) {
+ printf("FAIL: / should not be empty\n");
+ exit(1);
+ }
+ if (is_empty("/tmp/2938rf2983hcqnw8ud/noexist")) {
+ printf("FAIL: /tmp/2938rf2983hcqnw8ud/noexist should not exist\n");
+ exit(1);
+ }
+ mkdir("/tmp/2938rf2983hcqnw8ud", S_IRWXU); mkdir("/tmp/2938rf2983hcqnw8ud/emptydir", S_IRWXU);
+ if (!is_empty("/tmp/2938rf2983hcqnw8ud/emptydir")) {
+ printf("FAIL: /tmp/2938rf2983hcqnw8ud/emptydir be empty\n");
+ exit(1);
+ }
+}
+
// This test is expected to be executed either by a regular
// user or by root. If executed by a regular user it doesn't
// test all the functions that would depend on changing the
@@ -1264,6 +1281,9 @@ int main(int argc, char **argv) {
printf("\nStarting tests\n");
+ printf("\ntest_is_empty()\n");
+ test_is_empty();
+
printf("\nTesting recursive_unlink_children()\n");
test_recursive_unlink_children();
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[50/50] [abbrv] hadoop git commit: YARN-6648. [GPG] Add
SubClusterCleaner in Global Policy Generator. (botong)
Posted by bo...@apache.org.
YARN-6648. [GPG] Add SubClusterCleaner in Global Policy Generator. (botong)
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/fa3ee34c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/fa3ee34c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/fa3ee34c
Branch: refs/heads/YARN-7402
Commit: fa3ee34c7b74889bcbfd2effb999757c73994dd4
Parents: 43b8c2d
Author: Botong Huang <bo...@apache.org>
Authored: Thu Feb 1 14:43:48 2018 -0800
Committer: Botong Huang <bo...@apache.org>
Committed: Fri Jul 13 17:42:58 2018 -0700
----------------------------------------------------------------------
.../dev-support/findbugs-exclude.xml | 5 +
.../hadoop/yarn/conf/YarnConfiguration.java | 18 +++
.../src/main/resources/yarn-default.xml | 24 ++++
.../store/impl/MemoryFederationStateStore.java | 13 ++
.../utils/FederationStateStoreFacade.java | 41 ++++++-
.../GlobalPolicyGenerator.java | 92 ++++++++++-----
.../subclustercleaner/SubClusterCleaner.java | 109 +++++++++++++++++
.../subclustercleaner/package-info.java | 19 +++
.../TestSubClusterCleaner.java | 118 +++++++++++++++++++
9 files changed, 409 insertions(+), 30 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa3ee34c/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
index 5cc81e5..406a8b7 100644
--- a/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
+++ b/hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
@@ -387,6 +387,11 @@
<Method name="initAndStartNodeManager" />
<Bug pattern="DM_EXIT" />
</Match>
+ <Match>
+ <Class name="org.apache.hadoop.yarn.server.globalpolicygenerator.GlobalPolicyGenerator" />
+ <Method name="startGPG" />
+ <Bug pattern="DM_EXIT" />
+ </Match>
<!-- Ignore heartbeat exception when killing localizer -->
<Match>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa3ee34c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
index 9156c2d..b3a4ccb 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
@@ -3335,6 +3335,24 @@ public class YarnConfiguration extends Configuration {
public static final boolean DEFAULT_ROUTER_WEBAPP_PARTIAL_RESULTS_ENABLED =
false;
+ private static final String FEDERATION_GPG_PREFIX =
+ FEDERATION_PREFIX + "gpg.";
+
+ // The number of threads to use for the GPG scheduled executor service
+ public static final String GPG_SCHEDULED_EXECUTOR_THREADS =
+ FEDERATION_GPG_PREFIX + "scheduled.executor.threads";
+ public static final int DEFAULT_GPG_SCHEDULED_EXECUTOR_THREADS = 10;
+
+ // The interval at which the subcluster cleaner runs, -1 means disabled
+ public static final String GPG_SUBCLUSTER_CLEANER_INTERVAL_MS =
+ FEDERATION_GPG_PREFIX + "subcluster.cleaner.interval-ms";
+ public static final long DEFAULT_GPG_SUBCLUSTER_CLEANER_INTERVAL_MS = -1;
+
+ // The expiration time for a subcluster heartbeat, default is 30 minutes
+ public static final String GPG_SUBCLUSTER_EXPIRATION_MS =
+ FEDERATION_GPG_PREFIX + "subcluster.heartbeat.expiration-ms";
+ public static final long DEFAULT_GPG_SUBCLUSTER_EXPIRATION_MS = 1800000;
+
////////////////////////////////
// Other Configs
////////////////////////////////
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa3ee34c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
index 2cc842f..66493f3 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
@@ -3533,6 +3533,30 @@
<property>
<description>
+ The number of threads to use for the GPG scheduled executor service.
+ </description>
+ <name>yarn.federation.gpg.scheduled.executor.threads</name>
+ <value>10</value>
+ </property>
+
+ <property>
+ <description>
+ The interval at which the subcluster cleaner runs, -1 means disabled.
+ </description>
+ <name>yarn.federation.gpg.subcluster.cleaner.interval-ms</name>
+ <value>-1</value>
+ </property>
+
+ <property>
+ <description>
+ The expiration time for a subcluster heartbeat, default is 30 minutes.
+ </description>
+ <name>yarn.federation.gpg.subcluster.heartbeat.expiration-ms</name>
+ <value>1800000</value>
+ </property>
+
+ <property>
+ <description>
It is TimelineClient 1.5 configuration whether to store active
application’s timeline data with in user directory i.e
${yarn.timeline-service.entity-group-fs-store.active-dir}/${user.name}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa3ee34c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
index 7c06256..b42fc79 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/store/impl/MemoryFederationStateStore.java
@@ -68,6 +68,8 @@ import org.apache.hadoop.yarn.util.MonotonicClock;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
+import com.google.common.annotations.VisibleForTesting;
+
/**
* In-memory implementation of {@link FederationStateStore}.
*/
@@ -158,6 +160,17 @@ public class MemoryFederationStateStore implements FederationStateStore {
return SubClusterHeartbeatResponse.newInstance();
}
+ @VisibleForTesting
+ public void setSubClusterLastHeartbeat(SubClusterId subClusterId,
+ long lastHeartbeat) throws YarnException {
+ SubClusterInfo subClusterInfo = membership.get(subClusterId);
+ if (subClusterInfo == null) {
+ throw new YarnException(
+ "Subcluster " + subClusterId.toString() + " does not exist");
+ }
+ subClusterInfo.setLastHeartBeat(lastHeartbeat);
+ }
+
@Override
public GetSubClusterInfoResponse getSubCluster(
GetSubClusterInfoRequest request) throws YarnException {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa3ee34c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java
index 1bcb0f4..4c3bed0 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java
@@ -62,9 +62,11 @@ import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPolic
import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPolicyConfigurationResponse;
import org.apache.hadoop.yarn.server.federation.store.records.GetSubClustersInfoRequest;
import org.apache.hadoop.yarn.server.federation.store.records.GetSubClustersInfoResponse;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterDeregisterRequest;
import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
import org.apache.hadoop.yarn.server.federation.store.records.SubClusterPolicyConfiguration;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterState;
import org.apache.hadoop.yarn.server.federation.store.records.UpdateApplicationHomeSubClusterRequest;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -221,6 +223,22 @@ public final class FederationStateStoreFacade {
}
/**
+ * Deregisters a <em>subcluster</em> identified by {@code SubClusterId},
+ * changing its state in the federation. This can be used to mark the
+ * sub-cluster lost, deregistered, or decommissioned.
+ *
+ * @param subClusterId the target subclusterId
+ * @param subClusterState the state to update it to
+ * @throws YarnException if the request is invalid/fails
+ */
+ public void deregisterSubCluster(SubClusterId subClusterId,
+ SubClusterState subClusterState) throws YarnException {
+ stateStore.deregisterSubCluster(
+ SubClusterDeregisterRequest.newInstance(subClusterId, subClusterState));
+ return;
+ }
+
+ /**
* Returns the {@link SubClusterInfo} for the specified {@link SubClusterId}.
*
* @param subClusterId the identifier of the sub-cluster
@@ -255,8 +273,7 @@ public final class FederationStateStoreFacade {
public SubClusterInfo getSubCluster(final SubClusterId subClusterId,
final boolean flushCache) throws YarnException {
if (flushCache && isCachingEnabled()) {
- LOG.info("Flushing subClusters from cache and rehydrating from store,"
- + " most likely on account of RM failover.");
+ LOG.info("Flushing subClusters from cache and rehydrating from store.");
cache.remove(buildGetSubClustersCacheRequest(false));
}
return getSubCluster(subClusterId);
@@ -287,6 +304,26 @@ public final class FederationStateStoreFacade {
}
/**
+ * Updates the cache with the central {@link FederationStateStore} and returns
+ * the {@link SubClusterInfo} of all sub-clusters, optionally excluding inactive ones.
+ *
+ * @param filterInactiveSubClusters whether to filter out inactive
+ * sub-clusters
+ * @param flushCache flag to indicate if the cache should be flushed or not
+ * @return the sub cluster information
+ * @throws YarnException if the call to the state store is unsuccessful
+ */
+ public Map<SubClusterId, SubClusterInfo> getSubClusters(
+ final boolean filterInactiveSubClusters, final boolean flushCache)
+ throws YarnException {
+ if (flushCache && isCachingEnabled()) {
+ LOG.info("Flushing subClusters from cache and rehydrating from store.");
+ cache.remove(buildGetSubClustersCacheRequest(filterInactiveSubClusters));
+ }
+ return getSubClusters(filterInactiveSubClusters);
+ }
+
+ /**
* Returns the {@link SubClusterPolicyConfiguration} for the specified queue.
*
* @param queue the queue whose policy is required
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa3ee34c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GlobalPolicyGenerator.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GlobalPolicyGenerator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GlobalPolicyGenerator.java
index c1f7460..f6cfba0 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GlobalPolicyGenerator.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GlobalPolicyGenerator.java
@@ -18,8 +18,11 @@
package org.apache.hadoop.yarn.server.globalpolicygenerator;
+import java.util.concurrent.ScheduledThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
+import org.apache.commons.lang.time.DurationFormatUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.service.CompositeService;
@@ -28,6 +31,7 @@ import org.apache.hadoop.util.StringUtils;
import org.apache.hadoop.yarn.YarnUncaughtExceptionHandler;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade;
+import org.apache.hadoop.yarn.server.globalpolicygenerator.subclustercleaner.SubClusterCleaner;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -55,36 +59,26 @@ public class GlobalPolicyGenerator extends CompositeService {
// Federation Variables
private GPGContext gpgContext;
+ // Scheduler service that runs tasks periodically
+ private ScheduledThreadPoolExecutor scheduledExecutorService;
+ private SubClusterCleaner subClusterCleaner;
+
public GlobalPolicyGenerator() {
super(GlobalPolicyGenerator.class.getName());
this.gpgContext = new GPGContextImpl();
}
- protected void initAndStart(Configuration conf, boolean hasToReboot) {
- try {
- // Remove the old hook if we are rebooting.
- if (hasToReboot && null != gpgShutdownHook) {
- ShutdownHookManager.get().removeShutdownHook(gpgShutdownHook);
- }
-
- gpgShutdownHook = new CompositeServiceShutdownHook(this);
- ShutdownHookManager.get().addShutdownHook(gpgShutdownHook,
- SHUTDOWN_HOOK_PRIORITY);
-
- this.init(conf);
- this.start();
- } catch (Throwable t) {
- LOG.error("Error starting globalpolicygenerator", t);
- System.exit(-1);
- }
- }
-
@Override
protected void serviceInit(Configuration conf) throws Exception {
// Set up the context
this.gpgContext
.setStateStoreFacade(FederationStateStoreFacade.getInstance());
+ this.scheduledExecutorService = new ScheduledThreadPoolExecutor(
+ conf.getInt(YarnConfiguration.GPG_SCHEDULED_EXECUTOR_THREADS,
+ YarnConfiguration.DEFAULT_GPG_SCHEDULED_EXECUTOR_THREADS));
+ this.subClusterCleaner = new SubClusterCleaner(conf, this.gpgContext);
+
DefaultMetricsSystem.initialize(METRICS_NAME);
// super.serviceInit after all services are added
@@ -94,10 +88,32 @@ public class GlobalPolicyGenerator extends CompositeService {
@Override
protected void serviceStart() throws Exception {
super.serviceStart();
+
+ // Schedule the SubClusterCleaner service
+ long scCleanerIntervalMs = getConfig().getLong(
+ YarnConfiguration.GPG_SUBCLUSTER_CLEANER_INTERVAL_MS,
+ YarnConfiguration.DEFAULT_GPG_SUBCLUSTER_CLEANER_INTERVAL_MS);
+ if (scCleanerIntervalMs > 0) {
+ this.scheduledExecutorService.scheduleAtFixedRate(this.subClusterCleaner,
+ 0, scCleanerIntervalMs, TimeUnit.MILLISECONDS);
+ LOG.info("Scheduled sub-cluster cleaner with interval: {}",
+ DurationFormatUtils.formatDurationISO(scCleanerIntervalMs));
+ }
}
@Override
protected void serviceStop() throws Exception {
+ try {
+ if (this.scheduledExecutorService != null
+ && !this.scheduledExecutorService.isShutdown()) {
+ this.scheduledExecutorService.shutdown();
+ LOG.info("Stopped ScheduledExecutorService");
+ }
+ } catch (Exception e) {
+ LOG.error("Failed to shutdown ScheduledExecutorService", e);
+ throw e;
+ }
+
if (this.isStopping.getAndSet(true)) {
return;
}
@@ -113,20 +129,40 @@ public class GlobalPolicyGenerator extends CompositeService {
return this.gpgContext;
}
+ private void initAndStart(Configuration conf, boolean hasToReboot) {
+ // Remove the old hook if we are rebooting.
+ if (hasToReboot && null != gpgShutdownHook) {
+ ShutdownHookManager.get().removeShutdownHook(gpgShutdownHook);
+ }
+
+ gpgShutdownHook = new CompositeServiceShutdownHook(this);
+ ShutdownHookManager.get().addShutdownHook(gpgShutdownHook,
+ SHUTDOWN_HOOK_PRIORITY);
+
+ this.init(conf);
+ this.start();
+ }
+
@SuppressWarnings("resource")
public static void startGPG(String[] argv, Configuration conf) {
boolean federationEnabled =
conf.getBoolean(YarnConfiguration.FEDERATION_ENABLED,
YarnConfiguration.DEFAULT_FEDERATION_ENABLED);
- if (federationEnabled) {
- Thread.setDefaultUncaughtExceptionHandler(
- new YarnUncaughtExceptionHandler());
- StringUtils.startupShutdownMessage(GlobalPolicyGenerator.class, argv,
- LOG);
- GlobalPolicyGenerator globalPolicyGenerator = new GlobalPolicyGenerator();
- globalPolicyGenerator.initAndStart(conf, false);
- } else {
- LOG.warn("Federation is not enabled. The gpg cannot start.");
+ try {
+ if (federationEnabled) {
+ Thread.setDefaultUncaughtExceptionHandler(
+ new YarnUncaughtExceptionHandler());
+ StringUtils.startupShutdownMessage(GlobalPolicyGenerator.class, argv,
+ LOG);
+ GlobalPolicyGenerator globalPolicyGenerator =
+ new GlobalPolicyGenerator();
+ globalPolicyGenerator.initAndStart(conf, false);
+ } else {
+ LOG.warn("Federation is not enabled. The gpg cannot start.");
+ }
+ } catch (Throwable t) {
+ LOG.error("Error starting globalpolicygenerator", t);
+ System.exit(-1);
}
}
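The cleaner stays dormant unless the interval property is set to a positive value. A minimal sketch of turning it on programmatically, using only the configuration keys added by this patch set; the one-minute interval is an arbitrary example, not a recommendation.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.globalpolicygenerator.GlobalPolicyGenerator;

public class GpgCleanerConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    // startGPG() refuses to start unless federation is enabled.
    conf.setBoolean(YarnConfiguration.FEDERATION_ENABLED, true);
    // Run the SubClusterCleaner every 60 seconds; the default of -1 disables it.
    conf.setLong(YarnConfiguration.GPG_SUBCLUSTER_CLEANER_INTERVAL_MS, 60000);
    // Mark a subcluster SC_LOST once its heartbeat is older than 30 minutes.
    conf.setLong(YarnConfiguration.GPG_SUBCLUSTER_EXPIRATION_MS, 1800000);
    GlobalPolicyGenerator.startGPG(args, conf);
  }
}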
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa3ee34c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/subclustercleaner/SubClusterCleaner.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/subclustercleaner/SubClusterCleaner.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/subclustercleaner/SubClusterCleaner.java
new file mode 100644
index 0000000..dad5121
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/subclustercleaner/SubClusterCleaner.java
@@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.globalpolicygenerator.subclustercleaner;
+
+import java.util.Date;
+import java.util.Map;
+
+import org.apache.commons.lang.time.DurationFormatUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterState;
+import org.apache.hadoop.yarn.server.globalpolicygenerator.GPGContext;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * The sub-cluster cleaner is one of the GPG's services that periodically checks
+ * the membership table in FederationStateStore and marks sub-clusters that
+ * have not sent a heartbeat within a certain amount of time as LOST.
+ */
+public class SubClusterCleaner implements Runnable {
+
+ private static final Logger LOG =
+ LoggerFactory.getLogger(SubClusterCleaner.class);
+
+ private GPGContext gpgContext;
+ private long heartbeatExpirationMillis;
+
+ /**
+ * The sub-cluster cleaner runnable is invoked by the sub-cluster cleaner
+ * service to check the membership table and deregister sub-clusters that
+ * have not sent a heartbeat within the configured expiration time.
+ */
+ public SubClusterCleaner(Configuration conf, GPGContext gpgContext) {
+ this.heartbeatExpirationMillis =
+ conf.getLong(YarnConfiguration.GPG_SUBCLUSTER_EXPIRATION_MS,
+ YarnConfiguration.DEFAULT_GPG_SUBCLUSTER_EXPIRATION_MS);
+ this.gpgContext = gpgContext;
+ LOG.info("Initialized SubClusterCleaner with heartbeat expiration of {}",
+ DurationFormatUtils.formatDurationISO(this.heartbeatExpirationMillis));
+ }
+
+ @Override
+ public void run() {
+ try {
+ Date now = new Date();
+ LOG.info("SubClusterCleaner at {}", now);
+
+ Map<SubClusterId, SubClusterInfo> infoMap =
+ this.gpgContext.getStateStoreFacade().getSubClusters(false, true);
+
+ // Iterate over each sub cluster and check last heartbeat
+ for (Map.Entry<SubClusterId, SubClusterInfo> entry : infoMap.entrySet()) {
+ SubClusterInfo subClusterInfo = entry.getValue();
+
+ Date lastHeartBeat = new Date(subClusterInfo.getLastHeartBeat());
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Checking subcluster {} in state {}, last heartbeat at {}",
+ subClusterInfo.getSubClusterId(), subClusterInfo.getState(),
+ lastHeartBeat);
+ }
+
+ if (!subClusterInfo.getState().isUnusable()) {
+ long timeUntilDeregister = this.heartbeatExpirationMillis
+ - (now.getTime() - lastHeartBeat.getTime());
+ // Deregister sub-cluster as SC_LOST if last heartbeat too old
+ if (timeUntilDeregister < 0) {
+ LOG.warn(
+ "Deregistering subcluster {} in state {} last heartbeat at {}",
+ subClusterInfo.getSubClusterId(), subClusterInfo.getState(),
+ new Date(subClusterInfo.getLastHeartBeat()));
+ try {
+ this.gpgContext.getStateStoreFacade().deregisterSubCluster(
+ subClusterInfo.getSubClusterId(), SubClusterState.SC_LOST);
+ } catch (Exception e) {
+ LOG.error("deregisterSubCluster failed on subcluster "
+ + subClusterInfo.getSubClusterId(), e);
+ }
+ } else if (LOG.isDebugEnabled()) {
+ LOG.debug("Time until deregister for subcluster {}: {}",
+ entry.getKey(),
+ DurationFormatUtils.formatDurationISO(timeUntilDeregister));
+ }
+ }
+ }
+ } catch (Throwable e) {
+ LOG.error("Subcluster cleaner fails: ", e);
+ }
+ }
+
+}
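The deregistration test in run() reduces to simple arithmetic. A self-contained sketch with made-up timestamps, using the same expression and the default 30-minute expiration:

public class HeartbeatExpirySketch {
  public static void main(String[] args) {
    long heartbeatExpirationMillis = 1800000;   // 30 minutes, the default
    long now = System.currentTimeMillis();
    long lastHeartBeat = now - 2100000;         // 35 minutes ago (made up)
    // Same expression as SubClusterCleaner.run():
    long timeUntilDeregister =
        heartbeatExpirationMillis - (now - lastHeartBeat);
    // -300000 here: five minutes past expiry, so the cleaner would
    // deregister the subcluster as SC_LOST.
    System.out.println(timeUntilDeregister < 0);  // true
  }
}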
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa3ee34c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/subclustercleaner/package-info.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/subclustercleaner/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/subclustercleaner/package-info.java
new file mode 100644
index 0000000..f65444a
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/subclustercleaner/package-info.java
@@ -0,0 +1,19 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.globalpolicygenerator.subclustercleaner;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/fa3ee34c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/subclustercleaner/TestSubClusterCleaner.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/subclustercleaner/TestSubClusterCleaner.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/subclustercleaner/TestSubClusterCleaner.java
new file mode 100644
index 0000000..19b8802
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/subclustercleaner/TestSubClusterCleaner.java
@@ -0,0 +1,118 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.globalpolicygenerator.subclustercleaner;
+
+import java.util.ArrayList;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.federation.store.impl.MemoryFederationStateStore;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterHeartbeatRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterRegisterRequest;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterState;
+import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade;
+import org.apache.hadoop.yarn.server.globalpolicygenerator.GPGContext;
+import org.apache.hadoop.yarn.server.globalpolicygenerator.GPGContextImpl;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Unit test for Sub-cluster Cleaner in GPG.
+ */
+public class TestSubClusterCleaner {
+
+ private Configuration conf;
+ private MemoryFederationStateStore stateStore;
+ private FederationStateStoreFacade facade;
+ private SubClusterCleaner cleaner;
+ private GPGContext gpgContext;
+
+ private ArrayList<SubClusterId> subClusterIds;
+
+ @Before
+ public void setup() throws YarnException {
+ conf = new YarnConfiguration();
+
+ // subcluster expires in one second
+ conf.setLong(YarnConfiguration.GPG_SUBCLUSTER_EXPIRATION_MS, 1000);
+
+ stateStore = new MemoryFederationStateStore();
+ stateStore.init(conf);
+
+ facade = FederationStateStoreFacade.getInstance();
+ facade.reinitialize(stateStore, conf);
+
+ gpgContext = new GPGContextImpl();
+ gpgContext.setStateStoreFacade(facade);
+
+ cleaner = new SubClusterCleaner(conf, gpgContext);
+
+ // Create and register three sub-clusters
+ subClusterIds = new ArrayList<SubClusterId>();
+ for (int i = 0; i < 3; i++) {
+ // Create sub cluster id and info
+ SubClusterId subClusterId =
+ SubClusterId.newInstance("SUBCLUSTER-" + Integer.toString(i));
+
+ SubClusterInfo subClusterInfo = SubClusterInfo.newInstance(subClusterId,
+ "1.2.3.4:1", "1.2.3.4:2", "1.2.3.4:3", "1.2.3.4:4",
+ SubClusterState.SC_RUNNING, System.currentTimeMillis(), "");
+ // Register the sub cluster
+ stateStore.registerSubCluster(
+ SubClusterRegisterRequest.newInstance(subClusterInfo));
+ // Append the id to a local list
+ subClusterIds.add(subClusterId);
+ }
+ }
+
+ @After
+ public void breakDown() throws Exception {
+ stateStore.close();
+ }
+
+ @Test
+ public void testSubClusterRegisterHeartBeatTime() throws YarnException {
+ cleaner.run();
+ Assert.assertEquals(3, facade.getSubClusters(true, true).size());
+ }
+
+ /**
+ * Test the base use case.
+ */
+ @Test
+ public void testSubClusterHeartBeat() throws YarnException {
+ // The first subcluster reports as Unhealthy
+ SubClusterId subClusterId = subClusterIds.get(0);
+ stateStore.subClusterHeartbeat(SubClusterHeartbeatRequest
+ .newInstance(subClusterId, SubClusterState.SC_UNHEALTHY, "capacity"));
+
+ // The second subcluster has not heartbeated for two seconds; it should be marked lost
+ subClusterId = subClusterIds.get(1);
+ stateStore.setSubClusterLastHeartbeat(subClusterId,
+ System.currentTimeMillis() - 2000);
+
+ cleaner.run();
+ Assert.assertEquals(1, facade.getSubClusters(true, true).size());
+ }
+}
\ No newline at end of file
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[44/50] [abbrv] hadoop git commit: HDDS-210. Make "-file" argument
optional for ozone getKey command. Contributed by Lokesh Jain.
Posted by bo...@apache.org.
HDDS-210. Make "-file" argument optional for ozone getKey command. Contributed by Lokesh Jain.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/103f2eeb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/103f2eeb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/103f2eeb
Branch: refs/heads/YARN-7402
Commit: 103f2eeb57dbadd9abbbc25a05bb7c79b48fdc17
Parents: 88625f5
Author: Xiaoyu Yao <xy...@apache.org>
Authored: Fri Jul 13 11:44:24 2018 -0700
Committer: Xiaoyu Yao <xy...@apache.org>
Committed: Fri Jul 13 11:45:02 2018 -0700
----------------------------------------------------------------------
.../org/apache/hadoop/ozone/ozShell/TestOzoneShell.java | 12 ++++++++++++
.../hadoop/ozone/web/ozShell/keys/GetKeyHandler.java | 9 ++++++---
2 files changed, 18 insertions(+), 3 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/103f2eeb/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
index a4b30f0..000d530 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
@@ -705,6 +705,18 @@ public class TestOzoneShell {
randFile.read(dataBytes);
}
assertEquals(dataStr, DFSUtil.bytes2String(dataBytes));
+
+ tmpPath = baseDir.getAbsolutePath() + File.separatorChar + keyName;
+ args = new String[] {"-getKey",
+ url + "/" + volumeName + "/" + bucketName + "/" + keyName, "-file",
+ baseDir.getAbsolutePath()};
+ assertEquals(0, ToolRunner.run(shell, args));
+
+ dataBytes = new byte[dataStr.length()];
+ try (FileInputStream randFile = new FileInputStream(new File(tmpPath))) {
+ randFile.read(dataBytes);
+ }
+ assertEquals(dataStr, DFSUtil.bytes2String(dataBytes));
}
@Test
http://git-wip-us.apache.org/repos/asf/hadoop/blob/103f2eeb/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/GetKeyHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/GetKeyHandler.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/GetKeyHandler.java
index 34620b4..2d059e0 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/GetKeyHandler.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/GetKeyHandler.java
@@ -98,11 +98,14 @@ public class GetKeyHandler extends Handler {
Path dataFilePath = Paths.get(fileName);
File dataFile = new File(fileName);
+ if (dataFile.exists() && dataFile.isDirectory()) {
+ dataFile = new File(fileName, keyName);
+ }
if (dataFile.exists()) {
- throw new OzoneClientException(fileName +
- "exists. Download will overwrite an " +
- "existing file. Aborting.");
+ throw new OzoneClientException(
+ fileName + "exists. Download will overwrite an "
+ + "existing file. Aborting.");
}
OzoneVolume vol = client.getObjectStore().getVolume(volumeName);
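For readers following the behavior change rather than the diff hunks: when the download target resolves to a directory, getKey now stores the key inside it under the key's own name, which is what allows the "-file" argument to become optional; the existing overwrite guard is kept. A minimal standalone sketch of that resolution rule (editorial illustration only; the class and method names are hypothetical, and plain java.io stands in for OzoneClientException):

import java.io.File;
import java.io.IOException;

public final class GetKeyDestinationSketch {

  // Mirrors the patched logic above: directory targets gain the key name,
  // and an existing file always aborts the download.
  static File resolveDestination(String fileArg, String keyName)
      throws IOException {
    File dataFile = new File(fileArg);
    if (dataFile.exists() && dataFile.isDirectory()) {
      dataFile = new File(dataFile, keyName);
    }
    if (dataFile.exists()) {
      throw new IOException(dataFile + " exists. Download will overwrite an"
          + " existing file. Aborting.");
    }
    return dataFile;
  }

  public static void main(String[] args) throws IOException {
    // Resolves to /tmp/key1 when /tmp exists and /tmp/key1 does not.
    System.out.println(resolveDestination("/tmp", "key1"));
  }
}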
[49/50] [abbrv] hadoop git commit: YARN-3660. [GPG] Federation Global
Policy Generator (service hook only). (Contributed by Botong Huang via
curino)
Posted by bo...@apache.org.
YARN-3660. [GPG] Federation Global Policy Generator (service hook only). (Contributed by Botong Huang via curino)
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/43b8c2da
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/43b8c2da
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/43b8c2da
Branch: refs/heads/YARN-7402
Commit: 43b8c2daa0dd71c9b9934ad4b5086a81fae1e58a
Parents: 103f2ee
Author: Carlo Curino <cu...@apache.org>
Authored: Thu Jan 18 17:21:06 2018 -0800
Committer: Botong Huang <bo...@apache.org>
Committed: Fri Jul 13 17:42:58 2018 -0700
----------------------------------------------------------------------
hadoop-project/pom.xml | 6 +
hadoop-yarn-project/hadoop-yarn/bin/yarn | 5 +
hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd | 55 +++++---
.../hadoop-yarn/conf/yarn-env.sh | 12 ++
.../pom.xml | 98 +++++++++++++
.../globalpolicygenerator/GPGContext.java | 31 +++++
.../globalpolicygenerator/GPGContextImpl.java | 41 ++++++
.../GlobalPolicyGenerator.java | 136 +++++++++++++++++++
.../globalpolicygenerator/package-info.java | 19 +++
.../TestGlobalPolicyGenerator.java | 38 ++++++
.../hadoop-yarn/hadoop-yarn-server/pom.xml | 1 +
hadoop-yarn-project/pom.xml | 4 +
12 files changed, 424 insertions(+), 22 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/43b8c2da/hadoop-project/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 387a3da..ede6af4 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -446,6 +446,12 @@
<dependency>
<groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-yarn-server-globalpolicygenerator</artifactId>
+ <version>${project.version}</version>
+ </dependency>
+
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-yarn-services-core</artifactId>
<version>${hadoop.version}</version>
</dependency>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/43b8c2da/hadoop-yarn-project/hadoop-yarn/bin/yarn
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/bin/yarn b/hadoop-yarn-project/hadoop-yarn/bin/yarn
index 69afe6f..8061859 100755
--- a/hadoop-yarn-project/hadoop-yarn/bin/yarn
+++ b/hadoop-yarn-project/hadoop-yarn/bin/yarn
@@ -39,6 +39,7 @@ function hadoop_usage
hadoop_add_subcommand "container" client "prints container(s) report"
hadoop_add_subcommand "daemonlog" admin "get/set the log level for each daemon"
hadoop_add_subcommand "envvars" client "display computed Hadoop environment variables"
+ hadoop_add_subcommand "globalpolicygenerator" daemon "run the Global Policy Generator"
hadoop_add_subcommand "jar <jar>" client "run a jar file"
hadoop_add_subcommand "logs" client "dump container logs"
hadoop_add_subcommand "node" admin "prints node report(s)"
@@ -103,6 +104,10 @@ ${HADOOP_COMMON_HOME}/${HADOOP_COMMON_LIB_JARS_DIR}"
echo "HADOOP_TOOLS_LIB_JARS_DIR='${HADOOP_TOOLS_LIB_JARS_DIR}'"
exit 0
;;
+ globalpolicygenerator)
+ HADOOP_SUBCMD_SUPPORTDAEMONIZATION="true"
+ HADOOP_CLASSNAME='org.apache.hadoop.yarn.server.globalpolicygenerator.GlobalPolicyGenerator'
+ ;;
jar)
HADOOP_CLASSNAME=org.apache.hadoop.util.RunJar
;;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/43b8c2da/hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd b/hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd
index e1ac112..bebfd71 100644
--- a/hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd
+++ b/hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd
@@ -134,6 +134,10 @@ if "%1" == "--loglevel" (
set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\yarn-server\yarn-server-router\target\classes
)
+ if exist %HADOOP_YARN_HOME%\yarn-server\yarn-server-globalpolicygenerator\target\classes (
+ set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\yarn-server\yarn-server-globalpolicygenerator\target\classes
+ )
+
if exist %HADOOP_YARN_HOME%\build\test\classes (
set CLASSPATH=%CLASSPATH%;%HADOOP_YARN_HOME%\build\test\classes
)
@@ -155,7 +159,7 @@ if "%1" == "--loglevel" (
set yarncommands=resourcemanager nodemanager proxyserver rmadmin version jar ^
application applicationattempt container node queue logs daemonlog historyserver ^
- timelineserver timelinereader router classpath
+ timelineserver timelinereader router globalpolicygenerator classpath
for %%i in ( %yarncommands% ) do (
if %yarn-command% == %%i set yarncommand=true
)
@@ -259,7 +263,13 @@ goto :eof
:router
set CLASSPATH=%CLASSPATH%;%YARN_CONF_DIR%\router-config\log4j.properties
set CLASS=org.apache.hadoop.yarn.server.router.Router
- set YARN_OPTS=%YARN_OPTS% %HADOOP_ROUTER_OPTS%
+ set YARN_OPTS=%YARN_OPTS% %YARN_ROUTER_OPTS%
+ goto :eof
+
+:globalpolicygenerator
+ set CLASSPATH=%CLASSPATH%;%YARN_CONF_DIR%\globalpolicygenerator-config\log4j.properties
+ set CLASS=org.apache.hadoop.yarn.server.globalpolicygenerator.GlobalPolicyGenerator
+ set YARN_OPTS=%YARN_OPTS% %YARN_GLOBALPOLICYGENERATOR_OPTS%
goto :eof
:nodemanager
@@ -336,27 +346,28 @@ goto :eof
:print_usage
@echo Usage: yarn [--config confdir] [--loglevel loglevel] COMMAND
@echo where COMMAND is one of:
- @echo resourcemanager run the ResourceManager
- @echo nodemanager run a nodemanager on each slave
- @echo router run the Router daemon
- @echo timelineserver run the timeline server
- @echo timelinereader run the timeline reader server
- @echo rmadmin admin tools
- @echo version print the version
- @echo jar ^<jar^> run a jar file
- @echo application prints application(s) report/kill application
- @echo applicationattempt prints applicationattempt(s) report
- @echo cluster prints cluster information
- @echo container prints container(s) report
- @echo node prints node report(s)
- @echo queue prints queue information
- @echo logs dump container logs
- @echo schedulerconf updates scheduler configuration
- @echo classpath prints the class path needed to get the
- @echo Hadoop jar and the required libraries
- @echo daemonlog get/set the log level for each daemon
+ @echo resourcemanager run the ResourceManager
+ @echo nodemanager run a nodemanager on each slave
+ @echo router run the Router daemon
+ @echo globalpolicygenerator run the Global Policy Generator
+ @echo timelineserver run the timeline server
+ @echo timelinereader run the timeline reader server
+ @echo rmadmin admin tools
+ @echo version print the version
+ @echo jar ^<jar^> run a jar file
+ @echo application prints application(s) report/kill application
+ @echo applicationattempt prints applicationattempt(s) report
+ @echo cluster prints cluster information
+ @echo container prints container(s) report
+ @echo node prints node report(s)
+ @echo queue prints queue information
+ @echo logs dump container logs
+ @echo schedulerconf updates scheduler configuration
+ @echo classpath prints the class path needed to get the
+ @echo Hadoop jar and the required libraries
+ @echo daemonlog get/set the log level for each daemon
@echo or
- @echo CLASSNAME run the class named CLASSNAME
+ @echo CLASSNAME run the class named CLASSNAME
@echo Most commands print help when invoked w/o parameters.
endlocal
http://git-wip-us.apache.org/repos/asf/hadoop/blob/43b8c2da/hadoop-yarn-project/hadoop-yarn/conf/yarn-env.sh
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/conf/yarn-env.sh b/hadoop-yarn-project/hadoop-yarn/conf/yarn-env.sh
index 76d1d6b..ae5af49 100644
--- a/hadoop-yarn-project/hadoop-yarn/conf/yarn-env.sh
+++ b/hadoop-yarn-project/hadoop-yarn/conf/yarn-env.sh
@@ -150,6 +150,18 @@
#export YARN_ROUTER_OPTS=
###
+# Global Policy Generator specific parameters
+###
+
+# Specify the JVM options to be used when starting the GPG.
+# These options will be appended to the options specified as HADOOP_OPTS
+# and therefore may override any similar flags set in HADOOP_OPTS
+#
+# See ResourceManager for some examples
+#
+#export YARN_GLOBALPOLICYGENERATOR_OPTS=
+
+###
# Registry DNS specific parameters
###
# For privileged registry DNS, user to run as after dropping privileges
http://git-wip-us.apache.org/repos/asf/hadoop/blob/43b8c2da/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml
new file mode 100644
index 0000000..9bbb936
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/pom.xml
@@ -0,0 +1,98 @@
+<?xml version="1.0"?>
+<!--
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License. See accompanying LICENSE file.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+ http://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <parent>
+ <artifactId>hadoop-yarn-server</artifactId>
+ <groupId>org.apache.hadoop</groupId>
+ <version>3.1.0-SNAPSHOT</version>
+ </parent>
+ <modelVersion>4.0.0</modelVersion>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-yarn-server-globalpolicygenerator</artifactId>
+ <version>3.1.0-SNAPSHOT</version>
+ <name>hadoop-yarn-server-globalpolicygenerator</name>
+
+ <properties>
+ <!-- Needed for generating FindBugs warnings using parent pom -->
+ <yarn.basedir>${project.parent.parent.basedir}</yarn.basedir>
+ </properties>
+
+ <dependencies>
+
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-yarn-server-common</artifactId>
+ </dependency>
+
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-common</artifactId>
+ </dependency>
+
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-yarn-api</artifactId>
+ </dependency>
+
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-yarn-common</artifactId>
+ </dependency>
+
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-common</artifactId>
+ <type>test-jar</type>
+ <scope>test</scope>
+ </dependency>
+
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
+ </dependency>
+
+ <dependency>
+ <groupId>junit</groupId>
+ <artifactId>junit</artifactId>
+ <scope>test</scope>
+ </dependency>
+
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-yarn-server-common</artifactId>
+ <type>test-jar</type>
+ <scope>test</scope>
+ </dependency>
+
+ <dependency>
+ <groupId>org.hsqldb</groupId>
+ <artifactId>hsqldb</artifactId>
+ <scope>test</scope>
+ </dependency>
+
+ </dependencies>
+
+ <build>
+ <plugins>
+ <plugin>
+ <groupId>org.apache.rat</groupId>
+ <artifactId>apache-rat-plugin</artifactId>
+ </plugin>
+ </plugins>
+ </build>
+</project>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/43b8c2da/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContext.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContext.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContext.java
new file mode 100644
index 0000000..da8a383
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContext.java
@@ -0,0 +1,31 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.globalpolicygenerator;
+
+import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade;
+
+/**
+ * Context for Global Policy Generator.
+ */
+public interface GPGContext {
+
+ FederationStateStoreFacade getStateStoreFacade();
+
+ void setStateStoreFacade(FederationStateStoreFacade facade);
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/43b8c2da/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContextImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContextImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContextImpl.java
new file mode 100644
index 0000000..3884ace
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGContextImpl.java
@@ -0,0 +1,41 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.globalpolicygenerator;
+
+import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade;
+
+/**
+ * Context implementation for Global Policy Generator.
+ */
+public class GPGContextImpl implements GPGContext {
+
+ private FederationStateStoreFacade facade;
+
+ @Override
+ public FederationStateStoreFacade getStateStoreFacade() {
+ return facade;
+ }
+
+ @Override
+ public void setStateStoreFacade(
+ FederationStateStoreFacade federationStateStoreFacade) {
+ this.facade = federationStateStoreFacade;
+ }
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/43b8c2da/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GlobalPolicyGenerator.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GlobalPolicyGenerator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GlobalPolicyGenerator.java
new file mode 100644
index 0000000..c1f7460
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GlobalPolicyGenerator.java
@@ -0,0 +1,136 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.globalpolicygenerator;
+
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.service.CompositeService;
+import org.apache.hadoop.util.ShutdownHookManager;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.yarn.YarnUncaughtExceptionHandler;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.server.federation.utils.FederationStateStoreFacade;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Global Policy Generator (GPG) is a YARN Federation component. By tuning the
+ * Federation policies in the Federation State Store, the GPG oversees the
+ * entire federated cluster and ensures that the system stays tuned and
+ * balanced at all times.
+ *
+ * The GPG operates continuously but out-of-band from all cluster operations,
+ * which allows it to enforce global invariants, affect load balancing,
+ * trigger draining of sub-clusters that will undergo maintenance, etc.
+ */
+public class GlobalPolicyGenerator extends CompositeService {
+
+ public static final Logger LOG =
+ LoggerFactory.getLogger(GlobalPolicyGenerator.class);
+
+ // YARN Variables
+ private static CompositeServiceShutdownHook gpgShutdownHook;
+ public static final int SHUTDOWN_HOOK_PRIORITY = 30;
+ private AtomicBoolean isStopping = new AtomicBoolean(false);
+ private static final String METRICS_NAME = "Global Policy Generator";
+
+ // Federation Variables
+ private GPGContext gpgContext;
+
+ public GlobalPolicyGenerator() {
+ super(GlobalPolicyGenerator.class.getName());
+ this.gpgContext = new GPGContextImpl();
+ }
+
+ protected void initAndStart(Configuration conf, boolean hasToReboot) {
+ try {
+ // Remove the old hook if we are rebooting.
+ if (hasToReboot && null != gpgShutdownHook) {
+ ShutdownHookManager.get().removeShutdownHook(gpgShutdownHook);
+ }
+
+ gpgShutdownHook = new CompositeServiceShutdownHook(this);
+ ShutdownHookManager.get().addShutdownHook(gpgShutdownHook,
+ SHUTDOWN_HOOK_PRIORITY);
+
+ this.init(conf);
+ this.start();
+ } catch (Throwable t) {
+ LOG.error("Error starting globalpolicygenerator", t);
+ System.exit(-1);
+ }
+ }
+
+ @Override
+ protected void serviceInit(Configuration conf) throws Exception {
+ // Set up the context
+ this.gpgContext
+ .setStateStoreFacade(FederationStateStoreFacade.getInstance());
+
+ DefaultMetricsSystem.initialize(METRICS_NAME);
+
+ // super.serviceInit after all services are added
+ super.serviceInit(conf);
+ }
+
+ @Override
+ protected void serviceStart() throws Exception {
+ super.serviceStart();
+ }
+
+ @Override
+ protected void serviceStop() throws Exception {
+ if (this.isStopping.getAndSet(true)) {
+ return;
+ }
+ DefaultMetricsSystem.shutdown();
+ super.serviceStop();
+ }
+
+ public String getName() {
+ return "FederationGlobalPolicyGenerator";
+ }
+
+ public GPGContext getGPGContext() {
+ return this.gpgContext;
+ }
+
+ @SuppressWarnings("resource")
+ public static void startGPG(String[] argv, Configuration conf) {
+ boolean federationEnabled =
+ conf.getBoolean(YarnConfiguration.FEDERATION_ENABLED,
+ YarnConfiguration.DEFAULT_FEDERATION_ENABLED);
+ if (federationEnabled) {
+ Thread.setDefaultUncaughtExceptionHandler(
+ new YarnUncaughtExceptionHandler());
+ StringUtils.startupShutdownMessage(GlobalPolicyGenerator.class, argv,
+ LOG);
+ GlobalPolicyGenerator globalPolicyGenerator = new GlobalPolicyGenerator();
+ globalPolicyGenerator.initAndStart(conf, false);
+ } else {
+ LOG.warn("Federation is not enabled. The gpg cannot start.");
+ }
+ }
+
+ public static void main(String[] argv) {
+ startGPG(argv, new YarnConfiguration());
+ }
+}
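Since this change delivers the service hook only, the class above is the entire programmatic surface for now; the daemon itself is meant to be launched through the new yarn subcommand added earlier in this commit. A minimal sketch of the equivalent programmatic launch (editorial illustration; it assumes a Federation state store is configured, since startGPG refuses to start when federation is disabled):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.globalpolicygenerator.GlobalPolicyGenerator;

public class GpgLaunchSketch {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    // Without this flag, startGPG logs a warning and returns (see above).
    conf.setBoolean(YarnConfiguration.FEDERATION_ENABLED, true);
    // Installs the shutdown hook, initializes the metrics system, and starts
    // the CompositeService; per the unit test below, the call does not
    // return while the GPG is running.
    GlobalPolicyGenerator.startGPG(args, conf);
  }
}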
http://git-wip-us.apache.org/repos/asf/hadoop/blob/43b8c2da/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/package-info.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/package-info.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/package-info.java
new file mode 100644
index 0000000..abaa57c
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/package-info.java
@@ -0,0 +1,19 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.globalpolicygenerator;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/43b8c2da/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/TestGlobalPolicyGenerator.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/TestGlobalPolicyGenerator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/TestGlobalPolicyGenerator.java
new file mode 100644
index 0000000..f657b86
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/TestGlobalPolicyGenerator.java
@@ -0,0 +1,38 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.globalpolicygenerator;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.junit.Test;
+
+/**
+ * Unit test for GlobalPolicyGenerator.
+ */
+public class TestGlobalPolicyGenerator {
+
+ @Test(timeout = 1000)
+ public void testNonFederation() {
+ Configuration conf = new YarnConfiguration();
+ conf.setBoolean(YarnConfiguration.FEDERATION_ENABLED, false);
+
+ // If GPG starts running, this call will not return
+ GlobalPolicyGenerator.startGPG(new String[0], conf);
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/43b8c2da/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/pom.xml
index de4484c..226407b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/pom.xml
@@ -46,5 +46,6 @@
<module>hadoop-yarn-server-timelineservice-hbase</module>
<module>hadoop-yarn-server-timelineservice-hbase-tests</module>
<module>hadoop-yarn-server-router</module>
+ <module>hadoop-yarn-server-globalpolicygenerator</module>
</modules>
</project>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/43b8c2da/hadoop-yarn-project/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/pom.xml b/hadoop-yarn-project/pom.xml
index 4593441..311b26e 100644
--- a/hadoop-yarn-project/pom.xml
+++ b/hadoop-yarn-project/pom.xml
@@ -80,6 +80,10 @@
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-yarn-server-globalpolicygenerator</artifactId>
+ </dependency>
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-yarn-services-core</artifactId>
</dependency>
</dependencies>
[17/50] [abbrv] hadoop git commit: YARN-8473. Containers being
launched as app tears down can leave containers in NEW state. Contributed by
Jason Lowe.
Posted by bo...@apache.org.
YARN-8473. Containers being launched as app tears down can leave containers in NEW state. Contributed by Jason Lowe.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/705e2c1f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/705e2c1f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/705e2c1f
Branch: refs/heads/YARN-7402
Commit: 705e2c1f7cba51496b0d019ecedffbe5fb55c28b
Parents: ca8b80b
Author: Sunil G <su...@apache.org>
Authored: Tue Jul 10 20:11:47 2018 +0530
Committer: Sunil G <su...@apache.org>
Committed: Tue Jul 10 20:11:47 2018 +0530
----------------------------------------------------------------------
.../application/ApplicationImpl.java | 36 ++++++++++---
.../application/TestApplication.java | 53 ++++++++++++++++----
2 files changed, 71 insertions(+), 18 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/705e2c1f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
index 39be7a7..6d84fb2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
@@ -211,6 +211,9 @@ public class ApplicationImpl implements Application {
private static final ContainerDoneTransition CONTAINER_DONE_TRANSITION =
new ContainerDoneTransition();
+ private static final InitContainerTransition INIT_CONTAINER_TRANSITION =
+ new InitContainerTransition();
+
private static StateMachineFactory<ApplicationImpl, ApplicationState,
ApplicationEventType, ApplicationEvent> stateMachineFactory =
new StateMachineFactory<ApplicationImpl, ApplicationState,
@@ -221,12 +224,12 @@ public class ApplicationImpl implements Application {
ApplicationEventType.INIT_APPLICATION, new AppInitTransition())
.addTransition(ApplicationState.NEW, ApplicationState.NEW,
ApplicationEventType.INIT_CONTAINER,
- new InitContainerTransition())
+ INIT_CONTAINER_TRANSITION)
// Transitions from INITING state
.addTransition(ApplicationState.INITING, ApplicationState.INITING,
ApplicationEventType.INIT_CONTAINER,
- new InitContainerTransition())
+ INIT_CONTAINER_TRANSITION)
.addTransition(ApplicationState.INITING,
EnumSet.of(ApplicationState.FINISHING_CONTAINERS_WAIT,
ApplicationState.APPLICATION_RESOURCES_CLEANINGUP),
@@ -249,7 +252,7 @@ public class ApplicationImpl implements Application {
.addTransition(ApplicationState.RUNNING,
ApplicationState.RUNNING,
ApplicationEventType.INIT_CONTAINER,
- new InitContainerTransition())
+ INIT_CONTAINER_TRANSITION)
.addTransition(ApplicationState.RUNNING,
ApplicationState.RUNNING,
ApplicationEventType.APPLICATION_CONTAINER_FINISHED,
@@ -270,6 +273,10 @@ public class ApplicationImpl implements Application {
new AppFinishTransition())
.addTransition(ApplicationState.FINISHING_CONTAINERS_WAIT,
ApplicationState.FINISHING_CONTAINERS_WAIT,
+ ApplicationEventType.INIT_CONTAINER,
+ INIT_CONTAINER_TRANSITION)
+ .addTransition(ApplicationState.FINISHING_CONTAINERS_WAIT,
+ ApplicationState.FINISHING_CONTAINERS_WAIT,
EnumSet.of(
ApplicationEventType.APPLICATION_LOG_HANDLING_INITED,
ApplicationEventType.APPLICATION_LOG_HANDLING_FAILED,
@@ -286,6 +293,10 @@ public class ApplicationImpl implements Application {
new AppCompletelyDoneTransition())
.addTransition(ApplicationState.APPLICATION_RESOURCES_CLEANINGUP,
ApplicationState.APPLICATION_RESOURCES_CLEANINGUP,
+ ApplicationEventType.INIT_CONTAINER,
+ INIT_CONTAINER_TRANSITION)
+ .addTransition(ApplicationState.APPLICATION_RESOURCES_CLEANINGUP,
+ ApplicationState.APPLICATION_RESOURCES_CLEANINGUP,
EnumSet.of(
ApplicationEventType.APPLICATION_LOG_HANDLING_INITED,
ApplicationEventType.APPLICATION_LOG_HANDLING_FAILED,
@@ -300,9 +311,14 @@ public class ApplicationImpl implements Application {
ApplicationEventType.APPLICATION_LOG_HANDLING_FINISHED,
ApplicationEventType.APPLICATION_LOG_HANDLING_FAILED),
new AppLogsAggregatedTransition())
+ .addTransition(ApplicationState.FINISHED,
+ ApplicationState.FINISHED,
+ ApplicationEventType.INIT_CONTAINER,
+ INIT_CONTAINER_TRANSITION)
.addTransition(ApplicationState.FINISHED, ApplicationState.FINISHED,
EnumSet.of(
ApplicationEventType.APPLICATION_LOG_HANDLING_INITED,
+ ApplicationEventType.APPLICATION_CONTAINER_FINISHED,
ApplicationEventType.FINISH_APPLICATION))
// create the topology tables
.installTopology();
@@ -445,8 +461,9 @@ public class ApplicationImpl implements Application {
app.containers.put(container.getContainerId(), container);
LOG.info("Adding " + container.getContainerId()
+ " to application " + app.toString());
-
- switch (app.getApplicationState()) {
+
+ ApplicationState appState = app.getApplicationState();
+ switch (appState) {
case RUNNING:
app.dispatcher.getEventHandler().handle(new ContainerInitEvent(
container.getContainerId()));
@@ -456,8 +473,13 @@ public class ApplicationImpl implements Application {
// these get queued up and sent out in AppInitDoneTransition
break;
default:
- assert false : "Invalid state for InitContainerTransition: " +
- app.getApplicationState();
+ LOG.warn("Killing {} because {} is in state {}",
+ container.getContainerId(), app, appState);
+ app.dispatcher.getEventHandler().handle(new ContainerKillEvent(
+ container.getContainerId(),
+ ContainerExitStatus.KILLED_AFTER_APP_COMPLETION,
+ "Application no longer running.\n"));
+ break;
}
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/705e2c1f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/TestApplication.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/TestApplication.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/TestApplication.java
index c8f28e2..cbe19ff 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/TestApplication.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/TestApplication.java
@@ -360,35 +360,66 @@ public class TestApplication {
}
}
-//TODO Re-work after Application transitions are changed.
-// @Test
+ @Test
@SuppressWarnings("unchecked")
- public void testStartContainerAfterAppFinished() {
+ public void testStartContainerAfterAppRunning() {
WrappedApplication wa = null;
try {
- wa = new WrappedApplication(5, 314159265358979L, "yak", 3);
+ wa = new WrappedApplication(5, 314159265358979L, "yak", 4);
wa.initApplication();
- wa.initContainer(-1);
+ wa.initContainer(0);
assertEquals(ApplicationState.INITING, wa.app.getApplicationState());
wa.applicationInited();
assertEquals(ApplicationState.RUNNING, wa.app.getApplicationState());
- reset(wa.localizerBus);
- wa.containerFinished(0);
- wa.containerFinished(1);
- wa.containerFinished(2);
assertEquals(ApplicationState.RUNNING, wa.app.getApplicationState());
- assertEquals(0, wa.app.getContainers().size());
+ assertEquals(1, wa.app.getContainers().size());
wa.appFinished();
+ verify(wa.containerBus).handle(
+ argThat(new ContainerKillMatcher(wa.containers.get(0)
+ .getContainerId())));
+ assertEquals(ApplicationState.FINISHING_CONTAINERS_WAIT,
+ wa.app.getApplicationState());
+
+ wa.initContainer(1);
+ verify(wa.containerBus).handle(
+ argThat(new ContainerKillMatcher(wa.containers.get(1)
+ .getContainerId())));
+ assertEquals(ApplicationState.FINISHING_CONTAINERS_WAIT,
+ wa.app.getApplicationState());
+ wa.containerFinished(1);
+ assertEquals(ApplicationState.FINISHING_CONTAINERS_WAIT,
+ wa.app.getApplicationState());
+
+ wa.containerFinished(0);
assertEquals(ApplicationState.APPLICATION_RESOURCES_CLEANINGUP,
wa.app.getApplicationState());
verify(wa.localizerBus).handle(
refEq(new ApplicationLocalizationEvent(
- LocalizationEventType.DESTROY_APPLICATION_RESOURCES, wa.app)));
+ LocalizationEventType.DESTROY_APPLICATION_RESOURCES,
+ wa.app), "timestamp"));
+
+ wa.initContainer(2);
+ verify(wa.containerBus).handle(
+ argThat(new ContainerKillMatcher(wa.containers.get(2)
+ .getContainerId())));
+ assertEquals(ApplicationState.APPLICATION_RESOURCES_CLEANINGUP,
+ wa.app.getApplicationState());
+ wa.containerFinished(2);
+ assertEquals(ApplicationState.APPLICATION_RESOURCES_CLEANINGUP,
+ wa.app.getApplicationState());
wa.appResourcesCleanedup();
assertEquals(ApplicationState.FINISHED, wa.app.getApplicationState());
+
+ wa.initContainer(3);
+ verify(wa.containerBus).handle(
+ argThat(new ContainerKillMatcher(wa.containers.get(3)
+ .getContainerId())));
+ assertEquals(ApplicationState.FINISHED, wa.app.getApplicationState());
+ wa.containerFinished(3);
+ assertEquals(ApplicationState.FINISHED, wa.app.getApplicationState());
} finally {
if (wa != null)
wa.finished();
[05/50] [abbrv] hadoop git commit: HADOOP-15591. KMSClientProvider
should log KMS DT acquisition at INFO level. Contributed by Kitti Nanasi.
Posted by bo...@apache.org.
HADOOP-15591. KMSClientProvider should log KMS DT acquisition at INFO level. Contributed by Kitti Nanasi.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/def9d94a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/def9d94a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/def9d94a
Branch: refs/heads/YARN-7402
Commit: def9d94a40e1ff71a0dc5a4db1f042e2704cb84d
Parents: 83cd84b
Author: Xiao Chen <xi...@apache.org>
Authored: Mon Jul 9 12:00:32 2018 -0700
Committer: Xiao Chen <xi...@apache.org>
Committed: Mon Jul 9 12:01:52 2018 -0700
----------------------------------------------------------------------
.../java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/def9d94a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
index 7b46075..11815da 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
@@ -1036,13 +1036,13 @@ public class KMSClientProvider extends KeyProvider implements CryptoExtension,
public Token<?> run() throws Exception {
// Not using the cached token here.. Creating a new token here
// everytime.
- LOG.debug("Getting new token from {}, renewer:{}", url, renewer);
+ LOG.info("Getting new token from {}, renewer:{}", url, renewer);
return authUrl.getDelegationToken(url,
new DelegationTokenAuthenticatedURL.Token(), renewer, doAsUser);
}
});
if (token != null) {
- LOG.debug("New token received: ({})", token);
+ LOG.info("New token received: ({})", token);
credentials.addToken(token.getService(), token);
tokens = new Token<?>[] { token };
} else {
[36/50] [abbrv] hadoop git commit: YARN-8518. test-container-executor
test_is_empty() is broken (Jim_Brennan via rkanter)
Posted by bo...@apache.org.
YARN-8518. test-container-executor test_is_empty() is broken (Jim_Brennan via rkanter)
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1bc106a7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1bc106a7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1bc106a7
Branch: refs/heads/YARN-7402
Commit: 1bc106a738a6ce4f7ed025d556bb44c1ede022e3
Parents: 556d9b3
Author: Robert Kanter <rk...@apache.org>
Authored: Thu Jul 12 16:38:46 2018 -0700
Committer: Robert Kanter <rk...@apache.org>
Committed: Thu Jul 12 16:38:46 2018 -0700
----------------------------------------------------------------------
.../container-executor/test/test-container-executor.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1bc106a7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
index a199d84..5607823 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
@@ -1203,19 +1203,23 @@ void test_trim_function() {
free(trimmed);
}
+int is_empty(char *name);
+
void test_is_empty() {
printf("\nTesting is_empty function\n");
if (is_empty("/")) {
printf("FAIL: / should not be empty\n");
exit(1);
}
- if (is_empty("/tmp/2938rf2983hcqnw8ud/noexist")) {
- printf("FAIL: /tmp/2938rf2983hcqnw8ud/noexist should not exist\n");
+ char *noexist = TEST_ROOT "/noexist";
+ if (is_empty(noexist)) {
+ printf("%s should not exist\n", noexist);
exit(1);
}
- mkdir("/tmp/2938rf2983hcqnw8ud/emptydir", S_IRWXU);
- if (!is_empty("/tmp/2938rf2983hcqnw8ud/emptydir")) {
- printf("FAIL: /tmp/2938rf2983hcqnw8ud/emptydir be empty\n");
+ char *emptydir = TEST_ROOT "/emptydir";
+ mkdir(emptydir, S_IRWXU);
+ if (!is_empty(emptydir)) {
+ printf("FAIL: %s should be empty\n", emptydir);
exit(1);
}
}
[42/50] [abbrv] hadoop git commit: YARN-8515. container-executor can
crash with SIGPIPE after nodemanager restart. Contributed by Jim Brennan
Posted by bo...@apache.org.
YARN-8515. container-executor can crash with SIGPIPE after nodemanager restart. Contributed by Jim Brennan
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/17118f44
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/17118f44
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/17118f44
Branch: refs/heads/YARN-7402
Commit: 17118f446c2387aa796849da8b69a845d9d307d3
Parents: d185072
Author: Jason Lowe <jl...@apache.org>
Authored: Fri Jul 13 10:05:25 2018 -0500
Committer: Jason Lowe <jl...@apache.org>
Committed: Fri Jul 13 10:05:25 2018 -0500
----------------------------------------------------------------------
.../src/main/native/container-executor/impl/main.c | 6 ++++++
1 file changed, 6 insertions(+)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/17118f44/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
index 2099ace..6ab522f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
@@ -31,6 +31,7 @@
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
+#include <signal.h>
static void display_usage(FILE *stream) {
fprintf(stream,
@@ -112,6 +113,11 @@ static void open_log_files() {
if (ERRORFILE == NULL) {
ERRORFILE = stderr;
}
+
+ // There may be a process reading from stdout/stderr, and if it
+ // exits, we will crash on a SIGPIPE when we try to write to them.
+ // By ignoring SIGPIPE, we can handle the EPIPE instead of crashing.
+ signal(SIGPIPE, SIG_IGN);
}
/* Flushes and closes log files */
[07/50] [abbrv] hadoop git commit: HDDS-224. Create metrics for Event
Watcher. Contributed by Elek, Marton.
Posted by bo...@apache.org.
HDDS-224. Create metrics for Event Watcher.
Contributed by Elek, Marton.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e12d93bf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e12d93bf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e12d93bf
Branch: refs/heads/YARN-7402
Commit: e12d93bfc1a0efd007bc84758e60b5149c3aa663
Parents: 895845e
Author: Anu Engineer <ae...@apache.org>
Authored: Mon Jul 9 12:02:20 2018 -0700
Committer: Anu Engineer <ae...@apache.org>
Committed: Mon Jul 9 12:10:12 2018 -0700
----------------------------------------------------------------------
hadoop-hdds/framework/pom.xml | 5 +
.../hadoop/hdds/server/events/EventWatcher.java | 43 +++++++-
.../hdds/server/events/EventWatcherMetrics.java | 79 ++++++++++++++
.../hdds/server/events/TestEventWatcher.java | 107 ++++++++++++++++---
4 files changed, 220 insertions(+), 14 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e12d93bf/hadoop-hdds/framework/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdds/framework/pom.xml b/hadoop-hdds/framework/pom.xml
index a497133..6e1927d 100644
--- a/hadoop-hdds/framework/pom.xml
+++ b/hadoop-hdds/framework/pom.xml
@@ -39,6 +39,11 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
<artifactId>hadoop-hdds-common</artifactId>
<scope>provided</scope>
</dependency>
+ <dependency>
+ <groupId>org.mockito</groupId>
+ <artifactId>mockito-all</artifactId>
+ <scope>test</scope>
+ </dependency>
</dependencies>
<build>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e12d93bf/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventWatcher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventWatcher.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventWatcher.java
index 19fddde..8c5605a 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventWatcher.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventWatcher.java
@@ -26,12 +26,17 @@ import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Predicate;
import java.util.stream.Collectors;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.ozone.lease.Lease;
import org.apache.hadoop.ozone.lease.LeaseAlreadyExistException;
import org.apache.hadoop.ozone.lease.LeaseExpiredException;
import org.apache.hadoop.ozone.lease.LeaseManager;
import org.apache.hadoop.ozone.lease.LeaseNotFoundException;
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import org.apache.commons.collections.map.HashedMap;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -58,18 +63,39 @@ public abstract class EventWatcher<TIMEOUT_PAYLOAD extends
private final LeaseManager<UUID> leaseManager;
+ private final EventWatcherMetrics metrics;
+
+ private final String name;
+
protected final Map<UUID, TIMEOUT_PAYLOAD> trackedEventsByUUID =
new ConcurrentHashMap<>();
protected final Set<TIMEOUT_PAYLOAD> trackedEvents = new HashSet<>();
- public EventWatcher(Event<TIMEOUT_PAYLOAD> startEvent,
+ private final Map<UUID, Long> startTrackingTimes = new HashedMap();
+
+ public EventWatcher(String name, Event<TIMEOUT_PAYLOAD> startEvent,
Event<COMPLETION_PAYLOAD> completionEvent,
LeaseManager<UUID> leaseManager) {
this.startEvent = startEvent;
this.completionEvent = completionEvent;
this.leaseManager = leaseManager;
+ this.metrics = new EventWatcherMetrics();
+ Preconditions.checkNotNull(name);
+ if (name.equals("")) {
+ name = getClass().getSimpleName();
+ }
+ if (name.equals("")) {
+ //for anonymous inner classes
+ name = getClass().getName();
+ }
+ this.name = name;
+ }
+ public EventWatcher(Event<TIMEOUT_PAYLOAD> startEvent,
+ Event<COMPLETION_PAYLOAD> completionEvent,
+ LeaseManager<UUID> leaseManager) {
+ this("", startEvent, completionEvent, leaseManager);
}
public void start(EventQueue queue) {
@@ -87,11 +113,16 @@ public abstract class EventWatcher<TIMEOUT_PAYLOAD extends
}
});
+ MetricsSystem ms = DefaultMetricsSystem.instance();
+ ms.register(name, "EventWatcher metrics", metrics);
}
private synchronized void handleStartEvent(TIMEOUT_PAYLOAD payload,
EventPublisher publisher) {
+ metrics.incrementTrackedEvents();
UUID identifier = payload.getUUID();
+ startTrackingTimes.put(identifier, System.currentTimeMillis());
+
trackedEventsByUUID.put(identifier, payload);
trackedEvents.add(payload);
try {
@@ -112,16 +143,21 @@ public abstract class EventWatcher<TIMEOUT_PAYLOAD extends
private synchronized void handleCompletion(UUID uuid,
EventPublisher publisher) throws LeaseNotFoundException {
+ metrics.incrementCompletedEvents();
leaseManager.release(uuid);
TIMEOUT_PAYLOAD payload = trackedEventsByUUID.remove(uuid);
trackedEvents.remove(payload);
+ long originalTime = startTrackingTimes.remove(uuid);
+ metrics.updateFinishingTime(System.currentTimeMillis() - originalTime);
onFinished(publisher, payload);
}
private synchronized void handleTimeout(EventPublisher publisher,
UUID identifier) {
+ metrics.incrementTimedOutEvents();
TIMEOUT_PAYLOAD payload = trackedEventsByUUID.remove(identifier);
trackedEvents.remove(payload);
+ startTrackingTimes.remove(payload.getUUID());
onTimeout(publisher, payload);
}
@@ -154,4 +190,9 @@ public abstract class EventWatcher<TIMEOUT_PAYLOAD extends
return trackedEventsByUUID.values().stream().filter(predicate)
.collect(Collectors.toList());
}
+
+ @VisibleForTesting
+ protected EventWatcherMetrics getMetrics() {
+ return metrics;
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e12d93bf/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventWatcherMetrics.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventWatcherMetrics.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventWatcherMetrics.java
new file mode 100644
index 0000000..1db81a9
--- /dev/null
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventWatcherMetrics.java
@@ -0,0 +1,79 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.server.events;
+
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+import org.apache.hadoop.metrics2.lib.MutableRate;
+
+import com.google.common.annotations.VisibleForTesting;
+
+/**
+ * Metrics for any event watcher.
+ */
+public class EventWatcherMetrics {
+
+ @Metric()
+ private MutableCounterLong trackedEvents;
+
+ @Metric()
+ private MutableCounterLong timedOutEvents;
+
+ @Metric()
+ private MutableCounterLong completedEvents;
+
+ @Metric()
+ private MutableRate completionTime;
+
+ public void incrementTrackedEvents() {
+ trackedEvents.incr();
+ }
+
+ public void incrementTimedOutEvents() {
+ timedOutEvents.incr();
+ }
+
+ public void incrementCompletedEvents() {
+ completedEvents.incr();
+ }
+
+ @VisibleForTesting
+ public void updateFinishingTime(long duration) {
+ completionTime.add(duration);
+ }
+
+ @VisibleForTesting
+ public MutableCounterLong getTrackedEvents() {
+ return trackedEvents;
+ }
+
+ @VisibleForTesting
+ public MutableCounterLong getTimedOutEvents() {
+ return timedOutEvents;
+ }
+
+ @VisibleForTesting
+ public MutableCounterLong getCompletedEvents() {
+ return completedEvents;
+ }
+
+ @VisibleForTesting
+ public MutableRate getCompletionTime() {
+ return completionTime;
+ }
+}
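A side note on the counters above: the test below imports MetricsAsserts but, in the excerpt shown, reads the counters through the @VisibleForTesting getters. For reference, a sketch of the equivalent check through the metrics system itself (editorial illustration; the source name is an assumption and must match whatever name the watcher registered in EventWatcher#start):

import org.apache.hadoop.metrics2.MetricsRecordBuilder;

import static org.apache.hadoop.test.MetricsAsserts.assertCounter;
import static org.apache.hadoop.test.MetricsAsserts.getMetrics;

public final class EventWatcherMetricsCheckSketch {
  public static void assertTracked(String watcherName, long expected) {
    // Looks the source up by its registered name in DefaultMetricsSystem.
    MetricsRecordBuilder rb = getMetrics(watcherName);
    // @Metric on the 'trackedEvents' field exposes a counter named
    // "TrackedEvents" (metrics2 capitalizes the field name).
    assertCounter("TrackedEvents", expected, rb);
  }
}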
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e12d93bf/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventWatcher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventWatcher.java b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventWatcher.java
index 1731350..38e1554 100644
--- a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventWatcher.java
+++ b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventWatcher.java
@@ -21,8 +21,13 @@ import java.util.List;
import java.util.Objects;
import java.util.UUID;
+import org.apache.hadoop.metrics2.MetricsSource;
+import org.apache.hadoop.metrics2.MetricsSystem;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.ozone.lease.LeaseManager;
+import org.apache.hadoop.test.MetricsAsserts;
+import static org.apache.hadoop.test.MetricsAsserts.assertCounter;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
@@ -46,6 +51,7 @@ public class TestEventWatcher {
@Before
public void startLeaseManager() {
+ DefaultMetricsSystem.instance();
leaseManager = new LeaseManager<>(2000l);
leaseManager.start();
}
@@ -53,12 +59,12 @@ public class TestEventWatcher {
@After
public void stopLeaseManager() {
leaseManager.shutdown();
+ DefaultMetricsSystem.shutdown();
}
@Test
public void testEventHandling() throws InterruptedException {
-
EventQueue queue = new EventQueue();
EventWatcher<UnderreplicatedEvent, ReplicationCompletedEvent>
@@ -139,26 +145,101 @@ public class TestEventWatcher {
Assert.assertEquals(0, c1todo.size());
Assert.assertFalse(replicationWatcher.contains(event1));
+ }
+
+ @Test
+ public void testMetrics() throws InterruptedException {
+
+ DefaultMetricsSystem.initialize("test");
+
+ EventQueue queue = new EventQueue();
+
+ EventWatcher<UnderreplicatedEvent, ReplicationCompletedEvent>
+ replicationWatcher = createEventWatcher();
+
+ EventHandlerStub<UnderreplicatedEvent> underReplicatedEvents =
+ new EventHandlerStub<>();
+
+ queue.addHandler(UNDER_REPLICATED, underReplicatedEvents);
+
+ replicationWatcher.start(queue);
+
+ //send 3 events to track 3 in-progress activities
+ UnderreplicatedEvent event1 =
+ new UnderreplicatedEvent(UUID.randomUUID(), "C1");
+
+ UnderreplicatedEvent event2 =
+ new UnderreplicatedEvent(UUID.randomUUID(), "C2");
+
+ UnderreplicatedEvent event3 =
+ new UnderreplicatedEvent(UUID.randomUUID(), "C1");
+
+ queue.fireEvent(WATCH_UNDER_REPLICATED, event1);
+
+ queue.fireEvent(WATCH_UNDER_REPLICATED, event2);
+
+ queue.fireEvent(WATCH_UNDER_REPLICATED, event3);
+
+ //1st event is completed, no need to track it any more
+ ReplicationCompletedEvent event1Completed =
+ new ReplicationCompletedEvent(event1.UUID, "C1", "D1");
+
+ queue.fireEvent(REPLICATION_COMPLETED, event1Completed);
+
+
+ Thread.sleep(2200L);
+
+ //so far: 3 in-progress activities are tracked with three
+ // UnderreplicatedEvents. The first one is completed; the remaining two
+ // are timed out (the timeout, defined in the lease manager, is 2000ms).
+ EventWatcherMetrics metrics = replicationWatcher.getMetrics();
+
+ //3 events are received
+ Assert.assertEquals(3, metrics.getTrackedEvents().value());
+
+ //one is finished and doesn't need to be resent
+ Assert.assertEquals(1, metrics.getCompletedEvents().value());
+
+ //the other two are timed out and resent
+ Assert.assertEquals(2, metrics.getTimedOutEvents().value());
+
+ DefaultMetricsSystem.shutdown();
}
private EventWatcher<UnderreplicatedEvent, ReplicationCompletedEvent>
createEventWatcher() {
- return new EventWatcher<UnderreplicatedEvent, ReplicationCompletedEvent>(
- WATCH_UNDER_REPLICATED, REPLICATION_COMPLETED, leaseManager) {
+ return new CommandWatcherExample(WATCH_UNDER_REPLICATED,
+ REPLICATION_COMPLETED, leaseManager);
+ }
- @Override
- void onTimeout(EventPublisher publisher, UnderreplicatedEvent payload) {
- publisher.fireEvent(UNDER_REPLICATED, payload);
- }
+ private class CommandWatcherExample
+ extends EventWatcher<UnderreplicatedEvent, ReplicationCompletedEvent> {
- @Override
- void onFinished(EventPublisher publisher, UnderreplicatedEvent payload) {
- //Good job. We did it.
- }
- };
+ public CommandWatcherExample(Event<UnderreplicatedEvent> startEvent,
+ Event<ReplicationCompletedEvent> completionEvent,
+ LeaseManager<UUID> leaseManager) {
+ super("TestCommandWatcher", startEvent, completionEvent, leaseManager);
+ }
+
+ @Override
+ void onTimeout(EventPublisher publisher, UnderreplicatedEvent payload) {
+ publisher.fireEvent(UNDER_REPLICATED, payload);
+ }
+
+ @Override
+ void onFinished(EventPublisher publisher, UnderreplicatedEvent payload) {
+ //Good job. We did it.
+ }
+
+ @Override
+ public EventWatcherMetrics getMetrics() {
+ return super.getMetrics();
+ }
}
+
private static class ReplicationCompletedEvent
implements IdentifiableEventPayload {
@@ -217,4 +298,4 @@ public class TestEventWatcher {
}
}
-}
\ No newline at end of file
+}
[48/50] [abbrv] hadoop git commit: YARN-7402. [GPG] Fix potential
connection leak in GPGUtils. Contributed by Giovanni Matteo Fumarola.
Posted by bo...@apache.org.
YARN-7402. [GPG] Fix potential connection leak in GPGUtils. Contributed by Giovanni Matteo Fumarola.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8a70835e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8a70835e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8a70835e
Branch: refs/heads/YARN-7402
Commit: 8a70835ecb3c55ca6f78fc5b658131829f01657a
Parents: 0bbe70c
Author: Botong Huang <bo...@apache.org>
Authored: Wed May 23 12:45:32 2018 -0700
Committer: Botong Huang <bo...@apache.org>
Committed: Fri Jul 13 17:42:58 2018 -0700
----------------------------------------------------------------------
.../server/globalpolicygenerator/GPGUtils.java | 31 +++++++++++++-------
1 file changed, 20 insertions(+), 11 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/8a70835e/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGUtils.java
index 429bec4..31cee1c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGUtils.java
@@ -18,21 +18,22 @@
package org.apache.hadoop.yarn.server.globalpolicygenerator;
+import static javax.servlet.http.HttpServletResponse.SC_OK;
+
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
-import javax.servlet.http.HttpServletResponse;
import javax.ws.rs.core.MediaType;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
+import org.apache.hadoop.yarn.server.federation.store.records.SubClusterIdInfo;
import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.ClientResponse;
import com.sun.jersey.api.client.WebResource;
-import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId;
-import org.apache.hadoop.yarn.server.federation.store.records.SubClusterIdInfo;
/**
* GPGUtils contains utility functions for the GPG.
@@ -53,15 +54,23 @@ public final class GPGUtils {
T obj = null;
WebResource webResource = client.resource(webAddr);
- ClientResponse response = webResource.path("ws/v1/cluster").path(path)
- .accept(MediaType.APPLICATION_XML).get(ClientResponse.class);
- if (response.getStatus() == HttpServletResponse.SC_OK) {
- obj = response.getEntity(returnType);
- } else {
- throw new YarnRuntimeException("Bad response from remote web service: "
- + response.getStatus());
+ ClientResponse response = null;
+ try {
+ response = webResource.path("ws/v1/cluster").path(path)
+ .accept(MediaType.APPLICATION_XML).get(ClientResponse.class);
+ if (response.getStatus() == SC_OK) {
+ obj = response.getEntity(returnType);
+ } else {
+ throw new YarnRuntimeException(
+ "Bad response from remote web service: " + response.getStatus());
+ }
+ return obj;
+ } finally {
+ if (response != null) {
+ response.close();
+ }
+ client.destroy();
}
- return obj;
}
/**
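The essence of the fix is the try/finally that releases the ClientResponse and the Client on every path, including the YarnRuntimeException throw. A minimal standalone sketch of the same pattern with the Jersey 1.x client API that GPGUtils already uses; the URL and return type here are placeholders, not values from the patch:
    import com.sun.jersey.api.client.Client;
    import com.sun.jersey.api.client.ClientResponse;
    public class JerseyCleanupSketch {
      static String fetch(String url) {
        Client client = Client.create();
        ClientResponse response = null;
        try {
          response = client.resource(url).get(ClientResponse.class);
          return response.getEntity(String.class);
        } finally {
          if (response != null) {
            response.close();  // returns the pooled connection
          }
          client.destroy();    // frees client-side resources
        }
      }
    }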
[27/50] [abbrv] hadoop git commit: HDFS-13723. Occasional "Should be
different group" error in TestRefreshUserMappings#testGroupMappingRefresh.
Contributed by Siyao Meng.
Posted by bo...@apache.org.
HDFS-13723. Occasional "Should be different group" error in TestRefreshUserMappings#testGroupMappingRefresh. Contributed by Siyao Meng.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/162228e8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/162228e8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/162228e8
Branch: refs/heads/YARN-7402
Commit: 162228e8db937d4bdb9cf61d15ed555f1c96368f
Parents: d36ed94
Author: Wei-Chiu Chuang <we...@apache.org>
Authored: Wed Jul 11 10:02:08 2018 -0700
Committer: Wei-Chiu Chuang <we...@apache.org>
Committed: Wed Jul 11 10:02:08 2018 -0700
----------------------------------------------------------------------
.../java/org/apache/hadoop/security/Groups.java | 5 ++++-
.../hadoop/security/TestRefreshUserMappings.java | 19 +++++++++++++------
2 files changed, 17 insertions(+), 7 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/162228e8/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java
index ad09865..63ec9a5 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java
@@ -73,7 +73,8 @@ import org.slf4j.LoggerFactory;
@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
@InterfaceStability.Evolving
public class Groups {
- private static final Logger LOG = LoggerFactory.getLogger(Groups.class);
+ @VisibleForTesting
+ static final Logger LOG = LoggerFactory.getLogger(Groups.class);
private final GroupMappingServiceProvider impl;
@@ -308,6 +309,7 @@ public class Groups {
*/
@Override
public List<String> load(String user) throws Exception {
+ LOG.debug("GroupCacheLoader - load.");
TraceScope scope = null;
Tracer tracer = Tracer.curThreadTracer();
if (tracer != null) {
@@ -346,6 +348,7 @@ public class Groups {
public ListenableFuture<List<String>> reload(final String key,
List<String> oldValue)
throws Exception {
+ LOG.debug("GroupCacheLoader - reload (async).");
if (!reloadGroupsInBackground) {
return super.reload(key, oldValue);
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/162228e8/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
index f511eb1..0e7dfc3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
@@ -45,6 +45,8 @@ import org.apache.hadoop.hdfs.tools.DFSAdmin;
import org.apache.hadoop.security.authorize.AuthorizationException;
import org.apache.hadoop.security.authorize.DefaultImpersonationProvider;
import org.apache.hadoop.security.authorize.ProxyUsers;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.slf4j.event.Level;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
@@ -93,6 +95,8 @@ public class TestRefreshUserMappings {
FileSystem.setDefaultUri(config, "hdfs://localhost:" + "0");
cluster = new MiniDFSCluster.Builder(config).build();
cluster.waitActive();
+
+ GenericTestUtils.setLogLevel(Groups.LOG, Level.DEBUG);
}
@After
@@ -114,21 +118,24 @@ public class TestRefreshUserMappings {
String [] args = new String[]{"-refreshUserToGroupsMappings"};
Groups groups = Groups.getUserToGroupsMappingService(config);
String user = UserGroupInformation.getCurrentUser().getUserName();
- System.out.println("first attempt:");
+
+ System.out.println("First attempt:");
List<String> g1 = groups.getGroups(user);
String [] str_groups = new String [g1.size()];
g1.toArray(str_groups);
System.out.println(Arrays.toString(str_groups));
- System.out.println("second attempt, should be same:");
+ System.out.println("Second attempt, should be the same:");
List<String> g2 = groups.getGroups(user);
g2.toArray(str_groups);
System.out.println(Arrays.toString(str_groups));
for(int i=0; i<g2.size(); i++) {
assertEquals("Should be same group ", g1.get(i), g2.get(i));
}
+
+ // Test refresh command
admin.run(args);
- System.out.println("third attempt(after refresh command), should be different:");
+ System.out.println("Third attempt(after refresh command), should be different:");
List<String> g3 = groups.getGroups(user);
g3.toArray(str_groups);
System.out.println(Arrays.toString(str_groups));
@@ -137,9 +144,9 @@ public class TestRefreshUserMappings {
g1.get(i).equals(g3.get(i)));
}
- // test time out
- Thread.sleep(groupRefreshTimeoutSec*1100);
- System.out.println("fourth attempt(after timeout), should be different:");
+ // Test timeout
+ Thread.sleep(groupRefreshTimeoutSec * 1500);
+ System.out.println("Fourth attempt(after timeout), should be different:");
List<String> g4 = groups.getGroups(user);
g4.toArray(str_groups);
System.out.println(Arrays.toString(str_groups));
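A side note on the timing change: multiplying the refresh timeout by 1500 instead of 1100 widens the margin, but the test still encodes a fixed sleep. A polling alternative, sketched under the assumption that the Supplier-based GenericTestUtils.waitFor overload is available on this branch, would retry until the cached groups actually change; g1, groups, user and groupRefreshTimeoutSec are the variables already used in the test:
    GenericTestUtils.waitFor(() -> {
      try {
        // Done once the cached group list differs from the first lookup.
        return !groups.getGroups(user).equals(g1);
      } catch (IOException e) {
        return false;
      }
    }, 500, (int) (groupRefreshTimeoutSec * 3000));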
[38/50] [abbrv] hadoop git commit: HDDS-187. Command status publisher
for datanode. Contributed by Ajay Kumar.
Posted by bo...@apache.org.
HDDS-187. Command status publisher for datanode.
Contributed by Ajay Kumar.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f89e2659
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f89e2659
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f89e2659
Branch: refs/heads/YARN-7402
Commit: f89e265905f39c8e51263a3946a8b8e6ab4ebad9
Parents: 87eeb26
Author: Anu Engineer <ae...@apache.org>
Authored: Thu Jul 12 21:34:32 2018 -0700
Committer: Anu Engineer <ae...@apache.org>
Committed: Thu Jul 12 21:35:12 2018 -0700
----------------------------------------------------------------------
.../org/apache/hadoop/hdds/HddsConfigKeys.java | 8 +
.../org/apache/hadoop/hdds/HddsIdFactory.java | 53 ++++++
.../common/src/main/resources/ozone-default.xml | 9 +
.../apache/hadoop/utils/TestHddsIdFactory.java | 77 +++++++++
.../report/CommandStatusReportPublisher.java | 71 ++++++++
.../common/report/ReportPublisher.java | 9 +
.../common/report/ReportPublisherFactory.java | 4 +
.../statemachine/DatanodeStateMachine.java | 2 +
.../common/statemachine/StateContext.java | 70 ++++++++
.../CloseContainerCommandHandler.java | 5 +-
.../commandhandler/CommandHandler.java | 11 ++
.../DeleteBlocksCommandHandler.java | 166 ++++++++++---------
.../ReplicateContainerCommandHandler.java | 7 +-
.../commands/CloseContainerCommand.java | 36 ++--
.../ozone/protocol/commands/CommandStatus.java | 141 ++++++++++++++++
.../protocol/commands/DeleteBlocksCommand.java | 13 +-
.../commands/ReplicateContainerCommand.java | 20 ++-
.../protocol/commands/ReregisterCommand.java | 10 ++
.../ozone/protocol/commands/SCMCommand.java | 19 +++
.../StorageContainerDatanodeProtocol.proto | 21 +++
.../ozone/container/common/ScmTestMock.java | 33 +++-
.../common/report/TestReportPublisher.java | 75 ++++++++-
.../hadoop/hdds/scm/events/SCMEvents.java | 57 ++++---
.../server/SCMDatanodeHeartbeatDispatcher.java | 23 ++-
.../TestSCMDatanodeHeartbeatDispatcher.java | 25 ++-
.../ozone/container/common/TestEndPoint.java | 111 ++++++++++++-
26 files changed, 935 insertions(+), 141 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
index dec2c1c..8b449fb 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
@@ -17,7 +17,15 @@
*/
package org.apache.hadoop.hdds;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+/**
+ * Config class for HDDS.
+ */
public final class HddsConfigKeys {
private HddsConfigKeys() {
}
+ public static final String HDDS_COMMAND_STATUS_REPORT_INTERVAL =
+ "hdds.command.status.report.interval";
+ public static final String HDDS_COMMAND_STATUS_REPORT_INTERVAL_DEFAULT =
+ ScmConfigKeys.OZONE_SCM_HEARBEAT_INTERVAL_DEFAULT;
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsIdFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsIdFactory.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsIdFactory.java
new file mode 100644
index 0000000..b244b8c
--- /dev/null
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsIdFactory.java
@@ -0,0 +1,53 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds;
+
+import java.util.UUID;
+import java.util.concurrent.atomic.AtomicLong;
+
+/**
+ * HDDS Id generator.
+ */
+public final class HddsIdFactory {
+ private HddsIdFactory() {
+ }
+
+ private static final AtomicLong LONG_COUNTER = new AtomicLong(
+ System.currentTimeMillis());
+
+ /**
+ * Returns an incrementing long. This class does not persist the
+ * initial value, so IDs generated after a restart may collide with
+ * previously generated IDs.
+ *
+ * @return the next long id
+ */
+ public static long getLongId() {
+ return LONG_COUNTER.incrementAndGet();
+ }
+
+ /**
+ * Returns a random UUID.
+ *
+ * @return UUID.
+ */
+ public static UUID getUUId() {
+ return UUID.randomUUID();
+ }
+
+}
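A minimal usage sketch for the factory above, not part of the patch; it illustrates the restart caveat in the javadoc, since the counter is seeded from System.currentTimeMillis() at class load:
    public class HddsIdFactorySketch {
      public static void main(String[] args) {
        long first = HddsIdFactory.getLongId();
        long second = HddsIdFactory.getLongId();
        assert second == first + 1;  // strictly incrementing in-process
        // Across restarts uniqueness is only probabilistic; prefer the
        // UUID variant when collisions must be impossible:
        java.util.UUID id = HddsIdFactory.getUUId();
        System.out.println(first + " " + second + " " + id);
      }
    }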
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/common/src/main/resources/ozone-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index d5ce9e6..1b6fb33 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -1061,4 +1061,13 @@
</description>
</property>
+ <property>
+ <name>hdds.command.status.report.interval</name>
+ <value>30s</value>
+ <tag>OZONE, DATANODE, MANAGEMENT</tag>
+ <description>Time interval of the datanode to send status of commands
+ executed since last report. Unit could be defined with
+ postfix (ns,ms,s,m,h,d)</description>
+ </property>
+
</configuration>
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/TestHddsIdFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/TestHddsIdFactory.java b/hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/TestHddsIdFactory.java
new file mode 100644
index 0000000..a341ccc
--- /dev/null
+++ b/hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/TestHddsIdFactory.java
@@ -0,0 +1,77 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.utils;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import org.apache.hadoop.hdds.HddsIdFactory;
+import org.junit.After;
+import static org.junit.Assert.assertEquals;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ * Tests that {@link HddsIdFactory} generates unique ids across threads.
+ */
+public class TestHddsIdFactory {
+
+ private static final Set<Long> ID_SET = ConcurrentHashMap.newKeySet();
+ private static final int IDS_PER_THREAD = 10000;
+ private static final int NUM_OF_THREADS = 5;
+
+ @After
+ public void cleanup() {
+ ID_SET.clear();
+ }
+
+ @Test
+ public void testGetLongId() throws Exception {
+
+ ExecutorService executor = Executors.newFixedThreadPool(NUM_OF_THREADS);
+ List<Callable<Integer>> tasks = new ArrayList<>(NUM_OF_THREADS);
+ addTasks(tasks);
+ List<Future<Integer>> result = executor.invokeAll(tasks);
+ assertEquals(IDS_PER_THREAD * NUM_OF_THREADS, ID_SET.size());
+ for (Future<Integer> r : result) {
+ assertEquals(r.get().intValue(), IDS_PER_THREAD);
+ }
+ }
+
+ private void addTasks(List<Callable<Integer>> tasks) {
+ for (int i = 0; i < NUM_OF_THREADS; i++) {
+ Callable<Integer> task = () -> {
+ for (int idNum = 0; idNum < IDS_PER_THREAD; idNum++) {
+ long var = HddsIdFactory.getLongId();
+ if (ID_SET.contains(var)) {
+ Assert.fail("Duplicate id found");
+ }
+ ID_SET.add(var);
+ }
+ return IDS_PER_THREAD;
+ };
+ tasks.add(task);
+ }
+ }
+}
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/CommandStatusReportPublisher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/CommandStatusReportPublisher.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/CommandStatusReportPublisher.java
new file mode 100644
index 0000000..ca5174a
--- /dev/null
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/CommandStatusReportPublisher.java
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.container.common.report;
+
+import java.util.Iterator;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.CommandStatus.Status;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.CommandStatusReportsProto;
+import org.apache.hadoop.ozone.protocol.commands.CommandStatus;
+
+/**
+ * Publishes CommandStatusReport which will be sent to SCM as part of
+ * heartbeat. A CommandStatusReport consists of the following information:
+ * - type : type of command.
+ * - status : status of command execution (PENDING, EXECUTED, FAILED).
+ * - cmdId : Command id.
+ * - msg : optional message.
+ */
+public class CommandStatusReportPublisher extends
+ ReportPublisher<CommandStatusReportsProto> {
+
+ private long cmdStatusReportInterval = -1;
+
+ @Override
+ protected long getReportFrequency() {
+ if (cmdStatusReportInterval == -1) {
+ cmdStatusReportInterval = getConf().getTimeDuration(
+ HddsConfigKeys.HDDS_COMMAND_STATUS_REPORT_INTERVAL,
+ HddsConfigKeys.HDDS_COMMAND_STATUS_REPORT_INTERVAL_DEFAULT,
+ TimeUnit.MILLISECONDS);
+ }
+ return cmdStatusReportInterval;
+ }
+
+ @Override
+ protected CommandStatusReportsProto getReport() {
+ Map<Long, CommandStatus> map = this.getContext()
+ .getCommandStatusMap();
+ Iterator<Long> iterator = map.keySet().iterator();
+ CommandStatusReportsProto.Builder builder = CommandStatusReportsProto
+ .newBuilder();
+
+ iterator.forEachRemaining(key -> {
+ CommandStatus cmdStatus = map.get(key);
+ builder.addCmdStatus(cmdStatus.getProtoBufMessage());
+ // If status is still pending then don't remove it from map as
+ // CommandHandler will change its status when it works on this command.
+ if (!cmdStatus.getStatus().equals(Status.PENDING)) {
+ map.remove(key);
+ }
+ });
+ return builder.build();
+ }
+}
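Removing entries inside forEachRemaining, as getReport() does above, is safe only because cmdStatusMap (see the StateContext change below) is a ConcurrentHashMap, whose iterators are weakly consistent. A small self-contained sketch of the distinction, illustrative and not from the patch:
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    public class IterationRemovalSketch {
      public static void main(String[] args) {
        Map<Long, String> safe = new ConcurrentHashMap<>();
        safe.put(1L, "EXECUTED");
        safe.put(2L, "PENDING");
        // Weakly consistent iterator: removing through the map is allowed.
        safe.keySet().iterator().forEachRemaining(safe::remove);
        Map<Long, String> unsafe = new HashMap<>();
        unsafe.put(1L, "EXECUTED");
        unsafe.put(2L, "PENDING");
        // Fail-fast iterator: throws ConcurrentModificationException here.
        unsafe.keySet().iterator().forEachRemaining(unsafe::remove);
      }
    }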
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java
index 4ff47a0..105f073 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java
@@ -93,4 +93,13 @@ public abstract class ReportPublisher<T extends GeneratedMessage>
*/
protected abstract T getReport();
+ /**
+ * Returns {@link StateContext}.
+ *
+ * @return the {@link StateContext} of this publisher
+ */
+ protected StateContext getContext() {
+ return context;
+ }
+
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisherFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisherFactory.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisherFactory.java
index dc246d9..ea89280 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisherFactory.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisherFactory.java
@@ -19,6 +19,8 @@ package org.apache.hadoop.ozone.container.common.report;
import com.google.protobuf.GeneratedMessage;
import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.CommandStatusReportsProto;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
import org.apache.hadoop.hdds.protocol.proto
@@ -49,6 +51,8 @@ public class ReportPublisherFactory {
report2publisher.put(NodeReportProto.class, NodeReportPublisher.class);
report2publisher.put(ContainerReportsProto.class,
ContainerReportPublisher.class);
+ report2publisher.put(CommandStatusReportsProto.class,
+ CommandStatusReportPublisher.class);
}
/**
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
index 245d76f..69a243e 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
@@ -21,6 +21,7 @@ import com.google.common.util.concurrent.ThreadFactoryBuilder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.CommandStatusReportsProto;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
import org.apache.hadoop.hdds.protocol.proto
@@ -107,6 +108,7 @@ public class DatanodeStateMachine implements Closeable {
.setStateContext(context)
.addPublisherFor(NodeReportProto.class)
.addPublisherFor(ContainerReportsProto.class)
+ .addPublisherFor(CommandStatusReportsProto.class)
.build();
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java
index 98eb7a0..7ed30f8 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java
@@ -17,12 +17,17 @@
package org.apache.hadoop.ozone.container.common.statemachine;
import com.google.protobuf.GeneratedMessage;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.CommandStatus.Status;
import org.apache.hadoop.ozone.container.common.states.DatanodeState;
import org.apache.hadoop.ozone.container.common.states.datanode
.InitDatanodeState;
import org.apache.hadoop.ozone.container.common.states.datanode
.RunningDatanodeState;
+import org.apache.hadoop.ozone.protocol.commands.CommandStatus;
+import org.apache.hadoop.ozone.protocol.commands.CommandStatus.CommandStatusBuilder;
import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -48,6 +53,7 @@ public class StateContext {
static final Logger LOG =
LoggerFactory.getLogger(StateContext.class);
private final Queue<SCMCommand> commandQueue;
+ private final Map<Long, CommandStatus> cmdStatusMap;
private final Lock lock;
private final DatanodeStateMachine parent;
private final AtomicLong stateExecutionCount;
@@ -68,6 +74,7 @@ public class StateContext {
this.state = state;
this.parent = parent;
commandQueue = new LinkedList<>();
+ cmdStatusMap = new ConcurrentHashMap<>();
reports = new LinkedList<>();
lock = new ReentrantLock();
stateExecutionCount = new AtomicLong(0);
@@ -269,6 +276,7 @@ public class StateContext {
} finally {
lock.unlock();
}
+ this.addCmdStatus(command);
}
/**
@@ -279,4 +287,66 @@ public class StateContext {
return stateExecutionCount.get();
}
+ /**
+ * Returns the {@link CommandStatus} for the given command id, or null
+ * if no status is tracked for that id.
+ *
+ * @return {@link CommandStatus} or null.
+ */
+ public CommandStatus getCmdStatus(Long key) {
+ return cmdStatusMap.get(key);
+ }
+
+ /**
+ * Adds a {@link CommandStatus} to the State Machine.
+ *
+ * @param status - {@link CommandStatus}.
+ */
+ public void addCmdStatus(Long key, CommandStatus status) {
+ cmdStatusMap.put(key, status);
+ }
+
+ /**
+ * Adds a {@link CommandStatus} to the State Machine for given SCMCommand.
+ *
+ * @param cmd - {@link SCMCommand}.
+ */
+ public void addCmdStatus(SCMCommand cmd) {
+ this.addCmdStatus(cmd.getCmdId(),
+ CommandStatusBuilder.newBuilder()
+ .setCmdId(cmd.getCmdId())
+ .setStatus(Status.PENDING)
+ .setType(cmd.getType())
+ .build());
+ }
+
+ /**
+ * Get map holding all {@link CommandStatus} objects.
+ *
+ */
+ public Map<Long, CommandStatus> getCommandStatusMap() {
+ return cmdStatusMap;
+ }
+
+ /**
+ * Removes the {@link CommandStatus} entry for the given command id.
+ *
+ */
+ public void removeCommandStatus(Long cmdId) {
+ cmdStatusMap.remove(cmdId);
+ }
+
+ /**
+ * Updates the status of a pending command.
+ * @param cmdId command id
+ * @param cmdExecuted SCMCommand
+ * @return true if command status updated successfully else false.
+ */
+ public boolean updateCommandStatus(Long cmdId, boolean cmdExecuted) {
+ if(cmdStatusMap.containsKey(cmdId)) {
+ cmdStatusMap.get(cmdId)
+ .setStatus(cmdExecuted ? Status.EXECUTED : Status.FAILED);
+ return true;
+ }
+ return false;
+ }
}
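Pieced together, an entry in cmdStatusMap moves through three stages: registered as PENDING when the command is queued, flipped to EXECUTED or FAILED by the command handler, then serialized and dropped by CommandStatusReportPublisher. A sketch tracing one command through the new API; the method itself is illustrative, not from the patch:
    // Illustrative only: follows one command through the new status map.
    void traceLifecycle(StateContext context, SCMCommand command) {
      context.addCmdStatus(command);      // queued -> Status.PENDING
      boolean executed = true;            // outcome a CommandHandler computes
      context.updateCommandStatus(command.getCmdId(), executed); // -> EXECUTED
      // CommandStatusReportPublisher.getReport() later serializes the entry
      // and, since it is no longer PENDING, removes it from the map.
    }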
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java
index 45f2bbd..f58cbae 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java
@@ -41,6 +41,7 @@ public class CloseContainerCommandHandler implements CommandHandler {
LoggerFactory.getLogger(CloseContainerCommandHandler.class);
private int invocationCount;
private long totalTime;
+ private boolean cmdExecuted;
/**
* Constructs a ContainerReport handler.
@@ -61,6 +62,7 @@ public class CloseContainerCommandHandler implements CommandHandler {
StateContext context, SCMConnectionManager connectionManager) {
LOG.debug("Processing Close Container command.");
invocationCount++;
+ cmdExecuted = false;
long startTime = Time.monotonicNow();
// TODO: define this as INVALID_CONTAINER_ID in HddsConsts.java (TBA)
long containerID = -1;
@@ -88,10 +90,11 @@ public class CloseContainerCommandHandler implements CommandHandler {
// submit the close container request for the XceiverServer to handle
container.submitContainerRequest(
request.build(), replicationType);
-
+ cmdExecuted = true;
} catch (Exception e) {
LOG.error("Can't close container " + containerID, e);
} finally {
+ updateCommandStatus(context, command, cmdExecuted, LOG);
long endTime = Time.monotonicNow();
totalTime += endTime - startTime;
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CommandHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CommandHandler.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CommandHandler.java
index 60e2dc4..2016419 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CommandHandler.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CommandHandler.java
@@ -24,6 +24,7 @@ import org.apache.hadoop.ozone.container.common.statemachine
import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
+import org.slf4j.Logger;
/**
* Generic interface for handlers.
@@ -58,4 +59,14 @@ public interface CommandHandler {
*/
long getAverageRunTime();
+ /**
+ * Default implementation for updating command status.
+ */
+ default void updateCommandStatus(StateContext context, SCMCommand command,
+ boolean cmdExecuted, Logger log) {
+ if (!context.updateCommandStatus(command.getCmdId(), cmdExecuted)) {
+ log.debug("{} with cmdId:{} not found.", command.getType(),
+ command.getCmdId());
+ }
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
index c3d1596..9640f93 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
@@ -21,7 +21,8 @@ import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMCommandProto;
-import org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.hdds.scm.container.common.helpers
+ .StorageContainerException;
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.ContainerBlocksDeletionACKProto;
@@ -54,7 +55,8 @@ import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.List;
-import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Result.CONTAINER_NOT_FOUND;
+import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
+ .Result.CONTAINER_NOT_FOUND;
/**
* Handle block deletion commands.
@@ -68,6 +70,7 @@ public class DeleteBlocksCommandHandler implements CommandHandler {
private final Configuration conf;
private int invocationCount;
private long totalTime;
+ private boolean cmdExecuted;
public DeleteBlocksCommandHandler(ContainerSet cset,
Configuration conf) {
@@ -78,93 +81,98 @@ public class DeleteBlocksCommandHandler implements CommandHandler {
@Override
public void handle(SCMCommand command, OzoneContainer container,
StateContext context, SCMConnectionManager connectionManager) {
- if (command.getType() != SCMCommandProto.Type.deleteBlocksCommand) {
- LOG.warn("Skipping handling command, expected command "
- + "type {} but found {}",
- SCMCommandProto.Type.deleteBlocksCommand, command.getType());
- return;
- }
- LOG.debug("Processing block deletion command.");
- invocationCount++;
+ cmdExecuted = false;
long startTime = Time.monotonicNow();
-
- // move blocks to deleting state.
- // this is a metadata update, the actual deletion happens in another
- // recycling thread.
- DeleteBlocksCommand cmd = (DeleteBlocksCommand) command;
- List<DeletedBlocksTransaction> containerBlocks = cmd.blocksTobeDeleted();
-
-
- DeletedContainerBlocksSummary summary =
- DeletedContainerBlocksSummary.getFrom(containerBlocks);
- LOG.info("Start to delete container blocks, TXIDs={}, "
- + "numOfContainers={}, numOfBlocks={}",
- summary.getTxIDSummary(),
- summary.getNumOfContainers(),
- summary.getNumOfBlocks());
-
- ContainerBlocksDeletionACKProto.Builder resultBuilder =
- ContainerBlocksDeletionACKProto.newBuilder();
- containerBlocks.forEach(entry -> {
- DeleteBlockTransactionResult.Builder txResultBuilder =
- DeleteBlockTransactionResult.newBuilder();
- txResultBuilder.setTxID(entry.getTxID());
- try {
- long containerId = entry.getContainerID();
- Container cont = containerSet.getContainer(containerId);
- if(cont == null) {
- throw new StorageContainerException("Unable to find the container "
- + containerId, CONTAINER_NOT_FOUND);
- }
- ContainerProtos.ContainerType containerType = cont.getContainerType();
- switch (containerType) {
- case KeyValueContainer:
- KeyValueContainerData containerData = (KeyValueContainerData)
- cont.getContainerData();
- deleteKeyValueContainerBlocks(containerData, entry);
- txResultBuilder.setSuccess(true);
- break;
- default:
- LOG.error("Delete Blocks Command Handler is not implemented for " +
- "containerType {}", containerType);
- }
- } catch (IOException e) {
- LOG.warn("Failed to delete blocks for container={}, TXID={}",
- entry.getContainerID(), entry.getTxID(), e);
- txResultBuilder.setSuccess(false);
+ try {
+ if (command.getType() != SCMCommandProto.Type.deleteBlocksCommand) {
+ LOG.warn("Skipping handling command, expected command "
+ + "type {} but found {}",
+ SCMCommandProto.Type.deleteBlocksCommand, command.getType());
+ return;
}
- resultBuilder.addResults(txResultBuilder.build());
- });
- ContainerBlocksDeletionACKProto blockDeletionACK = resultBuilder.build();
-
- // Send ACK back to SCM as long as meta updated
- // TODO Or we should wait until the blocks are actually deleted?
- if (!containerBlocks.isEmpty()) {
- for (EndpointStateMachine endPoint : connectionManager.getValues()) {
+ LOG.debug("Processing block deletion command.");
+ invocationCount++;
+
+ // move blocks to deleting state.
+ // this is a metadata update, the actual deletion happens in another
+ // recycling thread.
+ DeleteBlocksCommand cmd = (DeleteBlocksCommand) command;
+ List<DeletedBlocksTransaction> containerBlocks = cmd.blocksTobeDeleted();
+
+ DeletedContainerBlocksSummary summary =
+ DeletedContainerBlocksSummary.getFrom(containerBlocks);
+ LOG.info("Start to delete container blocks, TXIDs={}, "
+ + "numOfContainers={}, numOfBlocks={}",
+ summary.getTxIDSummary(),
+ summary.getNumOfContainers(),
+ summary.getNumOfBlocks());
+
+ ContainerBlocksDeletionACKProto.Builder resultBuilder =
+ ContainerBlocksDeletionACKProto.newBuilder();
+ containerBlocks.forEach(entry -> {
+ DeleteBlockTransactionResult.Builder txResultBuilder =
+ DeleteBlockTransactionResult.newBuilder();
+ txResultBuilder.setTxID(entry.getTxID());
try {
- if (LOG.isDebugEnabled()) {
- LOG.debug("Sending following block deletion ACK to SCM");
- for (DeleteBlockTransactionResult result :
- blockDeletionACK.getResultsList()) {
- LOG.debug(result.getTxID() + " : " + result.getSuccess());
- }
+ long containerId = entry.getContainerID();
+ Container cont = containerSet.getContainer(containerId);
+ if (cont == null) {
+ throw new StorageContainerException("Unable to find the container "
+ + containerId, CONTAINER_NOT_FOUND);
+ }
+ ContainerProtos.ContainerType containerType = cont.getContainerType();
+ switch (containerType) {
+ case KeyValueContainer:
+ KeyValueContainerData containerData = (KeyValueContainerData)
+ cont.getContainerData();
+ deleteKeyValueContainerBlocks(containerData, entry);
+ txResultBuilder.setSuccess(true);
+ break;
+ default:
+ LOG.error(
+ "Delete Blocks Command Handler is not implemented for " +
+ "containerType {}", containerType);
}
- endPoint.getEndPoint()
- .sendContainerBlocksDeletionACK(blockDeletionACK);
} catch (IOException e) {
- LOG.error("Unable to send block deletion ACK to SCM {}",
- endPoint.getAddress().toString(), e);
+ LOG.warn("Failed to delete blocks for container={}, TXID={}",
+ entry.getContainerID(), entry.getTxID(), e);
+ txResultBuilder.setSuccess(false);
+ }
+ resultBuilder.addResults(txResultBuilder.build());
+ });
+ ContainerBlocksDeletionACKProto blockDeletionACK = resultBuilder.build();
+
+ // Send ACK back to SCM as long as meta updated
+ // TODO Or we should wait until the blocks are actually deleted?
+ if (!containerBlocks.isEmpty()) {
+ for (EndpointStateMachine endPoint : connectionManager.getValues()) {
+ try {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Sending following block deletion ACK to SCM");
+ for (DeleteBlockTransactionResult result :
+ blockDeletionACK.getResultsList()) {
+ LOG.debug(result.getTxID() + " : " + result.getSuccess());
+ }
+ }
+ endPoint.getEndPoint()
+ .sendContainerBlocksDeletionACK(blockDeletionACK);
+ } catch (IOException e) {
+ LOG.error("Unable to send block deletion ACK to SCM {}",
+ endPoint.getAddress().toString(), e);
+ }
}
}
+ cmdExecuted = true;
+ } finally {
+ updateCommandStatus(context, command, cmdExecuted, LOG);
+ long endTime = Time.monotonicNow();
+ totalTime += endTime - startTime;
}
-
- long endTime = Time.monotonicNow();
- totalTime += endTime - startTime;
}
/**
- * Move a bunch of blocks from a container to deleting state.
- * This is a meta update, the actual deletes happen in async mode.
+ * Move a bunch of blocks from a container to deleting state. This is a meta
+ * update, the actual deletes happen in async mode.
*
* @param containerData - KeyValueContainerData
* @param delTX a block deletion transaction.
@@ -222,7 +230,7 @@ public class DeleteBlocksCommandHandler implements CommandHandler {
}
} else {
LOG.debug("Block {} not found or already under deletion in"
- + " container {}, skip deleting it.", blk, containerId);
+ + " container {}, skip deleting it.", blk, containerId);
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ReplicateContainerCommandHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ReplicateContainerCommandHandler.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ReplicateContainerCommandHandler.java
index b4e83b7..fe1d4e8 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ReplicateContainerCommandHandler.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ReplicateContainerCommandHandler.java
@@ -39,12 +39,17 @@ public class ReplicateContainerCommandHandler implements CommandHandler {
private int invocationCount;
private long totalTime;
+ private boolean cmdExecuted;
@Override
public void handle(SCMCommand command, OzoneContainer container,
StateContext context, SCMConnectionManager connectionManager) {
LOG.warn("Replicate command is not yet handled");
-
+ try {
+ cmdExecuted = true;
+ } finally {
+ updateCommandStatus(context, command, cmdExecuted, LOG);
+ }
}
@Override
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CloseContainerCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CloseContainerCommand.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CloseContainerCommand.java
index c7d8df5..6b7c22c 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CloseContainerCommand.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CloseContainerCommand.java
@@ -1,19 +1,18 @@
/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
* <p>
* http://www.apache.org/licenses/LICENSE-2.0
* <p>
* Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
*/
package org.apache.hadoop.ozone.protocol.commands;
@@ -24,7 +23,6 @@ import org.apache.hadoop.hdds.protocol.proto
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.CloseContainerCommandProto;
-
/**
* Asks datanode to close a container.
*/
@@ -36,6 +34,15 @@ public class CloseContainerCommand
public CloseContainerCommand(long containerID,
HddsProtos.ReplicationType replicationType) {
+ super();
+ this.containerID = containerID;
+ this.replicationType = replicationType;
+ }
+
+ // Should be called only for protobuf conversion
+ private CloseContainerCommand(long containerID,
+ HddsProtos.ReplicationType replicationType, long cmdId) {
+ super(cmdId);
this.containerID = containerID;
this.replicationType = replicationType;
}
@@ -63,6 +70,7 @@ public class CloseContainerCommand
public CloseContainerCommandProto getProto() {
return CloseContainerCommandProto.newBuilder()
.setContainerID(containerID)
+ .setCmdId(getCmdId())
.setReplicationType(replicationType).build();
}
@@ -70,8 +78,8 @@ public class CloseContainerCommand
CloseContainerCommandProto closeContainerProto) {
Preconditions.checkNotNull(closeContainerProto);
return new CloseContainerCommand(closeContainerProto.getContainerID(),
- closeContainerProto.getReplicationType());
-
+ closeContainerProto.getReplicationType(), closeContainerProto
+ .getCmdId());
}
public long getContainerID() {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CommandStatus.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CommandStatus.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CommandStatus.java
new file mode 100644
index 0000000..bf99700
--- /dev/null
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CommandStatus.java
@@ -0,0 +1,141 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.protocol.commands;
+
+import org.apache.hadoop.hdds.protocol.proto
+ .StorageContainerDatanodeProtocolProtos;
+import org.apache.hadoop.hdds.protocol.proto
+ .StorageContainerDatanodeProtocolProtos.CommandStatus.Status;
+import org.apache.hadoop.hdds.protocol.proto
+ .StorageContainerDatanodeProtocolProtos.SCMCommandProto;
+import org.apache.hadoop.hdds.protocol.proto
+ .StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type;
+
+/**
+ * A class that is used to communicate status of datanode commands.
+ */
+public class CommandStatus {
+
+ private SCMCommandProto.Type type;
+ private Long cmdId;
+ private Status status;
+ private String msg;
+
+ public Type getType() {
+ return type;
+ }
+
+ public Long getCmdId() {
+ return cmdId;
+ }
+
+ public Status getStatus() {
+ return status;
+ }
+
+ public String getMsg() {
+ return msg;
+ }
+
+ /**
+ * Allows the status to be changed after the CommandStatus is created.
+ *
+ * @param status the new status
+ */
+ public void setStatus(Status status) {
+ this.status = status;
+ }
+
+ /**
+ * Returns a CommandStatus from the protocol buffers.
+ *
+ * @param cmdStatusProto - protoBuf Message
+ * @return CommandStatus
+ */
+ public CommandStatus getFromProtoBuf(
+ StorageContainerDatanodeProtocolProtos.CommandStatus cmdStatusProto) {
+ return CommandStatusBuilder.newBuilder()
+ .setCmdId(cmdStatusProto.getCmdId())
+ .setStatus(cmdStatusProto.getStatus())
+ .setType(cmdStatusProto.getType())
+ .setMsg(cmdStatusProto.getMsg()).build();
+ }
+ /**
+ * Returns the protocol buffer message for this CommandStatus.
+ *
+ * @return StorageContainerDatanodeProtocolProtos.CommandStatus
+ */
+ public StorageContainerDatanodeProtocolProtos.CommandStatus
+ getProtoBufMessage() {
+ StorageContainerDatanodeProtocolProtos.CommandStatus.Builder builder =
+ StorageContainerDatanodeProtocolProtos.CommandStatus.newBuilder()
+ .setCmdId(this.getCmdId())
+ .setStatus(this.getStatus())
+ .setType(this.getType());
+ if (this.getMsg() != null) {
+ builder.setMsg(this.getMsg());
+ }
+ return builder.build();
+ }
+
+ /**
+ * Builder class for CommandStatus.
+ */
+ public static final class CommandStatusBuilder {
+
+ private SCMCommandProto.Type type;
+ private Long cmdId;
+ private StorageContainerDatanodeProtocolProtos.CommandStatus.Status status;
+ private String msg;
+
+ private CommandStatusBuilder() {
+ }
+
+ public static CommandStatusBuilder newBuilder() {
+ return new CommandStatusBuilder();
+ }
+
+ public CommandStatusBuilder setType(Type type) {
+ this.type = type;
+ return this;
+ }
+
+ public CommandStatusBuilder setCmdId(Long cmdId) {
+ this.cmdId = cmdId;
+ return this;
+ }
+
+ public CommandStatusBuilder setStatus(Status status) {
+ this.status = status;
+ return this;
+ }
+
+ public CommandStatusBuilder setMsg(String msg) {
+ this.msg = msg;
+ return this;
+ }
+
+ public CommandStatus build() {
+ CommandStatus commandStatus = new CommandStatus();
+ commandStatus.type = this.type;
+ commandStatus.msg = this.msg;
+ commandStatus.status = this.status;
+ commandStatus.cmdId = this.cmdId;
+ return commandStatus;
+ }
+ }
+}
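A minimal sketch of how the new class is meant to be used, with an invented
cmdId (this is illustrative only, not part of the patch); note that
getFromProtoBuf() is an instance method here, so an existing CommandStatus
object is needed to decode one:

import org.apache.hadoop.hdds.protocol.proto
    .StorageContainerDatanodeProtocolProtos;
import org.apache.hadoop.hdds.protocol.proto
    .StorageContainerDatanodeProtocolProtos.CommandStatus.Status;
import org.apache.hadoop.hdds.protocol.proto
    .StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type;
import org.apache.hadoop.ozone.protocol.commands.CommandStatus;

public class CommandStatusExample {
  public static void main(String[] args) {
    // Record a pending status for a (hypothetical) close-container command.
    CommandStatus status = CommandStatus.CommandStatusBuilder.newBuilder()
        .setCmdId(42L)                          // example id only
        .setType(Type.closeContainerCommand)
        .setStatus(Status.PENDING)
        .build();

    // To the wire format carried in the heartbeat, and back again.
    StorageContainerDatanodeProtocolProtos.CommandStatus proto =
        status.getProtoBufMessage();
    CommandStatus decoded = status.getFromProtoBuf(proto);
    assert decoded.getCmdId() == 42L;
  }
}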
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/DeleteBlocksCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/DeleteBlocksCommand.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/DeleteBlocksCommand.java
index 4fa33f6..46af794 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/DeleteBlocksCommand.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/DeleteBlocksCommand.java
@@ -7,7 +7,7 @@
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
- * http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -36,6 +36,14 @@ public class DeleteBlocksCommand extends
public DeleteBlocksCommand(List<DeletedBlocksTransaction> blocks) {
+ super();
+ this.blocksTobeDeleted = blocks;
+ }
+
+ // Should be called only for protobuf conversion
+ private DeleteBlocksCommand(List<DeletedBlocksTransaction> blocks,
+ long cmdId) {
+ super(cmdId);
this.blocksTobeDeleted = blocks;
}
@@ -56,11 +64,12 @@ public class DeleteBlocksCommand extends
public static DeleteBlocksCommand getFromProtobuf(
DeleteBlocksCommandProto deleteBlocksProto) {
return new DeleteBlocksCommand(deleteBlocksProto
- .getDeletedBlocksTransactionsList());
+ .getDeletedBlocksTransactionsList(), deleteBlocksProto.getCmdId());
}
public DeleteBlocksCommandProto getProto() {
return DeleteBlocksCommandProto.newBuilder()
+ .setCmdId(getCmdId())
.addAllDeletedBlocksTransactions(blocksTobeDeleted).build();
}
}
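The effect of threading cmdId through getProto()/getFromProtobuf() can be
seen in a small round-trip sketch (transaction values invented for the
example):

import java.util.Collections;
import org.apache.hadoop.hdds.protocol.proto
    .StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction;
import org.apache.hadoop.ozone.protocol.commands.DeleteBlocksCommand;

public class DeleteBlocksRoundTrip {
  public static void main(String[] args) {
    // The no-arg super() assigns a fresh cmdId from HddsIdFactory.
    DeleteBlocksCommand cmd = new DeleteBlocksCommand(
        Collections.singletonList(DeletedBlocksTransaction.newBuilder()
            .setContainerID(1).setCount(1).setTxID(7).build()));

    // getProto() now carries the id, and getFromProtobuf() restores it
    // through the private (blocks, cmdId) constructor instead of
    // generating a new one.
    DeleteBlocksCommand decoded =
        DeleteBlocksCommand.getFromProtobuf(cmd.getProto());
    assert decoded.getCmdId() == cmd.getCmdId();
  }
}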
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReplicateContainerCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReplicateContainerCommand.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReplicateContainerCommand.java
index 834318b..e860c93 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReplicateContainerCommand.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReplicateContainerCommand.java
@@ -30,7 +30,6 @@ import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMCommandProto;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type;
-import org.apache.hadoop.hdds.scm.container.ContainerID;
import com.google.common.base.Preconditions;
@@ -41,11 +40,19 @@ public class ReplicateContainerCommand
extends SCMCommand<ReplicateContainerCommandProto> {
private final long containerID;
-
private final List<DatanodeDetails> sourceDatanodes;
public ReplicateContainerCommand(long containerID,
List<DatanodeDetails> sourceDatanodes) {
+ super();
+ this.containerID = containerID;
+ this.sourceDatanodes = sourceDatanodes;
+ }
+
+ // Should be called only for protobuf conversion
+ public ReplicateContainerCommand(long containerID,
+ List<DatanodeDetails> sourceDatanodes, long cmdId) {
+ super(cmdId);
this.containerID = containerID;
this.sourceDatanodes = sourceDatanodes;
}
@@ -62,6 +69,7 @@ public class ReplicateContainerCommand
public ReplicateContainerCommandProto getProto() {
Builder builder = ReplicateContainerCommandProto.newBuilder()
+ .setCmdId(getCmdId())
.setContainerID(containerID);
for (DatanodeDetails dd : sourceDatanodes) {
builder.addSources(dd.getProtoBufMessage());
@@ -75,12 +83,12 @@ public class ReplicateContainerCommand
List<DatanodeDetails> datanodeDetails =
protoMessage.getSourcesList()
- .stream()
- .map(DatanodeDetails::getFromProtoBuf)
- .collect(Collectors.toList());
+ .stream()
+ .map(DatanodeDetails::getFromProtoBuf)
+ .collect(Collectors.toList());
return new ReplicateContainerCommand(protoMessage.getContainerID(),
- datanodeDetails);
+ datanodeDetails, protoMessage.getCmdId());
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReregisterCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReregisterCommand.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReregisterCommand.java
index 953e31a..d557104 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReregisterCommand.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReregisterCommand.java
@@ -49,6 +49,16 @@ public class ReregisterCommand extends
return getProto().toByteArray();
}
+ /**
+ * Not implemented for ReregisterCommand.
+ *
+ * @return 0, as ReregisterCommand does not carry a cmdId.
+ */
+ @Override
+ public long getCmdId() {
+ return 0;
+ }
+
public ReregisterCommandProto getProto() {
return ReregisterCommandProto
.newBuilder()
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/SCMCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/SCMCommand.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/SCMCommand.java
index 35ca802..6cda591 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/SCMCommand.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/SCMCommand.java
@@ -18,6 +18,7 @@
package org.apache.hadoop.ozone.protocol.commands;
import com.google.protobuf.GeneratedMessage;
+import org.apache.hadoop.hdds.HddsIdFactory;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMCommandProto;
@@ -27,6 +28,15 @@ import org.apache.hadoop.hdds.protocol.proto
* @param <T>
*/
public abstract class SCMCommand<T extends GeneratedMessage> {
+ private long cmdId;
+
+ SCMCommand() {
+ this.cmdId = HddsIdFactory.getLongId();
+ }
+
+ SCMCommand(long cmdId) {
+ this.cmdId = cmdId;
+ }
/**
* Returns the type of this command.
* @return Type
@@ -38,4 +48,13 @@ public abstract class SCMCommand<T extends GeneratedMessage> {
* @return A protobuf message.
*/
public abstract byte[] getProtoBufMessage();
+
+ /**
+ * Gets the commandId of this object.
+ * @return cmdId.
+ */
+ public long getCmdId() {
+ return cmdId;
+ }
+
}
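To show the two construction paths the new constructors create, here is a
hypothetical minimal subclass (not part of the patch, and assuming the
reregisterCommand constant in SCMCommandProto.Type); it has to live in the
same package because the SCMCommand constructors are package-private:

package org.apache.hadoop.ozone.protocol.commands;

import org.apache.hadoop.hdds.protocol.proto
    .StorageContainerDatanodeProtocolProtos.ReregisterCommandProto;
import org.apache.hadoop.hdds.protocol.proto
    .StorageContainerDatanodeProtocolProtos.SCMCommandProto;

public class NoOpCommand extends SCMCommand<ReregisterCommandProto> {

  public NoOpCommand() {
    super();        // fresh id: cmdId = HddsIdFactory.getLongId()
  }

  public NoOpCommand(long cmdId) {
    super(cmdId);   // preserve an id, e.g. when decoding from protobuf
  }

  @Override
  public SCMCommandProto.Type getType() {
    return SCMCommandProto.Type.reregisterCommand;
  }

  @Override
  public byte[] getProtoBufMessage() {
    return ReregisterCommandProto.getDefaultInstance().toByteArray();
  }
}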
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto b/hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
index 54230c1..4238389 100644
--- a/hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
+++ b/hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
@@ -80,6 +80,7 @@ message SCMHeartbeatRequestProto {
optional NodeReportProto nodeReport = 2;
optional ContainerReportsProto containerReport = 3;
optional ContainerActionsProto containerActions = 4;
+ optional CommandStatusReportsProto commandStatusReport = 5;
}
/*
@@ -127,6 +128,22 @@ message ContainerReportsProto {
repeated ContainerInfo reports = 1;
}
+message CommandStatusReportsProto {
+ repeated CommandStatus cmdStatus = 1;
+}
+
+message CommandStatus {
+ enum Status {
+ PENDING = 1;
+ EXECUTED = 2;
+ FAILED = 3;
+ }
+ required int64 cmdId = 1;
+ required Status status = 2 [default = PENDING];
+ required SCMCommandProto.Type type = 3;
+ optional string msg = 4;
+}
+
message ContainerActionsProto {
repeated ContainerAction containerActions = 1;
}
@@ -193,6 +210,7 @@ message ReregisterCommandProto {}
// HB response from SCM, contains a list of block deletion transactions.
message DeleteBlocksCommandProto {
repeated DeletedBlocksTransaction deletedBlocksTransactions = 1;
+ required int64 cmdId = 3;
}
// The deleted blocks which are stored in deletedBlock.db of scm.
@@ -226,6 +244,7 @@ This command asks the datanode to close a specific container.
message CloseContainerCommandProto {
required int64 containerID = 1;
required hadoop.hdds.ReplicationType replicationType = 2;
+ required int64 cmdId = 3;
}
/**
@@ -233,6 +252,7 @@ This command asks the datanode to delete a specific container.
*/
message DeleteContainerCommandProto {
required int64 containerID = 1;
+ required int64 cmdId = 2;
}
/**
@@ -241,6 +261,7 @@ This command asks the datanode to replicate a container from specific sources.
message ReplicateContainerCommandProto {
required int64 containerID = 1;
repeated DatanodeDetailsProto sources = 2;
+ required int64 cmdId = 3;
}
/**
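For reference, a hedged Java sketch of assembling the new report message and
attaching it to a heartbeat builder (field values invented; datanodeDetails
would still have to be set before build() since it is a required field):

import org.apache.hadoop.hdds.protocol.proto
    .StorageContainerDatanodeProtocolProtos.CommandStatus;
import org.apache.hadoop.hdds.protocol.proto
    .StorageContainerDatanodeProtocolProtos.CommandStatusReportsProto;
import org.apache.hadoop.hdds.protocol.proto
    .StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type;
import org.apache.hadoop.hdds.protocol.proto
    .StorageContainerDatanodeProtocolProtos.SCMHeartbeatRequestProto;

public class CommandStatusReportSketch {
  public static void main(String[] args) {
    CommandStatusReportsProto report = CommandStatusReportsProto.newBuilder()
        .addCmdStatus(CommandStatus.newBuilder()
            .setCmdId(1L)                             // required
            .setStatus(CommandStatus.Status.PENDING)  // required
            .setType(Type.closeContainerCommand)      // required
            .build())
        .build();

    // The report rides in the new optional field 5 of the heartbeat.
    SCMHeartbeatRequestProto.Builder heartbeat =
        SCMHeartbeatRequestProto.newBuilder()
            .setCommandStatusReport(report);
  }
}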
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ScmTestMock.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ScmTestMock.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ScmTestMock.java
index 8f4b0e3..fb8e7c1 100644
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ScmTestMock.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ScmTestMock.java
@@ -18,6 +18,8 @@ package org.apache.hadoop.ozone.container.common;
import com.google.common.base.Preconditions;
import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.CommandStatus;
import org.apache.hadoop.hdds.scm.VersionInfo;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.DatanodeDetailsProto;
@@ -59,6 +61,9 @@ public class ScmTestMock implements StorageContainerDatanodeProtocol {
private Map<DatanodeDetails, Map<String, ContainerInfo>> nodeContainers =
new HashMap();
private Map<DatanodeDetails, NodeReportProto> nodeReports = new HashMap<>();
+ private AtomicInteger commandStatusReport = new AtomicInteger(0);
+ private List<CommandStatus> cmdStatusList = new LinkedList<>();
+ private List<SCMCommandProto> scmCommandRequests = new LinkedList<>();
/**
* Returns the number of heartbeats made to this class.
*
@@ -180,10 +185,12 @@ public class ScmTestMock implements StorageContainerDatanodeProtocol {
sendHeartbeat(SCMHeartbeatRequestProto heartbeat) throws IOException {
rpcCount.incrementAndGet();
heartbeatCount.incrementAndGet();
+ if (heartbeat.hasCommandStatusReport()) {
+ cmdStatusList.addAll(heartbeat.getCommandStatusReport().getCmdStatusList());
+ commandStatusReport.incrementAndGet();
+ }
sleepIfNeeded();
- List<SCMCommandProto>
- cmdResponses = new LinkedList<>();
- return SCMHeartbeatResponseProto.newBuilder().addAllCommands(cmdResponses)
+ return SCMHeartbeatResponseProto.newBuilder().addAllCommands(scmCommandRequests)
.setDatanodeUUID(heartbeat.getDatanodeDetails().getUuid())
.build();
}
@@ -302,4 +309,24 @@ public class ScmTestMock implements StorageContainerDatanodeProtocol {
nodeContainers.clear();
}
+
+ public int getCommandStatusReportCount() {
+ return commandStatusReport.get();
+ }
+
+ public List<CommandStatus> getCmdStatusList() {
+ return cmdStatusList;
+ }
+
+ public List<SCMCommandProto> getScmCommandRequests() {
+ return scmCommandRequests;
+ }
+
+ public void clearScmCommandRequests() {
+ scmCommandRequests.clear();
+ }
+
+ public void addScmCommandRequest(SCMCommandProto scmCmd) {
+ scmCommandRequests.add(scmCmd);
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
index 5fd9cf6..026e7aa 100644
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
@@ -20,18 +20,27 @@ package org.apache.hadoop.ozone.container.common.report;
import com.google.common.util.concurrent.ThreadFactoryBuilder;
import com.google.protobuf.Descriptors;
import com.google.protobuf.GeneratedMessage;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.HddsIdFactory;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.CommandStatus.Status;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.NodeReportProto;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMHeartbeatRequestProto;
import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
+import org.apache.hadoop.ozone.protocol.commands.CommandStatus;
import org.apache.hadoop.util.concurrent.HadoopExecutors;
import org.junit.Assert;
+import org.junit.BeforeClass;
import org.junit.Test;
import org.mockito.Mockito;
@@ -42,12 +51,20 @@ import java.util.concurrent.TimeUnit;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
/**
* Test cases to test {@link ReportPublisher}.
*/
public class TestReportPublisher {
+ private static Configuration config;
+
+ @BeforeClass
+ public static void setup() {
+ config = new OzoneConfiguration();
+ }
+
/**
* Dummy report publisher for testing.
*/
@@ -93,9 +110,9 @@ public class TestReportPublisher {
.setNameFormat("Unit test ReportManager Thread - %d").build());
publisher.init(dummyContext, executorService);
Thread.sleep(150);
- Assert.assertEquals(1, ((DummyReportPublisher)publisher).getReportCount);
+ Assert.assertEquals(1, ((DummyReportPublisher) publisher).getReportCount);
Thread.sleep(150);
- Assert.assertEquals(2, ((DummyReportPublisher)publisher).getReportCount);
+ Assert.assertEquals(2, ((DummyReportPublisher) publisher).getReportCount);
executorService.shutdown();
}
@@ -110,12 +127,58 @@ public class TestReportPublisher {
publisher.init(dummyContext, executorService);
Thread.sleep(150);
executorService.shutdown();
- Assert.assertEquals(1, ((DummyReportPublisher)publisher).getReportCount);
+ Assert.assertEquals(1, ((DummyReportPublisher) publisher).getReportCount);
verify(dummyContext, times(1)).addReport(null);
}
@Test
+ public void testCommandStatusPublisher() throws InterruptedException {
+ StateContext dummyContext = Mockito.mock(StateContext.class);
+ ReportPublisher publisher = new CommandStatusReportPublisher();
+ final Map<Long, CommandStatus> cmdStatusMap = new ConcurrentHashMap<>();
+ when(dummyContext.getCommandStatusMap()).thenReturn(cmdStatusMap);
+ publisher.setConf(config);
+
+ ScheduledExecutorService executorService = HadoopExecutors
+ .newScheduledThreadPool(1,
+ new ThreadFactoryBuilder().setDaemon(true)
+ .setNameFormat("Unit test ReportManager Thread - %d").build());
+ publisher.init(dummyContext, executorService);
+ Assert.assertEquals(0,
+ ((CommandStatusReportPublisher) publisher).getReport()
+ .getCmdStatusCount());
+
+ // Insert to status object to state context map and then get the report.
+ CommandStatus obj1 = CommandStatus.CommandStatusBuilder.newBuilder()
+ .setCmdId(HddsIdFactory.getLongId())
+ .setType(Type.deleteBlocksCommand)
+ .setStatus(Status.PENDING)
+ .build();
+ CommandStatus obj2 = CommandStatus.CommandStatusBuilder.newBuilder()
+ .setCmdId(HddsIdFactory.getLongId())
+ .setType(Type.closeContainerCommand)
+ .setStatus(Status.EXECUTED)
+ .build();
+ cmdStatusMap.put(obj1.getCmdId(), obj1);
+ cmdStatusMap.put(obj2.getCmdId(), obj2);
+ Assert.assertEquals("Should publish report with 2 status objects", 2,
+ ((CommandStatusReportPublisher) publisher).getReport()
+ .getCmdStatusCount());
+ Assert.assertEquals(
+ "Next report should have 1 status objects as command status o"
+ + "bjects are still in Pending state",
+ 1, ((CommandStatusReportPublisher) publisher).getReport()
+ .getCmdStatusCount());
+ Assert.assertTrue(
+ "Next report should have 1 status objects as command status "
+ + "objects are still in Pending state",
+ ((CommandStatusReportPublisher) publisher).getReport()
+ .getCmdStatusList().get(0).getStatus().equals(Status.PENDING));
+ executorService.shutdown();
+ }
+
+ @Test
public void testAddingReportToHeartbeat() {
Configuration conf = new OzoneConfiguration();
ReportPublisherFactory factory = new ReportPublisherFactory(conf);
@@ -168,10 +231,10 @@ public class TestReportPublisher {
* Adds the report to heartbeat.
*
* @param requestBuilder builder to which the report has to be added.
- * @param report the report to be added.
+ * @param report the report to be added.
*/
- private static void addReport(SCMHeartbeatRequestProto.Builder requestBuilder,
- GeneratedMessage report) {
+ private static void addReport(SCMHeartbeatRequestProto.Builder
+ requestBuilder, GeneratedMessage report) {
String reportName = report.getDescriptorForType().getFullName();
for (Descriptors.FieldDescriptor descriptor :
SCMHeartbeatRequestProto.getDescriptor().getFields()) {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
index 0afd675..485b3f5 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
@@ -21,8 +21,12 @@ package org.apache.hadoop.hdds.scm.events;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.container.ContainerID;
-import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.ContainerReportFromDatanode;
-import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.NodeReportFromDatanode;
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
+ .CommandStatusReportFromDatanode;
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
+ .ContainerReportFromDatanode;
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
+ .NodeReportFromDatanode;
import org.apache.hadoop.hdds.server.events.Event;
import org.apache.hadoop.hdds.server.events.TypedEvent;
@@ -34,47 +38,54 @@ import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
public final class SCMEvents {
/**
- * NodeReports are sent out by Datanodes. This report is
- * received by SCMDatanodeHeartbeatDispatcher and NodeReport Event is
- * generated.
+ * NodeReports are sent out by Datanodes. This report is received by
+ * SCMDatanodeHeartbeatDispatcher and NodeReport Event is generated.
*/
public static final TypedEvent<NodeReportFromDatanode> NODE_REPORT =
new TypedEvent<>(NodeReportFromDatanode.class, "Node_Report");
/**
- * ContainerReports are send out by Datanodes. This report
- * is received by SCMDatanodeHeartbeatDispatcher and Container_Report Event
- * i generated.
+ * ContainerReports are sent out by Datanodes. This report is received by
+ * SCMDatanodeHeartbeatDispatcher and a Container_Report event
+ * is generated.
*/
public static final TypedEvent<ContainerReportFromDatanode> CONTAINER_REPORT =
new TypedEvent<>(ContainerReportFromDatanode.class, "Container_Report");
/**
+ * A Command status report will be sent by datanodes. This report is received
+ * by SCMDatanodeHeartbeatDispatcher and a CommandReport event is generated.
+ */
+ public static final TypedEvent<CommandStatusReportFromDatanode>
+ CMD_STATUS_REPORT =
+ new TypedEvent<>(CommandStatusReportFromDatanode.class,
+ "Cmd_Status_Report");
+
+ /**
* When ever a command for the Datanode needs to be issued by any component
- * inside SCM, a Datanode_Command event is generated. NodeManager listens
- * to these events and dispatches them to Datanode for further processing.
+ * inside SCM, a Datanode_Command event is generated. NodeManager listens to
+ * these events and dispatches them to Datanode for further processing.
*/
public static final Event<CommandForDatanode> DATANODE_COMMAND =
new TypedEvent<>(CommandForDatanode.class, "Datanode_Command");
/**
- * A Close Container Event can be triggered under many condition.
- * Some of them are:
- * 1. A Container is full, then we stop writing further information to
- * that container. DN's let SCM know that current state and sends a
- * informational message that allows SCM to close the container.
- *
- * 2. If a pipeline is open; for example Ratis; if a single node fails,
- * we will proactively close these containers.
- *
- * Once a command is dispatched to DN, we will also listen to updates from
- * the datanode which lets us know that this command completed or timed out.
+ * A Close Container Event can be triggered under many conditions. Some of
+ * them are: 1. A Container is full, then we stop writing further information
+ * to that container. DNs let SCM know the current state and send an
+ * informational message that allows SCM to close the container.
+ * <p>
+ * 2. If a pipeline is open; for example Ratis; if a single node fails, we
+ * will proactively close these containers.
+ * <p>
+ * Once a command is dispatched to DN, we will also listen to updates from the
+ * datanode which lets us know that this command completed or timed out.
*/
public static final TypedEvent<ContainerID> CLOSE_CONTAINER =
new TypedEvent<>(ContainerID.class, "Close_Container");
/**
- * This event will be triggered whenever a new datanode is
- * registered with SCM.
+ * This event will be triggered whenever a new datanode is registered with
+ * SCM.
*/
public static final TypedEvent<DatanodeDetails> NEW_NODE =
new TypedEvent<>(DatanodeDetails.class, "New_Node");
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
index 4cfa98f..2461d37 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
@@ -19,6 +19,8 @@ package org.apache.hadoop.hdds.scm.server;
import com.google.common.base.Preconditions;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.CommandStatusReportsProto;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
import org.apache.hadoop.hdds.protocol.proto
@@ -37,7 +39,7 @@ import java.util.List;
import static org.apache.hadoop.hdds.scm.events.SCMEvents.CONTAINER_REPORT;
import static org.apache.hadoop.hdds.scm.events.SCMEvents.NODE_REPORT;
-
+import static org.apache.hadoop.hdds.scm.events.SCMEvents.CMD_STATUS_REPORT;
/**
* This class is responsible for dispatching heartbeat from datanode to
* appropriate EventHandler at SCM.
@@ -86,6 +88,13 @@ public final class SCMDatanodeHeartbeatDispatcher {
heartbeat.getContainerReport()));
}
+
+ if (heartbeat.hasCommandStatusReport()) {
+ eventPublisher.fireEvent(CMD_STATUS_REPORT,
+ new CommandStatusReportFromDatanode(datanodeDetails,
+ heartbeat.getCommandStatusReport()));
+ }
+
return commands;
}
@@ -136,4 +145,16 @@ public final class SCMDatanodeHeartbeatDispatcher {
}
}
+ /**
+ * Command status report event payload with origin.
+ */
+ public static class CommandStatusReportFromDatanode
+ extends ReportFromDatanode<CommandStatusReportsProto> {
+
+ public CommandStatusReportFromDatanode(DatanodeDetails datanodeDetails,
+ CommandStatusReportsProto report) {
+ super(datanodeDetails, report);
+ }
+ }
+
}
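A hedged sketch of consuming the new event on the SCM side, assuming the
EventQueue/EventHandler API from the hdds server framework; the handler body
is illustrative only:

import org.apache.hadoop.hdds.scm.events.SCMEvents;
import org.apache.hadoop.hdds.server.events.EventQueue;

public class CmdStatusReportHandlerSketch {
  public static void main(String[] args) {
    EventQueue eventQueue = new EventQueue();
    // The dispatcher above fires CMD_STATUS_REPORT with a
    // CommandStatusReportFromDatanode payload; any registered handler
    // receives it together with a publisher for follow-up events.
    eventQueue.addHandler(SCMEvents.CMD_STATUS_REPORT,
        (report, publisher) -> {
          System.out.println("Received command status report with "
              + report.getReport().getCmdStatusCount() + " entries");
        });
  }
}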
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java
index 042e3cc..1b79ebf 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java
@@ -21,6 +21,10 @@ import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.CommandStatusReportsProto;
+import org.apache.hadoop.hdds.scm.server.
+ SCMDatanodeHeartbeatDispatcher.CommandStatusReportFromDatanode;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
import org.apache.hadoop.hdds.protocol.proto
@@ -42,6 +46,7 @@ import org.mockito.Mockito;
import static org.apache.hadoop.hdds.scm.events.SCMEvents.CONTAINER_REPORT;
import static org.apache.hadoop.hdds.scm.events.SCMEvents.NODE_REPORT;
+import static org.apache.hadoop.hdds.scm.events.SCMEvents.CMD_STATUS_REPORT;
/**
* This class tests the behavior of SCMDatanodeHeartbeatDispatcher.
@@ -91,6 +96,8 @@ public class TestSCMDatanodeHeartbeatDispatcher {
ContainerReportsProto containerReport =
ContainerReportsProto.getDefaultInstance();
+ CommandStatusReportsProto commandStatusReport =
+ CommandStatusReportsProto.getDefaultInstance();
SCMDatanodeHeartbeatDispatcher dispatcher =
new SCMDatanodeHeartbeatDispatcher(Mockito.mock(NodeManager.class),
@@ -98,9 +105,18 @@ public class TestSCMDatanodeHeartbeatDispatcher {
@Override
public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void fireEvent(
EVENT_TYPE event, PAYLOAD payload) {
- Assert.assertEquals(event, CONTAINER_REPORT);
- Assert.assertEquals(containerReport,
- ((ContainerReportFromDatanode)payload).getReport());
+ Assert.assertTrue(
+ event.equals(CONTAINER_REPORT)
+ || event.equals(CMD_STATUS_REPORT));
+
+ if (payload instanceof ContainerReportFromDatanode) {
+ Assert.assertEquals(containerReport,
+ ((ContainerReportFromDatanode) payload).getReport());
+ }
+ if (payload instanceof CommandStatusReportFromDatanode) {
+ Assert.assertEquals(commandStatusReport,
+ ((CommandStatusReportFromDatanode) payload).getReport());
+ }
eventReceived.incrementAndGet();
}
});
@@ -111,9 +127,10 @@ public class TestSCMDatanodeHeartbeatDispatcher {
SCMHeartbeatRequestProto.newBuilder()
.setDatanodeDetails(datanodeDetails.getProtoBufMessage())
.setContainerReport(containerReport)
+ .setCommandStatusReport(commandStatusReport)
.build();
dispatcher.dispatch(heartbeat);
- Assert.assertEquals(1, eventReceived.get());
+ Assert.assertEquals(2, eventReceived.get());
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
index 9db9e80..be8bd87 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
@@ -16,12 +16,29 @@
*/
package org.apache.hadoop.ozone.container.common;
+import java.util.Map;
import org.apache.commons.codec.digest.DigestUtils;
import org.apache.commons.lang3.RandomUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.CloseContainerCommandProto;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.CommandStatus.Status;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.DeleteBlocksCommandProto;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.ReplicateContainerCommandProto;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.SCMCommandProto;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type;
import org.apache.hadoop.hdds.scm.TestUtils;
import org.apache.hadoop.hdds.scm.VersionInfo;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
@@ -54,6 +71,7 @@ import org.apache.hadoop.ozone.container.common.states.endpoint
import org.apache.hadoop.ozone.container.common.states.endpoint
.VersionEndpointTask;
import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
+import org.apache.hadoop.ozone.protocol.commands.CommandStatus;
import org.apache.hadoop.test.PathUtils;
import org.apache.hadoop.util.Time;
import org.junit.AfterClass;
@@ -74,6 +92,9 @@ import static org.apache.hadoop.ozone.container.common.ContainerTestUtils
.createEndpoint;
import static org.hamcrest.Matchers.lessThanOrEqualTo;
import static org.mockito.Mockito.when;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
/**
* Tests the endpoints.
@@ -83,6 +104,7 @@ public class TestEndPoint {
private static RPC.Server scmServer;
private static ScmTestMock scmServerImpl;
private static File testDir;
+ private static Configuration config;
@AfterClass
public static void tearDown() throws Exception {
@@ -99,6 +121,12 @@ public class TestEndPoint {
scmServer = SCMTestUtils.startScmRpcServer(SCMTestUtils.getConf(),
scmServerImpl, serverAddress, 10);
testDir = PathUtils.getTestDir(TestEndPoint.class);
+ config = SCMTestUtils.getConf();
+ config.set(DFS_DATANODE_DATA_DIR_KEY, testDir.getAbsolutePath());
+ config.set(OZONE_METADATA_DIRS, testDir.getAbsolutePath());
+ config
+ .setBoolean(OzoneConfigKeys.DFS_CONTAINER_RATIS_IPC_RANDOM_PORT, true);
+ config.set(HddsConfigKeys.HDDS_COMMAND_STATUS_REPORT_INTERVAL, "1s");
}
@Test
@@ -312,7 +340,87 @@ public class TestEndPoint {
}
}
- private void heartbeatTaskHelper(InetSocketAddress scmAddress,
+ @Test
+ public void testHeartbeatWithCommandStatusReport() throws Exception {
+ DatanodeDetails dataNode = getDatanodeDetails();
+ try (EndpointStateMachine rpcEndPoint =
+ createEndpoint(SCMTestUtils.getConf(),
+ serverAddress, 1000)) {
+ String storageId = UUID.randomUUID().toString();
+ // Add some scmCommands for heartbeat response
+ addScmCommands();
+
+ SCMHeartbeatRequestProto request = SCMHeartbeatRequestProto.newBuilder()
+ .setDatanodeDetails(dataNode.getProtoBufMessage())
+ .setNodeReport(TestUtils.createNodeReport(
+ getStorageReports(storageId)))
+ .build();
+
+ SCMHeartbeatResponseProto responseProto = rpcEndPoint.getEndPoint()
+ .sendHeartbeat(request);
+ assertNotNull(responseProto);
+ assertEquals(3, responseProto.getCommandsCount());
+ assertEquals(0, scmServerImpl.getCommandStatusReportCount());
+
+ // Send heartbeat again from heartbeat endpoint task
+ final StateContext stateContext = heartbeatTaskHelper(serverAddress, 3000);
+ Map<Long, CommandStatus> map = stateContext.getCommandStatusMap();
+ assertNotNull(map);
+ assertEquals("Should have 3 objects", 3, map.size());
+ assertTrue(map.containsKey(Long.valueOf(1)));
+ assertTrue(map.containsKey(Long.valueOf(2)));
+ assertTrue(map.containsKey(Long.valueOf(3)));
+ assertTrue(map.get(Long.valueOf(1)).getType()
+ .equals(Type.closeContainerCommand));
+ assertTrue(map.get(Long.valueOf(2)).getType()
+ .equals(Type.replicateContainerCommand));
+ assertTrue(
+ map.get(Long.valueOf(3)).getType().equals(Type.deleteBlocksCommand));
+ assertTrue(map.get(Long.valueOf(1)).getStatus().equals(Status.PENDING));
+ assertTrue(map.get(Long.valueOf(2)).getStatus().equals(Status.PENDING));
+ assertTrue(map.get(Long.valueOf(3)).getStatus().equals(Status.PENDING));
+
+ scmServerImpl.clearScmCommandRequests();
+ }
+ }
+
+ private void addScmCommands() {
+ SCMCommandProto closeCommand = SCMCommandProto.newBuilder()
+ .setCloseContainerCommandProto(
+ CloseContainerCommandProto.newBuilder().setCmdId(1)
+ .setContainerID(1)
+ .setReplicationType(ReplicationType.RATIS)
+ .build())
+ .setCommandType(Type.closeContainerCommand)
+ .build();
+ SCMCommandProto replicationCommand = SCMCommandProto.newBuilder()
+ .setReplicateContainerCommandProto(
+ ReplicateContainerCommandProto.newBuilder()
+ .setCmdId(2)
+ .setContainerID(2)
+ .build())
+ .setCommandType(Type.replicateContainerCommand)
+ .build();
+ SCMCommandProto deleteBlockCommand = SCMCommandProto.newBuilder()
+ .setDeleteBlocksCommandProto(
+ DeleteBlocksCommandProto.newBuilder()
+ .setCmdId(3)
+ .addDeletedBlocksTransactions(
+ DeletedBlocksTransaction.newBuilder()
+ .setContainerID(45)
+ .setCount(1)
+ .setTxID(23)
+ .build())
+ .build())
+ .setCommandType(Type.deleteBlocksCommand)
+ .build();
+ scmServerImpl.addScmCommandRequest(closeCommand);
+ scmServerImpl.addScmCommandRequest(deleteBlockCommand);
+ scmServerImpl.addScmCommandRequest(replicationCommand);
+ }
+
+ private StateContext heartbeatTaskHelper(InetSocketAddress scmAddress,
int rpcTimeout) throws Exception {
Configuration conf = SCMTestUtils.getConf();
conf.set(DFS_DATANODE_DATA_DIR_KEY, testDir.getAbsolutePath());
@@ -344,6 +452,7 @@ public class TestEndPoint {
Assert.assertEquals(EndpointStateMachine.EndPointStates.HEARTBEAT,
rpcEndPoint.getState());
+ return stateContext;
}
}
[10/50] [abbrv] hadoop git commit: HADOOP-15568. fix some typos in
the .sh comments. Contributed by Steve Loughran.
Posted by bo...@apache.org.
HADOOP-15568. fix some typos in the .sh comments. Contributed by Steve Loughran.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4a08ddfa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4a08ddfa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4a08ddfa
Branch: refs/heads/YARN-7402
Commit: 4a08ddfa68a405bfd97ffd96fafc1e3d48d20d7e
Parents: ea9b608
Author: Akira Ajisaka <aa...@apache.org>
Authored: Mon Jul 9 15:43:38 2018 -0400
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Mon Jul 9 15:43:38 2018 -0400
----------------------------------------------------------------------
.../hadoop-common/src/main/conf/hadoop-env.sh | 6 +++---
.../hadoop-common/src/main/conf/hadoop-metrics2.properties | 2 +-
2 files changed, 4 insertions(+), 4 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4a08ddfa/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh b/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
index 3826f67..6db085a 100644
--- a/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
+++ b/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
@@ -88,7 +88,7 @@
# Extra Java runtime options for all Hadoop commands. We don't support
# IPv6 yet/still, so by default the preference is set to IPv4.
# export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"
-# For Kerberos debugging, an extended option set logs more invormation
+# For Kerberos debugging, an extended option set logs more information
# export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true -Dsun.security.krb5.debug=true -Dsun.security.spnego.debug"
# Some parts of the shell code may do special things dependent upon
@@ -120,9 +120,9 @@ esac
#
# By default, Apache Hadoop overrides Java's CLASSPATH
# environment variable. It is configured such
-# that it sarts out blank with new entries added after passing
+# that it starts out blank with new entries added after passing
# a series of checks (file/dir exists, not already listed aka
-# de-deduplication). During de-depulication, wildcards and/or
+# de-deduplication). During de-deduplication, wildcards and/or
# directories are *NOT* expanded to keep it simple. Therefore,
# if the computed classpath has two specific mentions of
# awesome-methods-1.0.jar, only the first one added will be seen.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4a08ddfa/hadoop-common-project/hadoop-common/src/main/conf/hadoop-metrics2.properties
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/conf/hadoop-metrics2.properties b/hadoop-common-project/hadoop-common/src/main/conf/hadoop-metrics2.properties
index 16fdcf0..f061313 100644
--- a/hadoop-common-project/hadoop-common/src/main/conf/hadoop-metrics2.properties
+++ b/hadoop-common-project/hadoop-common/src/main/conf/hadoop-metrics2.properties
@@ -47,7 +47,7 @@
#*.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40
# Tag values to use for the ganglia prefix. If not defined no tags are used.
-# If '*' all tags are used. If specifiying multiple tags separate them with
+# If '*' all tags are used. If specifying multiple tags separate them with
# commas. Note that the last segment of the property name is the context name.
#
# A typical use of tags is separating the metrics by the HDFS rpc port
[26/50] [abbrv] hadoop git commit: HADOOP-15594. Exclude
commons-lang3 from hadoop-client-minicluster. Contributed by Takanobu
Asanuma.
Posted by bo...@apache.org.
HADOOP-15594. Exclude commons-lang3 from hadoop-client-minicluster. Contributed by Takanobu Asanuma.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d36ed94e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d36ed94e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d36ed94e
Branch: refs/heads/YARN-7402
Commit: d36ed94ee06945fe9122970b196968fd1c997dcc
Parents: 2ae13d4
Author: Akira Ajisaka <aa...@apache.org>
Authored: Wed Jul 11 10:53:08 2018 -0400
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Wed Jul 11 10:53:08 2018 -0400
----------------------------------------------------------------------
hadoop-client-modules/hadoop-client-minicluster/pom.xml | 8 ++++++++
1 file changed, 8 insertions(+)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d36ed94e/hadoop-client-modules/hadoop-client-minicluster/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-client-modules/hadoop-client-minicluster/pom.xml b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
index 6fa24b4..490281a 100644
--- a/hadoop-client-modules/hadoop-client-minicluster/pom.xml
+++ b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
@@ -167,6 +167,10 @@
<artifactId>commons-io</artifactId>
</exclusion>
<exclusion>
+ <groupId>org.apache.commons</groupId>
+ <artifactId>commons-lang3</artifactId>
+ </exclusion>
+ <exclusion>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
</exclusion>
@@ -492,6 +496,10 @@
<artifactId>commons-codec</artifactId>
</exclusion>
<exclusion>
+ <groupId>org.apache.commons</groupId>
+ <artifactId>commons-lang3</artifactId>
+ </exclusion>
+ <exclusion>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
</exclusion>
[40/50] [abbrv] hadoop git commit: HDDS-253. SCMBlockDeletingService
should publish events for delete blocks to EventQueue. Contributed by Lokesh
Jain.
Posted by bo...@apache.org.
HDDS-253. SCMBlockDeletingService should publish events for delete blocks to EventQueue. Contributed by Lokesh Jain.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1fe5b938
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1fe5b938
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1fe5b938
Branch: refs/heads/YARN-7402
Commit: 1fe5b938435ab49e40cffa66f4dd16ddf1592405
Parents: 3f3f722
Author: Nanda kumar <na...@apache.org>
Authored: Fri Jul 13 17:18:42 2018 +0530
Committer: Nanda kumar <na...@apache.org>
Committed: Fri Jul 13 17:18:42 2018 +0530
----------------------------------------------------------------------
.../apache/hadoop/hdds/scm/block/BlockManagerImpl.java | 10 ++++++----
.../hadoop/hdds/scm/block/SCMBlockDeletingService.java | 13 +++++++++----
.../hdds/scm/server/StorageContainerManager.java | 2 +-
.../apache/hadoop/hdds/scm/block/TestBlockManager.java | 2 +-
.../apache/hadoop/ozone/scm/TestContainerSQLCli.java | 3 +--
5 files changed, 18 insertions(+), 12 deletions(-)
----------------------------------------------------------------------
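The core of the change in miniature, as a sketch (datanode id and transaction
list invented, and assuming the datanode id is the UUID that CommandForDatanode
expects): the deleting service no longer calls the NodeManager directly but
publishes a CommandForDatanode event, and whatever is registered for
DATANODE_COMMAND (normally the NodeManager) delivers it:

import java.util.Collections;
import java.util.UUID;
import org.apache.hadoop.hdds.scm.events.SCMEvents;
import org.apache.hadoop.hdds.server.events.EventPublisher;
import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
import org.apache.hadoop.ozone.protocol.commands.DeleteBlocksCommand;

public class DeleteDispatchSketch {
  static void dispatch(EventPublisher eventPublisher, UUID dnId) {
    DeleteBlocksCommand command =
        new DeleteBlocksCommand(Collections.emptyList());
    // Decoupled dispatch: the handler registered for DATANODE_COMMAND
    // queues the command for the given datanode.
    eventPublisher.fireEvent(SCMEvents.DATANODE_COMMAND,
        new CommandForDatanode<>(dnId, command));
  }
}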
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1fe5b938/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
index 953f71e..6825ca4 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
@@ -28,6 +28,7 @@ import org.apache.hadoop.hdds.scm.node.NodeManager;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
import org.apache.hadoop.metrics2.util.MBeans;
import org.apache.hadoop.ozone.OzoneConsts;
import org.apache.hadoop.hdds.client.BlockID;
@@ -87,10 +88,12 @@ public class BlockManagerImpl implements BlockManager, BlockmanagerMXBean {
* @param conf - configuration.
* @param nodeManager - node manager.
* @param containerManager - container manager.
+ * @param eventPublisher - event publisher.
* @throws IOException
*/
public BlockManagerImpl(final Configuration conf,
- final NodeManager nodeManager, final Mapping containerManager)
+ final NodeManager nodeManager, final Mapping containerManager,
+ EventPublisher eventPublisher)
throws IOException {
this.nodeManager = nodeManager;
this.containerManager = containerManager;
@@ -120,9 +123,8 @@ public class BlockManagerImpl implements BlockManager, BlockmanagerMXBean {
OZONE_BLOCK_DELETING_SERVICE_TIMEOUT_DEFAULT,
TimeUnit.MILLISECONDS);
blockDeletingService =
- new SCMBlockDeletingService(
- deletedBlockLog, containerManager, nodeManager, svcInterval,
- serviceTimeout, conf);
+ new SCMBlockDeletingService(deletedBlockLog, containerManager,
+ nodeManager, eventPublisher, svcInterval, serviceTimeout, conf);
}
/**
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1fe5b938/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java
index 2c555e0..6f65fdd 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java
@@ -20,11 +20,14 @@ import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Preconditions;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.scm.container.Mapping;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
import org.apache.hadoop.hdds.scm.node.NodeManager;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
import org.apache.hadoop.ozone.protocol.commands.DeleteBlocksCommand;
import org.apache.hadoop.util.Time;
import org.apache.hadoop.utils.BackgroundService;
@@ -61,6 +64,7 @@ public class SCMBlockDeletingService extends BackgroundService {
private final DeletedBlockLog deletedBlockLog;
private final Mapping mappingService;
private final NodeManager nodeManager;
+ private final EventPublisher eventPublisher;
// Block delete limit size is dynamically calculated based on container
// delete limit size (ozone.block.deleting.container.limit.per.interval)
@@ -76,13 +80,14 @@ public class SCMBlockDeletingService extends BackgroundService {
private int blockDeleteLimitSize;
public SCMBlockDeletingService(DeletedBlockLog deletedBlockLog,
- Mapping mapper, NodeManager nodeManager,
- long interval, long serviceTimeout, Configuration conf) {
+ Mapping mapper, NodeManager nodeManager, EventPublisher eventPublisher,
+ long interval, long serviceTimeout, Configuration conf) {
super("SCMBlockDeletingService", interval, TimeUnit.MILLISECONDS,
BLOCK_DELETING_SERVICE_CORE_POOL_SIZE, serviceTimeout);
this.deletedBlockLog = deletedBlockLog;
this.mappingService = mapper;
this.nodeManager = nodeManager;
+ this.eventPublisher = eventPublisher;
int containerLimit = conf.getInt(
OZONE_BLOCK_DELETING_CONTAINER_LIMIT_PER_INTERVAL,
@@ -145,8 +150,8 @@ public class SCMBlockDeletingService extends BackgroundService {
// We should stop caching new commands if num of un-processed
// command is bigger than a limit, e.g 50. In case datanode goes
// offline for sometime, the cached commands be flooded.
- nodeManager.addDatanodeCommand(dnId,
- new DeleteBlocksCommand(dnTXs));
+ eventPublisher.fireEvent(SCMEvents.DATANODE_COMMAND,
+ new CommandForDatanode<>(dnId, new DeleteBlocksCommand(dnTXs)));
LOG.debug(
"Added delete block command for datanode {} in the queue,"
+ " number of delete block transactions: {}, TxID list: {}",
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1fe5b938/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
index 5f511ee..f37a0ed 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
@@ -181,7 +181,7 @@ public final class StorageContainerManager extends ServiceRuntimeInfoImpl
scmContainerManager = new ContainerMapping(
conf, getScmNodeManager(), cacheSize);
scmBlockManager = new BlockManagerImpl(
- conf, getScmNodeManager(), scmContainerManager);
+ conf, getScmNodeManager(), scmContainerManager, eventQueue);
Node2ContainerMap node2ContainerMap = new Node2ContainerMap();
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1fe5b938/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java
index 9fbb9fa..06e7420 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java
@@ -74,7 +74,7 @@ public class TestBlockManager {
}
nodeManager = new MockNodeManager(true, 10);
mapping = new ContainerMapping(conf, nodeManager, 128);
- blockManager = new BlockManagerImpl(conf, nodeManager, mapping);
+ blockManager = new BlockManagerImpl(conf, nodeManager, mapping, null);
if(conf.getBoolean(ScmConfigKeys.DFS_CONTAINER_RATIS_ENABLED_KEY,
ScmConfigKeys.DFS_CONTAINER_RATIS_ENABLED_DEFAULT)){
factor = HddsProtos.ReplicationFactor.THREE;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1fe5b938/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java
index 1a1f37c..a878627 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java
@@ -17,7 +17,6 @@
*/
package org.apache.hadoop.ozone.scm;
-import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.node.NodeManager;
import org.apache.hadoop.ozone.MiniOzoneCluster;
import org.apache.hadoop.ozone.OzoneConfigKeys;
@@ -117,7 +116,7 @@ public class TestContainerSQLCli {
nodeManager = cluster.getStorageContainerManager().getScmNodeManager();
mapping = new ContainerMapping(conf, nodeManager, 128);
- blockManager = new BlockManagerImpl(conf, nodeManager, mapping);
+ blockManager = new BlockManagerImpl(conf, nodeManager, mapping, null);
// blockManager.allocateBlock() will create containers if there is none
// stored in levelDB. The number of containers to create is the value of
[06/50] [abbrv] hadoop git commit: HADOOP-15581. Set default jetty
log level to INFO in KMS. Contributed by Kitti Nanasi.
Posted by bo...@apache.org.
HADOOP-15581. Set default jetty log level to INFO in KMS. Contributed by Kitti Nanasi.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/895845e9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/895845e9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/895845e9
Branch: refs/heads/YARN-7402
Commit: 895845e9b0d7ac49da36b5cf773c6330afe4f3e0
Parents: def9d94
Author: Xiao Chen <xi...@apache.org>
Authored: Mon Jul 9 12:06:25 2018 -0700
Committer: Xiao Chen <xi...@apache.org>
Committed: Mon Jul 9 12:06:50 2018 -0700
----------------------------------------------------------------------
.../hadoop-kms/src/main/conf/kms-log4j.properties | 4 +++-
.../hadoop-kms/src/test/resources/log4j.properties | 4 +++-
2 files changed, 6 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/895845e9/hadoop-common-project/hadoop-kms/src/main/conf/kms-log4j.properties
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-kms/src/main/conf/kms-log4j.properties b/hadoop-common-project/hadoop-kms/src/main/conf/kms-log4j.properties
index 04a3cf3..e2afd41 100644
--- a/hadoop-common-project/hadoop-kms/src/main/conf/kms-log4j.properties
+++ b/hadoop-common-project/hadoop-kms/src/main/conf/kms-log4j.properties
@@ -37,4 +37,6 @@ log4j.logger.org.apache.hadoop=INFO
log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
# make zookeeper log level an explicit config, and not changing with rootLogger.
log4j.logger.org.apache.zookeeper=INFO
-log4j.logger.org.apache.curator=INFO
\ No newline at end of file
+log4j.logger.org.apache.curator=INFO
+# make jetty log level an explicit config, and not changing with rootLogger.
+log4j.logger.org.eclipse.jetty=INFO
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/hadoop/blob/895845e9/hadoop-common-project/hadoop-kms/src/test/resources/log4j.properties
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-kms/src/test/resources/log4j.properties b/hadoop-common-project/hadoop-kms/src/test/resources/log4j.properties
index e319af6..b8e6353 100644
--- a/hadoop-common-project/hadoop-kms/src/test/resources/log4j.properties
+++ b/hadoop-common-project/hadoop-kms/src/test/resources/log4j.properties
@@ -31,4 +31,6 @@ log4j.logger.org.apache.directory.server.core=OFF
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=OFF
# make zookeeper log level an explicit config, and not changing with rootLogger.
log4j.logger.org.apache.zookeeper=INFO
-log4j.logger.org.apache.curator=INFO
\ No newline at end of file
+log4j.logger.org.apache.curator=INFO
+# make jetty log level an explicit config, and not changing with rootLogger.
+log4j.logger.org.eclipse.jetty=INFO
\ No newline at end of file
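The properties line above pins a noisy library's logger to an explicit level so it stops following rootLogger. For readers who configure log4j 1.x programmatically instead of through a properties file, a minimal sketch of the same effect follows; only the logger name "org.eclipse.jetty" comes from the commit, the rest is illustrative:

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class JettyLogLevelDemo {
  public static void main(String[] args) {
    // Pin the jetty logger to INFO explicitly, mirroring what the
    // kms-log4j.properties line does declaratively.
    Logger.getLogger("org.eclipse.jetty").setLevel(Level.INFO);
    // The root logger can now be raised to DEBUG without dragging jetty along.
    Logger.getRootLogger().setLevel(Level.DEBUG);
  }
}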
[19/50] [abbrv] hadoop git commit: YARN-8502. Use path strings consistently for webservice endpoints in RMWebServices. Contributed by Szilard Nemeth.
Posted by bo...@apache.org.
YARN-8502. Use path strings consistently for webservice endpoints in RMWebServices. Contributed by Szilard Nemeth.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/82ac3aa6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/82ac3aa6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/82ac3aa6
Branch: refs/heads/YARN-7402
Commit: 82ac3aa6d0a83235cfac2805a444dd26efe5f9ce
Parents: d503f65
Author: Giovanni Matteo Fumarola <gi...@apache.org>
Authored: Tue Jul 10 10:36:17 2018 -0700
Committer: Giovanni Matteo Fumarola <gi...@apache.org>
Committed: Tue Jul 10 10:36:17 2018 -0700
----------------------------------------------------------------------
.../hadoop/yarn/server/resourcemanager/webapp/RMWSConsts.java | 3 +++
.../yarn/server/resourcemanager/webapp/RMWebServices.java | 6 +++---
2 files changed, 6 insertions(+), 3 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/82ac3aa6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWSConsts.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWSConsts.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWSConsts.java
index 29ae81b..9822878 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWSConsts.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWSConsts.java
@@ -42,6 +42,9 @@ public final class RMWSConsts {
/** Path for {@code RMWebServiceProtocol#getSchedulerInfo}. */
public static final String SCHEDULER = "/scheduler";
+ /** Path for {@code RMWebServices#updateSchedulerConfiguration}. */
+ public static final String SCHEDULER_CONF = "/scheduler-conf";
+
/** Path for {@code RMWebServiceProtocol#dumpSchedulerLogs}. */
public static final String SCHEDULER_LOGS = "/scheduler/logs";
http://git-wip-us.apache.org/repos/asf/hadoop/blob/82ac3aa6/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
index 864653c..15b58d7 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
@@ -955,7 +955,7 @@ public class RMWebServices extends WebServices implements RMWebServiceProtocol {
}
@GET
- @Path("/apps/{appid}/appattempts/{appattemptid}/containers/{containerid}")
+ @Path(RMWSConsts.GET_CONTAINER)
@Produces({ MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8,
MediaType.APPLICATION_XML + "; " + JettyUtils.UTF_8 })
@Override
@@ -969,7 +969,7 @@ public class RMWebServices extends WebServices implements RMWebServiceProtocol {
}
@GET
- @Path("/apps/{appid}/state")
+ @Path(RMWSConsts.APPS_APPID_STATE)
@Produces({ MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8,
MediaType.APPLICATION_XML + "; " + JettyUtils.UTF_8 })
@Override
@@ -2422,7 +2422,7 @@ public class RMWebServices extends WebServices implements RMWebServiceProtocol {
}
@PUT
- @Path("/scheduler-conf")
+ @Path(RMWSConsts.SCHEDULER_CONF)
@Produces({ MediaType.APPLICATION_JSON + "; " + JettyUtils.UTF_8,
MediaType.APPLICATION_XML + "; " + JettyUtils.UTF_8 })
@Consumes({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
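The change swaps inline path literals for constants defined once in RMWSConsts, so the protocol documentation and the endpoint annotation can never drift apart. A self-contained JAX-RS sketch of the same convention; the resource class and constant holder below are hypothetical stand-ins, not the actual RMWebServices types:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical constant holder mirroring the RMWSConsts pattern.
final class WSConsts {
  static final String SCHEDULER_CONF = "/scheduler-conf";
  private WSConsts() { }
}

@Path("/ws/v1/cluster")
public class ClusterResource {
  @GET
  @Path(WSConsts.SCHEDULER_CONF) // constant, not an inline "/scheduler-conf" literal
  @Produces(MediaType.APPLICATION_JSON)
  public String getSchedulerConf() {
    return "{}"; // placeholder payload for the sketch
  }
}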
[32/50] [abbrv] hadoop git commit: HADOOP-15349. S3Guard DDB retryBackoff to be more informative on limits exceeded. Contributed by Gabor Bota.
Posted by bo...@apache.org.
HADOOP-15349. S3Guard DDB retryBackoff to be more informative on limits exceeded. Contributed by Gabor Bota.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a08812a1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a08812a1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a08812a1
Branch: refs/heads/YARN-7402
Commit: a08812a1b10df059b26f6a216e6339490298ba28
Parents: 4f3f939
Author: Sean Mackrory <ma...@apache.org>
Authored: Thu Jul 12 16:46:02 2018 +0200
Committer: Sean Mackrory <ma...@apache.org>
Committed: Thu Jul 12 17:24:01 2018 +0200
----------------------------------------------------------------------
.../org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a08812a1/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
index 116827d..43849b1 100644
--- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
+++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
@@ -655,7 +655,8 @@ public class DynamoDBMetadataStore implements MetadataStore {
retryCount, 0, true);
if (action.action == RetryPolicy.RetryAction.RetryDecision.FAIL) {
throw new IOException(
- String.format("Max retries exceeded (%d) for DynamoDB",
+ String.format("Max retries exceeded (%d) for DynamoDB. This may be"
+ + " because write threshold of DynamoDB is set too low.",
retryCount));
} else {
LOG.debug("Sleeping {} msec before next retry", action.delayMillis);
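The hunk only changes the exception text, but the surrounding retry loop is easier to follow in one piece. Below is a self-contained sketch of that shape; the retry limit, exponential backoff, and class name are assumed for illustration, since the real code delegates those decisions to Hadoop's RetryPolicy:

import java.io.IOException;

public class BackoffDemo {
  static final int MAX_RETRIES = 9;      // assumed limit, for illustration only
  static final long BASE_DELAY_MS = 100; // assumed base backoff

  static void retryBackoff(int retryCount) throws IOException, InterruptedException {
    if (retryCount >= MAX_RETRIES) {
      // The informative message added by HADOOP-15349: name a likely cause,
      // not just the retry count.
      throw new IOException(String.format(
          "Max retries exceeded (%d) for DynamoDB. This may be because the"
              + " write threshold of DynamoDB is set too low.", retryCount));
    }
    long delayMillis = BASE_DELAY_MS * (1L << retryCount); // exponential backoff
    Thread.sleep(delayMillis);
  }
}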
[29/50] [abbrv] hadoop git commit: Revert "HDDS-242. Introduce
NEW_NODE, STALE_NODE and DEAD_NODE event" This reverts commit
a47ec5dac4a1cdfec788ce3296b4f610411911ea. There was a spurious file in this
commit. Revert to clean it.
Posted by bo...@apache.org.
Revert "HDDS-242. Introduce NEW_NODE, STALE_NODE and DEAD_NODE event"
This reverts commit a47ec5dac4a1cdfec788ce3296b4f610411911ea.
There was a spurious file in this commit. Revert to clean it.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b5678587
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b5678587
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b5678587
Branch: refs/heads/YARN-7402
Commit: b56785873a4ec9f6f5617e4252888b23837604e2
Parents: 418cc7f
Author: Anu Engineer <ae...@apache.org>
Authored: Wed Jul 11 12:03:42 2018 -0700
Committer: Anu Engineer <ae...@apache.org>
Committed: Wed Jul 11 12:03:42 2018 -0700
----------------------------------------------------------------------
.../scm/container/ContainerReportHandler.java | 47 ------------------
.../hadoop/hdds/scm/node/DeadNodeHandler.java | 42 ----------------
.../hadoop/hdds/scm/node/NewNodeHandler.java | 50 -------------------
.../hadoop/hdds/scm/node/NodeReportHandler.java | 42 ----------------
.../hadoop/hdds/scm/node/StaleNodeHandler.java | 42 ----------------
.../common/src/main/bin/ozone-config.sh | 51 --------------------
6 files changed, 274 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5678587/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
deleted file mode 100644
index 486162e..0000000
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
+++ /dev/null
@@ -1,47 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hdds.scm.container;
-
-import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
-import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
- .ContainerReportFromDatanode;
-import org.apache.hadoop.hdds.server.events.EventHandler;
-import org.apache.hadoop.hdds.server.events.EventPublisher;
-
-/**
- * Handles container reports from datanode.
- */
-public class ContainerReportHandler implements
- EventHandler<ContainerReportFromDatanode> {
-
- private final Mapping containerMapping;
- private final Node2ContainerMap node2ContainerMap;
-
- public ContainerReportHandler(Mapping containerMapping,
- Node2ContainerMap node2ContainerMap) {
- this.containerMapping = containerMapping;
- this.node2ContainerMap = node2ContainerMap;
- }
-
- @Override
- public void onMessage(ContainerReportFromDatanode containerReportFromDatanode,
- EventPublisher publisher) {
- // TODO: process container report.
- }
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5678587/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
deleted file mode 100644
index 427aef8..0000000
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
+++ /dev/null
@@ -1,42 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hdds.scm.node;
-
-import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
-import org.apache.hadoop.hdds.server.events.EventHandler;
-import org.apache.hadoop.hdds.server.events.EventPublisher;
-
-/**
- * Handles Dead Node event.
- */
-public class DeadNodeHandler implements EventHandler<DatanodeDetails> {
-
- private final Node2ContainerMap node2ContainerMap;
-
- public DeadNodeHandler(Node2ContainerMap node2ContainerMap) {
- this.node2ContainerMap = node2ContainerMap;
- }
-
- @Override
- public void onMessage(DatanodeDetails datanodeDetails,
- EventPublisher publisher) {
- //TODO: add logic to handle dead node.
- }
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5678587/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
deleted file mode 100644
index 79b75a5..0000000
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
+++ /dev/null
@@ -1,50 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hdds.scm.node;
-
-import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.scm.exceptions.SCMException;
-import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
-import org.apache.hadoop.hdds.server.events.EventHandler;
-import org.apache.hadoop.hdds.server.events.EventPublisher;
-
-import java.util.Collections;
-
-/**
- * Handles New Node event.
- */
-public class NewNodeHandler implements EventHandler<DatanodeDetails> {
-
- private final Node2ContainerMap node2ContainerMap;
-
- public NewNodeHandler(Node2ContainerMap node2ContainerMap) {
- this.node2ContainerMap = node2ContainerMap;
- }
-
- @Override
- public void onMessage(DatanodeDetails datanodeDetails,
- EventPublisher publisher) {
- try {
- node2ContainerMap.insertNewDatanode(datanodeDetails.getUuid(),
- Collections.emptySet());
- } catch (SCMException e) {
- // TODO: log exception message.
- }
- }
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5678587/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
deleted file mode 100644
index aa78d53..0000000
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
+++ /dev/null
@@ -1,42 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hdds.scm.node;
-
-import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
- .NodeReportFromDatanode;
-import org.apache.hadoop.hdds.server.events.EventHandler;
-import org.apache.hadoop.hdds.server.events.EventPublisher;
-
-/**
- * Handles Node Reports from datanode.
- */
-public class NodeReportHandler implements EventHandler<NodeReportFromDatanode> {
-
- private final NodeManager nodeManager;
-
- public NodeReportHandler(NodeManager nodeManager) {
- this.nodeManager = nodeManager;
- }
-
- @Override
- public void onMessage(NodeReportFromDatanode nodeReportFromDatanode,
- EventPublisher publisher) {
- //TODO: process node report.
- }
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5678587/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java
deleted file mode 100644
index b37dd93..0000000
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java
+++ /dev/null
@@ -1,42 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hdds.scm.node;
-
-import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
-import org.apache.hadoop.hdds.server.events.EventHandler;
-import org.apache.hadoop.hdds.server.events.EventPublisher;
-
-/**
- * Handles Stale node event.
- */
-public class StaleNodeHandler implements EventHandler<DatanodeDetails> {
-
- private final Node2ContainerMap node2ContainerMap;
-
- public StaleNodeHandler(Node2ContainerMap node2ContainerMap) {
- this.node2ContainerMap = node2ContainerMap;
- }
-
- @Override
- public void onMessage(DatanodeDetails datanodeDetails,
- EventPublisher publisher) {
- //TODO: logic to handle stale node.
- }
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5678587/hadoop-ozone/common/src/main/bin/ozone-config.sh
----------------------------------------------------------------------
diff --git a/hadoop-ozone/common/src/main/bin/ozone-config.sh b/hadoop-ozone/common/src/main/bin/ozone-config.sh
deleted file mode 100755
index 83f30ce..0000000
--- a/hadoop-ozone/common/src/main/bin/ozone-config.sh
+++ /dev/null
@@ -1,51 +0,0 @@
-#!/usr/bin/env bash
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# included in all the ozone scripts with source command
-# should not be executed directly
-
-function hadoop_subproject_init
-{
- if [[ -z "${HADOOP_OZONE_ENV_PROCESSED}" ]]; then
- if [[ -e "${HADOOP_CONF_DIR}/hdfs-env.sh" ]]; then
- . "${HADOOP_CONF_DIR}/hdfs-env.sh"
- export HADOOP_OZONES_ENV_PROCESSED=true
- fi
- fi
- HADOOP_OZONE_HOME="${HADOOP_OZONE_HOME:-$HADOOP_HOME}"
-
-}
-
-if [[ -z "${HADOOP_LIBEXEC_DIR}" ]]; then
- _hd_this="${BASH_SOURCE-$0}"
- HADOOP_LIBEXEC_DIR=$(cd -P -- "$(dirname -- "${_hd_this}")" >/dev/null && pwd -P)
-fi
-
-# shellcheck source=./hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
-
-if [[ -n "${HADOOP_COMMON_HOME}" ]] &&
- [[ -e "${HADOOP_COMMON_HOME}/libexec/hadoop-config.sh" ]]; then
- . "${HADOOP_COMMON_HOME}/libexec/hadoop-config.sh"
-elif [[ -e "${HADOOP_LIBEXEC_DIR}/hadoop-config.sh" ]]; then
- . "${HADOOP_LIBEXEC_DIR}/hadoop-config.sh"
-elif [ -e "${HADOOP_HOME}/libexec/hadoop-config.sh" ]; then
- . "${HADOOP_HOME}/libexec/hadoop-config.sh"
-else
- echo "ERROR: Hadoop common not found." 2>&1
- exit 1
-fi
-
[16/50] [abbrv] hadoop git commit: HADOOP-15384. distcp numListstatusThreads option doesn't get to -delete scan. Contributed by Steve Loughran.
Posted by bo...@apache.org.
HADOOP-15384. distcp numListstatusThreads option doesn't get to -delete scan. Contributed by Steve Loughran.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ca8b80bf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ca8b80bf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ca8b80bf
Branch: refs/heads/YARN-7402
Commit: ca8b80bf59c0570bb9172208d3a6c993a6854514
Parents: 9bd5bef
Author: Steve Loughran <st...@apache.org>
Authored: Tue Jul 10 10:43:59 2018 +0100
Committer: Steve Loughran <st...@apache.org>
Committed: Tue Jul 10 10:43:59 2018 +0100
----------------------------------------------------------------------
.../java/org/apache/hadoop/tools/DistCpOptions.java | 5 ++++-
.../org/apache/hadoop/tools/mapred/CopyCommitter.java | 13 +++++++++++--
.../tools/contract/AbstractContractDistCpTest.java | 2 +-
3 files changed, 16 insertions(+), 4 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ca8b80bf/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
index 9db0eb5..aca5d0e 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
@@ -387,7 +387,10 @@ public final class DistCpOptions {
DistCpOptionSwitch.addToConf(conf, DistCpOptionSwitch.TRACK_MISSING,
String.valueOf(trackPath));
}
-
+ if (numListstatusThreads > 0) {
+ DistCpOptionSwitch.addToConf(conf, DistCpOptionSwitch.NUM_LISTSTATUS_THREADS,
+ Integer.toString(numListstatusThreads));
+ }
}
/**
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ca8b80bf/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
index 07eacb0..38106fa 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
@@ -392,6 +392,9 @@ public class CopyCommitter extends FileOutputCommitter {
Path sourceListing = new Path(conf.get(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH));
FileSystem clusterFS = sourceListing.getFileSystem(conf);
Path sortedSourceListing = DistCpUtils.sortListing(conf, sourceListing);
+ long sourceListingCompleted = System.currentTimeMillis();
+ LOG.info("Source listing completed in {}",
+ formatDuration(sourceListingCompleted - listingStart));
// Similarly, create the listing of target-files. Sort alphabetically.
Path targetListing = new Path(sourceListing.getParent(), "targetListing.seq");
@@ -409,8 +412,8 @@ public class CopyCommitter extends FileOutputCommitter {
// Walk both source and target file listings.
// Delete all from target that doesn't also exist on source.
long deletionStart = System.currentTimeMillis();
- LOG.info("Listing completed in {}",
- formatDuration(deletionStart - listingStart));
+ LOG.info("Destination listing completed in {}",
+ formatDuration(deletionStart - sourceListingCompleted));
long deletedEntries = 0;
long filesDeleted = 0;
@@ -545,9 +548,15 @@ public class CopyCommitter extends FileOutputCommitter {
// Set up options to be the same from the CopyListing.buildListing's
// perspective, so to collect similar listings as when doing the copy
//
+ // thread count is picked up from the job
+ int threads = conf.getInt(DistCpConstants.CONF_LABEL_LISTSTATUS_THREADS,
+ DistCpConstants.DEFAULT_LISTSTATUS_THREADS);
+ LOG.info("Scanning destination directory {} with thread count: {}",
+ targetFinalPath, threads);
DistCpOptions options = new DistCpOptions.Builder(targets, resultNonePath)
.withOverwrite(overwrite)
.withSyncFolder(syncFolder)
+ .withNumListstatusThreads(threads)
.build();
DistCpContext distCpContext = new DistCpContext(options);
distCpContext.setTargetPathExists(targetPathExists);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ca8b80bf/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
index a5e0a03..1458991 100644
--- a/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
+++ b/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
@@ -572,7 +572,7 @@ public abstract class AbstractContractDistCpTest
private DistCpOptions buildWithStandardOptions(
DistCpOptions.Builder builder) {
return builder
- .withNumListstatusThreads(8)
+ .withNumListstatusThreads(DistCpOptions.MAX_NUM_LISTSTATUS_THREADS)
.build();
}
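Taken together, the fix means a thread count set on DistCpOptions is written into the job configuration and now also reaches the destination scan that -delete performs. A hedged usage sketch of the Builder, assuming it also exposes withDeleteMissing as on current branches; the paths and thread count are illustrative:

import java.util.Collections;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.tools.DistCpOptions;

public class DistCpOptionsDemo {
  public static void main(String[] args) {
    DistCpOptions options = new DistCpOptions.Builder(
        Collections.singletonList(new Path("hdfs://src/data")), // source
        new Path("hdfs://dst/data"))                            // target
        .withSyncFolder(true)        // -update
        .withDeleteMissing(true)     // -delete: triggers the destination scan
        .withNumListstatusThreads(8) // after this fix, honored by that scan too
        .build();
    System.out.println(options);
  }
}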
[43/50] [abbrv] hadoop git commit: HADOOP-15531. Use commons-text instead of commons-lang in some classes to fix deprecation warnings. Contributed by Takanobu Asanuma.
Posted by bo...@apache.org.
HADOOP-15531. Use commons-text instead of commons-lang in some classes to fix deprecation warnings. Contributed by Takanobu Asanuma.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/88625f5c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/88625f5c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/88625f5c
Branch: refs/heads/YARN-7402
Commit: 88625f5cd90766136a9ebd76a8d84b45a37e6c99
Parents: 17118f4
Author: Akira Ajisaka <aa...@apache.org>
Authored: Fri Jul 13 11:42:12 2018 -0400
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Fri Jul 13 11:42:12 2018 -0400
----------------------------------------------------------------------
hadoop-client-modules/hadoop-client-minicluster/pom.xml | 4 ++++
hadoop-common-project/hadoop-common/pom.xml | 5 +++++
.../org/apache/hadoop/conf/ReconfigurationServlet.java | 2 +-
.../hdfs/qjournal/server/GetJournalEditServlet.java | 2 +-
.../hadoop/hdfs/server/diskbalancer/command/Command.java | 6 +++---
.../hdfs/server/diskbalancer/command/PlanCommand.java | 4 ++--
.../hdfs/server/diskbalancer/command/ReportCommand.java | 10 +++++-----
.../apache/hadoop/hdfs/server/namenode/FSNamesystem.java | 2 +-
.../java/org/apache/hadoop/hdfs/tools/CacheAdmin.java | 2 +-
.../java/org/apache/hadoop/hdfs/TestDecommission.java | 4 ++--
.../java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java | 4 ++--
.../apache/hadoop/mapreduce/v2/app/webapp/TaskPage.java | 2 +-
.../apache/hadoop/mapreduce/v2/app/webapp/TasksBlock.java | 2 +-
.../apache/hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java | 2 +-
.../apache/hadoop/mapreduce/v2/hs/webapp/HsTaskPage.java | 2 +-
hadoop-project/pom.xml | 5 +++++
.../java/org/apache/hadoop/yarn/client/cli/TopCLI.java | 3 ++-
.../src/main/java/org/apache/hadoop/yarn/state/Graph.java | 2 +-
.../org/apache/hadoop/yarn/webapp/hamlet/HamletImpl.java | 2 +-
.../org/apache/hadoop/yarn/webapp/hamlet2/HamletImpl.java | 2 +-
.../java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java | 2 +-
.../java/org/apache/hadoop/yarn/webapp/view/TextView.java | 2 +-
.../apache/hadoop/yarn/server/webapp/AppAttemptBlock.java | 2 +-
.../org/apache/hadoop/yarn/server/webapp/AppBlock.java | 2 +-
.../org/apache/hadoop/yarn/server/webapp/AppsBlock.java | 2 +-
.../resourcemanager/webapp/FairSchedulerAppsBlock.java | 2 +-
.../server/resourcemanager/webapp/RMAppAttemptBlock.java | 2 +-
.../yarn/server/resourcemanager/webapp/RMAppBlock.java | 2 +-
.../yarn/server/resourcemanager/webapp/RMAppsBlock.java | 2 +-
.../hadoop/yarn/server/router/webapp/AppsBlock.java | 4 ++--
30 files changed, 52 insertions(+), 37 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-client-modules/hadoop-client-minicluster/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-client-modules/hadoop-client-minicluster/pom.xml b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
index 490281a..ea8d680 100644
--- a/hadoop-client-modules/hadoop-client-minicluster/pom.xml
+++ b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
@@ -171,6 +171,10 @@
<artifactId>commons-lang3</artifactId>
</exclusion>
<exclusion>
+ <groupId>org.apache.commons</groupId>
+ <artifactId>commons-text</artifactId>
+ </exclusion>
+ <exclusion>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
</exclusion>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-common-project/hadoop-common/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/pom.xml b/hadoop-common-project/hadoop-common/pom.xml
index 67a5a54..42554da 100644
--- a/hadoop-common-project/hadoop-common/pom.xml
+++ b/hadoop-common-project/hadoop-common/pom.xml
@@ -172,6 +172,11 @@
<scope>compile</scope>
</dependency>
<dependency>
+ <groupId>org.apache.commons</groupId>
+ <artifactId>commons-text</artifactId>
+ <scope>compile</scope>
+ </dependency>
+ <dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<scope>compile</scope>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurationServlet.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurationServlet.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurationServlet.java
index c5bdf4e..ef4eac6 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurationServlet.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurationServlet.java
@@ -18,7 +18,7 @@
package org.apache.hadoop.conf;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import java.util.Collection;
import java.util.Enumeration;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java
index 64ac11c..e967527 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java
@@ -31,7 +31,7 @@ import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.classification.InterfaceAudience;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java
index 968a5a7..eddef33 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java
@@ -27,7 +27,7 @@ import com.google.common.collect.Lists;
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.Option;
import org.apache.commons.lang3.StringUtils;
-import org.apache.commons.lang3.text.StrBuilder;
+import org.apache.commons.text.TextStringBuilder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.CommonConfigurationKeys;
@@ -491,7 +491,7 @@ public abstract class Command extends Configured implements Closeable {
/**
* Put output line to log and string buffer.
* */
- protected void recordOutput(final StrBuilder result,
+ protected void recordOutput(final TextStringBuilder result,
final String outputLine) {
LOG.info(outputLine);
result.appendln(outputLine);
@@ -501,7 +501,7 @@ public abstract class Command extends Configured implements Closeable {
* Parse top number of nodes to be processed.
* @return top number of nodes to be processed.
*/
- protected int parseTopNodes(final CommandLine cmd, final StrBuilder result)
+ protected int parseTopNodes(final CommandLine cmd, final TextStringBuilder result)
throws IllegalArgumentException {
String outputLine = "";
int nodes = 0;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/PlanCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/PlanCommand.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/PlanCommand.java
index 90cc0c4..dab9559 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/PlanCommand.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/PlanCommand.java
@@ -23,7 +23,7 @@ import com.google.common.base.Throwables;
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.lang3.StringUtils;
-import org.apache.commons.lang3.text.StrBuilder;
+import org.apache.commons.text.TextStringBuilder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
@@ -89,7 +89,7 @@ public class PlanCommand extends Command {
*/
@Override
public void execute(CommandLine cmd) throws Exception {
- StrBuilder result = new StrBuilder();
+ TextStringBuilder result = new TextStringBuilder();
String outputLine = "";
LOG.debug("Processing Plan Command.");
Preconditions.checkState(cmd.hasOption(DiskBalancerCLI.PLAN));
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/ReportCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/ReportCommand.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/ReportCommand.java
index 5f4e0f7..4f75aff 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/ReportCommand.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/ReportCommand.java
@@ -25,7 +25,7 @@ import java.util.ListIterator;
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.lang3.StringUtils;
-import org.apache.commons.lang3.text.StrBuilder;
+import org.apache.commons.text.TextStringBuilder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.diskbalancer.DiskBalancerException;
import org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerDataNode;
@@ -67,7 +67,7 @@ public class ReportCommand extends Command {
@Override
public void execute(CommandLine cmd) throws Exception {
- StrBuilder result = new StrBuilder();
+ TextStringBuilder result = new TextStringBuilder();
String outputLine = "Processing report command";
recordOutput(result, outputLine);
@@ -99,7 +99,7 @@ public class ReportCommand extends Command {
getPrintStream().println(result.toString());
}
- private void handleTopReport(final CommandLine cmd, final StrBuilder result,
+ private void handleTopReport(final CommandLine cmd, final TextStringBuilder result,
final String nodeFormat) throws IllegalArgumentException {
Collections.sort(getCluster().getNodes(), Collections.reverseOrder());
@@ -131,7 +131,7 @@ public class ReportCommand extends Command {
}
}
- private void handleNodeReport(final CommandLine cmd, StrBuilder result,
+ private void handleNodeReport(final CommandLine cmd, TextStringBuilder result,
final String nodeFormat, final String volumeFormat) throws Exception {
String outputLine = "";
/*
@@ -175,7 +175,7 @@ public class ReportCommand extends Command {
/**
* Put node report lines to string buffer.
*/
- private void recordNodeReport(StrBuilder result, DiskBalancerDataNode dbdn,
+ private void recordNodeReport(TextStringBuilder result, DiskBalancerDataNode dbdn,
final String nodeFormat, final String volumeFormat) throws Exception {
final String trueStr = "True";
final String falseStr = "False";
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index f94f6d0..66bc567 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -17,7 +17,7 @@
*/
package org.apache.hadoop.hdfs.server.namenode;
-import static org.apache.commons.lang3.StringEscapeUtils.escapeJava;
+import static org.apache.commons.text.StringEscapeUtils.escapeJava;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_TRASH_INTERVAL_DEFAULT;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_TRASH_INTERVAL_KEY;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_CALLER_CONTEXT_ENABLED_DEFAULT;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
index 9781ea1..9e7a3cb 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
@@ -22,7 +22,7 @@ import java.util.EnumSet;
import java.util.LinkedList;
import java.util.List;
-import org.apache.commons.lang3.text.WordUtils;
+import org.apache.commons.text.WordUtils;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
index 42b4257..bd266ed 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
@@ -38,7 +38,7 @@ import java.util.concurrent.atomic.AtomicBoolean;
import com.google.common.base.Supplier;
import com.google.common.collect.Lists;
-import org.apache.commons.lang3.text.StrBuilder;
+import org.apache.commons.text.TextStringBuilder;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.fs.FSDataOutputStream;
@@ -661,7 +661,7 @@ public class TestDecommission extends AdminStatesBaseTest {
}
private static String scanIntoString(final ByteArrayOutputStream baos) {
- final StrBuilder sb = new StrBuilder();
+ final TextStringBuilder sb = new TextStringBuilder();
final Scanner scanner = new Scanner(baos.toString());
while (scanner.hasNextLine()) {
sb.appendln(scanner.nextLine());
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
index 1245247..badb81b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
@@ -27,7 +27,7 @@ import com.google.common.base.Supplier;
import com.google.common.collect.Lists;
import org.apache.commons.io.FileUtils;
-import org.apache.commons.lang3.text.StrBuilder;
+import org.apache.commons.text.TextStringBuilder;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
@@ -518,7 +518,7 @@ public class TestDFSAdmin {
}
private static String scanIntoString(final ByteArrayOutputStream baos) {
- final StrBuilder sb = new StrBuilder();
+ final TextStringBuilder sb = new TextStringBuilder();
final Scanner scanner = new Scanner(baos.toString());
while (scanner.hasNextLine()) {
sb.appendln(scanner.nextLine());
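The mechanical part of the migration is visible above: TextStringBuilder is a drop-in replacement for the deprecated StrBuilder (appendln included), and StringEscapeUtils keeps its method names under the new org.apache.commons.text package. A small self-contained sketch:

import org.apache.commons.text.StringEscapeUtils;
import org.apache.commons.text.TextStringBuilder;

public class CommonsTextDemo {
  public static void main(String[] args) {
    TextStringBuilder sb = new TextStringBuilder(); // was StrBuilder in lang3
    sb.appendln("line one");                        // appendln carried over unchanged
    sb.appendln(StringEscapeUtils.escapeJava("tab\there")); // same method, new package
    System.out.print(sb.toString());
  }
}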
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TaskPage.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TaskPage.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TaskPage.java
index 944f65e..4b8cde3 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TaskPage.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TaskPage.java
@@ -27,7 +27,7 @@ import static org.apache.hadoop.yarn.webapp.view.JQueryUI.tableInit;
import java.util.EnumSet;
import java.util.Collection;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.MRConfig;
import org.apache.hadoop.mapreduce.v2.api.records.JobId;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TasksBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TasksBlock.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TasksBlock.java
index a2d8fa9..a6d9f52 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TasksBlock.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TasksBlock.java
@@ -24,7 +24,7 @@ import static org.apache.hadoop.yarn.util.StringHelper.join;
import static org.apache.hadoop.yarn.webapp.view.JQueryUI.C_PROGRESSBAR;
import static org.apache.hadoop.yarn.webapp.view.JQueryUI.C_PROGRESSBAR_VALUE;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.mapreduce.v2.api.records.TaskType;
import org.apache.hadoop.mapreduce.v2.app.job.Task;
import org.apache.hadoop.mapreduce.v2.app.webapp.dao.TaskInfo;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java
index 216bdce..3f4daf9 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java
@@ -21,7 +21,7 @@ package org.apache.hadoop.mapreduce.v2.hs.webapp;
import java.text.SimpleDateFormat;
import java.util.Date;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.MRConfig;
import org.apache.hadoop.mapreduce.v2.app.AppContext;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsTaskPage.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsTaskPage.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsTaskPage.java
index e8e76d1..8defc4f 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsTaskPage.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsTaskPage.java
@@ -29,7 +29,7 @@ import static org.apache.hadoop.yarn.webapp.view.JQueryUI.tableInit;
import java.util.Collection;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.mapreduce.v2.api.records.TaskId;
import org.apache.hadoop.mapreduce.v2.api.records.TaskType;
import org.apache.hadoop.mapreduce.v2.app.job.TaskAttempt;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-project/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 8e28afe..387a3da 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1070,6 +1070,11 @@
<version>3.7</version>
</dependency>
<dependency>
+ <groupId>org.apache.commons</groupId>
+ <artifactId>commons-text</artifactId>
+ <version>1.4</version>
+ </dependency>
+ <dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>${slf4j.version}</version>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java
index b890bee..aed5258 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java
@@ -867,7 +867,8 @@ public class TopCLI extends YarnCLI {
TimeUnit.MILLISECONDS.toMinutes(uptime)
- TimeUnit.HOURS.toMinutes(TimeUnit.MILLISECONDS.toHours(uptime));
String uptimeStr = String.format("%dd, %d:%d", days, hours, minutes);
- String currentTime = DateFormatUtils.ISO_TIME_NO_T_FORMAT.format(now);
+ String currentTime = DateFormatUtils.ISO_8601_EXTENDED_TIME_FORMAT
+ .format(now);
ret.append(CLEAR_LINE);
ret.append(limitLineLength(String.format(
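The TopCLI hunk tracks a constant rename inside commons-lang3 itself: the deprecated ISO_TIME_NO_T_FORMAT became ISO_8601_EXTENDED_TIME_FORMAT, which (to my understanding) formats as HH:mm:ss. A one-method sketch of the new constant in use:

import org.apache.commons.lang3.time.DateFormatUtils;

public class IsoTimeDemo {
  public static void main(String[] args) {
    // Formats the current time as HH:mm:ss, as TopCLI's header line does.
    System.out.println(DateFormatUtils.ISO_8601_EXTENDED_TIME_FORMAT
        .format(System.currentTimeMillis()));
  }
}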
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/Graph.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/Graph.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/Graph.java
index ab884fa..11e6f86 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/Graph.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/Graph.java
@@ -26,7 +26,7 @@ import java.util.HashSet;
import java.util.List;
import java.util.Set;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.classification.InterfaceAudience.Private;
@Private
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletImpl.java
index 1562b1e..b0ff19f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletImpl.java
@@ -28,7 +28,7 @@ import java.util.EnumSet;
import static java.util.EnumSet.*;
import java.util.Iterator;
-import static org.apache.commons.lang3.StringEscapeUtils.*;
+import static org.apache.commons.text.StringEscapeUtils.*;
import static org.apache.hadoop.yarn.webapp.hamlet.HamletImpl.EOpt.*;
import org.apache.hadoop.classification.InterfaceAudience;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletImpl.java
index 1fcab23..1c4db06 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletImpl.java
@@ -28,7 +28,7 @@ import java.util.EnumSet;
import static java.util.EnumSet.*;
import java.util.Iterator;
-import static org.apache.commons.lang3.StringEscapeUtils.*;
+import static org.apache.commons.text.StringEscapeUtils.*;
import static org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl.EOpt.*;
import org.apache.hadoop.classification.InterfaceAudience;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java
index 91e5f89..b8e954d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java
@@ -18,7 +18,7 @@
package org.apache.hadoop.yarn.webapp.view;
-import static org.apache.commons.lang3.StringEscapeUtils.escapeEcmaScript;
+import static org.apache.commons.text.StringEscapeUtils.escapeEcmaScript;
import static org.apache.hadoop.yarn.util.StringHelper.djoin;
import static org.apache.hadoop.yarn.util.StringHelper.join;
import static org.apache.hadoop.yarn.util.StringHelper.split;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TextView.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TextView.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TextView.java
index e67f960..4b08220 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TextView.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TextView.java
@@ -20,7 +20,7 @@ package org.apache.hadoop.yarn.webapp.view;
import java.io.PrintWriter;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.yarn.webapp.View;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
index 38c79ba..2d53dc9 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
@@ -25,7 +25,7 @@ import java.security.PrivilegedExceptionAction;
import java.util.Collection;
import java.util.List;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.ApplicationBaseProtocol;
import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptReportRequest;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java
index 3c1018c..0c7a536 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java
@@ -28,7 +28,7 @@ import java.util.Collection;
import java.util.List;
import java.util.Map;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.security.UserGroupInformation;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppsBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppsBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppsBlock.java
index 291a572..29843b5 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppsBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppsBlock.java
@@ -32,7 +32,7 @@ import java.util.Collection;
import java.util.EnumSet;
import java.util.List;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.commons.lang3.Range;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.util.StringUtils;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java
index 4bc3182..14ad277 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java
@@ -29,7 +29,7 @@ import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.StringUtils;
import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppAttemptBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppAttemptBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppAttemptBlock.java
index 18595de..43a6ac9 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppAttemptBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppAttemptBlock.java
@@ -27,7 +27,7 @@ import java.io.IOException;
import java.util.Collection;
import java.util.List;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptReportRequest;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppBlock.java
index 80d27f7..d260400 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppBlock.java
@@ -25,7 +25,7 @@ import java.util.Collection;
import java.util.List;
import java.util.Set;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptsRequest;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppsBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppsBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppsBlock.java
index 25b3a4d..b1c0cd9 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppsBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppsBlock.java
@@ -26,7 +26,7 @@ import java.io.IOException;
import java.util.List;
import java.util.Set;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.util.StringUtils;
import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest;
import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java
index aafc5f6..028bacd 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java
@@ -18,8 +18,8 @@
package org.apache.hadoop.yarn.server.router.webapp;
-import static org.apache.commons.lang3.StringEscapeUtils.escapeHtml4;
-import static org.apache.commons.lang3.StringEscapeUtils.escapeEcmaScript;
+import static org.apache.commons.text.StringEscapeUtils.escapeHtml4;
+import static org.apache.commons.text.StringEscapeUtils.escapeEcmaScript;
import static org.apache.hadoop.yarn.util.StringHelper.join;
import static org.apache.hadoop.yarn.webapp.view.JQueryUI.C_PROGRESSBAR;
import static org.apache.hadoop.yarn.webapp.view.JQueryUI.C_PROGRESSBAR_VALUE;
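The hunks above are all the same mechanical migration: StringEscapeUtils moved from commons-lang3, where it is deprecated since 3.6, to commons-text, keeping the method names. A minimal sketch of the post-migration usage, assuming the org.apache.commons:commons-text artifact is on the classpath (demo class and sample string are ours):

// Only the import changes; method names and behaviour stay the same.
import static org.apache.commons.text.StringEscapeUtils.escapeEcmaScript;
import static org.apache.commons.text.StringEscapeUtils.escapeHtml4;

public class EscapeDemo {
  public static void main(String[] args) {
    String raw = "<script>alert('x')</script>";
    // HTML4 entity escaping: angle brackets become &lt; and &gt;.
    System.out.println(escapeHtml4(raw));
    // EcmaScript escaping: quotes and forward slashes are backslash-escaped.
    System.out.println(escapeEcmaScript(raw));
  }
}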
[12/50] [abbrv] hadoop git commit: Revert "HDDS-224. Create metrics for Event Watcher."
Posted by bo...@apache.org.
Revert "HDDS-224. Create metrics for Event Watcher."
This reverts commit cb5e225868a069d6d16244b462ebada44465dce8.
The JIRA number is wrong; reverting to fix it.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3c0a66ab
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3c0a66ab
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3c0a66ab
Branch: refs/heads/YARN-7402
Commit: 3c0a66abe632277e89fccd8dced9e71ca5d87df0
Parents: cb5e225
Author: Anu Engineer <ae...@apache.org>
Authored: Mon Jul 9 13:03:57 2018 -0700
Committer: Anu Engineer <ae...@apache.org>
Committed: Mon Jul 9 13:03:57 2018 -0700
----------------------------------------------------------------------
.../hadoop/hdds/server/events/EventQueue.java | 108 ++++++++-----------
.../server/events/SingleThreadExecutor.java | 35 ++----
.../hdds/server/events/TestEventQueue.java | 35 +++++-
3 files changed, 87 insertions(+), 91 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c0a66ab/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java
index 7e29223..44d85f5 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/EventQueue.java
@@ -18,11 +18,7 @@
package org.apache.hadoop.hdds.server.events;
import com.google.common.annotations.VisibleForTesting;
-
-import org.apache.hadoop.util.StringUtils;
import org.apache.hadoop.util.Time;
-
-import com.google.common.base.Preconditions;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -46,8 +42,6 @@ public class EventQueue implements EventPublisher, AutoCloseable {
private static final Logger LOG =
LoggerFactory.getLogger(EventQueue.class);
- private static final String EXECUTOR_NAME_SEPARATOR = "For";
-
private final Map<Event, Map<EventExecutor, List<EventHandler>>> executors =
new HashMap<>();
@@ -57,73 +51,37 @@ public class EventQueue implements EventPublisher, AutoCloseable {
public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void addHandler(
EVENT_TYPE event, EventHandler<PAYLOAD> handler) {
- this.addHandler(event, handler, generateHandlerName(handler));
- }
-
- /**
- * Add new handler to the event queue.
- * <p>
- * By default a separate single-threaded executor will be dedicated to
- * delivering the events to the registered event handler.
- *
- * @param event Triggering event.
- * @param handler Handler of event (will be called from a separate
- * thread)
- * @param handlerName The name of handler (should be unique together with
- * the event name)
- * @param <PAYLOAD> The type of the event payload.
- * @param <EVENT_TYPE> The type of the event identifier.
- */
- public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void addHandler(
- EVENT_TYPE event, EventHandler<PAYLOAD> handler, String handlerName) {
- validateEvent(event);
- Preconditions.checkNotNull(handler, "Handler name should not be null.");
- String executorName =
- StringUtils.camelize(event.getName()) + EXECUTOR_NAME_SEPARATOR
- + handlerName;
- this.addHandler(event, new SingleThreadExecutor<>(executorName), handler);
- }
-
- private <EVENT_TYPE extends Event<?>> void validateEvent(EVENT_TYPE event) {
- Preconditions
- .checkArgument(!event.getName().contains(EXECUTOR_NAME_SEPARATOR),
- "Event name should not contain " + EXECUTOR_NAME_SEPARATOR
- + " string.");
+ this.addHandler(event, new SingleThreadExecutor<>(
+ event.getName()), handler);
}
- private <PAYLOAD> String generateHandlerName(EventHandler<PAYLOAD> handler) {
- if (!"".equals(handler.getClass().getSimpleName())) {
- return handler.getClass().getSimpleName();
- } else {
- return handler.getClass().getName();
- }
- }
-
- /**
- * Add event handler with custom executor.
- *
- * @param event Triggering event.
- * @param executor The executor implementation to deliver events from
- * separate threads. Please keep in mind that
- * registering metrics is the responsibility of the
- * caller.
- * @param handler Handler of event (will be called from a separate
- * thread)
- * @param <PAYLOAD> The type of the event payload.
- * @param <EVENT_TYPE> The type of the event identifier.
- */
public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void addHandler(
- EVENT_TYPE event, EventExecutor<PAYLOAD> executor,
+ EVENT_TYPE event,
+ EventExecutor<PAYLOAD> executor,
EventHandler<PAYLOAD> handler) {
- validateEvent(event);
+
executors.putIfAbsent(event, new HashMap<>());
executors.get(event).putIfAbsent(executor, new ArrayList<>());
- executors.get(event).get(executor).add(handler);
+ executors.get(event)
+ .get(executor)
+ .add(handler);
}
+ /**
+ * Creates one executor with multiple event handlers.
+ */
+ public void addHandlerGroup(String name, HandlerForEvent<?>...
+ eventsAndHandlers) {
+ SingleThreadExecutor sharedExecutor =
+ new SingleThreadExecutor(name);
+ for (HandlerForEvent handlerForEvent : eventsAndHandlers) {
+ addHandler(handlerForEvent.event, sharedExecutor,
+ handlerForEvent.handler);
+ }
+ }
/**
* Route an event with payload to the right listener(s).
@@ -225,5 +183,31 @@ public class EventQueue implements EventPublisher, AutoCloseable {
});
}
+ /**
+ * Event identifier together with the handler.
+ *
+ * @param <PAYLOAD>
+ */
+ public static class HandlerForEvent<PAYLOAD> {
+
+ private final Event<PAYLOAD> event;
+
+ private final EventHandler<PAYLOAD> handler;
+
+ public HandlerForEvent(
+ Event<PAYLOAD> event,
+ EventHandler<PAYLOAD> handler) {
+ this.event = event;
+ this.handler = handler;
+ }
+
+ public Event<PAYLOAD> getEvent() {
+ return event;
+ }
+
+ public EventHandler<PAYLOAD> getHandler() {
+ return handler;
+ }
+ }
}
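The restored addHandlerGroup registers several event/handler pairs on one shared SingleThreadExecutor, so all grouped handlers are delivered on a single thread; the unit test further down verifies exactly that. A usage sketch under stated assumptions: TypedEvent is the Event implementation from the same hadoop-hdds events package, and the event names, group name, and lambdas here are ours:

import org.apache.hadoop.hdds.server.events.EventQueue;
import org.apache.hadoop.hdds.server.events.TypedEvent;

public class HandlerGroupDemo {
  // Illustrative events: TypedEvent pairs a payload type with a name.
  private static final TypedEvent<Long> EVENT_A =
      new TypedEvent<>(Long.class, "EventA");
  private static final TypedEvent<Long> EVENT_B =
      new TypedEvent<>(Long.class, "EventB");

  public static void main(String[] args) {
    EventQueue queue = new EventQueue();
    // Both handlers share one executor, hence one delivery thread.
    queue.addHandlerGroup("demoGroup",
        new EventQueue.HandlerForEvent<>(EVENT_A,
            (payload, publisher) -> System.out.println("A: " + payload)),
        new EventQueue.HandlerForEvent<>(EVENT_B,
            (payload, publisher) -> System.out.println("B: " + payload)));

    queue.fireEvent(EVENT_A, 23L);
    queue.fireEvent(EVENT_B, 42L);
    queue.processAll(1000);  // block up to 1s until the queue drains
    queue.close();
  }
}

Per the restored constructor call, the group name is passed straight to SingleThreadExecutor, whose worker threads carry the "EventQueue" name prefix.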
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c0a66ab/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/SingleThreadExecutor.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/SingleThreadExecutor.java b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/SingleThreadExecutor.java
index 3253f2d..a64e3d7 100644
--- a/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/SingleThreadExecutor.java
+++ b/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/events/SingleThreadExecutor.java
@@ -23,18 +23,13 @@ import org.slf4j.LoggerFactory;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
-
-import org.apache.hadoop.metrics2.annotation.Metric;
-import org.apache.hadoop.metrics2.annotation.Metrics;
-import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
-import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+import java.util.concurrent.atomic.AtomicLong;
/**
* Simple EventExecutor to call all the event handlers one by one.
*
* @param <T>
*/
-@Metrics(context = "EventQueue")
public class SingleThreadExecutor<T> implements EventExecutor<T> {
public static final String THREAD_NAME_PREFIX = "EventQueue";
@@ -46,24 +41,14 @@ public class SingleThreadExecutor<T> implements EventExecutor<T> {
private final ThreadPoolExecutor executor;
- @Metric
- private MutableCounterLong queued;
+ private final AtomicLong queuedCount = new AtomicLong(0);
- @Metric
- private MutableCounterLong done;
+ private final AtomicLong successfulCount = new AtomicLong(0);
- @Metric
- private MutableCounterLong failed;
+ private final AtomicLong failedCount = new AtomicLong(0);
- /**
- * Create SingleThreadExecutor.
- *
- * @param name Unique name used in monitoring and metrics.
- */
public SingleThreadExecutor(String name) {
this.name = name;
- DefaultMetricsSystem.instance()
- .register("EventQueue" + name, "Event Executor metrics ", this);
LinkedBlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<>();
executor =
@@ -79,31 +64,31 @@ public class SingleThreadExecutor<T> implements EventExecutor<T> {
@Override
public void onMessage(EventHandler<T> handler, T message, EventPublisher
publisher) {
- queued.incr();
+ queuedCount.incrementAndGet();
executor.execute(() -> {
try {
handler.onMessage(message, publisher);
- done.incr();
+ successfulCount.incrementAndGet();
} catch (Exception ex) {
LOG.error("Error on execution message {}", message, ex);
- failed.incr();
+ failedCount.incrementAndGet();
}
});
}
@Override
public long failedEvents() {
- return failed.value();
+ return failedCount.get();
}
@Override
public long successfulEvents() {
- return done.value();
+ return successfulCount.get();
}
@Override
public long queuedEvents() {
- return queued.value();
+ return queuedCount.get();
}
@Override
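The revert trades the metrics2-registered counters for plain AtomicLongs: the counts stay thread-safe, but they are no longer published through DefaultMetricsSystem, so callers who want them exported must register them explicitly. A standalone sketch of the pattern (illustrative class, not the Hadoop source):

import java.util.concurrent.atomic.AtomicLong;

// Thread-safe counters without metrics registration: each
// incrementAndGet() is atomic, and get() reads a consistent value.
class EventCounters {
  private final AtomicLong queued = new AtomicLong(0);
  private final AtomicLong successful = new AtomicLong(0);
  private final AtomicLong failed = new AtomicLong(0);

  void onQueued()  { queued.incrementAndGet(); }
  void onSuccess() { successful.incrementAndGet(); }
  void onFailure() { failed.incrementAndGet(); }

  long pending() {
    // Events queued but not yet finished, successfully or otherwise.
    return queued.get() - successful.get() - failed.get();
  }
}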
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3c0a66ab/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueue.java b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueue.java
index 2bdf705..3944409 100644
--- a/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueue.java
+++ b/hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/events/TestEventQueue.java
@@ -25,8 +25,6 @@ import org.junit.Test;
import java.util.Set;
import java.util.stream.Collectors;
-import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
-
/**
* Testing the basic functionality of the event queue.
*/
@@ -46,13 +44,11 @@ public class TestEventQueue {
@Before
public void startEventQueue() {
- DefaultMetricsSystem.initialize(getClass().getSimpleName());
queue = new EventQueue();
}
@After
public void stopEventQueue() {
- DefaultMetricsSystem.shutdown();
queue.close();
}
@@ -83,4 +79,35 @@ public class TestEventQueue {
}
+ @Test
+ public void handlerGroup() {
+ final long[] result = new long[2];
+ queue.addHandlerGroup(
+ "group",
+ new EventQueue.HandlerForEvent<>(EVENT3, (payload, publisher) ->
+ result[0] = payload),
+ new EventQueue.HandlerForEvent<>(EVENT4, (payload, publisher) ->
+ result[1] = payload)
+ );
+
+ queue.fireEvent(EVENT3, 23L);
+ queue.fireEvent(EVENT4, 42L);
+
+ queue.processAll(1000);
+
+ Assert.assertEquals(23, result[0]);
+ Assert.assertEquals(42, result[1]);
+
+ Set<String> eventQueueThreadNames =
+ Thread.getAllStackTraces().keySet()
+ .stream()
+ .filter(t -> t.getName().startsWith(SingleThreadExecutor
+ .THREAD_NAME_PREFIX))
+ .map(Thread::getName)
+ .collect(Collectors.toSet());
+ System.out.println(eventQueueThreadNames);
+ Assert.assertEquals(1, eventQueueThreadNames.size());
+
+ }
+
}
\ No newline at end of file
[28/50] [abbrv] hadoop git commit: HDFS-13729. Fix broken links to RBF documentation. Contributed by Gabor Bota.
Posted by bo...@apache.org.
HDFS-13729. Fix broken links to RBF documentation. Contributed by Gabor Bota.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/418cc7f3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/418cc7f3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/418cc7f3
Branch: refs/heads/YARN-7402
Commit: 418cc7f3aeabedc57c94aa9d4c4248c1476ac90e
Parents: 162228e
Author: Akira Ajisaka <aa...@apache.org>
Authored: Wed Jul 11 14:46:43 2018 -0400
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Wed Jul 11 14:46:43 2018 -0400
----------------------------------------------------------------------
.../hadoop-hdfs/src/site/markdown/HDFSCommands.md | 4 ++--
.../hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md | 2 +-
hadoop-project/src/site/markdown/index.md.vm | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/418cc7f3/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index 9ed69bf..391b71b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -420,7 +420,7 @@ Runs a HDFS dfsadmin client.
Usage: `hdfs dfsrouter`
-Runs the DFS router. See [Router](./HDFSRouterFederation.html#Router) for more info.
+Runs the DFS router. See [Router](../hadoop-hdfs-rbf/HDFSRouterFederation.html#Router) for more info.
### `dfsrouteradmin`
@@ -449,7 +449,7 @@ Usage:
| `-nameservice` `disable` `enable` *nameservice* | Disable/enable a name service from the federation. If disabled, requests will not go to that name service. |
| `-getDisabledNameservices` | Get the name services that are disabled in the federation. |
-The commands for managing Router-based federation. See [Mount table management](./HDFSRouterFederation.html#Mount_table_management) for more info.
+The commands for managing Router-based federation. See [Mount table management](../hadoop-hdfs-rbf/HDFSRouterFederation.html#Mount_table_management) for more info.
### `diskbalancer`
http://git-wip-us.apache.org/repos/asf/hadoop/blob/418cc7f3/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
index 01e7076..b8d5321 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
@@ -38,7 +38,7 @@ is limited to creating a *read-only image* of a remote namespace that implements
to serve the image. Specifically, reads from a snapshot of a remote namespace are
supported. Adding a remote namespace to an existing/running namenode, refreshing the
remote snapshot, unmounting, and writes are not available in this release. One
-can use [ViewFs](./ViewFs.html) and [RBF](HDFSRouterFederation.html) to
+can use [ViewFs](./ViewFs.html) and [RBF](../hadoop-hdfs-rbf/HDFSRouterFederation.html) to
integrate namespaces with `PROVIDED` storage into an existing deployment.
Creating HDFS Clusters with `PROVIDED` Storage
http://git-wip-us.apache.org/repos/asf/hadoop/blob/418cc7f3/hadoop-project/src/site/markdown/index.md.vm
----------------------------------------------------------------------
diff --git a/hadoop-project/src/site/markdown/index.md.vm b/hadoop-project/src/site/markdown/index.md.vm
index 8b9cfda..438145a 100644
--- a/hadoop-project/src/site/markdown/index.md.vm
+++ b/hadoop-project/src/site/markdown/index.md.vm
@@ -225,7 +225,7 @@ cluster for existing HDFS clients.
See [HDFS-10467](https://issues.apache.org/jira/browse/HDFS-10467) and the
HDFS Router-based Federation
-[documentation](./hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html) for
+[documentation](./hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html) for
more details.
API-based configuration of Capacity Scheduler queue configuration