Posted to common-commits@hadoop.apache.org by zh...@apache.org on 2018/07/20 10:32:33 UTC
[01/50] hadoop git commit: YARN-8491. TestServiceCLI#testEnableFastLaunch fails when umask is 077. Contributed by K G Bakthavachalam.
Repository: hadoop
Updated Branches:
refs/heads/HDFS-13572 950dea86f -> 48c41c1ea
YARN-8491. TestServiceCLI#testEnableFastLaunch fails when umask is 077. Contributed by K G Bakthavachalam.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/52e1bc85
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/52e1bc85
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/52e1bc85
Branch: refs/heads/HDFS-13572
Commit: 52e1bc8539ce769f47743d8b2d318a54c3887ba0
Parents: 7f1d3d0
Author: bibinchundatt <bi...@apache.org>
Authored: Wed Jul 11 16:19:51 2018 +0530
Committer: bibinchundatt <bi...@apache.org>
Committed: Wed Jul 11 16:20:29 2018 +0530
----------------------------------------------------------------------
.../org/apache/hadoop/yarn/service/client/TestServiceCLI.java | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/52e1bc85/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceCLI.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceCLI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceCLI.java
index 78a8198..363fe91 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceCLI.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceCLI.java
@@ -121,12 +121,16 @@ public class TestServiceCLI {
basedir = new File("target", "apps");
basedirProp = YARN_SERVICE_BASE_PATH + "=" + basedir.getAbsolutePath();
conf.set(YARN_SERVICE_BASE_PATH, basedir.getAbsolutePath());
+ fs = new SliderFileSystem(conf);
dependencyTarGzBaseDir = tmpFolder.getRoot();
+ fs.getFileSystem()
+ .setPermission(new Path(dependencyTarGzBaseDir.getAbsolutePath()),
+ new FsPermission("755"));
dependencyTarGz = getDependencyTarGz(dependencyTarGzBaseDir);
dependencyTarGzProp = DEPENDENCY_TARBALL_PATH + "=" + dependencyTarGz
.toString();
conf.set(DEPENDENCY_TARBALL_PATH, dependencyTarGz.toString());
- fs = new SliderFileSystem(conf);
+
if (basedir.exists()) {
FileUtils.deleteDirectory(basedir);
} else {
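The root cause is the standard POSIX rule that creation modes are filtered through the process umask: a directory requested as 755 comes out as 700 under umask 077, which is what broke testEnableFastLaunch. A minimal C sketch of the effect (illustrative only; it is not part of the commit):

#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    umask(077);                /* restrictive umask, as in the bug report */
    mkdir("demo-dir", 0755);   /* requested 0755 is masked down to 0700 */

    struct stat st;
    if (stat("demo-dir", &st) == 0) {
        printf("mode: %o\n", st.st_mode & 0777);   /* prints 700, not 755 */
    }

    /* The remedy: set the mode explicitly after creation, so the effective
     * permissions no longer depend on the process umask. */
    chmod("demo-dir", 0755);

    rmdir("demo-dir");
    return 0;
}

The test fix applies the same remedy at the Java level: it constructs the SliderFileSystem before the tarball directory is used and explicitly sets 755 on dependencyTarGzBaseDir, so the effective mode no longer depends on whatever umask the test JVM inherited.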
[33/50] hadoop git commit: YARN-8538. Fixed memory leaks in container-executor and test cases. Contributed by Billie Rinaldi
Posted by zh...@apache.org.
YARN-8538. Fixed memory leaks in container-executor and test cases.
Contributed by Billie Rinaldi
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/efb4e274
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/efb4e274
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/efb4e274
Branch: refs/heads/HDFS-13572
Commit: efb4e274e505736cb9771a5654a0fc82bbd5396f
Parents: d2874e0
Author: Eric Yang <ey...@apache.org>
Authored: Mon Jul 16 17:38:49 2018 -0400
Committer: Eric Yang <ey...@apache.org>
Committed: Mon Jul 16 17:38:49 2018 -0400
----------------------------------------------------------------------
.../container-executor/impl/configuration.c | 3 +
.../main/native/container-executor/impl/main.c | 4 +-
.../impl/modules/cgroups/cgroups-operations.c | 2 +
.../container-executor/impl/utils/docker-util.c | 234 ++++++++++---------
.../test/modules/cgroups/test-cgroups-module.cc | 8 +
.../test/modules/fpga/test-fpga-module.cc | 24 +-
.../test/modules/gpu/test-gpu-module.cc | 24 +-
.../test/test_configuration.cc | 34 ++-
.../native/container-executor/test/test_util.cc | 5 +
.../test/utils/test-string-utils.cc | 6 +
.../test/utils/test_docker_util.cc | 128 ++++++----
11 files changed, 307 insertions(+), 165 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/efb4e274/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
index f23cff0..baaa4dc 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
@@ -58,6 +58,7 @@ void free_section(struct section *section) {
section->name = NULL;
}
section->size = 0;
+ free(section);
}
//clean up method for freeing configuration
@@ -466,6 +467,7 @@ static void merge_sections(struct section *section1, struct section *section2, c
section1->size += section2->size;
if (free_second_section) {
free(section2->name);
+ free(section2->kv_pairs);
memset(section2, 0, sizeof(*section2));
free(section2);
}
@@ -708,6 +710,7 @@ char *get_config_path(const char *argv0) {
const char *orig_conf_file = HADOOP_CONF_DIR "/" CONF_FILENAME;
char *conf_file = resolve_config_path(orig_conf_file, executable_file);
+ free(executable_file);
if (conf_file == NULL) {
fprintf(ERRORFILE, "Configuration file %s not found.\n", orig_conf_file);
}
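All three configuration.c hunks enforce the same ownership rule: a cleanup routine must release every level of an allocation — the element strings, the array that holds them, and the struct itself — or one level leaks. A hedged C sketch of that rule (the struct and field names below are simplified stand-ins for the real struct section):

#include <stdlib.h>

struct demo_section {
    char *name;
    int size;
    char **entries;
};

static void free_demo_section(struct demo_section *s) {
    if (s == NULL) {
        return;
    }
    for (int i = 0; i < s->size; i++) {
        free(s->entries[i]);   /* level 1: each element */
    }
    free(s->entries);          /* level 2: the array (the merge_sections leak) */
    free(s->name);
    free(s);                   /* level 3: the struct itself (the free_section leak) */
}

int main(void) {
    struct demo_section *s = calloc(1, sizeof(*s));
    free_demo_section(s);      /* safe on an empty section: free(NULL) is a no-op */
    return 0;
}

The get_config_path hunk is the remaining variant of the same bug: the added free(executable_file) makes explicit that get_config_path still owns that intermediate string once resolve_config_path has used it.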
http://git-wip-us.apache.org/repos/asf/hadoop/blob/efb4e274/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
index 6ab522f..76fa39f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
@@ -128,11 +128,13 @@ static void flush_and_close_log_files() {
LOGFILE = NULL;
}
-if (ERRORFILE != NULL) {
+ if (ERRORFILE != NULL) {
fflush(ERRORFILE);
fclose(ERRORFILE);
ERRORFILE = NULL;
}
+
+ free_executor_configurations();
}
/** Validates the current container-executor setup. Causes program exit
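The main.c change extends the one shutdown path so that flushing the log files and freeing the cached executor configuration always happen together. A hedged sketch of the pattern (the names and the atexit registration are illustrative, not how container-executor wires it up):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static FILE *logfile = NULL;
static char *cached_config = NULL;   /* stands in for the parsed executor config */

/* Single teardown point: no exit path can flush the logs yet leak the
 * configuration, because both live in the same function. */
static void shutdown_cleanup(void) {
    if (logfile != NULL) {
        fflush(logfile);
        fclose(logfile);
        logfile = NULL;
    }
    free(cached_config);   /* analogous to the added free_executor_configurations() */
    cached_config = NULL;
}

int main(void) {
    logfile = fopen("demo.log", "w");
    cached_config = strdup("banned.users=root");
    atexit(shutdown_cleanup);
    if (logfile != NULL) {
        fprintf(logfile, "config: %s\n", cached_config);
    }
    return 0;   /* shutdown_cleanup runs on every normal exit */
}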
http://git-wip-us.apache.org/repos/asf/hadoop/blob/efb4e274/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/cgroups/cgroups-operations.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/cgroups/cgroups-operations.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/cgroups/cgroups-operations.c
index b234109..ea1d36d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/cgroups/cgroups-operations.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/cgroups/cgroups-operations.c
@@ -83,6 +83,8 @@ char* get_cgroups_path_to_write(
}
cleanup:
+ free((void *) cgroups_root);
+ free((void *) yarn_hierarchy_name);
if (failed) {
if (buffer) {
free(buffer);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/efb4e274/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
index d364227..580cd37 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
@@ -143,9 +143,9 @@ int check_trusted_image(const struct configuration *command_config, const struct
fprintf(ERRORFILE, "image: %s is not trusted.\n", image_name);
ret = INVALID_DOCKER_IMAGE_TRUST;
}
- free(image_name);
free_and_exit:
+ free(image_name);
free_values(privileged_registry);
return ret;
}
@@ -195,7 +195,8 @@ static int add_param_to_command_if_allowed(const struct configuration *command_c
if (strcmp(key, "net") != 0) {
if (check_trusted_image(command_config, executor_cfg) != 0) {
fprintf(ERRORFILE, "Disable %s for untrusted image\n", key);
- return INVALID_DOCKER_IMAGE_TRUST;
+ ret = INVALID_DOCKER_IMAGE_TRUST;
+ goto free_and_exit;
}
}
@@ -225,13 +226,13 @@ static int add_param_to_command_if_allowed(const struct configuration *command_c
dst = strndup(values[i], tmp_ptr - values[i]);
pattern = strdup(permitted_values[j] + 6);
ret = execute_regex_match(pattern, dst);
+ free(dst);
+ free(pattern);
} else {
ret = strncmp(values[i], permitted_values[j], tmp_ptr - values[i]);
}
}
if (ret == 0) {
- free(dst);
- free(pattern);
permitted = 1;
break;
}
@@ -259,7 +260,7 @@ static int add_param_to_command_if_allowed(const struct configuration *command_c
}
}
- free_and_exit:
+free_and_exit:
free_values(values);
free_values(permitted_values);
return ret;
@@ -379,6 +380,7 @@ int get_docker_command(const char *command_file, const struct configuration *con
ret = read_config(command_file, &command_config);
if (ret != 0) {
+ free_configuration(&command_config);
return INVALID_COMMAND_FILE;
}
@@ -392,36 +394,41 @@ int get_docker_command(const char *command_file, const struct configuration *con
ret = add_to_args(args, docker);
free(docker);
if (ret != 0) {
+ free_configuration(&command_config);
return BUFFER_TOO_SMALL;
}
ret = add_docker_config_param(&command_config, args);
if (ret != 0) {
+ free_configuration(&command_config);
return BUFFER_TOO_SMALL;
}
char *command = get_configuration_value("docker-command", DOCKER_COMMAND_FILE_SECTION, &command_config);
+ free_configuration(&command_config);
if (strcmp(DOCKER_INSPECT_COMMAND, command) == 0) {
- return get_docker_inspect_command(command_file, conf, args);
+ ret = get_docker_inspect_command(command_file, conf, args);
} else if (strcmp(DOCKER_KILL_COMMAND, command) == 0) {
- return get_docker_kill_command(command_file, conf, args);
+ ret = get_docker_kill_command(command_file, conf, args);
} else if (strcmp(DOCKER_LOAD_COMMAND, command) == 0) {
- return get_docker_load_command(command_file, conf, args);
+ ret = get_docker_load_command(command_file, conf, args);
} else if (strcmp(DOCKER_PULL_COMMAND, command) == 0) {
- return get_docker_pull_command(command_file, conf, args);
+ ret = get_docker_pull_command(command_file, conf, args);
} else if (strcmp(DOCKER_RM_COMMAND, command) == 0) {
- return get_docker_rm_command(command_file, conf, args);
+ ret = get_docker_rm_command(command_file, conf, args);
} else if (strcmp(DOCKER_RUN_COMMAND, command) == 0) {
- return get_docker_run_command(command_file, conf, args);
+ ret = get_docker_run_command(command_file, conf, args);
} else if (strcmp(DOCKER_STOP_COMMAND, command) == 0) {
- return get_docker_stop_command(command_file, conf, args);
+ ret = get_docker_stop_command(command_file, conf, args);
} else if (strcmp(DOCKER_VOLUME_COMMAND, command) == 0) {
- return get_docker_volume_command(command_file, conf, args);
+ ret = get_docker_volume_command(command_file, conf, args);
} else if (strcmp(DOCKER_START_COMMAND, command) == 0) {
- return get_docker_start_command(command_file, conf, args);
+ ret = get_docker_start_command(command_file, conf, args);
} else {
- return UNKNOWN_DOCKER_COMMAND;
+ ret = UNKNOWN_DOCKER_COMMAND;
}
+ free(command);
+ return ret;
}
// check if a key is permitted in the configuration
@@ -456,7 +463,7 @@ int get_docker_volume_command(const char *command_file, const struct configurati
struct configuration command_config = {0, NULL};
ret = read_and_verify_command_file(command_file, DOCKER_VOLUME_COMMAND, &command_config);
if (ret != 0) {
- return ret;
+ goto cleanup;
}
sub_command = get_configuration_value("sub-command", DOCKER_COMMAND_FILE_SECTION, &command_config);
@@ -533,6 +540,7 @@ int get_docker_volume_command(const char *command_file, const struct configurati
}
cleanup:
+ free_configuration(&command_config);
free(driver);
free(volume_name);
free(sub_command);
@@ -548,18 +556,19 @@ int get_docker_inspect_command(const char *command_file, const struct configurat
struct configuration command_config = {0, NULL};
ret = read_and_verify_command_file(command_file, DOCKER_INSPECT_COMMAND, &command_config);
if (ret != 0) {
- return ret;
+ goto free_and_exit;
}
container_name = get_configuration_value("name", DOCKER_COMMAND_FILE_SECTION, &command_config);
if (container_name == NULL || validate_container_name(container_name) != 0) {
- return INVALID_DOCKER_CONTAINER_NAME;
+ ret = INVALID_DOCKER_CONTAINER_NAME;
+ goto free_and_exit;
}
format = get_configuration_value("format", DOCKER_COMMAND_FILE_SECTION, &command_config);
if (format == NULL) {
- free(container_name);
- return INVALID_DOCKER_INSPECT_FORMAT;
+ ret = INVALID_DOCKER_INSPECT_FORMAT;
+ goto free_and_exit;
}
for (i = 0; i < 2; ++i) {
if (strcmp(format, valid_format_strings[i]) == 0) {
@@ -569,9 +578,8 @@ int get_docker_inspect_command(const char *command_file, const struct configurat
}
if (valid_format != 1) {
fprintf(ERRORFILE, "Invalid format option '%s' not permitted\n", format);
- free(container_name);
- free(format);
- return INVALID_DOCKER_INSPECT_FORMAT;
+ ret = INVALID_DOCKER_INSPECT_FORMAT;
+ goto free_and_exit;
}
ret = add_to_args(args, DOCKER_INSPECT_COMMAND);
@@ -588,14 +596,12 @@ int get_docker_inspect_command(const char *command_file, const struct configurat
if (ret != 0) {
goto free_and_exit;
}
- free(format);
- free(container_name);
- return 0;
- free_and_exit:
+free_and_exit:
+ free_configuration(&command_config);
free(format);
free(container_name);
- return BUFFER_TOO_SMALL;
+ return ret;
}
int get_docker_load_command(const char *command_file, const struct configuration *conf, args *args) {
@@ -604,12 +610,13 @@ int get_docker_load_command(const char *command_file, const struct configuration
struct configuration command_config = {0, NULL};
ret = read_and_verify_command_file(command_file, DOCKER_LOAD_COMMAND, &command_config);
if (ret != 0) {
- return ret;
+ goto free_and_exit;
}
image_name = get_configuration_value("image", DOCKER_COMMAND_FILE_SECTION, &command_config);
if (image_name == NULL) {
- return INVALID_DOCKER_IMAGE_NAME;
+ ret = INVALID_DOCKER_IMAGE_NAME;
+ goto free_and_exit;
}
ret = add_to_args(args, DOCKER_LOAD_COMMAND);
@@ -617,14 +624,14 @@ int get_docker_load_command(const char *command_file, const struct configuration
char *tmp_buffer = make_string("--i=%s", image_name);
ret = add_to_args(args, tmp_buffer);
free(tmp_buffer);
- free(image_name);
if (ret != 0) {
- return BUFFER_TOO_SMALL;
+ ret = BUFFER_TOO_SMALL;
}
- return 0;
}
+free_and_exit:
free(image_name);
- return BUFFER_TOO_SMALL;
+ free_configuration(&command_config);
+ return ret;
}
static int validate_docker_image_name(const char *image_name) {
@@ -638,26 +645,23 @@ int get_docker_pull_command(const char *command_file, const struct configuration
struct configuration command_config = {0, NULL};
ret = read_and_verify_command_file(command_file, DOCKER_PULL_COMMAND, &command_config);
if (ret != 0) {
- return ret;
+ goto free_pull;
}
image_name = get_configuration_value("image", DOCKER_COMMAND_FILE_SECTION, &command_config);
if (image_name == NULL || validate_docker_image_name(image_name) != 0) {
- return INVALID_DOCKER_IMAGE_NAME;
+ ret = INVALID_DOCKER_IMAGE_NAME;
+ goto free_pull;
}
ret = add_to_args(args, DOCKER_PULL_COMMAND);
if (ret == 0) {
ret = add_to_args(args, image_name);
- free(image_name);
- if (ret != 0) {
- goto free_pull;
- }
- return 0;
}
- free_pull:
+free_pull:
free(image_name);
- return BUFFER_TOO_SMALL;
+ free_configuration(&command_config);
+ return ret;
}
int get_docker_rm_command(const char *command_file, const struct configuration *conf, args *args) {
@@ -666,25 +670,26 @@ int get_docker_rm_command(const char *command_file, const struct configuration *
struct configuration command_config = {0, NULL};
ret = read_and_verify_command_file(command_file, DOCKER_RM_COMMAND, &command_config);
if (ret != 0) {
- return ret;
+ goto free_and_exit;
}
container_name = get_configuration_value("name", DOCKER_COMMAND_FILE_SECTION, &command_config);
if (container_name == NULL || validate_container_name(container_name) != 0) {
- return INVALID_DOCKER_CONTAINER_NAME;
+ ret = INVALID_DOCKER_CONTAINER_NAME;
+ goto free_and_exit;
}
ret = add_to_args(args, DOCKER_RM_COMMAND);
if (ret == 0) {
ret = add_to_args(args, container_name);
- free(container_name);
if (ret != 0) {
- return BUFFER_TOO_SMALL;
+ ret = BUFFER_TOO_SMALL;
}
- return 0;
}
+free_and_exit:
free(container_name);
- return BUFFER_TOO_SMALL;
+ free_configuration(&command_config);
+ return ret;
}
int get_docker_stop_command(const char *command_file, const struct configuration *conf,
@@ -696,12 +701,13 @@ int get_docker_stop_command(const char *command_file, const struct configuration
struct configuration command_config = {0, NULL};
ret = read_and_verify_command_file(command_file, DOCKER_STOP_COMMAND, &command_config);
if (ret != 0) {
- return ret;
+ goto free_and_exit;
}
container_name = get_configuration_value("name", DOCKER_COMMAND_FILE_SECTION, &command_config);
if (container_name == NULL || validate_container_name(container_name) != 0) {
- return INVALID_DOCKER_CONTAINER_NAME;
+ ret = INVALID_DOCKER_CONTAINER_NAME;
+ goto free_and_exit;
}
ret = add_to_args(args, DOCKER_STOP_COMMAND);
@@ -726,7 +732,9 @@ int get_docker_stop_command(const char *command_file, const struct configuration
ret = add_to_args(args, container_name);
}
free_and_exit:
+ free(value);
free(container_name);
+ free_configuration(&command_config);
return ret;
}
@@ -739,12 +747,13 @@ int get_docker_kill_command(const char *command_file, const struct configuration
struct configuration command_config = {0, NULL};
ret = read_and_verify_command_file(command_file, DOCKER_KILL_COMMAND, &command_config);
if (ret != 0) {
- return ret;
+ goto free_and_exit;
}
container_name = get_configuration_value("name", DOCKER_COMMAND_FILE_SECTION, &command_config);
if (container_name == NULL || validate_container_name(container_name) != 0) {
- return INVALID_DOCKER_CONTAINER_NAME;
+ ret = INVALID_DOCKER_CONTAINER_NAME;
+ goto free_and_exit;
}
ret = add_to_args(args, DOCKER_KILL_COMMAND);
@@ -770,7 +779,9 @@ int get_docker_kill_command(const char *command_file, const struct configuration
ret = add_to_args(args, container_name);
}
free_and_exit:
+ free(value);
free(container_name);
+ free_configuration(&command_config);
return ret;
}
@@ -780,12 +791,13 @@ int get_docker_start_command(const char *command_file, const struct configuratio
struct configuration command_config = {0, NULL};
ret = read_and_verify_command_file(command_file, DOCKER_START_COMMAND, &command_config);
if (ret != 0) {
- return ret;
+ goto free_and_exit;
}
container_name = get_configuration_value("name", DOCKER_COMMAND_FILE_SECTION, &command_config);
if (container_name == NULL || validate_container_name(container_name) != 0) {
- return INVALID_DOCKER_CONTAINER_NAME;
+ ret = INVALID_DOCKER_CONTAINER_NAME;
+ goto free_and_exit;
}
ret = add_to_args(args, DOCKER_START_COMMAND);
@@ -795,6 +807,7 @@ int get_docker_start_command(const char *command_file, const struct configuratio
ret = add_to_args(args, container_name);
free_and_exit:
free(container_name);
+ free_configuration(&command_config);
return ret;
}
@@ -1092,7 +1105,9 @@ static int add_mounts(const struct configuration *command_config, const struct c
char **permitted_rw_mounts = get_configuration_values_delimiter("docker.allowed.rw-mounts",
CONTAINER_EXECUTOR_CFG_DOCKER_SECTION, conf, ",");
char **values = get_configuration_values_delimiter(key, DOCKER_COMMAND_FILE_SECTION, command_config, ",");
- const char *container_executor_cfg_path = normalize_mount(get_config_path(""), 0);
+ char *config_path = get_config_path("");
+ const char *container_executor_cfg_path = normalize_mount(config_path, 0);
+ free(config_path);
int i = 0, permitted_rw = 0, permitted_ro = 0, ret = 0;
if (ro != 0) {
ro_suffix = ":ro";
@@ -1172,6 +1187,8 @@ static int add_mounts(const struct configuration *command_config, const struct c
ret = BUFFER_TOO_SMALL;
goto free_and_exit;
}
+ free(mount_src);
+ mount_src = NULL;
}
}
@@ -1312,7 +1329,7 @@ static int set_privileged(const struct configuration *command_config, const stru
}
}
- free_and_exit:
+free_and_exit:
free(value);
free(privileged_container_enabled);
free(user);
@@ -1325,42 +1342,41 @@ int get_docker_run_command(const char *command_file, const struct configuration
char *tmp_buffer = NULL;
char **launch_command = NULL;
char *privileged = NULL;
+ char *no_new_privileges_enabled = NULL;
struct configuration command_config = {0, NULL};
ret = read_and_verify_command_file(command_file, DOCKER_RUN_COMMAND, &command_config);
if (ret != 0) {
- return ret;
+ goto free_and_exit;
}
container_name = get_configuration_value("name", DOCKER_COMMAND_FILE_SECTION, &command_config);
if (container_name == NULL || validate_container_name(container_name) != 0) {
- if (container_name != NULL) {
- free(container_name);
- }
- return INVALID_DOCKER_CONTAINER_NAME;
+ ret = INVALID_DOCKER_CONTAINER_NAME;
+ goto free_and_exit;
}
user = get_configuration_value("user", DOCKER_COMMAND_FILE_SECTION, &command_config);
if (user == NULL) {
- return INVALID_DOCKER_USER_NAME;
+ ret = INVALID_DOCKER_USER_NAME;
+ goto free_and_exit;
}
image = get_configuration_value("image", DOCKER_COMMAND_FILE_SECTION, &command_config);
if (image == NULL || validate_docker_image_name(image) != 0) {
- if (image != NULL) {
- free(image);
- }
- return INVALID_DOCKER_IMAGE_NAME;
+ ret = INVALID_DOCKER_IMAGE_NAME;
+ goto free_and_exit;
}
ret = add_to_args(args, DOCKER_RUN_COMMAND);
if(ret != 0) {
- reset_args(args);
- return BUFFER_TOO_SMALL;
+ ret = BUFFER_TOO_SMALL;
+ goto free_and_exit;
}
tmp_buffer = make_string("--name=%s", container_name);
ret = add_to_args(args, tmp_buffer);
+ free(tmp_buffer);
if (ret != 0) {
- reset_args(args);
- return BUFFER_TOO_SMALL;
+ ret = BUFFER_TOO_SMALL;
+ goto free_and_exit;
}
privileged = get_configuration_value("privileged", DOCKER_COMMAND_FILE_SECTION, &command_config);
@@ -1370,111 +1386,95 @@ int get_docker_run_command(const char *command_file, const struct configuration
ret = add_to_args(args, user_buffer);
free(user_buffer);
if (ret != 0) {
- reset_args(args);
- return BUFFER_TOO_SMALL;
+ ret = BUFFER_TOO_SMALL;
+ goto free_and_exit;
}
- char *no_new_privileges_enabled =
+ no_new_privileges_enabled =
get_configuration_value("docker.no-new-privileges.enabled",
CONTAINER_EXECUTOR_CFG_DOCKER_SECTION, conf);
if (no_new_privileges_enabled != NULL &&
strcasecmp(no_new_privileges_enabled, "True") == 0) {
ret = add_to_args(args, "--security-opt=no-new-privileges");
if (ret != 0) {
- reset_args(args);
- return BUFFER_TOO_SMALL;
+ ret = BUFFER_TOO_SMALL;
+ goto free_and_exit;
}
}
- free(no_new_privileges_enabled);
}
- free(privileged);
ret = detach_container(&command_config, args);
if (ret != 0) {
- reset_args(args);
- return ret;
+ goto free_and_exit;
}
ret = rm_container_on_exit(&command_config, args);
if (ret != 0) {
- reset_args(args);
- return ret;
+ goto free_and_exit;
}
ret = set_container_workdir(&command_config, args);
if (ret != 0) {
- reset_args(args);
- return ret;
+ goto free_and_exit;
}
ret = set_network(&command_config, conf, args);
if (ret != 0) {
- reset_args(args);
- return ret;
+ goto free_and_exit;
}
ret = set_pid_namespace(&command_config, conf, args);
if (ret != 0) {
- reset_args(args);
- return ret;
+ goto free_and_exit;
}
ret = add_ro_mounts(&command_config, conf, args);
if (ret != 0) {
- reset_args(args);
- return ret;
+ goto free_and_exit;
}
ret = add_rw_mounts(&command_config, conf, args);
if (ret != 0) {
- reset_args(args);
- return ret;
+ goto free_and_exit;
}
ret = set_cgroup_parent(&command_config, args);
if (ret != 0) {
- reset_args(args);
- return ret;
+ goto free_and_exit;
}
ret = set_privileged(&command_config, conf, args);
if (ret != 0) {
- reset_args(args);
- return ret;
+ goto free_and_exit;
}
ret = set_capabilities(&command_config, conf, args);
if (ret != 0) {
- reset_args(args);
- return ret;
+ goto free_and_exit;
}
ret = set_hostname(&command_config, args);
if (ret != 0) {
- reset_args(args);
- return ret;
+ goto free_and_exit;
}
ret = set_group_add(&command_config, args);
if (ret != 0) {
- reset_args(args);
- return ret;
+ goto free_and_exit;
}
ret = set_devices(&command_config, conf, args);
if (ret != 0) {
- reset_args(args);
- return ret;
+ goto free_and_exit;
}
ret = set_env(&command_config, args);
if (ret != 0) {
- return BUFFER_TOO_SMALL;
+ goto free_and_exit;
}
ret = add_to_args(args, image);
if (ret != 0) {
- reset_args(args);
- return BUFFER_TOO_SMALL;
+ goto free_and_exit;
}
launch_command = get_configuration_values_delimiter("launch-command", DOCKER_COMMAND_FILE_SECTION, &command_config,
@@ -1483,13 +1483,21 @@ int get_docker_run_command(const char *command_file, const struct configuration
for (i = 0; launch_command[i] != NULL; ++i) {
ret = add_to_args(args, launch_command[i]);
if (ret != 0) {
- free_values(launch_command);
- reset_args(args);
- return BUFFER_TOO_SMALL;
+ ret = BUFFER_TOO_SMALL;
+ goto free_and_exit;
}
}
- free_values(launch_command);
}
- free(tmp_buffer);
- return 0;
+free_and_exit:
+ if (ret != 0) {
+ reset_args(args);
+ }
+ free(user);
+ free(image);
+ free(privileged);
+ free(no_new_privileges_enabled);
+ free(container_name);
+ free_values(launch_command);
+ free_configuration(&command_config);
+ return ret;
}
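Nearly all of the docker-util.c churn is one mechanical transformation: replace each early "return CODE;" with "ret = CODE; goto free_and_exit;" and collect every free() after a single label. A hedged sketch of the idiom (simplified; these are not the real get_docker_* signatures or error codes):

#include <stdlib.h>
#include <string.h>

enum { OK = 0, E_BAD_NAME = 1, E_BAD_IMAGE = 2 };   /* illustrative codes */

static int build_command(const char *name, const char *image) {
    int ret = OK;
    char *name_copy = NULL;    /* initialize to NULL so free() is always safe */
    char *image_copy = NULL;

    name_copy = strdup(name);
    if (name_copy == NULL || name_copy[0] == '\0') {
        ret = E_BAD_NAME;
        goto free_and_exit;    /* no early return: fall through to the frees */
    }

    image_copy = strdup(image);
    if (image_copy == NULL) {
        ret = E_BAD_IMAGE;
        goto free_and_exit;
    }

    /* ... build the docker argument vector here ... */

free_and_exit:
    free(name_copy);           /* each allocation freed exactly once, on */
    free(image_copy);          /* success and on failure alike           */
    return ret;
}

int main(void) {
    return build_command("container_1", "hadoop/image");
}

get_docker_run_command follows the same shape and additionally calls reset_args(args) under its label when ret != 0, so a failed build never hands back a half-built argument vector.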
http://git-wip-us.apache.org/repos/asf/hadoop/blob/efb4e274/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/modules/cgroups/test-cgroups-module.cc
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/modules/cgroups/test-cgroups-module.cc b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/modules/cgroups/test-cgroups-module.cc
index 8ffbe88..078456d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/modules/cgroups/test-cgroups-module.cc
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/modules/cgroups/test-cgroups-module.cc
@@ -73,6 +73,8 @@ TEST_F(TestCGroupsModule, test_cgroups_get_path_without_define_root) {
char* path = get_cgroups_path_to_write("devices", "deny", "container_1");
ASSERT_TRUE(NULL == path) << "Should fail.\n";
+
+ free_executor_configurations();
}
TEST_F(TestCGroupsModule, test_cgroups_get_path_without_define_yarn_hierarchy) {
@@ -92,6 +94,8 @@ TEST_F(TestCGroupsModule, test_cgroups_get_path_without_define_yarn_hierarchy) {
char* path = get_cgroups_path_to_write("devices", "deny", "container_1");
ASSERT_TRUE(NULL == path) << "Should fail.\n";
+
+ free_executor_configurations();
}
TEST_F(TestCGroupsModule, test_cgroups_get_path_succeeded) {
@@ -117,5 +121,9 @@ TEST_F(TestCGroupsModule, test_cgroups_get_path_succeeded) {
ASSERT_STREQ(EXPECTED, path)
<< "Return cgroup-path-to-write is not expected\n";
+
+ free(path);
+
+ free_executor_configurations();
}
} // namespace ContainerExecutor
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/hadoop/blob/efb4e274/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/modules/fpga/test-fpga-module.cc
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/modules/fpga/test-fpga-module.cc b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/modules/fpga/test-fpga-module.cc
index 1e5c5ea..a5d1dff 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/modules/fpga/test-fpga-module.cc
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/modules/fpga/test-fpga-module.cc
@@ -84,6 +84,13 @@ static int mock_update_cgroups_parameters(
return 0;
}
+static void clear_cgroups_parameters_invoked() {
+ for (std::vector<const char*>::size_type i = 0; i < cgroups_parameters_invoked.size(); i++) {
+ free((void *) cgroups_parameters_invoked[i]);
+ }
+ cgroups_parameters_invoked.clear();
+}
+
static void verify_param_updated_to_cgroups(
int argc, const char** argv) {
ASSERT_EQ(argc, cgroups_parameters_invoked.size());
@@ -133,6 +140,9 @@ static void test_fpga_module_enabled_disabled(int enabled) {
EXPECTED_RC = -1;
}
ASSERT_EQ(EXPECTED_RC, rc);
+
+ clear_cgroups_parameters_invoked();
+ free_executor_configurations();
}
TEST_F(TestFpgaModule, test_verify_fpga_module_calls_cgroup_parameter) {
@@ -146,7 +156,7 @@ TEST_F(TestFpgaModule, test_verify_fpga_module_calls_cgroup_parameter) {
container_id };
/* Test case 1: block 2 devices */
- cgroups_parameters_invoked.clear();
+ clear_cgroups_parameters_invoked();
int rc = handle_fpga_request(&mock_update_cgroups_parameters,
"fpga", 5, argv);
ASSERT_EQ(0, rc) << "Should success.\n";
@@ -157,7 +167,7 @@ TEST_F(TestFpgaModule, test_verify_fpga_module_calls_cgroup_parameter) {
verify_param_updated_to_cgroups(8, expected_cgroups_argv);
/* Test case 2: block 0 devices */
- cgroups_parameters_invoked.clear();
+ clear_cgroups_parameters_invoked();
char* argv_1[] = { (char*) "--module-fpga", (char*) "--container_id", container_id };
rc = handle_fpga_request(&mock_update_cgroups_parameters,
"fpga", 3, argv_1);
@@ -167,7 +177,7 @@ TEST_F(TestFpgaModule, test_verify_fpga_module_calls_cgroup_parameter) {
verify_param_updated_to_cgroups(0, NULL);
/* Test case 3: block 2 non-sequential devices */
- cgroups_parameters_invoked.clear();
+ clear_cgroups_parameters_invoked();
char* argv_2[] = { (char*) "--module-fpga", (char*) "--excluded_fpgas", (char*) "1,3",
(char*) "--container_id", container_id };
rc = handle_fpga_request(&mock_update_cgroups_parameters,
@@ -178,6 +188,9 @@ TEST_F(TestFpgaModule, test_verify_fpga_module_calls_cgroup_parameter) {
const char* expected_cgroups_argv_2[] = { "devices", "deny", container_id, "c 246:1 rwm",
"devices", "deny", container_id, "c 246:3 rwm"};
verify_param_updated_to_cgroups(8, expected_cgroups_argv_2);
+
+ clear_cgroups_parameters_invoked();
+ free_executor_configurations();
}
TEST_F(TestFpgaModule, test_illegal_cli_parameters) {
@@ -193,6 +206,7 @@ TEST_F(TestFpgaModule, test_illegal_cli_parameters) {
ASSERT_NE(0, rc) << "Should fail.\n";
// Illegal container id - 2
+ clear_cgroups_parameters_invoked();
char* argv_1[] = { (char*) "--module-fpga", (char*) "--excluded_fpgas", (char*) "0,1",
(char*) "--container_id", (char*) "container_1" };
rc = handle_fpga_request(&mock_update_cgroups_parameters,
@@ -200,10 +214,14 @@ TEST_F(TestFpgaModule, test_illegal_cli_parameters) {
ASSERT_NE(0, rc) << "Should fail.\n";
// Illegal container id - 3
+ clear_cgroups_parameters_invoked();
char* argv_2[] = { (char*) "--module-fpga", (char*) "--excluded_fpgas", (char*) "0,1" };
rc = handle_fpga_request(&mock_update_cgroups_parameters,
"fpga", 3, argv_2);
ASSERT_NE(0, rc) << "Should fail.\n";
+
+ clear_cgroups_parameters_invoked();
+ free_executor_configurations();
}
TEST_F(TestFpgaModule, test_fpga_module_disabled) {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/efb4e274/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/modules/gpu/test-gpu-module.cc
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/modules/gpu/test-gpu-module.cc b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/modules/gpu/test-gpu-module.cc
index b3d93dc..fcf8b0b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/modules/gpu/test-gpu-module.cc
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/modules/gpu/test-gpu-module.cc
@@ -84,6 +84,13 @@ static int mock_update_cgroups_parameters(
return 0;
}
+static void clear_cgroups_parameters_invoked() {
+ for (std::vector<const char*>::size_type i = 0; i < cgroups_parameters_invoked.size(); i++) {
+ free((void *) cgroups_parameters_invoked[i]);
+ }
+ cgroups_parameters_invoked.clear();
+}
+
static void verify_param_updated_to_cgroups(
int argc, const char** argv) {
ASSERT_EQ(argc, cgroups_parameters_invoked.size());
@@ -133,6 +140,9 @@ static void test_gpu_module_enabled_disabled(int enabled) {
EXPECTED_RC = -1;
}
ASSERT_EQ(EXPECTED_RC, rc);
+
+ clear_cgroups_parameters_invoked();
+ free_executor_configurations();
}
TEST_F(TestGpuModule, test_verify_gpu_module_calls_cgroup_parameter) {
@@ -146,7 +156,7 @@ TEST_F(TestGpuModule, test_verify_gpu_module_calls_cgroup_parameter) {
container_id };
/* Test case 1: block 2 devices */
- cgroups_parameters_invoked.clear();
+ clear_cgroups_parameters_invoked();
int rc = handle_gpu_request(&mock_update_cgroups_parameters,
"gpu", 5, argv);
ASSERT_EQ(0, rc) << "Should success.\n";
@@ -157,7 +167,7 @@ TEST_F(TestGpuModule, test_verify_gpu_module_calls_cgroup_parameter) {
verify_param_updated_to_cgroups(8, expected_cgroups_argv);
/* Test case 2: block 0 devices */
- cgroups_parameters_invoked.clear();
+ clear_cgroups_parameters_invoked();
char* argv_1[] = { (char*) "--module-gpu", (char*) "--container_id", container_id };
rc = handle_gpu_request(&mock_update_cgroups_parameters,
"gpu", 3, argv_1);
@@ -167,7 +177,7 @@ TEST_F(TestGpuModule, test_verify_gpu_module_calls_cgroup_parameter) {
verify_param_updated_to_cgroups(0, NULL);
/* Test case 3: block 2 non-sequential devices */
- cgroups_parameters_invoked.clear();
+ clear_cgroups_parameters_invoked();
char* argv_2[] = { (char*) "--module-gpu", (char*) "--excluded_gpus", (char*) "1,3",
(char*) "--container_id", container_id };
rc = handle_gpu_request(&mock_update_cgroups_parameters,
@@ -178,6 +188,9 @@ TEST_F(TestGpuModule, test_verify_gpu_module_calls_cgroup_parameter) {
const char* expected_cgroups_argv_2[] = { "devices", "deny", container_id, "c 195:1 rwm",
"devices", "deny", container_id, "c 195:3 rwm"};
verify_param_updated_to_cgroups(8, expected_cgroups_argv_2);
+
+ clear_cgroups_parameters_invoked();
+ free_executor_configurations();
}
TEST_F(TestGpuModule, test_illegal_cli_parameters) {
@@ -193,6 +206,7 @@ TEST_F(TestGpuModule, test_illegal_cli_parameters) {
ASSERT_NE(0, rc) << "Should fail.\n";
// Illegal container id - 2
+ clear_cgroups_parameters_invoked();
char* argv_1[] = { (char*) "--module-gpu", (char*) "--excluded_gpus", (char*) "0,1",
(char*) "--container_id", (char*) "container_1" };
rc = handle_gpu_request(&mock_update_cgroups_parameters,
@@ -200,10 +214,14 @@ TEST_F(TestGpuModule, test_illegal_cli_parameters) {
ASSERT_NE(0, rc) << "Should fail.\n";
// Illegal container id - 3
+ clear_cgroups_parameters_invoked();
char* argv_2[] = { (char*) "--module-gpu", (char*) "--excluded_gpus", (char*) "0,1" };
rc = handle_gpu_request(&mock_update_cgroups_parameters,
"gpu", 3, argv_2);
ASSERT_NE(0, rc) << "Should fail.\n";
+
+ clear_cgroups_parameters_invoked();
+ free_executor_configurations();
}
TEST_F(TestGpuModule, test_gpu_module_disabled) {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/efb4e274/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_configuration.cc
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_configuration.cc b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_configuration.cc
index 6ee0ab2..0e80fcd 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_configuration.cc
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_configuration.cc
@@ -51,6 +51,7 @@ namespace ContainerExecutor {
virtual void TearDown() {
free_configuration(&new_config_format);
free_configuration(&old_config_format);
+ free_configuration(&mixed_config_format);
return;
}
@@ -84,12 +85,12 @@ namespace ContainerExecutor {
ASSERT_STREQ("/var/run/yarn", split_values[0]);
ASSERT_STREQ("/tmp/mydir", split_values[1]);
ASSERT_EQ(NULL, split_values[2]);
- free(split_values);
+ free_values(split_values);
split_values = get_configuration_values_delimiter("allowed.system.users",
"", &old_config_format, "%");
ASSERT_STREQ("nobody,daemon", split_values[0]);
ASSERT_EQ(NULL, split_values[1]);
- free(split_values);
+ free_values(split_values);
}
TEST_F(TestConfiguration, test_get_configuration_values) {
@@ -105,13 +106,13 @@ namespace ContainerExecutor {
split_values = get_configuration_values("yarn.local.dirs", "", &old_config_format);
ASSERT_STREQ("/var/run/yarn%/tmp/mydir", split_values[0]);
ASSERT_EQ(NULL, split_values[1]);
- free(split_values);
+ free_values(split_values);
split_values = get_configuration_values("allowed.system.users", "",
&old_config_format);
ASSERT_STREQ("nobody", split_values[0]);
ASSERT_STREQ("daemon", split_values[1]);
ASSERT_EQ(NULL, split_values[2]);
- free(split_values);
+ free_values(split_values);
}
TEST_F(TestConfiguration, test_get_configuration_value) {
@@ -149,21 +150,28 @@ namespace ContainerExecutor {
char *value = NULL;
value = get_section_value("yarn.nodemanager.linux-container-executor.group", executor_cfg);
ASSERT_STREQ("yarn", value);
+ free(value);
value = get_section_value("feature.docker.enabled", executor_cfg);
ASSERT_STREQ("1", value);
+ free(value);
value = get_section_value("feature.tc.enabled", executor_cfg);
ASSERT_STREQ("0", value);
+ free(value);
value = get_section_value("min.user.id", executor_cfg);
ASSERT_STREQ("1000", value);
+ free(value);
value = get_section_value("docker.binary", executor_cfg);
ASSERT_STREQ("/usr/bin/docker", value);
+ free(value);
char **list = get_section_values("allowed.system.users", executor_cfg);
ASSERT_STREQ("nobody", list[0]);
ASSERT_STREQ("daemon", list[1]);
+ free_values(list);
list = get_section_values("banned.users", executor_cfg);
ASSERT_STREQ("root", list[0]);
ASSERT_STREQ("testuser1", list[1]);
ASSERT_STREQ("testuser2", list[2]);
+ free_values(list);
}
TEST_F(TestConfiguration, test_get_section_values_delimiter) {
@@ -176,12 +184,16 @@ namespace ContainerExecutor {
free(value);
value = get_section_value("key2", section);
ASSERT_EQ(NULL, value);
+ free(value);
split_values = get_section_values_delimiter(NULL, section, "%");
ASSERT_EQ(NULL, split_values);
+ free_values(split_values);
split_values = get_section_values_delimiter("split-key", NULL, "%");
ASSERT_EQ(NULL, split_values);
+ free_values(split_values);
split_values = get_section_values_delimiter("split-key", section, NULL);
ASSERT_EQ(NULL, split_values);
+ free_values(split_values);
split_values = get_section_values_delimiter("split-key", section, "%");
ASSERT_FALSE(split_values == NULL);
ASSERT_STREQ("val1,val2,val3", split_values[0]);
@@ -192,6 +204,7 @@ namespace ContainerExecutor {
ASSERT_STREQ("perc-val1", split_values[0]);
ASSERT_STREQ("perc-val2", split_values[1]);
ASSERT_TRUE(split_values[2] == NULL);
+ free_values(split_values);
}
TEST_F(TestConfiguration, test_get_section_values) {
@@ -201,13 +214,16 @@ namespace ContainerExecutor {
section = get_configuration_section("section-1", &new_config_format);
value = get_section_value(NULL, section);
ASSERT_EQ(NULL, value);
+ free(value);
value = get_section_value("key1", NULL);
ASSERT_EQ(NULL, value);
+ free(value);
value = get_section_value("key1", section);
ASSERT_STREQ("value1", value);
free(value);
value = get_section_value("key2", section);
ASSERT_EQ(NULL, value);
+ free(value);
split_values = get_section_values("split-key", section);
ASSERT_FALSE(split_values == NULL);
ASSERT_STREQ("val1", split_values[0]);
@@ -235,14 +251,16 @@ namespace ContainerExecutor {
section = get_configuration_section("split-section", &new_config_format);
value = get_section_value(NULL, section);
ASSERT_EQ(NULL, value);
+ free(value);
value = get_section_value("key3", NULL);
ASSERT_EQ(NULL, value);
+ free(value);
value = get_section_value("key3", section);
ASSERT_STREQ("value3", value);
free(value);
value = get_section_value("key4", section);
ASSERT_STREQ("value4", value);
-
+ free(value);
}
TEST_F(TestConfiguration, test_get_configuration_section) {
@@ -343,6 +361,7 @@ namespace ContainerExecutor {
oss.str("");
oss << "value" << i;
ASSERT_STREQ(oss.str().c_str(), value);
+ free((void *) value);
}
remove(sample_file_name.c_str());
free_configuration(&cfg);
@@ -372,6 +391,7 @@ namespace ContainerExecutor {
oss.str("");
oss << "value" << i;
ASSERT_STREQ(oss.str().c_str(), value);
+ free((void *) value);
}
remove(sample_file_name.c_str());
free_configuration(&cfg);
@@ -415,18 +435,22 @@ namespace ContainerExecutor {
char *value = NULL;
value = get_section_value("key1", executor_cfg);
ASSERT_STREQ("value1", value);
+ free(value);
value = get_section_value("key2", executor_cfg);
ASSERT_STREQ("value2", value);
ASSERT_EQ(2, executor_cfg->size);
+ free(value);
executor_cfg = get_configuration_section("section-1",
&mixed_config_format);
value = get_section_value("key3", executor_cfg);
ASSERT_STREQ("value3", value);
+ free(value);
value = get_section_value("key1", executor_cfg);
ASSERT_STREQ("value4", value);
ASSERT_EQ(2, executor_cfg->size);
ASSERT_EQ(2, mixed_config_format.size);
ASSERT_STREQ("", mixed_config_format.sections[0]->name);
ASSERT_STREQ("section-1", mixed_config_format.sections[1]->name);
+ free(value);
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/efb4e274/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc
index 8cdbf2f..51e3d52 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc
@@ -51,6 +51,7 @@ namespace ContainerExecutor {
ASSERT_STREQ(oss.str().c_str(), splits[i-1]);
}
ASSERT_EQ(NULL, splits[count]);
+ free(split_string);
free_values(splits);
split_string = (char *) calloc(str.length() + 1, sizeof(char));
@@ -59,6 +60,7 @@ namespace ContainerExecutor {
ASSERT_TRUE(splits != NULL);
ASSERT_TRUE(splits[1] == NULL);
ASSERT_STREQ(str.c_str(), splits[0]);
+ free(split_string);
free_values(splits);
splits = split_delimiter(NULL, ",");
@@ -82,6 +84,7 @@ namespace ContainerExecutor {
ASSERT_STREQ(oss.str().c_str(), splits[i-1]);
}
ASSERT_EQ(NULL, splits[count]);
+ free(split_string);
free_values(splits);
str = "1,2,3,4,5,6,7,8,9,10,11";
@@ -91,6 +94,8 @@ namespace ContainerExecutor {
ASSERT_TRUE(splits != NULL);
ASSERT_TRUE(splits[1] == NULL);
ASSERT_STREQ(str.c_str(), splits[0]);
+ free(split_string);
+ free_values(splits);
return;
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/efb4e274/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test-string-utils.cc
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test-string-utils.cc b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test-string-utils.cc
index b259c6e..138e32a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test-string-utils.cc
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test-string-utils.cc
@@ -59,6 +59,7 @@
ASSERT_EQ(1, numbers[0]);
ASSERT_EQ(-1, numbers[3]);
ASSERT_EQ(0, numbers[5]);
+ free(numbers);
input = "3";
rc = get_numbers_split_by_comma(input, &numbers, &n_numbers);
@@ -66,28 +67,33 @@
ASSERT_EQ(0, rc) << "Should succeeded\n";
ASSERT_EQ(1, n_numbers);
ASSERT_EQ(3, numbers[0]);
+ free(numbers);
input = "";
rc = get_numbers_split_by_comma(input, &numbers, &n_numbers);
std::cout << "Testing input=" << input << "\n";
ASSERT_EQ(0, rc) << "Should succeeded\n";
ASSERT_EQ(0, n_numbers);
+ free(numbers);
input = ",,";
rc = get_numbers_split_by_comma(input, &numbers, &n_numbers);
std::cout << "Testing input=" << input << "\n";
ASSERT_EQ(0, rc) << "Should succeeded\n";
ASSERT_EQ(0, n_numbers);
+ free(numbers);
input = "1,2,aa,bb";
rc = get_numbers_split_by_comma(input, &numbers, &n_numbers);
std::cout << "Testing input=" << input << "\n";
ASSERT_TRUE(0 != rc) << "Should failed\n";
+ free(numbers);
input = "1,2,3,-12312312312312312312321311231231231";
rc = get_numbers_split_by_comma(input, &numbers, &n_numbers);
std::cout << "Testing input=" << input << "\n";
ASSERT_TRUE(0 != rc) << "Should failed\n";
+ free(numbers);
}
TEST_F(TestStringUtils, test_validate_container_id) {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/efb4e274/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_docker_util.cc
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_docker_util.cc b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_docker_util.cc
index cd671ce..007e737 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_docker_util.cc
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_docker_util.cc
@@ -119,23 +119,22 @@ namespace ContainerExecutor {
struct args tmp = ARGS_INITIAL_VALUE;
std::vector<std::pair<std::string, std::string> >::const_iterator itr;
for (itr = file_cmd_vec.begin(); itr != file_cmd_vec.end(); ++itr) {
- reset_args(&tmp);
write_command_file(itr->first);
int ret = (*docker_func)(docker_command_file.c_str(), &container_executor_cfg, &tmp);
ASSERT_EQ(0, ret) << "error message: " << get_docker_error_message(ret) << " for input " << itr->first;
char *actual = flatten(&tmp);
ASSERT_STREQ(itr->second.c_str(), actual);
+ reset_args(&tmp);
free(actual);
}
std::vector<std::pair<std::string, int> >::const_iterator itr2;
for (itr2 = bad_file_cmd_vec.begin(); itr2 != bad_file_cmd_vec.end(); ++itr2) {
- reset_args(&tmp);
write_command_file(itr2->first);
int ret = (*docker_func)(docker_command_file.c_str(), &container_executor_cfg, &tmp);
ASSERT_EQ(itr2->second, ret) << " for " << itr2->first << std::endl;
+ reset_args(&tmp);
}
- reset_args(&tmp);
int ret = (*docker_func)("unknown-file", &container_executor_cfg, &tmp);
ASSERT_EQ(static_cast<int>(INVALID_COMMAND_FILE), ret);
reset_args(&tmp);
@@ -147,7 +146,6 @@ namespace ContainerExecutor {
for(itr = file_cmd_vec.begin(); itr != file_cmd_vec.end(); ++itr) {
struct configuration cfg;
struct args buff = ARGS_INITIAL_VALUE;
- reset_args(&buff);
write_command_file(itr->first);
int ret = read_config(docker_command_file.c_str(), &cfg);
if(ret == 0) {
@@ -155,7 +153,9 @@ namespace ContainerExecutor {
char *actual = flatten(&buff);
ASSERT_EQ(0, ret);
ASSERT_STREQ(itr->second.c_str(), actual);
+ reset_args(&buff);
free(actual);
+ free_configuration(&cfg);
}
}
}
@@ -445,7 +445,6 @@ namespace ContainerExecutor {
TEST_F(TestDockerUtil, test_set_network) {
struct configuration container_cfg;
struct args buff = ARGS_INITIAL_VALUE;
- reset_args(&buff);
int ret = 0;
std::string container_executor_cfg_contents = "[docker]\n docker.allowed.networks=sdn1,bridge";
std::vector<std::pair<std::string, std::string> > file_cmd_vec;
@@ -464,7 +463,6 @@ namespace ContainerExecutor {
}
for (itr = file_cmd_vec.begin(); itr != file_cmd_vec.end(); ++itr) {
struct configuration cmd_cfg;
- reset_args(&buff);
write_command_file(itr->first);
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
@@ -474,7 +472,9 @@ namespace ContainerExecutor {
char *actual = flatten(&buff);
ASSERT_EQ(0, ret);
ASSERT_STREQ(itr->second.c_str(), actual);
+ reset_args(&buff);
free(actual);
+ free_configuration(&cmd_cfg);
}
struct configuration cmd_cfg_1;
write_command_file("[docker-command-execution]\n docker-command=run\n net=sdn2");
@@ -482,10 +482,11 @@ namespace ContainerExecutor {
if (ret != 0) {
FAIL();
}
- reset_args(&buff);
ret = set_network(&cmd_cfg_1, &container_cfg, &buff);
ASSERT_EQ(INVALID_DOCKER_NETWORK, ret);
ASSERT_EQ(0, buff.length);
+ reset_args(&buff);
+ free_configuration(&container_cfg);
container_executor_cfg_contents = "[docker]\n";
write_container_executor_cfg(container_executor_cfg_contents);
@@ -493,10 +494,12 @@ namespace ContainerExecutor {
if (ret != 0) {
FAIL();
}
- reset_args(&buff);
ret = set_network(&cmd_cfg_1, &container_cfg, &buff);
ASSERT_EQ(INVALID_DOCKER_NETWORK, ret);
ASSERT_EQ(0, buff.length);
+ reset_args(&buff);
+ free_configuration(&cmd_cfg_1);
+ free_configuration(&container_cfg);
}
TEST_F(TestDockerUtil, test_set_pid_namespace) {
@@ -529,7 +532,6 @@ namespace ContainerExecutor {
FAIL();
}
for (itr = file_cmd_vec.begin(); itr != file_cmd_vec.end(); ++itr) {
- reset_args(&buff);
write_command_file(itr->first);
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
@@ -539,10 +541,11 @@ namespace ContainerExecutor {
char *actual = flatten(&buff);
ASSERT_EQ(0, ret);
ASSERT_STREQ(itr->second.c_str(), actual);
+ reset_args(&buff);
free(actual);
+ free_configuration(&cmd_cfg);
}
for (itr2 = bad_file_cmd_vec.begin(); itr2 != bad_file_cmd_vec.end(); ++itr2) {
- reset_args(&buff);
write_command_file(itr2->first);
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
@@ -551,7 +554,10 @@ namespace ContainerExecutor {
ret = set_pid_namespace(&cmd_cfg, &container_cfg, &buff);
ASSERT_EQ(itr2->second, ret);
ASSERT_EQ(0, buff.length);
+ reset_args(&buff);
+ free_configuration(&cmd_cfg);
}
+ free_configuration(&container_cfg);
}
// check default case and when it's turned off
@@ -575,6 +581,7 @@ namespace ContainerExecutor {
ASSERT_EQ(0, ret);
ASSERT_STREQ(itr->second.c_str(), actual);
free(actual);
+ free_configuration(&cmd_cfg);
}
bad_file_cmd_vec.clear();
bad_file_cmd_vec.push_back(std::make_pair<std::string, int>(
@@ -584,7 +591,6 @@ namespace ContainerExecutor {
"[docker-command-execution]\n docker-command=run\n pid=host",
static_cast<int>(PID_HOST_DISABLED)));
for (itr2 = bad_file_cmd_vec.begin(); itr2 != bad_file_cmd_vec.end(); ++itr2) {
- reset_args(&buff);
write_command_file(itr2->first);
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
@@ -593,7 +599,10 @@ namespace ContainerExecutor {
ret = set_pid_namespace(&cmd_cfg, &container_cfg, &buff);
ASSERT_EQ(itr2->second, ret);
ASSERT_EQ(0, buff.length);
+ reset_args(&buff);
+ free_configuration(&cmd_cfg);
}
+ free_configuration(&container_cfg);
}
}
@@ -633,6 +642,7 @@ namespace ContainerExecutor {
for (int i = 0; i < entries; ++i) {
ASSERT_STREQ(expected[i], ptr[i]);
}
+ free_values(ptr);
}
TEST_F(TestDockerUtil, test_set_privileged) {
@@ -665,7 +675,6 @@ namespace ContainerExecutor {
FAIL();
}
for (itr = file_cmd_vec.begin(); itr != file_cmd_vec.end(); ++itr) {
- reset_args(&buff);
write_command_file(itr->first);
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
@@ -674,19 +683,22 @@ namespace ContainerExecutor {
ret = set_privileged(&cmd_cfg, &container_cfg, &buff);
ASSERT_EQ(6, ret);
ASSERT_EQ(0, buff.length);
+ reset_args(&buff);
+ free_configuration(&cmd_cfg);
}
write_command_file("[docker-command-execution]\n docker-command=run\n user=nobody\n privileged=true\n image=nothadoop/image");
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
FAIL();
}
- reset_args(&buff);
ret = set_privileged(&cmd_cfg, &container_cfg, &buff);
ASSERT_EQ(PRIVILEGED_CONTAINERS_DISABLED, ret);
ASSERT_EQ(0, buff.length);
+ reset_args(&buff);
+ free_configuration(&cmd_cfg);
+ free_configuration(&container_cfg);
}
-
// check default case and when it's turned off
for (int i = 3; i < 6; ++i) {
write_container_executor_cfg(container_executor_cfg_contents[i]);
@@ -698,7 +710,6 @@ namespace ContainerExecutor {
file_cmd_vec.push_back(std::make_pair<std::string, std::string>(
"[docker-command-execution]\n docker-command=run\n user=root\n privileged=false", ""));
for (itr = file_cmd_vec.begin(); itr != file_cmd_vec.end(); ++itr) {
- reset_args(&buff);
write_command_file(itr->first);
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
@@ -708,7 +719,9 @@ namespace ContainerExecutor {
char *actual = flatten(&buff);
ASSERT_EQ(0, ret);
ASSERT_STREQ(itr->second.c_str(), actual);
+ reset_args(&buff);
free(actual);
+ free_configuration(&cmd_cfg);
}
write_command_file("[docker-command-execution]\n docker-command=run\n user=root\n privileged=true");
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
@@ -718,6 +731,9 @@ namespace ContainerExecutor {
ret = set_privileged(&cmd_cfg, &container_cfg, &buff);
ASSERT_EQ(PRIVILEGED_CONTAINERS_DISABLED, ret);
ASSERT_EQ(0, buff.length);
+ reset_args(&buff);
+ free_configuration(&cmd_cfg);
+ free_configuration(&container_cfg);
}
}
@@ -752,7 +768,6 @@ namespace ContainerExecutor {
FAIL();
}
for (itr = file_cmd_vec.begin(); itr != file_cmd_vec.end(); ++itr) {
- reset_args(&buff);
write_command_file(itr->first);
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
@@ -762,16 +777,19 @@ namespace ContainerExecutor {
char *actual = flatten(&buff);
ASSERT_EQ(0, ret);
ASSERT_STREQ(itr->second.c_str(), actual);
+ reset_args(&buff);
free(actual);
+ free_configuration(&cmd_cfg);
}
write_command_file("[docker-command-execution]\n docker-command=run\n image=hadoop/docker-image\n cap-add=SETGID");
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
FAIL();
}
- reset_args(&buff);
ret = set_capabilities(&cmd_cfg, &container_cfg, &buff);
ASSERT_EQ(INVALID_DOCKER_CAPABILITY, ret);
+ reset_args(&buff);
+ free_configuration(&container_cfg);
container_executor_cfg_contents = "[docker]\n docker.trusted.registries=hadoop\n";
write_container_executor_cfg(container_executor_cfg_contents);
@@ -779,15 +797,16 @@ namespace ContainerExecutor {
if (ret != 0) {
FAIL();
}
- reset_args(&buff);
ret = set_capabilities(&cmd_cfg, &container_cfg, &buff);
ASSERT_EQ(INVALID_DOCKER_CAPABILITY, ret);
+ reset_args(&buff);
+ free_configuration(&cmd_cfg);
+ free_configuration(&container_cfg);
}
TEST_F(TestDockerUtil, test_set_devices) {
struct configuration container_cfg, cmd_cfg;
struct args buff = ARGS_INITIAL_VALUE;
- reset_args(&buff);
int ret = 0;
std::string container_executor_cfg_contents = "[docker]\n"
" docker.trusted.registries=hadoop\n"
@@ -821,7 +840,6 @@ namespace ContainerExecutor {
FAIL();
}
for (itr = file_cmd_vec.begin(); itr != file_cmd_vec.end(); ++itr) {
- reset_args(&buff);
write_command_file(itr->first);
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
@@ -831,67 +849,75 @@ namespace ContainerExecutor {
char *actual = flatten(&buff);
ASSERT_EQ(0, ret);
ASSERT_STREQ(itr->second.c_str(), actual);
+ reset_args(&buff);
free(actual);
+ free_configuration(&cmd_cfg);
}
write_command_file("[docker-command-execution]\n docker-command=run\n image=nothadoop/image\n devices=/dev/test-device:/dev/test-device");
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
FAIL();
}
- reset_args(&buff);
ret = set_devices(&cmd_cfg, &container_cfg, &buff);
ASSERT_EQ(INVALID_DOCKER_DEVICE, ret);
ASSERT_EQ(0, buff.length);
+ reset_args(&buff);
+ free_configuration(&cmd_cfg);
write_command_file("[docker-command-execution]\n docker-command=run\n image=hadoop/image\n devices=/dev/device3:/dev/device3");
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
FAIL();
}
- reset_args(&buff);
ret = set_devices(&cmd_cfg, &container_cfg, &buff);
ASSERT_EQ(INVALID_DOCKER_DEVICE, ret);
ASSERT_EQ(0, buff.length);
+ reset_args(&buff);
+ free_configuration(&cmd_cfg);
write_command_file("[docker-command-execution]\n docker-command=run\n image=hadoop/image\n devices=/dev/device1");
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
FAIL();
}
- reset_args(&buff);
ret = set_devices(&cmd_cfg, &container_cfg, &buff);
ASSERT_EQ(INVALID_DOCKER_DEVICE, ret);
ASSERT_EQ(0, buff.length);
+ reset_args(&buff);
+ free_configuration(&cmd_cfg);
write_command_file("[docker-command-execution]\n docker-command=run\n image=hadoop/image\n devices=/dev/testnvidia:/dev/testnvidia");
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
FAIL();
}
- reset_args(&buff);
ret = set_devices(&cmd_cfg, &container_cfg, &buff);
ASSERT_EQ(INVALID_DOCKER_DEVICE, ret);
ASSERT_EQ(0, buff.length);
+ reset_args(&buff);
+ free_configuration(&cmd_cfg);
write_command_file("[docker-command-execution]\n docker-command=run\n image=hadoop/image\n devices=/dev/gpu-nvidia-uvm:/dev/gpu-nvidia-uvm");
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
FAIL();
}
- reset_args(&buff);
ret = set_devices(&cmd_cfg, &container_cfg, &buff);
ASSERT_EQ(INVALID_DOCKER_DEVICE, ret);
ASSERT_EQ(0, buff.length);
+ reset_args(&buff);
+ free_configuration(&cmd_cfg);
write_command_file("[docker-command-execution]\n docker-command=run\n image=hadoop/image\n devices=/dev/device1");
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
FAIL();
}
- reset_args(&buff);
ret = set_devices(&cmd_cfg, &container_cfg, &buff);
ASSERT_EQ(INVALID_DOCKER_DEVICE, ret);
ASSERT_EQ(0, buff.length);
+ reset_args(&buff);
+ free_configuration(&container_cfg);
container_executor_cfg_contents = "[docker]\n";
write_container_executor_cfg(container_executor_cfg_contents);
@@ -899,10 +925,12 @@ namespace ContainerExecutor {
if (ret != 0) {
FAIL();
}
- reset_args(&buff);
ret = set_devices(&cmd_cfg, &container_cfg, &buff);
ASSERT_EQ(INVALID_DOCKER_DEVICE, ret);
ASSERT_EQ(0, buff.length);
+ reset_args(&buff);
+ free_configuration(&cmd_cfg);
+ free_configuration(&container_cfg);
}
@@ -951,7 +979,6 @@ namespace ContainerExecutor {
std::vector<std::pair<std::string, std::string> >::const_iterator itr;
for (itr = file_cmd_vec.begin(); itr != file_cmd_vec.end(); ++itr) {
- reset_args(&buff);
write_command_file(itr->first);
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
@@ -961,7 +988,9 @@ namespace ContainerExecutor {
char *actual = flatten(&buff);
ASSERT_EQ(0, ret);
ASSERT_STREQ(itr->second.c_str(), actual);
+ reset_args(&buff);
free(actual);
+ free_configuration(&cmd_cfg);
}
std::vector<std::pair<std::string, int> > bad_file_cmds_vec;
@@ -978,18 +1007,18 @@ namespace ContainerExecutor {
std::vector<std::pair<std::string, int> >::const_iterator itr2;
for (itr2 = bad_file_cmds_vec.begin(); itr2 != bad_file_cmds_vec.end(); ++itr2) {
- reset_args(&buff);
write_command_file(itr2->first);
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
FAIL();
}
- reset_args(&buff);
ret = add_rw_mounts(&cmd_cfg, &container_cfg, &buff);
char *actual = flatten(&buff);
ASSERT_EQ(itr2->second, ret);
ASSERT_STREQ("", actual);
+ reset_args(&buff);
free(actual);
+ free_configuration(&cmd_cfg);
}
// verify that you can't mount any directory in the container-executor.cfg path
@@ -997,17 +1026,17 @@ namespace ContainerExecutor {
while (strlen(ce_path) != 0) {
std::string cmd_file_contents = "[docker-command-execution]\n docker-command=run\n image=hadoop/image\n rw-mounts=";
cmd_file_contents.append(ce_path).append(":").append("/etc/hadoop");
- reset_args(&buff);
write_command_file(cmd_file_contents);
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
FAIL();
}
- reset_args(&buff);
ret = add_rw_mounts(&cmd_cfg, &container_cfg, &buff);
ASSERT_EQ(INVALID_DOCKER_RW_MOUNT, ret) << " for input " << cmd_file_contents;
char *actual = flatten(&buff);
ASSERT_STREQ("", actual);
+ reset_args(&buff);
+ free_configuration(&cmd_cfg);
free(actual);
char *tmp = strrchr(ce_path, '/');
if (tmp != NULL) {
@@ -1015,6 +1044,7 @@ namespace ContainerExecutor {
}
}
free(ce_path);
+ free_configuration(&container_cfg);
// For untrusted image, container add_rw_mounts will pass through
// without mounting or report error code.
@@ -1024,12 +1054,13 @@ namespace ContainerExecutor {
if (ret != 0) {
FAIL();
}
- reset_args(&buff);
ret = add_rw_mounts(&cmd_cfg, &container_cfg, &buff);
char *actual = flatten(&buff);
ASSERT_EQ(0, ret);
ASSERT_STREQ("", actual);
+ reset_args(&buff);
free(actual);
+ free_configuration(&container_cfg);
}
TEST_F(TestDockerUtil, test_add_ro_mounts) {
@@ -1080,7 +1111,6 @@ namespace ContainerExecutor {
std::vector<std::pair<std::string, std::string> >::const_iterator itr;
for (itr = file_cmd_vec.begin(); itr != file_cmd_vec.end(); ++itr) {
- reset_args(&buff);
write_command_file(itr->first);
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
@@ -1090,7 +1120,9 @@ namespace ContainerExecutor {
char *actual = flatten(&buff);
ASSERT_EQ(0, ret);
ASSERT_STREQ(itr->second.c_str(), actual);
+ reset_args(&buff);
free(actual);
+ free_configuration(&cmd_cfg);
}
std::vector<std::pair<std::string, int> > bad_file_cmds_vec;
@@ -1104,19 +1136,20 @@ namespace ContainerExecutor {
std::vector<std::pair<std::string, int> >::const_iterator itr2;
for (itr2 = bad_file_cmds_vec.begin(); itr2 != bad_file_cmds_vec.end(); ++itr2) {
- reset_args(&buff);
write_command_file(itr2->first);
ret = read_config(docker_command_file.c_str(), &cmd_cfg);
if (ret != 0) {
FAIL();
}
- reset_args(&buff);
ret = add_ro_mounts(&cmd_cfg, &container_cfg, &buff);
char *actual = flatten(&buff);
ASSERT_EQ(itr2->second, ret);
ASSERT_STREQ("", actual);
+ reset_args(&buff);
free(actual);
+ free_configuration(&cmd_cfg);
}
+ free_configuration(&container_cfg);
container_executor_cfg_contents = "[docker]\n docker.trusted.registries=hadoop\n";
write_container_executor_cfg(container_executor_cfg_contents);
@@ -1125,10 +1158,16 @@ namespace ContainerExecutor {
FAIL();
}
write_command_file("[docker-command-execution]\n docker-command=run\n image=hadoop/image\n ro-mounts=/home:/home");
- reset_args(&buff);
+ ret = read_config(docker_command_file.c_str(), &cmd_cfg);
+ if (ret != 0) {
+ FAIL();
+ }
ret = add_ro_mounts(&cmd_cfg, &container_cfg, &buff);
ASSERT_EQ(INVALID_DOCKER_RO_MOUNT, ret);
ASSERT_EQ(0, buff.length);
+ reset_args(&buff);
+ free_configuration(&cmd_cfg);
+ free_configuration(&container_cfg);
}
TEST_F(TestDockerUtil, test_docker_run_privileged) {
@@ -1310,6 +1349,7 @@ namespace ContainerExecutor {
static_cast<int>(INVALID_DOCKER_NETWORK)));
run_docker_command_test(file_cmd_vec, bad_file_cmd_vec, get_docker_run_command);
+ free_configuration(&container_executor_cfg);
}
TEST_F(TestDockerUtil, test_docker_run_entry_point) {
@@ -1352,6 +1392,7 @@ namespace ContainerExecutor {
static_cast<int>(INVALID_DOCKER_CONTAINER_NAME)));
run_docker_command_test(file_cmd_vec, bad_file_cmd_vec, get_docker_run_command);
+ free_configuration(&container_executor_cfg);
}
TEST_F(TestDockerUtil, test_docker_run_no_privileged) {
@@ -1441,6 +1482,7 @@ namespace ContainerExecutor {
static_cast<int>(PRIVILEGED_CONTAINERS_DISABLED)));
run_docker_command_test(file_cmd_vec, bad_file_cmd_vec, get_docker_run_command);
+ free_configuration(&container_executor_cfg);
}
}
@@ -1471,13 +1513,14 @@ namespace ContainerExecutor {
struct args buffer = ARGS_INITIAL_VALUE;
struct configuration cfg = {0, NULL};
for (itr = input_output_map.begin(); itr != input_output_map.end(); ++itr) {
- reset_args(&buffer);
write_command_file(itr->first);
int ret = get_docker_command(docker_command_file.c_str(), &cfg, &buffer);
+ char *actual = flatten(&buffer);
ASSERT_EQ(0, ret) << "for input " << itr->first;
- ASSERT_STREQ(itr->second.c_str(), flatten(&buffer));
+ ASSERT_STREQ(itr->second.c_str(), actual);
+ reset_args(&buffer);
+ free(actual);
}
- reset_args(&buffer);
}
TEST_F(TestDockerUtil, test_docker_module_enabled) {
@@ -1497,6 +1540,7 @@ namespace ContainerExecutor {
ret = docker_module_enabled(&container_executor_cfg);
ASSERT_EQ(input_out_vec[i].second, ret) << " incorrect output for "
<< input_out_vec[i].first;
+ free_configuration(&container_executor_cfg);
}
}
@@ -1544,6 +1588,7 @@ namespace ContainerExecutor {
static_cast<int>(INVALID_DOCKER_VOLUME_DRIVER)));
run_docker_command_test(file_cmd_vec, bad_file_cmd_vec, get_docker_volume_command);
+ free_configuration(&container_executor_cfg);
}
TEST_F(TestDockerUtil, test_docker_no_new_privileges) {
@@ -1589,6 +1634,7 @@ namespace ContainerExecutor {
std::vector<std::pair<std::string, int> > bad_file_cmd_vec;
run_docker_command_test(file_cmd_vec, bad_file_cmd_vec, get_docker_run_command);
+ free_configuration(&container_executor_cfg);
}
for (int i = 2; i < 3; ++i) {
@@ -1611,6 +1657,7 @@ namespace ContainerExecutor {
std::vector<std::pair<std::string, int> > bad_file_cmd_vec;
run_docker_command_test(file_cmd_vec, bad_file_cmd_vec, get_docker_run_command);
+ free_configuration(&container_executor_cfg);
}
for (int i = 3; i < 5; ++i) {
@@ -1633,6 +1680,7 @@ namespace ContainerExecutor {
std::vector<std::pair<std::string, int> > bad_file_cmd_vec;
run_docker_command_test(file_cmd_vec, bad_file_cmd_vec, get_docker_run_command);
+ free_configuration(&container_executor_cfg);
}
}
}
[46/50] hadoop git commit: MAPREDUCE-7118. Distributed cache
conflicts break backwards compatibility. (Jason Lowe via wangda)
Posted by zh...@apache.org.
MAPREDUCE-7118. Distributed cache conflicts break backwards compatibility. (Jason Lowe via wangda)
Change-Id: I89ab4852b4ad305fec19812e8931c59d96581376
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b3b4d4cc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b3b4d4cc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b3b4d4cc
Branch: refs/heads/HDFS-13572
Commit: b3b4d4ccb53fdf8dacc66e912822b34f8b3bf215
Parents: 2564884
Author: Wangda Tan <wa...@apache.org>
Authored: Thu Jul 19 12:03:24 2018 -0700
Committer: Wangda Tan <wa...@apache.org>
Committed: Thu Jul 19 14:26:05 2018 -0700
----------------------------------------------------------------------
.../mapreduce/v2/util/LocalResourceBuilder.java | 8 +++-----
.../hadoop/mapreduce/v2/util/TestMRApps.java | 20 ++++++++++++++++++--
2 files changed, 21 insertions(+), 7 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b3b4d4cc/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/LocalResourceBuilder.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/LocalResourceBuilder.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/LocalResourceBuilder.java
index 48b157e..48cc29e 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/LocalResourceBuilder.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/LocalResourceBuilder.java
@@ -27,7 +27,6 @@ import org.apache.hadoop.classification.InterfaceStability.Unstable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.mapred.InvalidJobConfException;
import org.apache.hadoop.mapreduce.MRJobConfig;
import org.apache.hadoop.mapreduce.filecache.DistributedCache;
import org.apache.hadoop.yarn.api.records.LocalResource;
@@ -144,10 +143,9 @@ class LocalResourceBuilder {
LocalResource orig = localResources.get(linkName);
if(orig != null && !orig.getResource().equals(URL.fromURI(p.toUri()))) {
- throw new InvalidJobConfException(
- getResourceDescription(orig.getType()) + orig.getResource()
- +
- " conflicts with " + getResourceDescription(type) + u);
+ LOG.warn(getResourceDescription(orig.getType()) + orig.getResource()
+ + " conflicts with " + getResourceDescription(type) + u);
+ continue;
}
Boolean sharedCachePolicy = sharedCacheUploadPolicies.get(u.toString());
sharedCachePolicy =
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b3b4d4cc/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/util/TestMRApps.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/util/TestMRApps.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/util/TestMRApps.java
index 3aadd63..c6a2874 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/util/TestMRApps.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/util/TestMRApps.java
@@ -360,7 +360,7 @@ public class TestMRApps {
}
@SuppressWarnings("deprecation")
- @Test(timeout = 120000, expected = InvalidJobConfException.class)
+ @Test(timeout = 120000)
public void testSetupDistributedCacheConflicts() throws Exception {
Configuration conf = new Configuration();
conf.setClass("fs.mockfs.impl", MockFileSystem.class, FileSystem.class);
@@ -388,10 +388,18 @@ public class TestMRApps {
Map<String, LocalResource> localResources =
new HashMap<String, LocalResource>();
MRApps.setupDistributedCache(conf, localResources);
+
+ assertEquals(1, localResources.size());
+ LocalResource lr = localResources.get("something");
+ //Archive wins
+ assertNotNull(lr);
+ assertEquals(10l, lr.getSize());
+ assertEquals(10l, lr.getTimestamp());
+ assertEquals(LocalResourceType.ARCHIVE, lr.getType());
}
@SuppressWarnings("deprecation")
- @Test(timeout = 120000, expected = InvalidJobConfException.class)
+ @Test(timeout = 120000)
public void testSetupDistributedCacheConflictsFiles() throws Exception {
Configuration conf = new Configuration();
conf.setClass("fs.mockfs.impl", MockFileSystem.class, FileSystem.class);
@@ -416,6 +424,14 @@ public class TestMRApps {
Map<String, LocalResource> localResources =
new HashMap<String, LocalResource>();
MRApps.setupDistributedCache(conf, localResources);
+
+ assertEquals(1, localResources.size());
+ LocalResource lr = localResources.get("something");
+ //First one wins
+ assertNotNull(lr);
+ assertEquals(10l, lr.getSize());
+ assertEquals(10l, lr.getTimestamp());
+ assertEquals(LocalResourceType.FILE, lr.getType());
}
@SuppressWarnings("deprecation")
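The behavioral change is easier to see in isolation. Below is a minimal, standalone sketch of the new warn-and-skip semantics; the class and method names are hypothetical stand-ins for the bookkeeping in LocalResourceBuilder, and which resource wins in the real code depends on processing order (the tests above assert that the archive wins in the mixed case and the first file wins in the files-only case).

import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for LocalResourceBuilder's conflict handling:
// on a link-name collision, log a warning and keep the resource that
// was registered first instead of throwing InvalidJobConfException.
public class CacheConflictSketch {
  private static final Map<String, String> localResources = new HashMap<>();

  static void register(String linkName, String resourceUri) {
    String orig = localResources.get(linkName);
    if (orig != null && !orig.equals(resourceUri)) {
      System.err.println("WARN: " + orig + " conflicts with " + resourceUri);
      return; // skip the conflicting entry rather than failing the job
    }
    localResources.put(linkName, resourceUri);
  }

  public static void main(String[] args) {
    register("something", "hdfs:///cache/a.jar");
    register("something", "hdfs:///cache/b.jar"); // warned, then ignored
    System.out.println(localResources); // {something=hdfs:///cache/a.jar}
  }
}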
[10/50] hadoop git commit: HADOOP-15316. GenericTestUtils can exceed
maxSleepTime. Contributed by Adam Antal.
Posted by zh...@apache.org.
HADOOP-15316. GenericTestUtils can exceed maxSleepTime. Contributed by Adam Antal.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4f3f9391
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4f3f9391
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4f3f9391
Branch: refs/heads/HDFS-13572
Commit: 4f3f9391b035d7f7e285c332770c6c1ede9a5a85
Parents: b37074b
Author: Sean Mackrory <ma...@apache.org>
Authored: Thu Jul 12 16:45:07 2018 +0200
Committer: Sean Mackrory <ma...@apache.org>
Committed: Thu Jul 12 17:24:01 2018 +0200
----------------------------------------------------------------------
.../src/test/java/org/apache/hadoop/test/GenericTestUtils.java | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4f3f9391/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
index 3e9da1b..0112894 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
@@ -661,7 +661,7 @@ public abstract class GenericTestUtils {
public Object answer(InvocationOnMock invocation) throws Throwable {
boolean interrupted = false;
try {
- Thread.sleep(r.nextInt(maxSleepTime) + minSleepTime);
+ Thread.sleep(r.nextInt(maxSleepTime - minSleepTime) + minSleepTime);
} catch (InterruptedException ie) {
interrupted = true;
}
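The off-by-range bug is clearer with concrete numbers: Random.nextInt(bound) returns values in [0, bound), so the old expression slept in [minSleepTime, maxSleepTime + minSleepTime) and could overshoot maxSleepTime by almost minSleepTime. A minimal sketch, independent of the GenericTestUtils code:

import java.util.Random;

// With min=100 and max=500, the old expression can yield up to 599 ms,
// while the fixed expression stays within [100, 499].
public class SleepBoundsSketch {
  public static void main(String[] args) {
    Random r = new Random();
    int minSleepTime = 100;
    int maxSleepTime = 500;

    int oldSleep = r.nextInt(maxSleepTime) + minSleepTime;                // [100, 599]
    int newSleep = r.nextInt(maxSleepTime - minSleepTime) + minSleepTime; // [100, 499]

    System.out.println("old sample: " + oldSleep
        + " (upper bound " + (maxSleepTime + minSleepTime - 1) + ")");
    System.out.println("new sample: " + newSleep
        + " (upper bound " + (maxSleepTime - 1) + ")");
  }
}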
[23/50] hadoop git commit: HDDS-251. Integrate BlockDeletingService
in KeyValueHandler. Contributed by Lokesh Jain
Posted by zh...@apache.org.
HDDS-251. Integrate BlockDeletingService in KeyValueHandler. Contributed by Lokesh Jain
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0927bc4f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0927bc4f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0927bc4f
Branch: refs/heads/HDFS-13572
Commit: 0927bc4f76c6803be31855835a9191ab7ed47bc7
Parents: 4523cc5
Author: Bharat Viswanadham <bh...@apache.org>
Authored: Sun Jul 15 10:34:00 2018 -0700
Committer: Bharat Viswanadham <bh...@apache.org>
Committed: Sun Jul 15 10:34:00 2018 -0700
----------------------------------------------------------------------
.../apache/hadoop/hdds/scm/ScmConfigKeys.java | 5 +-
.../common/src/main/resources/ozone-default.xml | 4 +-
.../container/common/impl/ContainerSet.java | 16 +++--
.../container/common/interfaces/Container.java | 2 +-
.../ContainerDeletionChoosingPolicy.java | 1 -
.../container/keyvalue/KeyValueHandler.java | 24 ++++++++
.../background/BlockDeletingService.java | 11 +++-
.../common/TestBlockDeletingService.java | 26 ++++++--
.../TestContainerDeletionChoosingPolicy.java | 62 ++++++++------------
.../commandhandler/TestBlockDeletion.java | 14 ++---
.../TestCloseContainerByPipeline.java | 18 +++---
.../TestCloseContainerHandler.java | 12 ++--
.../ozone/om/TestContainerReportWithKeys.java | 11 ++--
.../hadoop/ozone/web/client/TestKeys.java | 2 -
14 files changed, 117 insertions(+), 91 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0927bc4f/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
index 9725d2c..46eb8aa 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/ScmConfigKeys.java
@@ -231,8 +231,9 @@ public final class ScmConfigKeys {
"ozone.scm.container.provision_batch_size";
public static final int OZONE_SCM_CONTAINER_PROVISION_BATCH_SIZE_DEFAULT = 20;
- public static final String OZONE_SCM_CONTAINER_DELETION_CHOOSING_POLICY =
- "ozone.scm.container.deletion-choosing.policy";
+ public static final String
+ OZONE_SCM_KEY_VALUE_CONTAINER_DELETION_CHOOSING_POLICY =
+ "ozone.scm.keyvalue.container.deletion-choosing.policy";
public static final String OZONE_SCM_CONTAINER_CREATION_LEASE_TIMEOUT =
"ozone.scm.container.creation.lease.timeout";
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0927bc4f/hadoop-hdds/common/src/main/resources/ozone-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index 1b6fb33..da3870e 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -541,13 +541,13 @@
<description>The port number of the Ozone SCM client service.</description>
</property>
<property>
- <name>ozone.scm.container.deletion-choosing.policy</name>
+ <name>ozone.scm.keyvalue.container.deletion-choosing.policy</name>
<value>
org.apache.hadoop.ozone.container.common.impl.TopNOrderedContainerDeletionChoosingPolicy
</value>
<tag>OZONE, MANAGEMENT</tag>
<description>
- The policy used for choosing desire containers for block deletion.
+ The policy used for choosing desired keyvalue containers for block deletion.
Datanode selects some containers to process block deletion
in a certain interval defined by ozone.block.deleting.service.interval.
The number of containers to process in each interval is defined
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0927bc4f/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java
index bcba8c8..7a6cb2d 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java
@@ -30,6 +30,8 @@ import org.apache.hadoop.hdds.protocol.proto
import org.apache.hadoop.hdds.scm.container.common.helpers
.StorageContainerException;
import org.apache.hadoop.ozone.container.common.interfaces.Container;
+import org.apache.hadoop.ozone.container.common
+ .interfaces.ContainerDeletionChoosingPolicy;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -247,9 +249,15 @@ public class ContainerSet {
return state;
}
- // TODO: Implement BlockDeletingService
- public List<ContainerData> chooseContainerForBlockDeletion(
- int count) throws StorageContainerException {
- return null;
+ public List<ContainerData> chooseContainerForBlockDeletion(int count,
+ ContainerDeletionChoosingPolicy deletionPolicy)
+ throws StorageContainerException {
+ Map<Long, ContainerData> containerDataMap = containerMap.entrySet().stream()
+ .filter(e -> e.getValue().getContainerType()
+ == ContainerProtos.ContainerType.KeyValueContainer)
+ .collect(Collectors.toMap(Map.Entry::getKey,
+ e -> e.getValue().getContainerData()));
+ return deletionPolicy
+ .chooseContainerForBlockDeletion(count, containerDataMap);
}
}
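With the policy passed in explicitly, ContainerSet no longer hard-codes the selection strategy, and any ContainerDeletionChoosingPolicy implementation can drive the choice. Here is a hypothetical policy sketched against the signature visible at the call site above; the shipped implementations are RandomContainerDeletionChoosingPolicy and TopNOrderedContainerDeletionChoosingPolicy.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.ozone.container.common.impl.ContainerData;
import org.apache.hadoop.ozone.container.common.interfaces.ContainerDeletionChoosingPolicy;

// Illustration only: choose the first `count` candidates in map order.
public class FirstNDeletionChoosingPolicy
    implements ContainerDeletionChoosingPolicy {
  @Override
  public List<ContainerData> chooseContainerForBlockDeletion(int count,
      Map<Long, ContainerData> candidateContainers) {
    List<ContainerData> result = new ArrayList<>();
    for (ContainerData data : candidateContainers.values()) {
      if (result.size() >= count) {
        break;
      }
      result.add(data);
    }
    return result;
  }
}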
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0927bc4f/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java
index 03ed7b1..fe35e1d 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/Container.java
@@ -68,7 +68,7 @@ public interface Container extends RwLock {
* @return ContainerData - Container Data.
* @throws StorageContainerException
*/
- ContainerData getContainerData() throws StorageContainerException;
+ ContainerData getContainerData();
/**
* Get the Container Lifecycle state.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0927bc4f/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/ContainerDeletionChoosingPolicy.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/ContainerDeletionChoosingPolicy.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/ContainerDeletionChoosingPolicy.java
index 2538368..dce86e9 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/ContainerDeletionChoosingPolicy.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/ContainerDeletionChoosingPolicy.java
@@ -28,7 +28,6 @@ import java.util.Map;
* This interface is used for choosing desired containers for
* block deletion.
*/
-// TODO: Fix ContainerDeletionChoosingPolicy to work with new StorageLayer
public interface ContainerDeletionChoosingPolicy {
/**
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0927bc4f/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
index 3806ed6..84b3644 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
@@ -62,6 +62,8 @@ import org.apache.hadoop.ozone.container.keyvalue.impl.KeyManagerImpl;
import org.apache.hadoop.ozone.container.keyvalue.helpers.KeyUtils;
import org.apache.hadoop.ozone.container.keyvalue.interfaces.ChunkManager;
import org.apache.hadoop.ozone.container.keyvalue.interfaces.KeyManager;
+import org.apache.hadoop.ozone.container.keyvalue.statemachine
+ .background.BlockDeletingService;
import org.apache.hadoop.util.AutoCloseableLock;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -71,6 +73,7 @@ import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
+import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;
import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
@@ -90,6 +93,14 @@ import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
.Stage;
+import static org.apache.hadoop.ozone
+ .OzoneConfigKeys.OZONE_BLOCK_DELETING_SERVICE_INTERVAL;
+import static org.apache.hadoop.ozone
+ .OzoneConfigKeys.OZONE_BLOCK_DELETING_SERVICE_INTERVAL_DEFAULT;
+import static org.apache.hadoop.ozone
+ .OzoneConfigKeys.OZONE_BLOCK_DELETING_SERVICE_TIMEOUT;
+import static org.apache.hadoop.ozone
+ .OzoneConfigKeys.OZONE_BLOCK_DELETING_SERVICE_TIMEOUT_DEFAULT;
/**
* Handler for KeyValue Container type.
@@ -102,6 +113,7 @@ public class KeyValueHandler extends Handler {
private final ContainerType containerType;
private final KeyManager keyManager;
private final ChunkManager chunkManager;
+ private final BlockDeletingService blockDeletingService;
private VolumeChoosingPolicy volumeChoosingPolicy;
private final int maxContainerSizeGB;
private final AutoCloseableLock handlerLock;
@@ -113,6 +125,18 @@ public class KeyValueHandler extends Handler {
containerType = ContainerType.KeyValueContainer;
keyManager = new KeyManagerImpl(config);
chunkManager = new ChunkManagerImpl();
+ long svcInterval = config
+ .getTimeDuration(OZONE_BLOCK_DELETING_SERVICE_INTERVAL,
+ OZONE_BLOCK_DELETING_SERVICE_INTERVAL_DEFAULT,
+ TimeUnit.MILLISECONDS);
+ long serviceTimeout = config
+ .getTimeDuration(OZONE_BLOCK_DELETING_SERVICE_TIMEOUT,
+ OZONE_BLOCK_DELETING_SERVICE_TIMEOUT_DEFAULT,
+ TimeUnit.MILLISECONDS);
+ this.blockDeletingService =
+ new BlockDeletingService(containerSet, svcInterval, serviceTimeout,
+ config);
+ blockDeletingService.start();
// TODO: Add support for different volumeChoosingPolicies.
volumeChoosingPolicy = new RoundRobinVolumeChoosingPolicy();
maxContainerSizeGB = config.getInt(ScmConfigKeys
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0927bc4f/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
index 6aa54d1..151ef94 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
@@ -19,10 +19,14 @@
package org.apache.hadoop.ozone.container.keyvalue.statemachine.background;
import com.google.common.collect.Lists;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
import org.apache.hadoop.ozone.container.common.impl.ContainerData;
import org.apache.hadoop.ozone.container.common.impl.ContainerSet;
+import org.apache.hadoop.ozone.container.common.impl.TopNOrderedContainerDeletionChoosingPolicy;
+import org.apache.hadoop.ozone.container.common.interfaces.ContainerDeletionChoosingPolicy;
import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData;
import org.apache.hadoop.ozone.container.keyvalue.helpers.KeyUtils;
+import org.apache.hadoop.util.ReflectionUtils;
import org.apache.ratis.shaded.com.google.protobuf
.InvalidProtocolBufferException;
import org.apache.commons.io.FileUtils;
@@ -69,6 +73,7 @@ public class BlockDeletingService extends BackgroundService{
LoggerFactory.getLogger(BlockDeletingService.class);
ContainerSet containerSet;
+ private ContainerDeletionChoosingPolicy containerDeletionPolicy;
private final Configuration conf;
// Throttle number of blocks to delete per task,
@@ -89,6 +94,10 @@ public class BlockDeletingService extends BackgroundService{
TimeUnit.MILLISECONDS, BLOCK_DELETING_SERVICE_CORE_POOL_SIZE,
serviceTimeout);
this.containerSet = containerSet;
+ containerDeletionPolicy = ReflectionUtils.newInstance(conf.getClass(
+ ScmConfigKeys.OZONE_SCM_KEY_VALUE_CONTAINER_DELETION_CHOOSING_POLICY,
+ TopNOrderedContainerDeletionChoosingPolicy.class,
+ ContainerDeletionChoosingPolicy.class), conf);
this.conf = conf;
this.blockLimitPerTask = conf.getInt(
OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER,
@@ -110,7 +119,7 @@ public class BlockDeletingService extends BackgroundService{
// The chosen result depends on what container deletion policy is
// configured.
containers = containerSet.chooseContainerForBlockDeletion(
- containerLimitPerInterval);
+ containerLimitPerInterval, containerDeletionPolicy);
LOG.info("Plan to choose {} containers for block deletion, "
+ "actually returns {} valid containers.",
containerLimitPerInterval, containers.size());
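Operationally, the policy class is selected through the renamed key and resolved reflectively by the constructor above, defaulting to TopNOrderedContainerDeletionChoosingPolicy. A minimal sketch of overriding it in code, mirroring what ozone-default.xml and the tests below do:

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.scm.ScmConfigKeys;

// Point the deletion service at a specific choosing policy; the
// BlockDeletingService constructor instantiates this class via
// ReflectionUtils when the service starts.
public class DeletionPolicyConfigSketch {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    conf.set(
        ScmConfigKeys.OZONE_SCM_KEY_VALUE_CONTAINER_DELETION_CHOOSING_POLICY,
        "org.apache.hadoop.ozone.container.common.impl."
            + "TopNOrderedContainerDeletionChoosingPolicy");
  }
}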
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0927bc4f/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
index 724a682..1ddd39a 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
@@ -21,17 +21,16 @@ import com.google.common.collect.Lists;
import org.apache.commons.io.FileUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.client.BlockID;
-import org.apache.hadoop.hdds.scm.TestUtils;
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
-import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
-import org.apache.hadoop.ozone.OzoneConfigKeys;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.OzoneConsts;
import org.apache.hadoop.ozone.container.ContainerTestHelper;
import org.apache.hadoop.ozone.container.common.impl.ContainerData;
import org.apache.hadoop.ozone.container.common.impl.ContainerSet;
import org.apache.hadoop.ozone.container.common.interfaces.Container;
+import org.apache.hadoop.ozone.container.common.volume.RoundRobinVolumeChoosingPolicy;
+import org.apache.hadoop.ozone.container.common.volume.VolumeSet;
import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer;
import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData;
import org.apache.hadoop.ozone.container.keyvalue.helpers.KeyUtils;
@@ -47,7 +46,6 @@ import org.apache.hadoop.utils.BackgroundService;
import org.apache.hadoop.utils.MetadataKeyFilters;
import org.apache.hadoop.utils.MetadataStore;
import org.junit.Assert;
-import org.junit.Ignore;
import org.junit.Test;
import org.junit.BeforeClass;
import org.junit.Before;
@@ -58,9 +56,9 @@ import org.slf4j.LoggerFactory;
import java.io.File;
import java.io.IOException;
import java.nio.charset.Charset;
-import java.util.LinkedList;
import java.util.List;
import java.util.Map;
+import java.util.UUID;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
@@ -75,7 +73,6 @@ import static org.apache.hadoop.ozone.OzoneConfigKeys
* Tests to test block deleting service.
*/
// TODO: Fix BlockDeletingService to work with new StorageLayer
-@Ignore
public class TestBlockDeletingService {
private static final Logger LOG =
@@ -120,6 +117,8 @@ public class TestBlockDeletingService {
KeyValueContainerData data = new KeyValueContainerData(containerID,
ContainerTestHelper.CONTAINER_MAX_SIZE_GB);
Container container = new KeyValueContainer(data, conf);
+ container.create(new VolumeSet(UUID.randomUUID().toString(), conf),
+ new RoundRobinVolumeChoosingPolicy(), UUID.randomUUID().toString());
containerSet.addContainer(container);
data = (KeyValueContainerData) containerSet.getContainer(
containerID).getContainerData();
@@ -188,6 +187,9 @@ public class TestBlockDeletingService {
@Test
public void testBlockDeletion() throws Exception {
Configuration conf = new OzoneConfiguration();
+ conf.set(
+ ScmConfigKeys.OZONE_SCM_KEY_VALUE_CONTAINER_DELETION_CHOOSING_POLICY,
+ RandomContainerDeletionChoosingPolicy.class.getName());
conf.setInt(OZONE_BLOCK_DELETING_CONTAINER_LIMIT_PER_INTERVAL, 10);
conf.setInt(OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER, 2);
ContainerSet containerSet = new ContainerSet();
@@ -236,6 +238,9 @@ public class TestBlockDeletingService {
@Test
public void testShutdownService() throws Exception {
Configuration conf = new OzoneConfiguration();
+ conf.set(
+ ScmConfigKeys.OZONE_SCM_KEY_VALUE_CONTAINER_DELETION_CHOOSING_POLICY,
+ RandomContainerDeletionChoosingPolicy.class.getName());
conf.setTimeDuration(OZONE_BLOCK_DELETING_SERVICE_INTERVAL, 500,
TimeUnit.MILLISECONDS);
conf.setInt(OZONE_BLOCK_DELETING_CONTAINER_LIMIT_PER_INTERVAL, 10);
@@ -264,6 +269,9 @@ public class TestBlockDeletingService {
@Test
public void testBlockDeletionTimeout() throws Exception {
Configuration conf = new OzoneConfiguration();
+ conf.set(
+ ScmConfigKeys.OZONE_SCM_KEY_VALUE_CONTAINER_DELETION_CHOOSING_POLICY,
+ RandomContainerDeletionChoosingPolicy.class.getName());
conf.setInt(OZONE_BLOCK_DELETING_CONTAINER_LIMIT_PER_INTERVAL, 10);
conf.setInt(OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER, 2);
ContainerSet containerSet = new ContainerSet();
@@ -333,6 +341,9 @@ public class TestBlockDeletingService {
// 1 block from 1 container can be deleted.
Configuration conf = new OzoneConfiguration();
// Process 1 container per interval
+ conf.set(
+ ScmConfigKeys.OZONE_SCM_KEY_VALUE_CONTAINER_DELETION_CHOOSING_POLICY,
+ RandomContainerDeletionChoosingPolicy.class.getName());
conf.setInt(OZONE_BLOCK_DELETING_CONTAINER_LIMIT_PER_INTERVAL, 1);
conf.setInt(OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER, 1);
ContainerSet containerSet = new ContainerSet();
@@ -366,6 +377,9 @@ public class TestBlockDeletingService {
// per container can be actually deleted. So it requires 2 waves
// to cleanup all blocks.
Configuration conf = new OzoneConfiguration();
+ conf.set(
+ ScmConfigKeys.OZONE_SCM_KEY_VALUE_CONTAINER_DELETION_CHOOSING_POLICY,
+ RandomContainerDeletionChoosingPolicy.class.getName());
conf.setInt(OZONE_BLOCK_DELETING_CONTAINER_LIMIT_PER_INTERVAL, 10);
conf.setInt(OZONE_BLOCK_DELETING_LIMIT_PER_CONTAINER, 2);
ContainerSet containerSet = new ContainerSet();
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0927bc4f/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerDeletionChoosingPolicy.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerDeletionChoosingPolicy.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerDeletionChoosingPolicy.java
index c161551..b2e4c9a 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerDeletionChoosingPolicy.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerDeletionChoosingPolicy.java
@@ -27,29 +27,22 @@ import java.util.Random;
import org.apache.commons.io.FileUtils;
import org.apache.commons.lang3.RandomUtils;
-import org.apache.hadoop.hdds.scm.TestUtils;
-import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdfs.server.datanode.StorageLocation;
-import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.ozone.OzoneConfigKeys;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
-import org.apache.hadoop.ozone.OzoneConsts;
import org.apache.hadoop.hdds.scm.ScmConfigKeys;
import org.apache.hadoop.ozone.container.ContainerTestHelper;
+import org.apache.hadoop.ozone.container.common.interfaces.ContainerDeletionChoosingPolicy;
import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer;
import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainerData;
-import org.apache.hadoop.ozone.container.keyvalue.helpers.KeyUtils;
import org.apache.hadoop.test.GenericTestUtils;
-import org.apache.hadoop.utils.MetadataStore;
import org.junit.Assert;
import org.junit.Before;
-import org.junit.Ignore;
import org.junit.Test;
/**
* The class for testing container deletion choosing policy.
*/
-@Ignore
public class TestContainerDeletionChoosingPolicy {
private static String path;
private static ContainerSet containerSet;
@@ -73,7 +66,8 @@ public class TestContainerDeletionChoosingPolicy {
}
Assert.assertTrue(containerDir.mkdirs());
- conf.set(ScmConfigKeys.OZONE_SCM_CONTAINER_DELETION_CHOOSING_POLICY,
+ conf.set(
+ ScmConfigKeys.OZONE_SCM_KEY_VALUE_CONTAINER_DELETION_CHOOSING_POLICY,
RandomContainerDeletionChoosingPolicy.class.getName());
List<StorageLocation> pathLists = new LinkedList<>();
pathLists.add(StorageLocation.parse(containerDir.getAbsolutePath()));
@@ -89,15 +83,17 @@ public class TestContainerDeletionChoosingPolicy {
containerSet.getContainerMap().containsKey(data.getContainerID()));
}
- List<ContainerData> result0 = containerSet
- .chooseContainerForBlockDeletion(5);
+ ContainerDeletionChoosingPolicy deletionPolicy =
+ new RandomContainerDeletionChoosingPolicy();
+ List<ContainerData> result0 =
+ containerSet.chooseContainerForBlockDeletion(5, deletionPolicy);
Assert.assertEquals(5, result0.size());
// test random choosing
List<ContainerData> result1 = containerSet
- .chooseContainerForBlockDeletion(numContainers);
+ .chooseContainerForBlockDeletion(numContainers, deletionPolicy);
List<ContainerData> result2 = containerSet
- .chooseContainerForBlockDeletion(numContainers);
+ .chooseContainerForBlockDeletion(numContainers, deletionPolicy);
boolean hasShuffled = false;
for (int i = 0; i < numContainers; i++) {
@@ -118,12 +114,12 @@ public class TestContainerDeletionChoosingPolicy {
}
Assert.assertTrue(containerDir.mkdirs());
- conf.set(ScmConfigKeys.OZONE_SCM_CONTAINER_DELETION_CHOOSING_POLICY,
+ conf.set(
+ ScmConfigKeys.OZONE_SCM_KEY_VALUE_CONTAINER_DELETION_CHOOSING_POLICY,
TopNOrderedContainerDeletionChoosingPolicy.class.getName());
List<StorageLocation> pathLists = new LinkedList<>();
pathLists.add(StorageLocation.parse(containerDir.getAbsolutePath()));
containerSet = new ContainerSet();
- DatanodeDetails datanodeDetails = TestUtils.getDatanodeDetails();
int numContainers = 10;
Random random = new Random();
@@ -131,38 +127,28 @@ public class TestContainerDeletionChoosingPolicy {
// create [numContainers + 1] containers
for (int i = 0; i <= numContainers; i++) {
long containerId = RandomUtils.nextLong();
- KeyValueContainerData data = new KeyValueContainerData(new Long(i),
- ContainerTestHelper.CONTAINER_MAX_SIZE_GB);
+ KeyValueContainerData data =
+ new KeyValueContainerData(new Long(containerId),
+ ContainerTestHelper.CONTAINER_MAX_SIZE_GB);
+ if (i != numContainers) {
+ int deletionBlocks = random.nextInt(numContainers) + 1;
+ data.incrPendingDeletionBlocks(deletionBlocks);
+ name2Count.put(containerId, deletionBlocks);
+ }
KeyValueContainer container = new KeyValueContainer(data, conf);
containerSet.addContainer(container);
Assert.assertTrue(
containerSet.getContainerMap().containsKey(containerId));
-
- // don't create deletion blocks in the last container.
- if (i == numContainers) {
- break;
- }
-
- // create random number of deletion blocks and write to container db
- int deletionBlocks = random.nextInt(numContainers) + 1;
- // record <ContainerName, DeletionCount> value
- name2Count.put(containerId, deletionBlocks);
- for (int j = 0; j <= deletionBlocks; j++) {
- MetadataStore metadata = KeyUtils.getDB(data, conf);
- String blk = "blk" + i + "-" + j;
- byte[] blkBytes = DFSUtil.string2Bytes(blk);
- metadata.put(
- DFSUtil.string2Bytes(OzoneConsts.DELETING_KEY_PREFIX + blk),
- blkBytes);
- }
}
- List<ContainerData> result0 = containerSet
- .chooseContainerForBlockDeletion(5);
+ ContainerDeletionChoosingPolicy deletionPolicy =
+ new TopNOrderedContainerDeletionChoosingPolicy();
+ List<ContainerData> result0 =
+ containerSet.chooseContainerForBlockDeletion(5, deletionPolicy);
Assert.assertEquals(5, result0.size());
List<ContainerData> result1 = containerSet
- .chooseContainerForBlockDeletion(numContainers + 1);
+ .chooseContainerForBlockDeletion(numContainers + 1, deletionPolicy);
// the empty deletion blocks container should not be chosen
Assert.assertEquals(numContainers, result1.size());
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0927bc4f/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
index c60c6c4..4ae827b 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestBlockDeletion.java
@@ -47,7 +47,6 @@ import org.apache.hadoop.test.GenericTestUtils;
import org.apache.hadoop.utils.MetadataStore;
import org.junit.Assert;
import org.junit.BeforeClass;
-import org.junit.Ignore;
import org.junit.Test;
import java.io.File;
@@ -58,11 +57,10 @@ import java.util.function.Consumer;
import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_BLOCK_DELETING_SERVICE_INTERVAL;
-@Ignore("Need to be fixed according to ContainerIO")
public class TestBlockDeletion {
private static OzoneConfiguration conf = null;
private static ObjectStore store;
- private static ContainerSet dnContainerManager = null;
+ private static ContainerSet dnContainerSet = null;
private static StorageContainerManager scm = null;
private static OzoneManager om = null;
private static Set<Long> containerIdsWithDeletedBlocks;
@@ -88,7 +86,7 @@ public class TestBlockDeletion {
MiniOzoneCluster.newBuilder(conf).setNumDatanodes(1).build();
cluster.waitForClusterToBeReady();
store = OzoneClientFactory.getRpcClient(conf).getObjectStore();
- dnContainerManager = cluster.getHddsDatanodes().get(0)
+ dnContainerSet = cluster.getHddsDatanodes().get(0)
.getDatanodeStateMachine().getContainer().getContainerSet();
om = cluster.getOzoneManager();
scm = cluster.getStorageContainerManager();
@@ -140,7 +138,7 @@ public class TestBlockDeletion {
private void matchContainerTransactionIds() throws IOException {
List<ContainerData> containerDataList = new ArrayList<>();
- dnContainerManager.listContainer(0, 10000, containerDataList);
+ dnContainerSet.listContainer(0, 10000, containerDataList);
for (ContainerData containerData : containerDataList) {
long containerId = containerData.getContainerID();
if (containerIdsWithDeletedBlocks.contains(containerId)) {
@@ -150,7 +148,7 @@ public class TestBlockDeletion {
Assert.assertEquals(
scm.getContainerInfo(containerId).getDeleteTransactionId(), 0);
}
- Assert.assertEquals(dnContainerManager.getContainer(containerId)
+ Assert.assertEquals(dnContainerSet.getContainer(containerId)
.getContainerData().getDeleteTransactionId(),
scm.getContainerInfo(containerId).getDeleteTransactionId());
}
@@ -162,7 +160,7 @@ public class TestBlockDeletion {
return performOperationOnKeyContainers((blockID) -> {
try {
MetadataStore db = KeyUtils.getDB((KeyValueContainerData)
- dnContainerManager.getContainer(blockID.getContainerID())
+ dnContainerSet.getContainer(blockID.getContainerID())
.getContainerData(), conf);
Assert.assertNotNull(db.get(Longs.toByteArray(blockID.getLocalID())));
} catch (IOException e) {
@@ -177,7 +175,7 @@ public class TestBlockDeletion {
return performOperationOnKeyContainers((blockID) -> {
try {
MetadataStore db = KeyUtils.getDB((KeyValueContainerData)
- dnContainerManager.getContainer(blockID.getContainerID())
+ dnContainerSet.getContainer(blockID.getContainerID())
.getContainerData(), conf);
Assert.assertNull(db.get(Longs.toByteArray(blockID.getLocalID())));
Assert.assertNull(db.get(DFSUtil.string2Bytes(
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0927bc4f/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java
index 30b18c2..61bd935 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerByPipeline.java
@@ -252,17 +252,13 @@ public class TestCloseContainerByPipeline {
private Boolean isContainerClosed(MiniOzoneCluster cluster, long containerID,
DatanodeDetails datanode) {
ContainerData containerData;
- try {
- for (HddsDatanodeService datanodeService : cluster.getHddsDatanodes())
- if (datanode.equals(datanodeService.getDatanodeDetails())) {
- containerData =
- datanodeService.getDatanodeStateMachine().getContainer()
- .getContainerSet().getContainer(containerID).getContainerData();
- return !containerData.isOpen();
- }
- } catch (StorageContainerException e) {
- throw new AssertionError(e);
- }
+ for (HddsDatanodeService datanodeService : cluster.getHddsDatanodes())
+ if (datanode.equals(datanodeService.getDatanodeDetails())) {
+ containerData =
+ datanodeService.getDatanodeStateMachine().getContainer()
+ .getContainerSet().getContainer(containerID).getContainerData();
+ return !containerData.isOpen();
+ }
return false;
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0927bc4f/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java
index 682bd63..c0c9bc4 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/TestCloseContainerHandler.java
@@ -102,14 +102,10 @@ public class TestCloseContainerHandler {
private Boolean isContainerClosed(MiniOzoneCluster cluster,
long containerID) {
ContainerData containerData;
- try {
- containerData = cluster.getHddsDatanodes().get(0)
- .getDatanodeStateMachine().getContainer().getContainerSet()
- .getContainer(containerID).getContainerData();
- return !containerData.isOpen();
- } catch (StorageContainerException e) {
- throw new AssertionError(e);
- }
+ containerData = cluster.getHddsDatanodes().get(0)
+ .getDatanodeStateMachine().getContainer().getContainerSet()
+ .getContainer(containerID).getContainerData();
+ return !containerData.isOpen();
}
}
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0927bc4f/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestContainerReportWithKeys.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestContainerReportWithKeys.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestContainerReportWithKeys.java
index c25b00e..c66b3de 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestContainerReportWithKeys.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestContainerReportWithKeys.java
@@ -131,13 +131,10 @@ public class TestContainerReportWithKeys {
private static ContainerData getContainerData(long containerID) {
ContainerData containerData;
- try {
- ContainerSet containerManager = cluster.getHddsDatanodes().get(0)
- .getDatanodeStateMachine().getContainer().getContainerSet();
- containerData = containerManager.getContainer(containerID).getContainerData();
- } catch (StorageContainerException e) {
- throw new AssertionError(e);
- }
+ ContainerSet containerManager = cluster.getHddsDatanodes().get(0)
+ .getDatanodeStateMachine().getContainer().getContainerSet();
+ containerData =
+ containerManager.getContainer(containerID).getContainerData();
return containerData;
}
}
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0927bc4f/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
index c144db2..540a564 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
@@ -67,7 +67,6 @@ import org.apache.log4j.Logger;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
-import org.junit.Ignore;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;
@@ -663,7 +662,6 @@ public class TestKeys {
}
@Test
- @Ignore("Needs to be fixed for new SCM and Storage design")
public void testDeleteKey() throws Exception {
OzoneManager ozoneManager = ozoneCluster.getOzoneManager();
// To avoid interference from other test cases,
[48/50] hadoop git commit: YARN-7300. DiskValidator is not used in
LocalDirAllocator. (Szilard Nemeth via Haibo Chen)
Posted by zh...@apache.org.
YARN-7300. DiskValidator is not used in LocalDirAllocator. (Szilard Nemeth via Haibo Chen)
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e6873dfd
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e6873dfd
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e6873dfd
Branch: refs/heads/HDFS-13572
Commit: e6873dfde057e63ce5efa91f3061db3ee1b2e236
Parents: f354f47
Author: Haibo Chen <ha...@apache.org>
Authored: Thu Jul 19 16:27:11 2018 -0700
Committer: Haibo Chen <ha...@apache.org>
Committed: Thu Jul 19 16:27:11 2018 -0700
----------------------------------------------------------------------
.../org/apache/hadoop/fs/LocalDirAllocator.java | 28 +++++++++++++++-----
.../nodemanager/LocalDirsHandlerService.java | 27 ++++++++++++++-----
2 files changed, 42 insertions(+), 13 deletions(-)
----------------------------------------------------------------------
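In short, LocalDirAllocator now owns a DiskValidator: the one-arg constructor falls back to BasicDiskValidator, while callers such as LocalDirsHandlerService pass in a validator built from the configured validator name. A minimal sketch of the new wiring, using only names that appear in this patch (the wrapper class is illustrative):

// Sketch: build a validator by name and hand it to the allocator, as
// LocalDirsHandlerService now does. getInstance() throws
// DiskErrorException when the named validator cannot be created.
import org.apache.hadoop.fs.LocalDirAllocator;
import org.apache.hadoop.util.DiskChecker.DiskErrorException;
import org.apache.hadoop.util.DiskValidator;
import org.apache.hadoop.util.DiskValidatorFactory;

class AllocatorWiringSketch {
  static LocalDirAllocator newAllocator(String contextKey,
      String validatorName) throws DiskErrorException {
    DiskValidator validator = DiskValidatorFactory.getInstance(validatorName);
    return new LocalDirAllocator(contextKey, validator);
  }
}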
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e6873dfd/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
index 1c216f4..a4b158a 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java
@@ -24,8 +24,6 @@ import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
import org.apache.hadoop.util.*;
-import org.apache.hadoop.fs.FileSystem;
-import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.DiskChecker.DiskErrorException;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
@@ -78,11 +76,25 @@ public class LocalDirAllocator {
/** Used when size of file to be allocated is unknown. */
public static final int SIZE_UNKNOWN = -1;
+ private final DiskValidator diskValidator;
+
/**Create an allocator object
* @param contextCfgItemName
*/
public LocalDirAllocator(String contextCfgItemName) {
this.contextCfgItemName = contextCfgItemName;
+ try {
+ this.diskValidator = DiskValidatorFactory.getInstance(
+ BasicDiskValidator.NAME);
+ } catch (DiskErrorException e) {
+ throw new RuntimeException(e);
+ }
+ }
+
+ public LocalDirAllocator(String contextCfgItemName,
+ DiskValidator diskValidator) {
+ this.contextCfgItemName = contextCfgItemName;
+ this.diskValidator = diskValidator;
}
/** This method must be used to obtain the dir allocation context for a
@@ -96,7 +108,8 @@ public class LocalDirAllocator {
AllocatorPerContext l = contexts.get(contextCfgItemName);
if (l == null) {
contexts.put(contextCfgItemName,
- (l = new AllocatorPerContext(contextCfgItemName)));
+ (l = new AllocatorPerContext(contextCfgItemName,
+ diskValidator)));
}
return l;
}
@@ -255,6 +268,7 @@ public class LocalDirAllocator {
// NOTE: the context must be accessed via a local reference as it
// may be updated at any time to reference a different context
private AtomicReference<Context> currentContext;
+ private final DiskValidator diskValidator;
private static class Context {
private AtomicInteger dirNumLastAccessed = new AtomicInteger(0);
@@ -280,9 +294,11 @@ public class LocalDirAllocator {
}
}
- public AllocatorPerContext(String contextCfgItemName) {
+ public AllocatorPerContext(String contextCfgItemName,
+ DiskValidator diskValidator) {
this.contextCfgItemName = contextCfgItemName;
this.currentContext = new AtomicReference<Context>(new Context());
+ this.diskValidator = diskValidator;
}
/** This method gets called every time before any read/write to make sure
@@ -312,7 +328,7 @@ public class LocalDirAllocator {
? new File(ctx.localFS.makeQualified(tmpDir).toUri())
: new File(dirStrings[i]);
- DiskChecker.checkDir(tmpFile);
+ diskValidator.checkStatus(tmpFile);
dirs.add(new Path(tmpFile.getPath()));
dfList.add(new DF(tmpFile, 30000));
} catch (DiskErrorException de) {
@@ -348,7 +364,7 @@ public class LocalDirAllocator {
//check whether we are able to create a directory here. If the disk
//happens to be RDONLY we will fail
try {
- DiskChecker.checkDir(new File(file.getParent().toUri().getPath()));
+ diskValidator.checkStatus(new File(file.getParent().toUri().getPath()));
return file;
} catch (DiskErrorException d) {
LOG.warn("Disk Error Exception: ", d);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/e6873dfd/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
index 621cabc..6eabd0d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
@@ -27,6 +27,9 @@ import java.util.List;
import java.util.Set;
import java.util.Timer;
import java.util.TimerTask;
+import org.apache.hadoop.util.DiskChecker.DiskErrorException;
+import org.apache.hadoop.util.DiskValidator;
+import org.apache.hadoop.util.DiskValidatorFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -155,13 +158,23 @@ public class LocalDirsHandlerService extends AbstractService {
String local = conf.get(YarnConfiguration.NM_LOCAL_DIRS);
conf.set(NM_GOOD_LOCAL_DIRS,
(local != null) ? local : "");
- localDirsAllocator = new LocalDirAllocator(
- NM_GOOD_LOCAL_DIRS);
- String log = conf.get(YarnConfiguration.NM_LOG_DIRS);
- conf.set(NM_GOOD_LOG_DIRS,
- (log != null) ? log : "");
- logDirsAllocator = new LocalDirAllocator(
- NM_GOOD_LOG_DIRS);
+ String diskValidatorName = conf.get(YarnConfiguration.DISK_VALIDATOR,
+ YarnConfiguration.DEFAULT_DISK_VALIDATOR);
+ try {
+ DiskValidator diskValidator =
+ DiskValidatorFactory.getInstance(diskValidatorName);
+ localDirsAllocator = new LocalDirAllocator(
+ NM_GOOD_LOCAL_DIRS, diskValidator);
+ String log = conf.get(YarnConfiguration.NM_LOG_DIRS);
+ conf.set(NM_GOOD_LOG_DIRS,
+ (log != null) ? log : "");
+ logDirsAllocator = new LocalDirAllocator(
+ NM_GOOD_LOG_DIRS, diskValidator);
+ } catch (DiskErrorException e) {
+ throw new YarnRuntimeException(
+ "Failed to create DiskValidator of type " + diskValidatorName + "!",
+ e);
+ }
}
@Override
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[13/50] hadoop git commit: YARN-8518. test-container-executor
test_is_empty() is broken (Jim_Brennan via rkanter)
Posted by zh...@apache.org.
YARN-8518. test-container-executor test_is_empty() is broken (Jim_Brennan via rkanter)
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1bc106a7
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1bc106a7
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1bc106a7
Branch: refs/heads/HDFS-13572
Commit: 1bc106a738a6ce4f7ed025d556bb44c1ede022e3
Parents: 556d9b3
Author: Robert Kanter <rk...@apache.org>
Authored: Thu Jul 12 16:38:46 2018 -0700
Committer: Robert Kanter <rk...@apache.org>
Committed: Thu Jul 12 16:38:46 2018 -0700
----------------------------------------------------------------------
.../container-executor/test/test-container-executor.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
----------------------------------------------------------------------
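The fix below swaps hard-coded /tmp/... paths for paths rooted at TEST_ROOT, so the test no longer depends on (or leaks into) machine-global state. The same hygiene in Java, sketched with JUnit 4's TemporaryFolder (this class is illustrative, not part of the patch):

// Sketch: derive every test path from a per-test root rather than
// hard-coding /tmp, mirroring the TEST_ROOT change in the C test.
import java.io.File;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;

public class TempDirHygieneSketch {
  @Rule
  public TemporaryFolder tmp = new TemporaryFolder();

  @Test
  public void emptyDirIsEmptyAndMissingDirIsMissing() throws Exception {
    File emptyDir = tmp.newFolder("emptydir");         // under the test root
    File noExist = new File(tmp.getRoot(), "noexist"); // never created
    assertEquals(0, emptyDir.list().length);
    assertFalse(noExist.exists());
  }
}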
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1bc106a7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
index a199d84..5607823 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
@@ -1203,19 +1203,23 @@ void test_trim_function() {
free(trimmed);
}
+int is_empty(char *name);
+
void test_is_empty() {
printf("\nTesting is_empty function\n");
if (is_empty("/")) {
printf("FAIL: / should not be empty\n");
exit(1);
}
- if (is_empty("/tmp/2938rf2983hcqnw8ud/noexist")) {
- printf("FAIL: /tmp/2938rf2983hcqnw8ud/noexist should not exist\n");
+ char *noexist = TEST_ROOT "/noexist";
+ if (is_empty(noexist)) {
+ printf("%s should not exist\n", noexist);
exit(1);
}
- mkdir("/tmp/2938rf2983hcqnw8ud/emptydir", S_IRWXU);
- if (!is_empty("/tmp/2938rf2983hcqnw8ud/emptydir")) {
- printf("FAIL: /tmp/2938rf2983hcqnw8ud/emptydir be empty\n");
+ char *emptydir = TEST_ROOT "/emptydir";
+ mkdir(emptydir, S_IRWXU);
+ if (!is_empty(emptydir)) {
+ printf("FAIL: %s should be empty\n", emptydir);
exit(1);
}
}
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[35/50] hadoop git commit: HDFS-13485. DataNode WebHDFS endpoint
throws NPE. Contributed by Siyao Meng.
Posted by zh...@apache.org.
HDFS-13485. DataNode WebHDFS endpoint throws NPE. Contributed by Siyao Meng.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d2153577
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d2153577
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d2153577
Branch: refs/heads/HDFS-13572
Commit: d2153577181f900ee6d8bf67d254e408bbaad243
Parents: 121865c
Author: Wei-Chiu Chuang <we...@apache.org>
Authored: Mon Jul 16 15:45:55 2018 -0700
Committer: Wei-Chiu Chuang <we...@apache.org>
Committed: Mon Jul 16 15:45:55 2018 -0700
----------------------------------------------------------------------
.../org/apache/hadoop/security/token/Token.java | 5 +++++
.../apache/hadoop/security/token/TestToken.java | 18 ++++++++++++++++++
2 files changed, 23 insertions(+)
----------------------------------------------------------------------
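From a caller's perspective, the change below turns a NullPointerException deep inside Base64 decoding into a fail-fast HadoopIllegalArgumentException. A minimal sketch of the now-expected behavior (the class is illustrative):

// Sketch: decodeFromUrlString(null) now fails fast with
// HadoopIllegalArgumentException instead of a NullPointerException.
import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

public class TokenNullGuardSketch {
  public static void main(String[] args) throws Exception {
    Token<TokenIdentifier> token = new Token<>();
    try {
      token.decodeFromUrlString(null);
    } catch (HadoopIllegalArgumentException expected) {
      System.out.println("fail-fast: " + expected.getMessage());
    }
  }
}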
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d2153577/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
index 33cb9ec..25aac88 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java
@@ -23,6 +23,7 @@ import com.google.protobuf.ByteString;
import com.google.common.primitives.Bytes;
import org.apache.commons.codec.binary.Base64;
+import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
import org.apache.hadoop.conf.Configuration;
@@ -358,6 +359,10 @@ public class Token<T extends TokenIdentifier> implements Writable {
*/
private static void decodeWritable(Writable obj,
String newValue) throws IOException {
+ if (newValue == null) {
+ throw new HadoopIllegalArgumentException(
+ "Invalid argument, newValue is null");
+ }
Base64 decoder = new Base64(0, null, true);
DataInputBuffer buf = new DataInputBuffer();
byte[] decoded = decoder.decode(newValue);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d2153577/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/TestToken.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/TestToken.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/TestToken.java
index f6e5133..3a3567c 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/TestToken.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/TestToken.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.security.token;
import java.io.*;
import java.util.Arrays;
+import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.io.*;
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;
import org.apache.hadoop.security.token.delegation.TestDelegationToken.TestDelegationTokenIdentifier;
@@ -100,6 +101,23 @@ public class TestToken {
}
}
+ /*
+ * Test decodeWritable() with null newValue string argument,
+ * should throw HadoopIllegalArgumentException.
+ */
+ @Test
+ public void testDecodeWritableArgSanityCheck() throws Exception {
+ Token<AbstractDelegationTokenIdentifier> token =
+ new Token<AbstractDelegationTokenIdentifier>();
+ try {
+ token.decodeFromUrlString(null);
+ fail("Should have thrown HadoopIllegalArgumentException");
+ }
+ catch (HadoopIllegalArgumentException e) {
+ Token.LOG.info("Test decodeWritable() sanity check success.");
+ }
+ }
+
@Test
public void testDecodeIdentifier() throws IOException {
TestDelegationTokenSecretManager secretManager =
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[19/50] hadoop git commit: YARN-8515. container-executor can crash
with SIGPIPE after nodemanager restart. Contributed by Jim Brennan
Posted by zh...@apache.org.
YARN-8515. container-executor can crash with SIGPIPE after nodemanager restart. Contributed by Jim Brennan
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/17118f44
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/17118f44
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/17118f44
Branch: refs/heads/HDFS-13572
Commit: 17118f446c2387aa796849da8b69a845d9d307d3
Parents: d185072
Author: Jason Lowe <jl...@apache.org>
Authored: Fri Jul 13 10:05:25 2018 -0500
Committer: Jason Lowe <jl...@apache.org>
Committed: Fri Jul 13 10:05:25 2018 -0500
----------------------------------------------------------------------
.../src/main/native/container-executor/impl/main.c | 6 ++++++
1 file changed, 6 insertions(+)
----------------------------------------------------------------------
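For comparison, the JVM already ignores SIGPIPE, which is why the same situation in Java surfaces as a catchable IOException rather than a crash. A small sketch of that behavior (Unix-only, since it spawns the "true" command; illustrative, not part of the patch):

// Sketch: writing to a pipe whose reading process has exited does not
// kill the JVM; the EPIPE comes back as an IOException instead.
import java.io.IOException;
import java.io.OutputStream;

public class BrokenPipeSketch {
  public static void main(String[] args) throws Exception {
    Process reader = new ProcessBuilder("true").start(); // exits at once
    reader.waitFor();
    try (OutputStream toChild = reader.getOutputStream()) {
      toChild.write('x');
      toChild.flush(); // EPIPE surfaces here as IOException
    } catch (IOException e) {
      System.err.println("handled broken pipe: " + e.getMessage());
    }
  }
}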
http://git-wip-us.apache.org/repos/asf/hadoop/blob/17118f44/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
index 2099ace..6ab522f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
@@ -31,6 +31,7 @@
#include <unistd.h>
#include <stdlib.h>
#include <string.h>
+#include <signal.h>
static void display_usage(FILE *stream) {
fprintf(stream,
@@ -112,6 +113,11 @@ static void open_log_files() {
if (ERRORFILE == NULL) {
ERRORFILE = stderr;
}
+
+ // There may be a process reading from stdout/stderr, and if it
+ // exits, we will crash on a SIGPIPE when we try to write to them.
+ // By ignoring SIGPIPE, we can handle the EPIPE instead of crashing.
+ signal(SIGPIPE, SIG_IGN);
}
/* Flushes and closes log files */
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[07/50] hadoop git commit: HDDS-242. Introduce NEW_NODE,
STALE_NODE and DEAD_NODE events and corresponding event handlers in
SCM. Contributed by Nanda Kumar.
Posted by zh...@apache.org.
HDDS-242. Introduce NEW_NODE, STALE_NODE and DEAD_NODE events
and corresponding event handlers in SCM.
Contributed by Nanda Kumar.
Recommitting after making sure that the patch is clean.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/632aca57
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/632aca57
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/632aca57
Branch: refs/heads/HDFS-13572
Commit: 632aca5793d391c741c0bce3d2e70ae6e03fe306
Parents: b567858
Author: Anu Engineer <ae...@apache.org>
Authored: Wed Jul 11 12:08:50 2018 -0700
Committer: Anu Engineer <ae...@apache.org>
Committed: Wed Jul 11 12:08:50 2018 -0700
----------------------------------------------------------------------
.../container/CloseContainerEventHandler.java | 7 ++-
.../hdds/scm/container/ContainerMapping.java | 5 --
.../scm/container/ContainerReportHandler.java | 47 ++++++++++++++++++
.../hadoop/hdds/scm/container/Mapping.java | 9 +---
.../scm/container/closer/ContainerCloser.java | 1 -
.../hadoop/hdds/scm/events/SCMEvents.java | 22 +++++++++
.../hadoop/hdds/scm/node/DatanodeInfo.java | 11 +++++
.../hadoop/hdds/scm/node/DeadNodeHandler.java | 42 ++++++++++++++++
.../hadoop/hdds/scm/node/NewNodeHandler.java | 50 +++++++++++++++++++
.../hadoop/hdds/scm/node/NodeManager.java | 4 +-
.../hadoop/hdds/scm/node/NodeReportHandler.java | 42 ++++++++++++++++
.../hadoop/hdds/scm/node/NodeStateManager.java | 32 +++++++++++-
.../hadoop/hdds/scm/node/SCMNodeManager.java | 24 ++++++---
.../hadoop/hdds/scm/node/StaleNodeHandler.java | 42 ++++++++++++++++
.../server/SCMDatanodeHeartbeatDispatcher.java | 20 ++++++--
.../scm/server/SCMDatanodeProtocolServer.java | 18 ++-----
.../scm/server/StorageContainerManager.java | 51 +++++++++++++++-----
.../hdds/scm/container/MockNodeManager.java | 9 ++++
.../TestCloseContainerEventHandler.java | 2 +
.../hdds/scm/node/TestContainerPlacement.java | 12 ++++-
.../hadoop/hdds/scm/node/TestNodeManager.java | 11 ++++-
.../TestSCMDatanodeHeartbeatDispatcher.java | 8 ++-
.../testutils/ReplicationNodeManagerMock.java | 7 +++
23 files changed, 417 insertions(+), 59 deletions(-)
----------------------------------------------------------------------
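The core of this patch is the typed-event plumbing wired up in StorageContainerManager below; condensed into a sketch using only types that appear in the patch (EventQueue implements EventPublisher, so the same object registers handlers and fires events; the wrapper class is illustrative):

// Sketch of the node-lifecycle event wiring added in this patch.
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.events.SCMEvents;
import org.apache.hadoop.hdds.scm.node.DeadNodeHandler;
import org.apache.hadoop.hdds.scm.node.NewNodeHandler;
import org.apache.hadoop.hdds.scm.node.StaleNodeHandler;
import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
import org.apache.hadoop.hdds.server.events.EventQueue;

public class ScmEventWiringSketch {
  public static EventQueue wire() {
    Node2ContainerMap node2ContainerMap = new Node2ContainerMap();
    EventQueue queue = new EventQueue();
    queue.addHandler(SCMEvents.NEW_NODE, new NewNodeHandler(node2ContainerMap));
    queue.addHandler(SCMEvents.STALE_NODE, new StaleNodeHandler(node2ContainerMap));
    queue.addHandler(SCMEvents.DEAD_NODE, new DeadNodeHandler(node2ContainerMap));
    return queue;
  }

  // NodeStateManager publishes through the same interface, e.g.:
  static void publish(EventQueue queue, DatanodeDetails dn) {
    queue.fireEvent(SCMEvents.NEW_NODE, dn);
  }
}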
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/CloseContainerEventHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/CloseContainerEventHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/CloseContainerEventHandler.java
index f1053d5..859e5d5 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/CloseContainerEventHandler.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/CloseContainerEventHandler.java
@@ -25,9 +25,12 @@ import org.apache.hadoop.hdds.scm.exceptions.SCMException;
import org.apache.hadoop.hdds.server.events.EventHandler;
import org.apache.hadoop.hdds.server.events.EventPublisher;
import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
+import static org.apache.hadoop.hdds.scm.events.SCMEvents.DATANODE_COMMAND;
+
/**
* In case of a node failure, volume failure, volume out of space, node
* out of space etc, CLOSE_CONTAINER will be triggered.
@@ -73,9 +76,11 @@ public class CloseContainerEventHandler implements EventHandler<ContainerID> {
if (info.getState() == HddsProtos.LifeCycleState.OPEN) {
for (DatanodeDetails datanode :
containerWithPipeline.getPipeline().getMachines()) {
- containerManager.getNodeManager().addDatanodeCommand(datanode.getUuid(),
+ CommandForDatanode closeContainerCommand = new CommandForDatanode<>(
+ datanode.getUuid(),
new CloseContainerCommand(containerID.getId(),
info.getReplicationType()));
+ publisher.fireEvent(DATANODE_COMMAND, closeContainerCommand);
}
try {
// Finalize event will make sure the state of the container transitions
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
index e25c5b4..abad32c 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
@@ -709,11 +709,6 @@ public class ContainerMapping implements Mapping {
}
}
- @Override
- public NodeManager getNodeManager() {
- return nodeManager;
- }
-
@VisibleForTesting
public MetadataStore getContainerStore() {
return containerStore;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
new file mode 100644
index 0000000..486162e
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
@@ -0,0 +1,47 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.container;
+
+import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
+ .ContainerReportFromDatanode;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+/**
+ * Handles container reports from datanode.
+ */
+public class ContainerReportHandler implements
+ EventHandler<ContainerReportFromDatanode> {
+
+ private final Mapping containerMapping;
+ private final Node2ContainerMap node2ContainerMap;
+
+ public ContainerReportHandler(Mapping containerMapping,
+ Node2ContainerMap node2ContainerMap) {
+ this.containerMapping = containerMapping;
+ this.node2ContainerMap = node2ContainerMap;
+ }
+
+ @Override
+ public void onMessage(ContainerReportFromDatanode containerReportFromDatanode,
+ EventPublisher publisher) {
+ // TODO: process container report.
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java
index f52eb05..ac84be4 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java
@@ -25,7 +25,6 @@ import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
-import org.apache.hadoop.hdds.scm.node.NodeManager;
import java.io.Closeable;
import java.io.IOException;
@@ -130,16 +129,10 @@ public interface Mapping extends Closeable {
throws IOException;
/**
- * Returns the nodeManager.
- * @return NodeManager
- */
- NodeManager getNodeManager();
-
- /**
* Returns the ContainerWithPipeline.
* @return NodeManager
*/
- public ContainerWithPipeline getMatchingContainerWithPipeline(final long size,
+ ContainerWithPipeline getMatchingContainerWithPipeline(long size,
String owner, ReplicationType type, ReplicationFactor factor,
LifeCycleState state) throws IOException;
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/closer/ContainerCloser.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/closer/ContainerCloser.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/closer/ContainerCloser.java
index 3ca8ba9..eb591be 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/closer/ContainerCloser.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/closer/ContainerCloser.java
@@ -26,7 +26,6 @@ import org.apache.hadoop.hdds.protocol.proto.HddsProtos.SCMContainerInfo;
import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
import org.apache.hadoop.hdds.scm.node.NodeManager;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
import org.apache.hadoop.util.Time;
import org.slf4j.Logger;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
index 2c9c431..0afd675 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
@@ -19,6 +19,7 @@
package org.apache.hadoop.hdds.scm.events;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.container.ContainerID;
import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.ContainerReportFromDatanode;
import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.NodeReportFromDatanode;
@@ -72,6 +73,27 @@ public final class SCMEvents {
new TypedEvent<>(ContainerID.class, "Close_Container");
/**
+ * This event will be triggered whenever a new datanode is
+ * registered with SCM.
+ */
+ public static final TypedEvent<DatanodeDetails> NEW_NODE =
+ new TypedEvent<>(DatanodeDetails.class, "New_Node");
+
+ /**
+ * This event will be triggered whenever a datanode is moved from healthy to
+ * stale state.
+ */
+ public static final TypedEvent<DatanodeDetails> STALE_NODE =
+ new TypedEvent<>(DatanodeDetails.class, "Stale_Node");
+
+ /**
+ * This event will be triggered whenever a datanode is moved from stale to
+ * dead state.
+ */
+ public static final TypedEvent<DatanodeDetails> DEAD_NODE =
+ new TypedEvent<>(DatanodeDetails.class, "Dead_Node");
+
+ /**
* Private Ctor. Never Constructed.
*/
private SCMEvents() {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
index 51465ee..6d5575b 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeInfo.java
@@ -106,4 +106,15 @@ public class DatanodeInfo extends DatanodeDetails {
lock.readLock().unlock();
}
}
+
+ @Override
+ public int hashCode() {
+ return super.hashCode();
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ return super.equals(obj);
+ }
+
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
new file mode 100644
index 0000000..427aef8
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
@@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+/**
+ * Handles Dead Node event.
+ */
+public class DeadNodeHandler implements EventHandler<DatanodeDetails> {
+
+ private final Node2ContainerMap node2ContainerMap;
+
+ public DeadNodeHandler(Node2ContainerMap node2ContainerMap) {
+ this.node2ContainerMap = node2ContainerMap;
+ }
+
+ @Override
+ public void onMessage(DatanodeDetails datanodeDetails,
+ EventPublisher publisher) {
+ //TODO: add logic to handle dead node.
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
new file mode 100644
index 0000000..79b75a5
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
@@ -0,0 +1,50 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+import java.util.Collections;
+
+/**
+ * Handles New Node event.
+ */
+public class NewNodeHandler implements EventHandler<DatanodeDetails> {
+
+ private final Node2ContainerMap node2ContainerMap;
+
+ public NewNodeHandler(Node2ContainerMap node2ContainerMap) {
+ this.node2ContainerMap = node2ContainerMap;
+ }
+
+ @Override
+ public void onMessage(DatanodeDetails datanodeDetails,
+ EventPublisher publisher) {
+ try {
+ node2ContainerMap.insertNewDatanode(datanodeDetails.getUuid(),
+ Collections.emptySet());
+ } catch (SCMException e) {
+ // TODO: log exception message.
+ }
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
index c13c37c..5e2969d 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
@@ -22,7 +22,9 @@ import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeStat;
import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
+import org.apache.hadoop.hdds.server.events.EventHandler;
import org.apache.hadoop.ozone.protocol.StorageContainerNodeProtocol;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
import java.io.Closeable;
@@ -53,7 +55,7 @@ import java.util.UUID;
* list, by calling removeNode. We will throw away this nodes info soon.
*/
public interface NodeManager extends StorageContainerNodeProtocol,
- NodeManagerMXBean, Closeable {
+ EventHandler<CommandForDatanode>, NodeManagerMXBean, Closeable {
/**
* Removes a data node from the management of this Node Manager.
*
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
new file mode 100644
index 0000000..aa78d53
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
@@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
+ .NodeReportFromDatanode;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+/**
+ * Handles Node Reports from datanode.
+ */
+public class NodeReportHandler implements EventHandler<NodeReportFromDatanode> {
+
+ private final NodeManager nodeManager;
+
+ public NodeReportHandler(NodeManager nodeManager) {
+ this.nodeManager = nodeManager;
+ }
+
+ @Override
+ public void onMessage(NodeReportFromDatanode nodeReportFromDatanode,
+ EventPublisher publisher) {
+ //TODO: process node report.
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
index 5543c04..77f939e 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeStateManager.java
@@ -24,9 +24,12 @@ import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
import org.apache.hadoop.hdds.scm.HddsServerUtil;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
import org.apache.hadoop.hdds.scm.node.states.NodeAlreadyExistsException;
import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
import org.apache.hadoop.hdds.scm.node.states.NodeStateMap;
+import org.apache.hadoop.hdds.server.events.Event;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
import org.apache.hadoop.ozone.common.statemachine
.InvalidStateTransitionException;
import org.apache.hadoop.ozone.common.statemachine.StateMachine;
@@ -36,9 +39,11 @@ import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.Closeable;
+import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
+import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ScheduledExecutorService;
@@ -87,6 +92,14 @@ public class NodeStateManager implements Runnable, Closeable {
*/
private final NodeStateMap nodeStateMap;
/**
+ * Used for publishing node state change events.
+ */
+ private final EventPublisher eventPublisher;
+ /**
* Maps the event to be triggered when a node state is updated.
+ */
+ private final Map<NodeState, Event<DatanodeDetails>> state2EventMap;
+ /**
* ExecutorService used for scheduling heartbeat processing thread.
*/
private final ScheduledExecutorService executorService;
@@ -108,8 +121,11 @@ public class NodeStateManager implements Runnable, Closeable {
*
* @param conf Configuration
*/
- public NodeStateManager(Configuration conf) {
- nodeStateMap = new NodeStateMap();
+ public NodeStateManager(Configuration conf, EventPublisher eventPublisher) {
+ this.nodeStateMap = new NodeStateMap();
+ this.eventPublisher = eventPublisher;
+ this.state2EventMap = new HashMap<>();
+ initialiseState2EventMap();
Set<NodeState> finalStates = new HashSet<>();
finalStates.add(NodeState.DECOMMISSIONED);
this.stateMachine = new StateMachine<>(NodeState.HEALTHY, finalStates);
@@ -130,6 +146,14 @@ public class NodeStateManager implements Runnable, Closeable {
TimeUnit.MILLISECONDS);
}
+ /**
+ * Populates state2event map.
+ */
+ private void initialiseState2EventMap() {
+ state2EventMap.put(NodeState.STALE, SCMEvents.STALE_NODE);
+ state2EventMap.put(NodeState.DEAD, SCMEvents.DEAD_NODE);
+ }
+
/*
*
* Node and State Transition Mapping:
@@ -220,6 +244,7 @@ public class NodeStateManager implements Runnable, Closeable {
public void addNode(DatanodeDetails datanodeDetails)
throws NodeAlreadyExistsException {
nodeStateMap.addNode(datanodeDetails, stateMachine.getInitialState());
+ eventPublisher.fireEvent(SCMEvents.NEW_NODE, datanodeDetails);
}
/**
@@ -548,6 +573,9 @@ public class NodeStateManager implements Runnable, Closeable {
if (condition.test(node.getLastHeartbeatTime())) {
NodeState newState = stateMachine.getNextState(state, lifeCycleEvent);
nodeStateMap.updateNodeState(node.getUuid(), state, newState);
+ if (state2EventMap.containsKey(newState)) {
+ eventPublisher.fireEvent(state2EventMap.get(newState), node);
+ }
}
} catch (InvalidStateTransitionException e) {
LOG.warn("Invalid state transition of node {}." +
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
index d787d14..2ba8067 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
@@ -25,7 +25,6 @@ import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
import org.apache.hadoop.hdds.scm.VersionInfo;
import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeStat;
-import org.apache.hadoop.hdds.server.events.EventHandler;
import org.apache.hadoop.hdds.server.events.EventPublisher;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
@@ -78,8 +77,7 @@ import java.util.concurrent.atomic.AtomicBoolean;
* as soon as you read it.
*/
public class SCMNodeManager
- implements NodeManager, StorageContainerNodeProtocol,
- EventHandler<CommandForDatanode> {
+ implements NodeManager, StorageContainerNodeProtocol {
@VisibleForTesting
static final Logger LOG =
@@ -117,14 +115,13 @@ public class SCMNodeManager
// Node pool manager.
private final StorageContainerManager scmManager;
-
-
/**
* Constructs SCM machine Manager.
*/
public SCMNodeManager(OzoneConfiguration conf, String clusterID,
- StorageContainerManager scmManager) throws IOException {
- this.nodeStateManager = new NodeStateManager(conf);
+ StorageContainerManager scmManager, EventPublisher eventPublisher)
+ throws IOException {
+ this.nodeStateManager = new NodeStateManager(conf, eventPublisher);
this.nodeStats = new ConcurrentHashMap<>();
this.scmStat = new SCMNodeStat();
this.clusterID = clusterID;
@@ -462,14 +459,25 @@ public class SCMNodeManager
return nodeCountMap;
}
+ // TODO:
+ // Since datanode commands are added through event queue, onMessage method
+ // should take care of adding commands to command queue.
+ // Refactor and remove all the usage of this method and delete this method.
@Override
public void addDatanodeCommand(UUID dnId, SCMCommand command) {
this.commandQueue.addCommand(dnId, command);
}
+ /**
+ * This method is called by EventQueue whenever someone adds a new
+ * DATANODE_COMMAND to the Queue.
+ *
+ * @param commandForDatanode DatanodeCommand
+ * @param ignored publisher
+ */
@Override
public void onMessage(CommandForDatanode commandForDatanode,
- EventPublisher publisher) {
+ EventPublisher ignored) {
addDatanodeCommand(commandForDatanode.getDatanodeId(),
commandForDatanode.getCommand());
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java
new file mode 100644
index 0000000..b37dd93
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java
@@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.node;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
+import org.apache.hadoop.hdds.server.events.EventHandler;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+
+/**
+ * Handles Stale node event.
+ */
+public class StaleNodeHandler implements EventHandler<DatanodeDetails> {
+
+ private final Node2ContainerMap node2ContainerMap;
+
+ public StaleNodeHandler(Node2ContainerMap node2ContainerMap) {
+ this.node2ContainerMap = node2ContainerMap;
+ }
+
+ @Override
+ public void onMessage(DatanodeDetails datanodeDetails,
+ EventPublisher publisher) {
+ //TODO: logic to handle stale node.
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
index a6354af..4cfa98f 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
@@ -17,6 +17,7 @@
package org.apache.hadoop.hdds.scm.server;
+import com.google.common.base.Preconditions;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
@@ -24,12 +25,16 @@ import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.NodeReportProto;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMHeartbeatRequestProto;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
import org.apache.hadoop.hdds.server.events.EventPublisher;
import com.google.protobuf.GeneratedMessage;
+import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
+import java.util.List;
+
import static org.apache.hadoop.hdds.scm.events.SCMEvents.CONTAINER_REPORT;
import static org.apache.hadoop.hdds.scm.events.SCMEvents.NODE_REPORT;
@@ -42,10 +47,15 @@ public final class SCMDatanodeHeartbeatDispatcher {
private static final Logger LOG =
LoggerFactory.getLogger(SCMDatanodeHeartbeatDispatcher.class);
- private EventPublisher eventPublisher;
+ private final NodeManager nodeManager;
+ private final EventPublisher eventPublisher;
- public SCMDatanodeHeartbeatDispatcher(EventPublisher eventPublisher) {
+ public SCMDatanodeHeartbeatDispatcher(NodeManager nodeManager,
+ EventPublisher eventPublisher) {
+ Preconditions.checkNotNull(nodeManager);
+ Preconditions.checkNotNull(eventPublisher);
+ this.nodeManager = nodeManager;
this.eventPublisher = eventPublisher;
}
@@ -54,11 +64,14 @@ public final class SCMDatanodeHeartbeatDispatcher {
* Dispatches heartbeat to registered event handlers.
*
* @param heartbeat heartbeat to be dispatched.
+ *
+ * @return list of SCMCommand
*/
- public void dispatch(SCMHeartbeatRequestProto heartbeat) {
+ public List<SCMCommand> dispatch(SCMHeartbeatRequestProto heartbeat) {
DatanodeDetails datanodeDetails =
DatanodeDetails.getFromProtoBuf(heartbeat.getDatanodeDetails());
// should we dispatch heartbeat through eventPublisher?
+ List<SCMCommand> commands = nodeManager.processHeartbeat(datanodeDetails);
if (heartbeat.hasNodeReport()) {
LOG.debug("Dispatching Node Report.");
eventPublisher.fireEvent(NODE_REPORT,
@@ -73,6 +86,7 @@ public final class SCMDatanodeHeartbeatDispatcher {
heartbeat.getContainerReport()));
}
+ return commands;
}
/**
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
index aef5b03..aee64b9 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeProtocolServer.java
@@ -133,7 +133,8 @@ public class SCMDatanodeProtocolServer implements
conf.getInt(OZONE_SCM_HANDLER_COUNT_KEY,
OZONE_SCM_HANDLER_COUNT_DEFAULT);
- heartbeatDispatcher = new SCMDatanodeHeartbeatDispatcher(eventPublisher);
+ heartbeatDispatcher = new SCMDatanodeHeartbeatDispatcher(
+ scm.getScmNodeManager(), eventPublisher);
RPC.setProtocolEngine(conf, StorageContainerDatanodeProtocolPB.class,
ProtobufRpcEngine.class);
@@ -214,22 +215,13 @@ public class SCMDatanodeProtocolServer implements
@Override
public SCMHeartbeatResponseProto sendHeartbeat(
- SCMHeartbeatRequestProto heartbeat)
- throws IOException {
- heartbeatDispatcher.dispatch(heartbeat);
-
- // TODO: Remove the below code after SCM refactoring.
- DatanodeDetails datanodeDetails = DatanodeDetails
- .getFromProtoBuf(heartbeat.getDatanodeDetails());
- NodeReportProto nodeReport = heartbeat.getNodeReport();
- List<SCMCommand> commands =
- scm.getScmNodeManager().processHeartbeat(datanodeDetails);
+ SCMHeartbeatRequestProto heartbeat) throws IOException {
List<SCMCommandProto> cmdResponses = new LinkedList<>();
- for (SCMCommand cmd : commands) {
+ for (SCMCommand cmd : heartbeatDispatcher.dispatch(heartbeat)) {
cmdResponses.add(getCommandResponse(cmd));
}
return SCMHeartbeatResponseProto.newBuilder()
- .setDatanodeUUID(datanodeDetails.getUuidString())
+ .setDatanodeUUID(heartbeat.getDatanodeDetails().getUuid())
.addAllCommands(cmdResponses).build();
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
index 49d3a40..5f511ee 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
@@ -33,15 +33,23 @@ import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
import org.apache.hadoop.hdds.scm.block.BlockManager;
import org.apache.hadoop.hdds.scm.block.BlockManagerImpl;
+import org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler;
import org.apache.hadoop.hdds.scm.container.ContainerMapping;
+import org.apache.hadoop.hdds.scm.container.ContainerReportHandler;
import org.apache.hadoop.hdds.scm.container.Mapping;
import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
import org.apache.hadoop.hdds.scm.container.placement.metrics.ContainerStat;
import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMMetrics;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
import org.apache.hadoop.hdds.scm.exceptions.SCMException;
import org.apache.hadoop.hdds.scm.exceptions.SCMException.ResultCodes;
+import org.apache.hadoop.hdds.scm.node.DeadNodeHandler;
+import org.apache.hadoop.hdds.scm.node.NewNodeHandler;
import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.NodeReportHandler;
import org.apache.hadoop.hdds.scm.node.SCMNodeManager;
+import org.apache.hadoop.hdds.scm.node.StaleNodeHandler;
+import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
import org.apache.hadoop.hdds.server.ServiceRuntimeInfoImpl;
import org.apache.hadoop.hdds.server.events.EventQueue;
import org.apache.hadoop.hdfs.DFSUtil;
@@ -71,7 +79,6 @@ import java.util.concurrent.TimeUnit;
import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_DB_CACHE_SIZE_DEFAULT;
import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_DB_CACHE_SIZE_MB;
-import static org.apache.hadoop.hdds.scm.events.SCMEvents.DATANODE_COMMAND;
import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ENABLED;
import static org.apache.hadoop.util.ExitUtil.terminate;
@@ -126,6 +133,8 @@ public final class StorageContainerManager extends ServiceRuntimeInfoImpl
private final Mapping scmContainerManager;
private final BlockManager scmBlockManager;
private final SCMStorage scmStorage;
+
+ private final EventQueue eventQueue;
/*
* HTTP endpoint for JMX access.
*/
@@ -164,18 +173,35 @@ public final class StorageContainerManager extends ServiceRuntimeInfoImpl
throw new SCMException("SCM not initialized.", ResultCodes
.SCM_NOT_INITIALIZED);
}
- EventQueue eventQueue = new EventQueue();
-
- SCMNodeManager nm =
- new SCMNodeManager(conf, scmStorage.getClusterID(), this);
- scmNodeManager = nm;
- eventQueue.addHandler(DATANODE_COMMAND, nm);
- scmContainerManager = new ContainerMapping(conf, getScmNodeManager(),
- cacheSize);
-
- scmBlockManager =
- new BlockManagerImpl(conf, getScmNodeManager(), scmContainerManager);
+ eventQueue = new EventQueue();
+
+ scmNodeManager = new SCMNodeManager(
+ conf, scmStorage.getClusterID(), this, eventQueue);
+ scmContainerManager = new ContainerMapping(
+ conf, getScmNodeManager(), cacheSize);
+ scmBlockManager = new BlockManagerImpl(
+ conf, getScmNodeManager(), scmContainerManager);
+
+ Node2ContainerMap node2ContainerMap = new Node2ContainerMap();
+
+ CloseContainerEventHandler closeContainerHandler =
+ new CloseContainerEventHandler(scmContainerManager);
+ NodeReportHandler nodeReportHandler =
+ new NodeReportHandler(scmNodeManager);
+ ContainerReportHandler containerReportHandler =
+ new ContainerReportHandler(scmContainerManager, node2ContainerMap);
+ NewNodeHandler newNodeHandler = new NewNodeHandler(node2ContainerMap);
+ StaleNodeHandler staleNodeHandler = new StaleNodeHandler(node2ContainerMap);
+ DeadNodeHandler deadNodeHandler = new DeadNodeHandler(node2ContainerMap);
+
+ eventQueue.addHandler(SCMEvents.DATANODE_COMMAND, scmNodeManager);
+ eventQueue.addHandler(SCMEvents.NODE_REPORT, nodeReportHandler);
+ eventQueue.addHandler(SCMEvents.CONTAINER_REPORT, containerReportHandler);
+ eventQueue.addHandler(SCMEvents.CLOSE_CONTAINER, closeContainerHandler);
+ eventQueue.addHandler(SCMEvents.NEW_NODE, newNodeHandler);
+ eventQueue.addHandler(SCMEvents.STALE_NODE, staleNodeHandler);
+ eventQueue.addHandler(SCMEvents.DEAD_NODE, deadNodeHandler);
scmAdminUsernames = conf.getTrimmedStringCollection(OzoneConfigKeys
.OZONE_ADMINISTRATORS);
@@ -189,7 +215,6 @@ public final class StorageContainerManager extends ServiceRuntimeInfoImpl
blockProtocolServer = new SCMBlockProtocolServer(conf, this);
clientProtocolServer = new SCMClientProtocolServer(conf, this);
httpServer = new StorageContainerManagerHttpServer(conf);
-
registerMXBean();
}
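
For readers following the event wiring above: every handler registered on the EventQueue implements EventHandler<PAYLOAD> and is dispatched asynchronously by event type, which is exactly how the SCM constructor registers its seven handlers. A minimal sketch of the pattern; the sample event and payload are illustrative, and the TypedEvent constructor and processAll drain helper are assumptions based on the HDDS events package, not part of this patch:

    import org.apache.hadoop.hdds.server.events.EventHandler;
    import org.apache.hadoop.hdds.server.events.EventQueue;
    import org.apache.hadoop.hdds.server.events.TypedEvent;

    public final class EventQueueSketch {
      // Illustrative event, standing in for e.g. SCMEvents.DEAD_NODE.
      private static final TypedEvent<String> SAMPLE_EVENT =
          new TypedEvent<>(String.class, "SampleEvent");

      public static void main(String[] args) {
        EventQueue queue = new EventQueue();
        // Handlers are plain objects keyed by event type; the queue
        // dispatches fired payloads to every handler of that type.
        EventHandler<String> handler =
            (payload, publisher) -> System.out.println("handled: " + payload);
        queue.addHandler(SAMPLE_EVENT, handler);
        queue.fireEvent(SAMPLE_EVENT, "node-1");
        queue.processAll(1000); // wait for async dispatch to drain
      }
    }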
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
index 3357992..5e83c28 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
@@ -26,8 +26,10 @@ import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.NodeReportProto;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMVersionRequestProto;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
import org.apache.hadoop.ozone.OzoneConsts;
import org.apache.hadoop.ozone.protocol.VersionResponse;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
import org.apache.hadoop.ozone.protocol.commands.RegisteredCommand;
import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
import org.assertj.core.util.Preconditions;
@@ -399,6 +401,13 @@ public class MockNodeManager implements NodeManager {
}
}
+ @Override
+ public void onMessage(CommandForDatanode commandForDatanode,
+ EventPublisher publisher) {
+ addDatanodeCommand(commandForDatanode.getDatanodeId(),
+ commandForDatanode.getCommand());
+ }
+
/**
* A class to declare some values for the nodes so that our tests
* won't fail.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestCloseContainerEventHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestCloseContainerEventHandler.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestCloseContainerEventHandler.java
index 0d46ffa..0764b12 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestCloseContainerEventHandler.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestCloseContainerEventHandler.java
@@ -41,6 +41,7 @@ import static org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleEvent.CR
import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_DEFAULT;
import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_GB;
import static org.apache.hadoop.hdds.scm.events.SCMEvents.CLOSE_CONTAINER;
+import static org.apache.hadoop.hdds.scm.events.SCMEvents.DATANODE_COMMAND;
/**
* Tests the closeContainerEventHandler class.
@@ -69,6 +70,7 @@ public class TestCloseContainerEventHandler {
eventQueue = new EventQueue();
eventQueue.addHandler(CLOSE_CONTAINER,
new CloseContainerEventHandler(mapping));
+ eventQueue.addHandler(DATANODE_COMMAND, nodeManager);
}
@AfterClass
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java
index c6ea2af..48567ee 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestContainerPlacement.java
@@ -34,6 +34,8 @@ import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.StorageReportProto;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
+import org.apache.hadoop.hdds.server.events.EventQueue;
import org.apache.hadoop.ozone.OzoneConfigKeys;
import org.apache.hadoop.ozone.OzoneConsts;
import org.apache.hadoop.test.PathUtils;
@@ -41,6 +43,7 @@ import org.junit.Ignore;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;
+import org.mockito.Mockito;
import java.io.File;
import java.io.IOException;
@@ -86,8 +89,15 @@ public class TestContainerPlacement {
SCMNodeManager createNodeManager(OzoneConfiguration config)
throws IOException {
+ EventQueue eventQueue = new EventQueue();
+ eventQueue.addHandler(SCMEvents.NEW_NODE,
+ Mockito.mock(NewNodeHandler.class));
+ eventQueue.addHandler(SCMEvents.STALE_NODE,
+ Mockito.mock(StaleNodeHandler.class));
+ eventQueue.addHandler(SCMEvents.DEAD_NODE,
+ Mockito.mock(DeadNodeHandler.class));
SCMNodeManager nodeManager = new SCMNodeManager(config,
- UUID.randomUUID().toString(), null);
+ UUID.randomUUID().toString(), null, eventQueue);
assertFalse("Node manager should be in chill mode",
nodeManager.isOutOfChillMode());
return nodeManager;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java
index d72309e..cefd179 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeManager.java
@@ -30,6 +30,7 @@ import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.StorageReportProto;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
import org.apache.hadoop.hdds.server.events.EventQueue;
import org.apache.hadoop.ozone.OzoneConfigKeys;
import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
@@ -45,6 +46,7 @@ import org.junit.Ignore;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;
+import org.mockito.Mockito;
import java.io.File;
import java.io.IOException;
@@ -124,8 +126,15 @@ public class TestNodeManager {
SCMNodeManager createNodeManager(OzoneConfiguration config)
throws IOException {
+ EventQueue eventQueue = new EventQueue();
+ eventQueue.addHandler(SCMEvents.NEW_NODE,
+ Mockito.mock(NewNodeHandler.class));
+ eventQueue.addHandler(SCMEvents.STALE_NODE,
+ Mockito.mock(StaleNodeHandler.class));
+ eventQueue.addHandler(SCMEvents.DEAD_NODE,
+ Mockito.mock(DeadNodeHandler.class));
SCMNodeManager nodeManager = new SCMNodeManager(config,
- UUID.randomUUID().toString(), null);
+ UUID.randomUUID().toString(), null, eventQueue);
assertFalse("Node manager should be in chill mode",
nodeManager.isOutOfChillMode());
return nodeManager;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java
index a77ed04..042e3cc 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java
@@ -28,6 +28,7 @@ import org.apache.hadoop.hdds.protocol.proto
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMHeartbeatRequestProto;
import org.apache.hadoop.hdds.scm.TestUtils;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
.ContainerReportFromDatanode;
import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
@@ -37,6 +38,7 @@ import org.apache.hadoop.hdds.server.events.EventPublisher;
import org.junit.Assert;
import org.junit.Test;
+import org.mockito.Mockito;
import static org.apache.hadoop.hdds.scm.events.SCMEvents.CONTAINER_REPORT;
import static org.apache.hadoop.hdds.scm.events.SCMEvents.NODE_REPORT;
@@ -55,7 +57,8 @@ public class TestSCMDatanodeHeartbeatDispatcher {
NodeReportProto nodeReport = NodeReportProto.getDefaultInstance();
SCMDatanodeHeartbeatDispatcher dispatcher =
- new SCMDatanodeHeartbeatDispatcher(new EventPublisher() {
+ new SCMDatanodeHeartbeatDispatcher(Mockito.mock(NodeManager.class),
+ new EventPublisher() {
@Override
public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void fireEvent(
EVENT_TYPE event, PAYLOAD payload) {
@@ -90,7 +93,8 @@ public class TestSCMDatanodeHeartbeatDispatcher {
ContainerReportsProto.getDefaultInstance();
SCMDatanodeHeartbeatDispatcher dispatcher =
- new SCMDatanodeHeartbeatDispatcher(new EventPublisher() {
+ new SCMDatanodeHeartbeatDispatcher(Mockito.mock(NodeManager.class),
+ new EventPublisher() {
@Override
public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void fireEvent(
EVENT_TYPE event, PAYLOAD payload) {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/632aca57/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
index e15e0fc..2d27d71 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
@@ -28,7 +28,9 @@ import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.NodeReportProto;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMVersionRequestProto;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
import org.apache.hadoop.ozone.protocol.VersionResponse;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
import org.apache.hadoop.ozone.protocol.commands.RegisteredCommand;
import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
@@ -287,4 +289,9 @@ public class ReplicationNodeManagerMock implements NodeManager {
this.commandQueue.addCommand(dnId, command);
}
+ @Override
+ public void onMessage(CommandForDatanode commandForDatanode,
+ EventPublisher publisher) {
+ // do nothing.
+ }
}
[09/50] hadoop git commit: HADOOP-15349. S3Guard DDB retryBackoff to
be more informative on limits exceeded. Contributed by Gabor Bota.
Posted by zh...@apache.org.
HADOOP-15349. S3Guard DDB retryBackoff to be more informative on limits exceeded. Contributed by Gabor Bota.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a08812a1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a08812a1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a08812a1
Branch: refs/heads/HDFS-13572
Commit: a08812a1b10df059b26f6a216e6339490298ba28
Parents: 4f3f939
Author: Sean Mackrory <ma...@apache.org>
Authored: Thu Jul 12 16:46:02 2018 +0200
Committer: Sean Mackrory <ma...@apache.org>
Committed: Thu Jul 12 17:24:01 2018 +0200
----------------------------------------------------------------------
.../org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a08812a1/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
index 116827d..43849b1 100644
--- a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
+++ b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
@@ -655,7 +655,8 @@ public class DynamoDBMetadataStore implements MetadataStore {
retryCount, 0, true);
if (action.action == RetryPolicy.RetryAction.RetryDecision.FAIL) {
throw new IOException(
- String.format("Max retries exceeded (%d) for DynamoDB",
+ String.format("Max retries exceeded (%d) for DynamoDB. This may be"
+ + " because write threshold of DynamoDB is set too low.",
retryCount));
} else {
LOG.debug("Sleeping {} msec before next retry", action.delayMillis);
[45/50] hadoop git commit: YARN-8436. FSParentQueue: Comparison
method violates its general contract. (Wilfred Spiegelenburg via Haibo Chen)
Posted by zh...@apache.org.
YARN-8436. FSParentQueue: Comparison method violates its general contract. (Wilfred Spiegelenburg via Haibo Chen)
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/25648847
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/25648847
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/25648847
Branch: refs/heads/HDFS-13572
Commit: 2564884757fbf4df7718f814cc448f7f23dad875
Parents: 45d9568
Author: Haibo Chen <ha...@apache.org>
Authored: Thu Jul 19 13:21:57 2018 -0700
Committer: Haibo Chen <ha...@apache.org>
Committed: Thu Jul 19 13:22:31 2018 -0700
----------------------------------------------------------------------
.../scheduler/fair/FSParentQueue.java | 30 +++-----
.../scheduler/fair/FakeSchedulable.java | 4 +
.../TestDominantResourceFairnessPolicy.java | 77 ++++++++++++++++++++
3 files changed, 93 insertions(+), 18 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/25648847/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSParentQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSParentQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSParentQueue.java
index 26c5630..d5df549 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSParentQueue.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSParentQueue.java
@@ -20,8 +20,8 @@ package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair;
import java.util.ArrayList;
import java.util.Collection;
-import java.util.Collections;
import java.util.List;
+import java.util.TreeSet;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
@@ -188,25 +188,19 @@ public class FSParentQueue extends FSQueue {
return assigned;
}
- // Hold the write lock when sorting childQueues
- writeLock.lock();
- try {
- Collections.sort(childQueues, policy.getComparator());
- } finally {
- writeLock.unlock();
- }
-
- /*
- * We are releasing the lock between the sort and iteration of the
- * "sorted" list. There could be changes to the list here:
- * 1. Add a child queue to the end of the list, this doesn't affect
- * container assignment.
- * 2. Remove a child queue, this is probably good to take care of so we
- * don't assign to a queue that is going to be removed shortly.
- */
+ // Sort the queues while holding a read lock on this parent only.
+ // The individual entries are not locked and can change, which means that
+ // the collection of childQueues cannot be sorted in place by calling sort().
+ // Locking each child queue to prevent changes would have a large
+ // performance impact.
+ // We do not have to handle the queue removal case as a queue must be
+ // empty before removal. Assigning an application to a queue and removal of
+ // that queue both need the scheduler lock.
+ TreeSet<FSQueue> sortedChildQueues = new TreeSet<>(policy.getComparator());
readLock.lock();
try {
- for (FSQueue child : childQueues) {
+ sortedChildQueues.addAll(childQueues);
+ for (FSQueue child : sortedChildQueues) {
assigned = child.assignContainer(node);
if (!Resources.equals(assigned, Resources.none())) {
break;
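
The heart of the fix: TimSort (behind Collections.sort) requires the comparator to stay consistent for the whole sort and throws "Comparison method violates its general contract!" when resource usage shifts mid-sort, whereas inserting into a TreeSet compares each element only once, at insertion, so a concurrently mutating key can at worst produce a slightly stale order. A stripped-down sketch of the snapshot-into-TreeSet pattern; the queue stub and comparator are illustrative, not the scheduler's:

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.TreeSet;
    import java.util.concurrent.atomic.AtomicLong;

    public final class SnapshotSortSketch {
      static final class QueueStub {
        final String name;
        final AtomicLong usage = new AtomicLong(); // mutated by other threads
        QueueStub(String name) { this.name = name; }
      }

      public static void main(String[] args) {
        List<QueueStub> children = new ArrayList<>();
        children.add(new QueueStub("root.a"));
        children.add(new QueueStub("root.b"));

        // Keyed on a value other threads may change; the final name
        // tie-breaker keeps distinct queues from comparing equal (a
        // comparator returning 0 would silently drop a TreeSet entry).
        Comparator<QueueStub> byUsage =
            Comparator.comparingLong((QueueStub q) -> q.usage.get())
                .thenComparing(q -> q.name);

        TreeSet<QueueStub> sorted = new TreeSet<>(byUsage);
        sorted.addAll(children); // each entry compared once, at insertion
        sorted.forEach(q -> System.out.println(q.name + " " + q.usage.get()));
      }
    }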
http://git-wip-us.apache.org/repos/asf/hadoop/blob/25648847/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FakeSchedulable.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FakeSchedulable.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FakeSchedulable.java
index 03332b2..01eec73 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FakeSchedulable.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FakeSchedulable.java
@@ -143,4 +143,8 @@ public class FakeSchedulable implements Schedulable {
public boolean isPreemptable() {
return true;
}
+
+ public void setResourceUsage(Resource usage) {
+ this.usage = usage;
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/25648847/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/TestDominantResourceFairnessPolicy.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/TestDominantResourceFairnessPolicy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/TestDominantResourceFairnessPolicy.java
index 03fd1ef..55b7163 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/TestDominantResourceFairnessPolicy.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/TestDominantResourceFairnessPolicy.java
@@ -19,11 +19,16 @@ package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
+import java.util.ArrayList;
+import java.util.Collections;
import java.util.Comparator;
+import java.util.List;
import java.util.Map;
+import java.util.TreeSet;
import org.apache.curator.shaded.com.google.common.base.Joiner;
import org.apache.hadoop.conf.Configuration;
@@ -443,4 +448,76 @@ public class TestDominantResourceFairnessPolicy {
conf.set(YarnConfiguration.RESOURCE_TYPES, Joiner.on(',').join(resources));
ResourceUtils.resetResourceTypes(conf);
}
+
+ @Test
+ public void testModWhileSorting(){
+ final List<FakeSchedulable> schedulableList = new ArrayList<>();
+ for (int i=0; i<10000; i++) {
+ schedulableList.add(
+ (FakeSchedulable)createSchedulable((i%10)*100, (i%3)*2));
+ }
+ Comparator DRFComparator = createComparator(100000, 50000);
+
+ // To simulate unallocated resource changes
+ Thread modThread = modificationThread(schedulableList);
+ modThread.start();
+
+ // This should fail: make sure that we do test correctly
+ // TimSort which is used does not handle the concurrent modification of
+ // objects it is sorting.
+ try {
+ Collections.sort(schedulableList, DRFComparator);
+ fail("Sorting should have failed and did not");
+ } catch (IllegalArgumentException iae) {
+ assertEquals("Comparison method violates its general contract!",
+ iae.getMessage());
+ }
+ try {
+ modThread.join();
+ } catch (InterruptedException ie) {
+ fail("ModThread join failed: " + ie.getMessage());
+ }
+
+ // clean up and try again using TreeSet which should work
+ schedulableList.clear();
+ for (int i=0; i<10000; i++) {
+ schedulableList.add(
+ (FakeSchedulable)createSchedulable((i%10)*100, (i%3)*2));
+ }
+ TreeSet<Schedulable> sortedSchedulable = new TreeSet<>(DRFComparator);
+ modThread = modificationThread(schedulableList);
+ modThread.start();
+ sortedSchedulable.addAll(schedulableList);
+ try {
+ modThread.join();
+ } catch (InterruptedException ie) {
+ fail("ModThread join failed: " + ie.getMessage());
+ }
+ }
+
+ /**
+ * Thread to simulate concurrent schedulable changes while sorting.
+ */
+ private Thread modificationThread(final List<FakeSchedulable> schedulableList) {
+ Thread modThread = new Thread() {
+ @Override
+ public void run() {
+ try {
+ // This sleep is needed to make sure the sort has started before the
+ // modifications start and finish
+ Thread.sleep(500);
+ } catch (InterruptedException ie) {
+ fail("Modification thread interrupted while asleep " +
+ ie.getMessage());
+ }
+ Resource newUsage = Resources.createResource(0, 0);
+ for (int j = 0; j < 1000; j++) {
+ FakeSchedulable sched = schedulableList.get(j * 10);
+ newUsage.setMemorySize(20000);
+ newUsage.setVirtualCores(j % 10);
+ sched.setResourceUsage(newUsage);
+ }
+ }
+ };
+ return modThread;
+ }
}
[22/50] hadoop git commit: YARN-8434. Update federation documentation
of Nodemanager configurations. Contributed by Bibin A Chundatt.
Posted by zh...@apache.org.
YARN-8434. Update federation documentation of Nodemanager configurations. Contributed by Bibin A Chundatt.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4523cc56
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4523cc56
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4523cc56
Branch: refs/heads/HDFS-13572
Commit: 4523cc5637bc3558aa5796150b358ca8471773bb
Parents: 103f2ee
Author: bibinchundatt <bi...@apache.org>
Authored: Sun Jul 15 13:53:53 2018 +0530
Committer: bibinchundatt <bi...@apache.org>
Committed: Sun Jul 15 13:53:53 2018 +0530
----------------------------------------------------------------------
.../hadoop-yarn/hadoop-yarn-site/src/site/markdown/Federation.md | 1 -
1 file changed, 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/4523cc56/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/Federation.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/Federation.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/Federation.md
index 953f826..aeb7677 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/Federation.md
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/Federation.md
@@ -267,7 +267,6 @@ These are extra configurations that should appear in the **conf/yarn-site.xml**
|:---- |:---- |
| `yarn.nodemanager.amrmproxy.enabled` | `true` | Whether or not the AMRMProxy is enabled. |
| `yarn.nodemanager.amrmproxy.interceptor-class.pipeline` | `org.apache.hadoop.yarn.server.nodemanager.amrmproxy.FederationInterceptor` | A comma-separated list of interceptors to be run at the amrmproxy. For federation the last step in the pipeline should be the FederationInterceptor. |
-| `yarn.client.failover-proxy-provider` | `org.apache.hadoop.yarn.server.federation.failover.FederationRMFailoverProxyProvider` | The class used to connect to the RMs by looking up the membership information in federation state-store. This must be set if federation is enabled, even if RM HA is not enabled.|
Optional:
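
With the failover-proxy-provider row removed, the mandatory NM-side federation settings reduce to the two AMRMProxy properties in the table above. A minimal sketch of setting that pair through the Configuration API; the property names and values are copied verbatim from the table, and putting them in conf/yarn-site.xml is equivalent:

    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    public final class FederationNmConfSketch {
      public static void main(String[] args) {
        YarnConfiguration conf = new YarnConfiguration();
        conf.setBoolean("yarn.nodemanager.amrmproxy.enabled", true);
        // FederationInterceptor must be the last interceptor in the pipeline.
        conf.set("yarn.nodemanager.amrmproxy.interceptor-class.pipeline",
            "org.apache.hadoop.yarn.server.nodemanager.amrmproxy."
                + "FederationInterceptor");
        System.out.println(
            conf.get("yarn.nodemanager.amrmproxy.interceptor-class.pipeline"));
      }
    }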
[42/50] hadoop git commit: HADOOP-15614.
TestGroupsCaching.testExceptionOnBackgroundRefreshHandled reliably fails.
Contributed by Weiwei Yang.
Posted by zh...@apache.org.
HADOOP-15614. TestGroupsCaching.testExceptionOnBackgroundRefreshHandled reliably fails. Contributed by Weiwei Yang.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ccf2db7f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ccf2db7f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ccf2db7f
Branch: refs/heads/HDFS-13572
Commit: ccf2db7fc2688d262df3309007cb12a4dfedc179
Parents: ba1ab08
Author: Kihwal Lee <ki...@apache.org>
Authored: Thu Jul 19 11:13:37 2018 -0500
Committer: Kihwal Lee <ki...@apache.org>
Committed: Thu Jul 19 11:13:37 2018 -0500
----------------------------------------------------------------------
.../apache/hadoop/security/TestGroupsCaching.java | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ccf2db7f/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestGroupsCaching.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestGroupsCaching.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestGroupsCaching.java
index 46e36b3..bba8152 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestGroupsCaching.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestGroupsCaching.java
@@ -561,23 +561,28 @@ public class TestGroupsCaching {
// Then expire that entry
timer.advance(4 * 1000);
+ // Pause the getGroups operation; this will delay the cache refresh.
+ FakeGroupMapping.pause();
// Now get the cache entry - it should return immediately
// with the old value and the cache will not have completed
// a request to getGroups yet.
assertEquals(groups.getGroups("me").size(), 2);
assertEquals(startingRequestCount, FakeGroupMapping.getRequestCount());
+ // Resume the getGroups operation and the cache can get refreshed
+ FakeGroupMapping.resume();
- // Now sleep for a short time and re-check the request count. It should have
- // increased, but the exception means the cache will not have updated
- Thread.sleep(50);
+ // Now wait for the refresh to complete. Because of the exception, we
+ // expect the onFailure callback to be invoked and the failure counter
+ // to reach 1.
+ waitForGroupCounters(groups, 0, 0, 0, 1);
FakeGroupMapping.setThrowException(false);
assertEquals(startingRequestCount + 1, FakeGroupMapping.getRequestCount());
assertEquals(groups.getGroups("me").size(), 2);
- // Now sleep another short time - the 3rd call to getGroups above
- // will have kicked off another refresh that updates the cache
- Thread.sleep(50);
+ // Now the 3rd call to getGroups above will have kicked off another
+ // refresh that updates the cache. Since it no longer throws an
+ // exception, we expect the success counter to reach 1.
+ waitForGroupCounters(groups, 0, 0, 1, 1);
assertEquals(startingRequestCount + 2, FakeGroupMapping.getRequestCount());
assertEquals(groups.getGroups("me").size(), 3);
}
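
waitForGroupCounters is a polling helper in this test class; the fix replaces the racy fixed Thread.sleep(50) with waiting on the background-refresh success/failure counters. A minimal sketch of the same poll-until-true idea via GenericTestUtils.waitFor; the counter accessor below is hypothetical, not the test's actual API:

    import java.util.concurrent.TimeoutException;
    import org.apache.hadoop.test.GenericTestUtils;

    // Poll every 20 ms, give up after 1 s: deterministic where a fixed
    // sleep races against the background refresh thread.
    private void waitForFailureCount(final long expected)
        throws TimeoutException, InterruptedException {
      GenericTestUtils.waitFor(
          () -> getRefreshFailureCount() == expected, // hypothetical accessor
          20, 1000);
    }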
[16/50] hadoop git commit: HDDS-238. Add Node2Pipeline Map in SCM to
track ratis/standalone pipelines. Contributed by Mukul Kumar Singh.
Posted by zh...@apache.org.
HDDS-238. Add Node2Pipeline Map in SCM to track ratis/standalone pipelines. Contributed by Mukul Kumar Singh.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/3f3f7222
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/3f3f7222
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/3f3f7222
Branch: refs/heads/HDFS-13572
Commit: 3f3f72221ffd11cc6bfa0e010e3c5b0e14911102
Parents: f89e265
Author: Xiaoyu Yao <xy...@apache.org>
Authored: Thu Jul 12 22:02:57 2018 -0700
Committer: Xiaoyu Yao <xy...@apache.org>
Committed: Thu Jul 12 22:14:03 2018 -0700
----------------------------------------------------------------------
.../container/common/helpers/ContainerInfo.java | 11 ++
.../hdds/scm/container/ContainerMapping.java | 11 +-
.../scm/container/ContainerStateManager.java | 6 +
.../scm/container/states/ContainerStateMap.java | 36 +++++-
.../hdds/scm/pipelines/Node2PipelineMap.java | 121 +++++++++++++++++++
.../hdds/scm/pipelines/PipelineManager.java | 22 ++--
.../hdds/scm/pipelines/PipelineSelector.java | 24 +++-
.../scm/pipelines/ratis/RatisManagerImpl.java | 11 +-
.../standalone/StandaloneManagerImpl.java | 7 +-
.../hdds/scm/pipeline/TestNode2PipelineMap.java | 117 ++++++++++++++++++
10 files changed, 343 insertions(+), 23 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
index 9593717..4074b21 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerInfo.java
@@ -456,4 +456,15 @@ public class ContainerInfo implements Comparator<ContainerInfo>,
replicationFactor, replicationType);
}
}
+
+ /**
+ * Check if a container is in an open state; that is, whether the
+ * container is open, allocated, or creating. Any container in these
+ * states is managed as an open container by SCM.
+ */
+ public boolean isContainerOpen() {
+ return state == HddsProtos.LifeCycleState.ALLOCATED ||
+ state == HddsProtos.LifeCycleState.CREATING ||
+ state == HddsProtos.LifeCycleState.OPEN;
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
index abad32c..26f4d86 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerMapping.java
@@ -477,7 +477,7 @@ public class ContainerMapping implements Mapping {
List<StorageContainerDatanodeProtocolProtos.ContainerInfo>
containerInfos = reports.getReportsList();
- for (StorageContainerDatanodeProtocolProtos.ContainerInfo datanodeState :
+ for (StorageContainerDatanodeProtocolProtos.ContainerInfo datanodeState :
containerInfos) {
byte[] dbKey = Longs.toByteArray(datanodeState.getContainerID());
lock.lock();
@@ -498,7 +498,9 @@ public class ContainerMapping implements Mapping {
containerStore.put(dbKey, newState.toByteArray());
// If the container is closed, then state is already written to SCM
- Pipeline pipeline = pipelineSelector.getPipeline(newState.getPipelineName(), newState.getReplicationType());
+ Pipeline pipeline =
+ pipelineSelector.getPipeline(newState.getPipelineName(),
+ newState.getReplicationType());
if(pipeline == null) {
pipeline = pipelineSelector
.getReplicationPipeline(newState.getReplicationType(),
@@ -713,4 +715,9 @@ public class ContainerMapping implements Mapping {
public MetadataStore getContainerStore() {
return containerStore;
}
+
+ @VisibleForTesting
+ public PipelineSelector getPipelineSelector() {
+ return pipelineSelector;
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
index 223deac..b2431dc 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
@@ -17,6 +17,7 @@
package org.apache.hadoop.hdds.scm.container;
+import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Preconditions;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
@@ -522,4 +523,9 @@ public class ContainerStateManager implements Closeable {
DatanodeDetails dn) throws SCMException {
return containers.removeContainerReplica(containerID, dn);
}
+
+ @VisibleForTesting
+ public ContainerStateMap getContainerStateMap() {
+ return containers;
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
index 1c92861..46fe2ab 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
@@ -51,7 +51,7 @@ import static org.apache.hadoop.hdds.scm.exceptions.SCMException.ResultCodes
* Container State Map acts like a unified map for various attributes that are
* used to select containers when we need allocated blocks.
* <p>
- * This class provides the ability to query 4 classes of attributes. They are
+ * This class provides the ability to query 5 classes of attributes. They are
* <p>
* 1. LifeCycleStates - LifeCycle States of container describe in which state
* a container is. For example, a container needs to be in Open State for a
@@ -72,6 +72,9 @@ import static org.apache.hadoop.hdds.scm.exceptions.SCMException.ResultCodes
* Replica and THREE Replica. User can specify how many copies should be made
* for a ozone key.
* <p>
+ * 5. Pipeline - The pipeline constitutes the set of Datanodes on which
+ * the open container physically resides.
+ * <p>
* The most common access pattern of this class is to select a container based
* on all these parameters, for example, when allocating a block we will
* select a container that belongs to user1, with Ratis replication which can
@@ -86,6 +89,14 @@ public class ContainerStateMap {
private final ContainerAttribute<String> ownerMap;
private final ContainerAttribute<ReplicationFactor> factorMap;
private final ContainerAttribute<ReplicationType> typeMap;
+ // This map holds the pipeline to open-container mappings. It will be
+ // queried for the list of open containers on a particular pipeline so
+ // that a close can be issued on the corresponding containers upon any
+ // of the following events:
+ //1. Dead datanode.
+ //2. Datanode out of space.
+ //3. Volume loss or volume out of space.
+ private final ContainerAttribute<String> openPipelineMap;
private final Map<ContainerID, ContainerInfo> containerMap;
// Map to hold replicas of given container.
@@ -106,6 +117,7 @@ public class ContainerStateMap {
ownerMap = new ContainerAttribute<>();
factorMap = new ContainerAttribute<>();
typeMap = new ContainerAttribute<>();
+ openPipelineMap = new ContainerAttribute<>();
containerMap = new HashMap<>();
autoLock = new AutoCloseableLock();
contReplicaMap = new HashMap<>();
@@ -140,6 +152,9 @@ public class ContainerStateMap {
ownerMap.insert(info.getOwner(), id);
factorMap.insert(info.getReplicationFactor(), id);
typeMap.insert(info.getReplicationType(), id);
+ if (info.isContainerOpen()) {
+ openPipelineMap.insert(info.getPipelineName(), id);
+ }
LOG.trace("Created container with {} successfully.", id);
}
}
@@ -329,6 +344,11 @@ public class ContainerStateMap {
throw new SCMException("Updating the container map failed.", ex,
FAILED_TO_CHANGE_CONTAINER_STATE);
}
+ // Once the container transitions to the closed state, it needs to be
+ // removed from the open-pipeline map.
+ if (newState == LifeCycleState.CLOSED) {
+ openPipelineMap.remove(info.getPipelineName(), id);
+ }
}
/**
@@ -360,6 +380,20 @@ public class ContainerStateMap {
}
/**
+ * Returns the open containers in the SCM for the given pipeline.
+ *
+ * @param pipeline - Pipeline name.
+ * @return NavigableSet<ContainerID>
+ */
+ public NavigableSet<ContainerID> getOpenContainerIDsByPipeline(String pipeline) {
+ Preconditions.checkNotNull(pipeline);
+
+ try (AutoCloseableLock lock = autoLock.acquire()) {
+ return openPipelineMap.getCollection(pipeline);
+ }
+ }
+
+ /**
* Returns Containers by replication factor.
*
* @param factor - Replication Factor.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/Node2PipelineMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/Node2PipelineMap.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/Node2PipelineMap.java
new file mode 100644
index 0000000..2e89616
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/Node2PipelineMap.java
@@ -0,0 +1,121 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ *
+ */
+
+package org.apache.hadoop.hdds.scm.pipelines;
+
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
+
+import java.util.Set;
+import java.util.UUID;
+import java.util.Map;
+import java.util.HashSet;
+import java.util.Collections;
+
+import java.util.concurrent.ConcurrentHashMap;
+
+import static org.apache.hadoop.hdds.scm.exceptions.SCMException.ResultCodes
+ .DUPLICATE_DATANODE;
+
+
+/**
+ * This data structure maintains the list of pipelines which the given datanode
+ * is a part of.
+ * This information will be added whenever a new pipeline allocation happens.
+ *
+ * TODO: this information needs to be regenerated from pipeline reports on
+ * SCM restart
+ */
+public class Node2PipelineMap {
+ private final Map<UUID, Set<Pipeline>> dn2PipelineMap;
+
+ /**
+ * Constructs a Node2PipelineMap Object.
+ */
+ public Node2PipelineMap() {
+ dn2PipelineMap = new ConcurrentHashMap<>();
+ }
+
+ /**
+ * Returns true if this is a datanode that is already tracked by
+ * Node2PipelineMap.
+ *
+ * @param datanodeID - UUID of the Datanode.
+ * @return True if this is tracked, false if this map does not know about it.
+ */
+ private boolean isKnownDatanode(UUID datanodeID) {
+ Preconditions.checkNotNull(datanodeID);
+ return dn2PipelineMap.containsKey(datanodeID);
+ }
+
+ /**
+ * Insert a new datanode into Node2Pipeline Map.
+ *
+ * @param datanodeID -- Datanode UUID
+ * @param pipelines - set of pipelines.
+ */
+ private void insertNewDatanode(UUID datanodeID, Set<Pipeline> pipelines)
+ throws SCMException {
+ Preconditions.checkNotNull(pipelines);
+ Preconditions.checkNotNull(datanodeID);
+ if(dn2PipelineMap.putIfAbsent(datanodeID, pipelines) != null) {
+ throw new SCMException("Node already exists in the map",
+ DUPLICATE_DATANODE);
+ }
+ }
+
+ /**
+ * Removes datanode Entry from the map.
+ * @param datanodeID - Datanode ID.
+ */
+ public synchronized void removeDatanode(UUID datanodeID) {
+ Preconditions.checkNotNull(datanodeID);
+ dn2PipelineMap.computeIfPresent(datanodeID, (k, v) -> null);
+ }
+
+ /**
+ * Returns null if there are no pipelines associated with this datanode ID.
+ *
+ * @param datanode - UUID
+ * @return Set of pipelines or Null.
+ */
+ public Set<Pipeline> getPipelines(UUID datanode) {
+ Preconditions.checkNotNull(datanode);
+ return dn2PipelineMap.computeIfPresent(datanode, (k, v) ->
+ Collections.unmodifiableSet(v));
+ }
+
+/**
+ * Adds a pipeline entry to a given datanode in the map.
+ * @param pipeline Pipeline to be added
+ */
+ public synchronized void addPipeline(Pipeline pipeline) throws SCMException {
+ for (DatanodeDetails details : pipeline.getDatanodes().values()) {
+ UUID dnId = details.getUuid();
+ dn2PipelineMap
+ .computeIfAbsent(dnId,k->Collections.synchronizedSet(new HashSet<>()))
+ .add(pipeline);
+ }
+ }
+
+ public Map<UUID, Set<Pipeline>> getDn2PipelineMap() {
+ return Collections.unmodifiableMap(dn2PipelineMap);
+ }
+}
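
A short usage sketch of the new map, mirroring how PipelineSelector drives it: a pipeline is indexed under every member datanode when allocated, looked up by datanode UUID, and dropped wholesale when the node dies. Pipeline construction is elided because its helpers need cluster-specific arguments; newPipeline() and firstMemberUuid() are hypothetical stand-ins:

    Node2PipelineMap map = new Node2PipelineMap();

    Pipeline pipeline = newPipeline();     // stand-in; cf. PipelineSelector.newPipelineFromNodes
    map.addPipeline(pipeline);             // throws SCMException; indexes under each member DN

    UUID dnId = firstMemberUuid(pipeline); // stand-in for a member's DatanodeDetails.getUuid()
    Set<Pipeline> onNode = map.getPipelines(dnId); // unmodifiable view, or null if untracked

    map.removeDatanode(dnId);              // dead-node cleanup drops the node's whole entry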
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineManager.java
index a1fbce6..a041973 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineManager.java
@@ -40,11 +40,13 @@ public abstract class PipelineManager {
private final List<Pipeline> activePipelines;
private final Map<String, Pipeline> activePipelineMap;
private final AtomicInteger pipelineIndex;
+ private final Node2PipelineMap node2PipelineMap;
- public PipelineManager() {
+ public PipelineManager(Node2PipelineMap map) {
activePipelines = new LinkedList<>();
pipelineIndex = new AtomicInteger(0);
activePipelineMap = new WeakHashMap<>();
+ node2PipelineMap = map;
}
/**
@@ -66,24 +68,23 @@ public abstract class PipelineManager {
*
* 2. This allows all nodes to be part of a pipeline quickly.
*
- * 3. if there are not enough free nodes, return conduits in a
+ * 3. If there are not enough free nodes, return pipelines in a
* round-robin fashion.
*
* TODO: Might have to come up with a better algorithm than this.
- * Create a new placement policy that returns conduits in round robin
+ * Create a new placement policy that returns pipelines in round robin
* fashion.
*/
- Pipeline pipeline =
- allocatePipeline(replicationFactor);
+ Pipeline pipeline = allocatePipeline(replicationFactor);
if (pipeline != null) {
LOG.debug("created new pipeline:{} for container with " +
"replicationType:{} replicationFactor:{}",
pipeline.getPipelineName(), replicationType, replicationFactor);
activePipelines.add(pipeline);
activePipelineMap.put(pipeline.getPipelineName(), pipeline);
+ node2PipelineMap.addPipeline(pipeline);
} else {
- pipeline =
- findOpenPipeline(replicationType, replicationFactor);
+ pipeline = findOpenPipeline(replicationType, replicationFactor);
if (pipeline != null) {
LOG.debug("re-used pipeline:{} for container with " +
"replicationType:{} replicationFactor:{}",
@@ -133,6 +134,11 @@ public abstract class PipelineManager {
public abstract Pipeline allocatePipeline(
ReplicationFactor replicationFactor) throws IOException;
+ public void removePipeline(Pipeline pipeline) {
+ activePipelines.remove(pipeline);
+ activePipelineMap.remove(pipeline.getPipelineName());
+ }
+
/**
* Find a Pipeline that is operational.
*
@@ -143,7 +149,7 @@ public abstract class PipelineManager {
Pipeline pipeline = null;
final int sentinal = -1;
if (activePipelines.size() == 0) {
- LOG.error("No Operational conduits found. Returning null.");
+ LOG.error("No Operational pipelines found. Returning null.");
return null;
}
int startIndex = getNextIndex();
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineSelector.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineSelector.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineSelector.java
index 3846a84..2955af5 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineSelector.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/PipelineSelector.java
@@ -19,7 +19,6 @@ package org.apache.hadoop.hdds.scm.pipelines;
import com.google.common.base.Preconditions;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.scm.ScmConfigKeys;
-import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
import org.apache.hadoop.hdds.scm.container.placement.algorithms
.ContainerPlacementPolicy;
@@ -41,6 +40,8 @@ import java.io.IOException;
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;
import java.util.List;
+import java.util.Set;
+import java.util.UUID;
import java.util.stream.Collectors;
/**
@@ -55,7 +56,7 @@ public class PipelineSelector {
private final RatisManagerImpl ratisManager;
private final StandaloneManagerImpl standaloneManager;
private final long containerSize;
-
+ private final Node2PipelineMap node2PipelineMap;
/**
* Constructs a pipeline Selector.
*
@@ -69,12 +70,13 @@ public class PipelineSelector {
this.containerSize = OzoneConsts.GB * this.conf.getInt(
ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_GB,
ScmConfigKeys.OZONE_SCM_CONTAINER_SIZE_DEFAULT);
+ node2PipelineMap = new Node2PipelineMap();
this.standaloneManager =
new StandaloneManagerImpl(this.nodeManager, placementPolicy,
- containerSize);
+ containerSize, node2PipelineMap);
this.ratisManager =
new RatisManagerImpl(this.nodeManager, placementPolicy, containerSize,
- conf);
+ conf, node2PipelineMap);
}
/**
@@ -243,4 +245,18 @@ public class PipelineSelector {
.collect(Collectors.joining(",")));
manager.updatePipeline(pipelineID, newDatanodes);
}
+
+ public Node2PipelineMap getNode2PipelineMap() {
+ return node2PipelineMap;
+ }
+
+ public void removePipeline(UUID dnId) {
+ Set<Pipeline> pipelineChannelSet =
+ node2PipelineMap.getPipelines(dnId);
+ for (Pipeline pipelineChannel : pipelineChannelSet) {
+ getPipelineManager(pipelineChannel.getType())
+ .removePipeline(pipelineChannel);
+ }
+ node2PipelineMap.removeDatanode(dnId);
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/ratis/RatisManagerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/ratis/RatisManagerImpl.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/ratis/RatisManagerImpl.java
index 189060e..a8f8b20 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/ratis/RatisManagerImpl.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/ratis/RatisManagerImpl.java
@@ -19,11 +19,11 @@ package org.apache.hadoop.hdds.scm.pipelines.ratis;
import com.google.common.base.Preconditions;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.scm.XceiverClientRatis;
-import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
import org.apache.hadoop.hdds.scm.container.placement.algorithms
.ContainerPlacementPolicy;
import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.pipelines.Node2PipelineMap;
import org.apache.hadoop.hdds.scm.pipelines.PipelineManager;
import org.apache.hadoop.hdds.scm.pipelines.PipelineSelector;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
@@ -60,8 +60,9 @@ public class RatisManagerImpl extends PipelineManager {
* @param nodeManager
*/
public RatisManagerImpl(NodeManager nodeManager,
- ContainerPlacementPolicy placementPolicy, long size, Configuration conf) {
- super();
+ ContainerPlacementPolicy placementPolicy, long size, Configuration conf,
+ Node2PipelineMap map) {
+ super(map);
this.conf = conf;
this.nodeManager = nodeManager;
ratisMembers = new HashSet<>();
@@ -89,11 +90,11 @@ public class RatisManagerImpl extends PipelineManager {
ratisMembers.addAll(newNodesList);
LOG.info("Allocating a new ratis pipeline of size: {}", count);
// Start all channel names with "Ratis", easy to grep the logs.
- String conduitName = PREFIX +
+ String pipelineName = PREFIX +
UUID.randomUUID().toString().substring(PREFIX.length());
Pipeline pipeline=
PipelineSelector.newPipelineFromNodes(newNodesList,
- LifeCycleState.OPEN, ReplicationType.RATIS, factor, conduitName);
+ LifeCycleState.OPEN, ReplicationType.RATIS, factor, pipelineName);
try (XceiverClientRatis client =
XceiverClientRatis.newXceiverClientRatis(pipeline, conf)) {
client.createPipeline(pipeline.getPipelineName(), newNodesList);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/standalone/StandaloneManagerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/standalone/StandaloneManagerImpl.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/standalone/StandaloneManagerImpl.java
index 579a3a2..cf691bf 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/standalone/StandaloneManagerImpl.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/standalone/StandaloneManagerImpl.java
@@ -17,11 +17,11 @@
package org.apache.hadoop.hdds.scm.pipelines.standalone;
import com.google.common.base.Preconditions;
-import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
import org.apache.hadoop.hdds.scm.container.placement.algorithms
.ContainerPlacementPolicy;
import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.pipelines.Node2PipelineMap;
import org.apache.hadoop.hdds.scm.pipelines.PipelineManager;
import org.apache.hadoop.hdds.scm.pipelines.PipelineSelector;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
@@ -58,8 +58,9 @@ public class StandaloneManagerImpl extends PipelineManager {
* @param containerSize - Container Size.
*/
public StandaloneManagerImpl(NodeManager nodeManager,
- ContainerPlacementPolicy placementPolicy, long containerSize) {
- super();
+ ContainerPlacementPolicy placementPolicy, long containerSize,
+ Node2PipelineMap map) {
+ super(map);
this.nodeManager = nodeManager;
this.placementPolicy = placementPolicy;
this.containerSize = containerSize;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/3f3f7222/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestNode2PipelineMap.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestNode2PipelineMap.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestNode2PipelineMap.java
new file mode 100644
index 0000000..bc3505f
--- /dev/null
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestNode2PipelineMap.java
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ *
+ */
+package org.apache.hadoop.hdds.scm.pipeline;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.container.ContainerMapping;
+import org.apache.hadoop.hdds.scm.container.common.helpers
+ .ContainerWithPipeline;
+import org.apache.hadoop.hdds.scm.container.common.helpers.Pipeline;
+import org.apache.hadoop.hdds.scm.container.states.ContainerStateMap;
+import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.NavigableSet;
+import java.util.Set;
+
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos
+ .ReplicationType.RATIS;
+import static org.apache.hadoop.hdds.protocol.proto.HddsProtos
+ .ReplicationFactor.THREE;
+
+public class TestNode2PipelineMap {
+
+ private static MiniOzoneCluster cluster;
+ private static OzoneConfiguration conf;
+ private static StorageContainerManager scm;
+ private static ContainerWithPipeline ratisContainer;
+ private static ContainerStateMap stateMap;
+ private static ContainerMapping mapping;
+
+ /**
+ * Create a MiniOzoneCluster for testing.
+ *
+ * @throws IOException
+ */
+ @BeforeClass
+ public static void init() throws Exception {
+ conf = new OzoneConfiguration();
+ cluster = MiniOzoneCluster.newBuilder(conf).setNumDatanodes(5).build();
+ cluster.waitForClusterToBeReady();
+ scm = cluster.getStorageContainerManager();
+ mapping = (ContainerMapping)scm.getScmContainerManager();
+ stateMap = mapping.getStateManager().getContainerStateMap();
+ ratisContainer = mapping.allocateContainer(RATIS, THREE, "testOwner");
+ }
+
+ /**
+ * Shut down the MiniOzoneCluster.
+ */
+ @AfterClass
+ public static void shutdown() {
+ if (cluster != null) {
+ cluster.shutdown();
+ }
+ }
+
+
+ @Test
+ public void testPipelineMap() throws IOException {
+
+ NavigableSet<ContainerID> set = stateMap.getOpenContainerIDsByPipeline(
+ ratisContainer.getPipeline().getPipelineName());
+
+ long cId = ratisContainer.getContainerInfo().getContainerID();
+ Assert.assertEquals(1, set.size());
+ Assert.assertEquals(cId, set.first().getId());
+
+ List<DatanodeDetails> dns = ratisContainer.getPipeline().getMachines();
+ Assert.assertEquals(3, dns.size());
+
+ // get pipeline details by dnid
+ Set<Pipeline> pipelines = mapping.getPipelineSelector()
+ .getNode2PipelineMap().getPipelines(dns.get(0).getUuid());
+ Assert.assertEquals(1, pipelines.size());
+ pipelines.forEach(p -> Assert.assertEquals(p.getPipelineName(),
+ ratisContainer.getPipeline().getPipelineName()));
+
+
+ // Now close the container and it should not show up while fetching
+ // containers by pipeline
+ mapping
+ .updateContainerState(cId, HddsProtos.LifeCycleEvent.CREATE);
+ mapping
+ .updateContainerState(cId, HddsProtos.LifeCycleEvent.CREATED);
+ mapping
+ .updateContainerState(cId, HddsProtos.LifeCycleEvent.FINALIZE);
+ mapping
+ .updateContainerState(cId, HddsProtos.LifeCycleEvent.CLOSE);
+ NavigableSet<ContainerID> set2 = stateMap.getOpenContainerIDsByPipeline(
+ ratisContainer.getPipeline().getPipelineName());
+ Assert.assertEquals(0, set2.size());
+ }
+}
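For readers skimming the diff, the core of the new Node2PipelineMap is an
index from a datanode UUID to the set of pipelines that node participates
in. The sketch below is a minimal illustration of that shape, not the HDDS
class itself; the class and method names here are hypothetical, and
pipeline names stand in for full Pipeline objects.

import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Minimal, hypothetical model of a node-to-pipelines index.
final class Node2PipelineMapSketch {
  private final Map<UUID, Set<String>> dn2PipelineMap =
      new ConcurrentHashMap<>();

  // Record that the given datanode participates in the named pipeline.
  void addPipeline(UUID dnId, String pipelineName) {
    dn2PipelineMap.computeIfAbsent(dnId, k -> ConcurrentHashMap.newKeySet())
        .add(pipelineName);
  }

  // Pipelines hosted on the datanode; empty if none are known. This is the
  // lookup the test above performs via getPipelines(dns.get(0).getUuid()).
  Set<String> getPipelines(UUID dnId) {
    return Collections.unmodifiableSet(
        dn2PipelineMap.getOrDefault(dnId, Collections.emptySet()));
  }
}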
[11/50] hadoop git commit: HDDS-228. Add the ReplicaMaps to
ContainerStateManager. Contributed by Ajay Kumar.
Posted by zh...@apache.org.
HDDS-228. Add the ReplicaMaps to ContainerStateManager.
Contributed by Ajay Kumar.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5ee90efe
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5ee90efe
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5ee90efe
Branch: refs/heads/HDFS-13572
Commit: 5ee90efed385db4bf235816145b30a0f691fc91b
Parents: a08812a
Author: Anu Engineer <ae...@apache.org>
Authored: Thu Jul 12 10:43:24 2018 -0700
Committer: Anu Engineer <ae...@apache.org>
Committed: Thu Jul 12 10:43:24 2018 -0700
----------------------------------------------------------------------
.../scm/container/ContainerStateManager.java | 34 ++++++++
.../scm/container/states/ContainerStateMap.java | 86 ++++++++++++++++++++
.../container/TestContainerStateManager.java | 79 ++++++++++++++++++
3 files changed, 199 insertions(+)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ee90efe/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
index 870ab1d..223deac 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
@@ -19,6 +19,7 @@ package org.apache.hadoop.hdds.scm.container;
import com.google.common.base.Preconditions;
import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.ScmConfigKeys;
import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline;
import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
@@ -488,4 +489,37 @@ public class ContainerStateManager implements Closeable {
public void close() throws IOException {
}
+ /**
+ * Returns the latest set of DataNodes where replicas for the given
+ * containerID exist. Throws an SCMException if no entry is found.
+ *
+ * @param containerID
+ * @return Set<DatanodeDetails>
+ */
+ public Set<DatanodeDetails> getContainerReplicas(ContainerID containerID)
+ throws SCMException {
+ return containers.getContainerReplicas(containerID);
+ }
+
+ /**
+ * Add a container replica for the given DataNode.
+ *
+ * @param containerID
+ * @param dn
+ */
+ public void addContainerReplica(ContainerID containerID, DatanodeDetails dn) {
+ containers.addContainerReplica(containerID, dn);
+ }
+
+ /**
+ * Remove a container replica for the given DataNode.
+ *
+ * @param containerID
+ * @param dn
+ * @return True if the DataNode was removed successfully, else false.
+ */
+ public boolean removeContainerReplica(ContainerID containerID,
+ DatanodeDetails dn) throws SCMException {
+ return containers.removeContainerReplica(containerID, dn);
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ee90efe/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
index c23b1fd..1c92861 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/states/ContainerStateMap.java
@@ -18,13 +18,18 @@
package org.apache.hadoop.hdds.scm.container.states;
+import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Preconditions;
+import java.util.HashSet;
+import java.util.Set;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.container.ContainerID;
import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
import org.apache.hadoop.hdds.scm.exceptions.SCMException;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException.ResultCodes;
import org.apache.hadoop.util.AutoCloseableLock;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -83,6 +88,8 @@ public class ContainerStateMap {
private final ContainerAttribute<ReplicationType> typeMap;
private final Map<ContainerID, ContainerInfo> containerMap;
+ // Map to hold replicas of given container.
+ private final Map<ContainerID, Set<DatanodeDetails>> contReplicaMap;
private final static NavigableSet<ContainerID> EMPTY_SET =
Collections.unmodifiableNavigableSet(new TreeSet<>());
@@ -101,6 +108,7 @@ public class ContainerStateMap {
typeMap = new ContainerAttribute<>();
containerMap = new HashMap<>();
autoLock = new AutoCloseableLock();
+ contReplicaMap = new HashMap<>();
// new InstrumentedLock(getClass().getName(), LOG,
// new ReentrantLock(),
// 1000,
@@ -158,6 +166,84 @@ public class ContainerStateMap {
}
/**
+ * Returns the latest set of DataNodes where replicas for the given
+ * containerID exist. Throws an SCMException if no entry is found.
+ *
+ * @param containerID
+ * @return Set<DatanodeDetails>
+ */
+ public Set<DatanodeDetails> getContainerReplicas(ContainerID containerID)
+ throws SCMException {
+ Preconditions.checkNotNull(containerID);
+ try (AutoCloseableLock lock = autoLock.acquire()) {
+ if (contReplicaMap.containsKey(containerID)) {
+ return Collections
+ .unmodifiableSet(contReplicaMap.get(containerID));
+ }
+ }
+ throw new SCMException(
+ "No entry exist for containerId: " + containerID + " in replica map.",
+ ResultCodes.FAILED_TO_FIND_CONTAINER);
+ }
+
+ /**
+ * Adds the given datanodes as nodes where replicas for the given
+ * containerID exist. Logs a debug entry if a datanode is already recorded
+ * as a replica for the given containerID.
+ *
+ * @param containerID
+ * @param dnList
+ */
+ public void addContainerReplica(ContainerID containerID,
+ DatanodeDetails... dnList) {
+ Preconditions.checkNotNull(containerID);
+ // Take lock to avoid race condition around insertion.
+ try (AutoCloseableLock lock = autoLock.acquire()) {
+ for (DatanodeDetails dn : dnList) {
+ Preconditions.checkNotNull(dn);
+ if (contReplicaMap.containsKey(containerID)) {
+ if(!contReplicaMap.get(containerID).add(dn)) {
+ LOG.debug("ReplicaMap already contains entry for container Id: "
+ + "{},DataNode: {}", containerID, dn);
+ }
+ } else {
+ Set<DatanodeDetails> dnSet = new HashSet<>();
+ dnSet.add(dn);
+ contReplicaMap.put(containerID, dnSet);
+ }
+ }
+ }
+ }
+
+ /**
+ * Remove a container replica for the given DataNode.
+ *
+ * @param containerID
+ * @param dn
+ * @return True if the DataNode was removed successfully, else false.
+ */
+ public boolean removeContainerReplica(ContainerID containerID,
+ DatanodeDetails dn) throws SCMException {
+ Preconditions.checkNotNull(containerID);
+ Preconditions.checkNotNull(dn);
+
+ // Take lock to avoid race condition.
+ try (AutoCloseableLock lock = autoLock.acquire()) {
+ if (contReplicaMap.containsKey(containerID)) {
+ return contReplicaMap.get(containerID).remove(dn);
+ }
+ }
+ throw new SCMException(
+ "No entry exist for containerId: " + containerID + " in replica map.",
+ ResultCodes.FAILED_TO_FIND_CONTAINER);
+ }
+
+ @VisibleForTesting
+ public static Logger getLOG() {
+ return LOG;
+ }
+
+ /**
* Returns the full container Map.
*
* @return - Map
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5ee90efe/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerStateManager.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerStateManager.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerStateManager.java
index bb85650..9e209af 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerStateManager.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerStateManager.java
@@ -17,14 +17,22 @@
package org.apache.hadoop.hdds.scm.container;
import com.google.common.primitives.Longs;
+import java.util.Set;
+import java.util.UUID;
+import org.apache.commons.lang3.RandomUtils;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline;
+import org.apache.hadoop.hdds.scm.container.states.ContainerStateMap;
+import org.apache.hadoop.hdds.scm.exceptions.SCMException;
import org.apache.hadoop.ozone.MiniOzoneCluster;
import org.apache.hadoop.ozone.OzoneConsts;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.scm.server.StorageContainerManager;
import org.apache.hadoop.hdds.scm.XceiverClientManager;
import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerInfo;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.test.LambdaTestUtils;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
@@ -35,6 +43,7 @@ import java.util.ArrayList;
import java.util.List;
import java.util.NavigableSet;
import java.util.Random;
+import org.slf4j.event.Level;
/**
* Tests for ContainerStateManager.
@@ -333,4 +342,74 @@ public class TestContainerStateManager {
Assert.assertEquals(allocatedSize, currentInfo.getAllocatedBytes());
}
}
+
+ @Test
+ public void testReplicaMap() throws Exception {
+ GenericTestUtils.setLogLevel(ContainerStateMap.getLOG(), Level.DEBUG);
+ GenericTestUtils.LogCapturer logCapturer = GenericTestUtils.LogCapturer
+ .captureLogs(ContainerStateMap.getLOG());
+ DatanodeDetails dn1 = DatanodeDetails.newBuilder().setHostName("host1")
+ .setIpAddress("1.1.1.1")
+ .setUuid(UUID.randomUUID().toString()).build();
+ DatanodeDetails dn2 = DatanodeDetails.newBuilder().setHostName("host2")
+ .setIpAddress("2.2.2.2")
+ .setUuid(UUID.randomUUID().toString()).build();
+
+ // Test 1: no replicas exist
+ ContainerID containerID = ContainerID.valueof(RandomUtils.nextLong());
+ Set<DatanodeDetails> replicaSet;
+ LambdaTestUtils.intercept(SCMException.class, "", () -> {
+ containerStateManager.getContainerReplicas(containerID);
+ });
+
+ // Test 2: Add replica nodes and then test
+ containerStateManager.addContainerReplica(containerID, dn1);
+ containerStateManager.addContainerReplica(containerID, dn2);
+ replicaSet = containerStateManager.getContainerReplicas(containerID);
+ Assert.assertEquals(2, replicaSet.size());
+ Assert.assertTrue(replicaSet.contains(dn1));
+ Assert.assertTrue(replicaSet.contains(dn2));
+
+ // Test 3: Remove one replica node and then test
+ containerStateManager.removeContainerReplica(containerID, dn1);
+ replicaSet = containerStateManager.getContainerReplicas(containerID);
+ Assert.assertEquals(1, replicaSet.size());
+ Assert.assertFalse(replicaSet.contains(dn1));
+ Assert.assertTrue(replicaSet.contains(dn2));
+
+ // Test 4: Remove second replica node and then test
+ containerStateManager.removeContainerReplica(containerID, dn2);
+ replicaSet = containerStateManager.getContainerReplicas(containerID);
+ Assert.assertEquals(0, replicaSet.size());
+ Assert.assertFalse(replicaSet.contains(dn1));
+ Assert.assertFalse(replicaSet.contains(dn2));
+
+ // Test 5: Re-insert dn1
+ containerStateManager.addContainerReplica(containerID, dn1);
+ replicaSet = containerStateManager.getContainerReplicas(containerID);
+ Assert.assertEquals(1, replicaSet.size());
+ Assert.assertTrue(replicaSet.contains(dn1));
+ Assert.assertFalse(replicaSet.contains(dn2));
+
+ // Re-insert dn2
+ containerStateManager.addContainerReplica(containerID, dn2);
+ replicaSet = containerStateManager.getContainerReplicas(containerID);
+ Assert.assertEquals(2, replicaSet.size());
+ Assert.assertTrue(replicaSet.contains(dn1));
+ Assert.assertTrue(replicaSet.contains(dn2));
+
+ Assert.assertFalse(logCapturer.getOutput().contains(
+ "ReplicaMap already contains entry for container Id: " + containerID
+ .toString() + ",DataNode: " + dn1.toString()));
+ // Re-insert dn1
+ containerStateManager.addContainerReplica(containerID, dn1);
+ replicaSet = containerStateManager.getContainerReplicas(containerID);
+ Assert.assertEquals(2, replicaSet.size());
+ Assert.assertTrue(replicaSet.contains(dn1));
+ Assert.assertTrue(replicaSet.contains(dn2));
+ Assert.assertTrue(logCapturer.getOutput().contains(
+ "ReplicaMap already contains entry for container Id: " + containerID
+ .toString() + ",DataNode: " + dn1.toString()));
+ }
+
}
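One detail of the ContainerStateMap changes worth calling out is the
locking idiom: AutoCloseableLock is acquired in a try-with-resources
block, so the lock is released when the block exits, and the "not found"
SCMException is deliberately thrown outside the critical section. Below is
a condensed sketch of that pattern, using a stand-in lock class rather
than the real org.apache.hadoop.util.AutoCloseableLock.

import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Stand-in for AutoCloseableLock: acquire() locks and returns itself,
// close() unlocks, so try-with-resources scopes the critical section.
final class CloseableLock implements AutoCloseable {
  private final ReentrantLock lock = new ReentrantLock();

  CloseableLock acquire() {
    lock.lock();
    return this;
  }

  @Override
  public void close() {
    lock.unlock();
  }
}

class ReplicaLookupSketch {
  private final CloseableLock autoLock = new CloseableLock();

  String lookup(Map<Long, String> replicas, long containerId) {
    try (CloseableLock ignored = autoLock.acquire()) {
      if (replicas.containsKey(containerId)) {
        return replicas.get(containerId);
      }
    }
    // Raised after the lock is released, mirroring getContainerReplicas.
    throw new IllegalStateException(
        "No entry exists for containerId: " + containerId);
  }
}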
[15/50] hadoop git commit: HDDS-187. Command status publisher for
datanode. Contributed by Ajay Kumar.
Posted by zh...@apache.org.
HDDS-187. Command status publisher for datanode.
Contributed by Ajay Kumar.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f89e2659
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f89e2659
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f89e2659
Branch: refs/heads/HDFS-13572
Commit: f89e265905f39c8e51263a3946a8b8e6ab4ebad9
Parents: 87eeb26
Author: Anu Engineer <ae...@apache.org>
Authored: Thu Jul 12 21:34:32 2018 -0700
Committer: Anu Engineer <ae...@apache.org>
Committed: Thu Jul 12 21:35:12 2018 -0700
----------------------------------------------------------------------
.../org/apache/hadoop/hdds/HddsConfigKeys.java | 8 +
.../org/apache/hadoop/hdds/HddsIdFactory.java | 53 ++++++
.../common/src/main/resources/ozone-default.xml | 9 +
.../apache/hadoop/utils/TestHddsIdFactory.java | 77 +++++++++
.../report/CommandStatusReportPublisher.java | 71 ++++++++
.../common/report/ReportPublisher.java | 9 +
.../common/report/ReportPublisherFactory.java | 4 +
.../statemachine/DatanodeStateMachine.java | 2 +
.../common/statemachine/StateContext.java | 70 ++++++++
.../CloseContainerCommandHandler.java | 5 +-
.../commandhandler/CommandHandler.java | 11 ++
.../DeleteBlocksCommandHandler.java | 166 ++++++++++---------
.../ReplicateContainerCommandHandler.java | 7 +-
.../commands/CloseContainerCommand.java | 36 ++--
.../ozone/protocol/commands/CommandStatus.java | 141 ++++++++++++++++
.../protocol/commands/DeleteBlocksCommand.java | 13 +-
.../commands/ReplicateContainerCommand.java | 20 ++-
.../protocol/commands/ReregisterCommand.java | 10 ++
.../ozone/protocol/commands/SCMCommand.java | 19 +++
.../StorageContainerDatanodeProtocol.proto | 21 +++
.../ozone/container/common/ScmTestMock.java | 33 +++-
.../common/report/TestReportPublisher.java | 75 ++++++++-
.../hadoop/hdds/scm/events/SCMEvents.java | 57 ++++---
.../server/SCMDatanodeHeartbeatDispatcher.java | 23 ++-
.../TestSCMDatanodeHeartbeatDispatcher.java | 25 ++-
.../ozone/container/common/TestEndPoint.java | 111 ++++++++++++-
26 files changed, 935 insertions(+), 141 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
index dec2c1c..8b449fb 100644
--- a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
@@ -17,7 +17,15 @@
*/
package org.apache.hadoop.hdds;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+/**
+ * Config class for HDDS.
+ */
public final class HddsConfigKeys {
private HddsConfigKeys() {
}
+ public static final String HDDS_COMMAND_STATUS_REPORT_INTERVAL =
+ "hdds.command.status.report.interval";
+ public static final String HDDS_COMMAND_STATUS_REPORT_INTERVAL_DEFAULT =
+ ScmConfigKeys.OZONE_SCM_HEARBEAT_INTERVAL_DEFAULT;
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsIdFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsIdFactory.java b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsIdFactory.java
new file mode 100644
index 0000000..b244b8c
--- /dev/null
+++ b/hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsIdFactory.java
@@ -0,0 +1,53 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds;
+
+import java.util.UUID;
+import java.util.concurrent.atomic.AtomicLong;
+
+/**
+ * HDDS Id generator.
+ */
+public final class HddsIdFactory {
+ private HddsIdFactory() {
+ }
+
+ private static final AtomicLong LONG_COUNTER = new AtomicLong(
+ System.currentTimeMillis());
+
+ /**
+ * Returns an incrementing long. This class doesn't persist the initial
+ * value for long ids, so ids generated after a restart may collide with
+ * previously generated ids.
+ *
+ * @return long
+ */
+ public static long getLongId() {
+ return LONG_COUNTER.incrementAndGet();
+ }
+
+ /**
+ * Returns a random UUID.
+ *
+ * @return UUID.
+ */
+ public static UUID getUUId() {
+ return UUID.randomUUID();
+ }
+
+}
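The collision caveat in the getLongId() javadoc is easy to miss: seeding
the counter from System.currentTimeMillis() only avoids reuse across
restarts if fewer than one id per millisecond was issued beforehand. A
self-contained demonstration of the failure mode (illustrative only, not
part of the patch):

import java.util.concurrent.atomic.AtomicLong;

// If ids are handed out faster than one per millisecond, the counter runs
// ahead of the wall clock, and a quick restart reseeded from
// currentTimeMillis() can re-issue ids the previous incarnation used.
public final class IdCollisionDemo {
  public static void main(String[] args) {
    AtomicLong counter = new AtomicLong(System.currentTimeMillis());

    // Burst of 10,000 ids issued "instantly": the counter is now roughly
    // ten seconds ahead of the clock.
    long lastIssued = 0;
    for (int i = 0; i < 10_000; i++) {
      lastIssued = counter.incrementAndGet();
    }

    // A restart within those ten seconds reseeds below lastIssued, so the
    // new incarnation starts re-issuing ids already handed out.
    long reseed = System.currentTimeMillis();
    System.out.println("collision possible: " + (reseed <= lastIssued));
  }
}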
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/common/src/main/resources/ozone-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdds/common/src/main/resources/ozone-default.xml b/hadoop-hdds/common/src/main/resources/ozone-default.xml
index d5ce9e6..1b6fb33 100644
--- a/hadoop-hdds/common/src/main/resources/ozone-default.xml
+++ b/hadoop-hdds/common/src/main/resources/ozone-default.xml
@@ -1061,4 +1061,13 @@
</description>
</property>
+ <property>
+ <name>hdds.command.status.report.interval</name>
+ <value>30s</value>
+ <tag>OZONE, DATANODE, MANAGEMENT</tag>
+ <description>Time interval at which the datanode sends the status of
+ commands executed since the last report. The unit can be specified
+ with a suffix (ns, ms, s, m, h, d).</description>
+ </property>
+
</configuration>
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/TestHddsIdFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/TestHddsIdFactory.java b/hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/TestHddsIdFactory.java
new file mode 100644
index 0000000..a341ccc
--- /dev/null
+++ b/hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/TestHddsIdFactory.java
@@ -0,0 +1,77 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.utils;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import org.apache.hadoop.hdds.HddsIdFactory;
+import org.junit.After;
+import static org.junit.Assert.assertEquals;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ * Tests for {@link HddsIdFactory}.
+ */
+public class TestHddsIdFactory {
+
+ private static final Set<Long> ID_SET = ConcurrentHashMap.newKeySet();
+ private static final int IDS_PER_THREAD = 10000;
+ private static final int NUM_OF_THREADS = 5;
+
+ @After
+ public void cleanup() {
+ ID_SET.clear();
+ }
+
+ @Test
+ public void testGetLongId() throws Exception {
+
+ ExecutorService executor = Executors.newFixedThreadPool(5);
+ List<Callable<Integer>> tasks = new ArrayList<>(5);
+ addTasks(tasks);
+ List<Future<Integer>> result = executor.invokeAll(tasks);
+ assertEquals(IDS_PER_THREAD * NUM_OF_THREADS, ID_SET.size());
+ for (Future<Integer> r : result) {
+ assertEquals(r.get().intValue(), IDS_PER_THREAD);
+ }
+ }
+
+ private void addTasks(List<Callable<Integer>> tasks) {
+ for (int i = 0; i < NUM_OF_THREADS; i++) {
+ Callable<Integer> task = () -> {
+ for (int idNum = 0; idNum < IDS_PER_THREAD; idNum++) {
+ long var = HddsIdFactory.getLongId();
+ // Set#add returns false for duplicates and is atomic on a concurrent
+ // set, so this duplicate check is race-free across threads.
+ if (!ID_SET.add(var)) {
+ Assert.fail("Duplicate id found");
+ }
+ }
+ return IDS_PER_THREAD;
+ };
+ tasks.add(task);
+ }
+ }
+}
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/CommandStatusReportPublisher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/CommandStatusReportPublisher.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/CommandStatusReportPublisher.java
new file mode 100644
index 0000000..ca5174a
--- /dev/null
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/CommandStatusReportPublisher.java
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.container.common.report;
+
+import java.util.Iterator;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.CommandStatus.Status;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.CommandStatusReportsProto;
+import org.apache.hadoop.ozone.protocol.commands.CommandStatus;
+
+/**
+ * Publishes the CommandStatusReport that is sent to SCM as part of the
+ * heartbeat. A CommandStatusReport consists of the following information:
+ * - type : type of command.
+ * - status : status of command execution (PENDING, EXECUTED, FAILED).
+ * - cmdId : Command id.
+ * - msg : optional message.
+ */
+public class CommandStatusReportPublisher extends
+ ReportPublisher<CommandStatusReportsProto> {
+
+ private long cmdStatusReportInterval = -1;
+
+ @Override
+ protected long getReportFrequency() {
+ if (cmdStatusReportInterval == -1) {
+ cmdStatusReportInterval = getConf().getTimeDuration(
+ HddsConfigKeys.HDDS_COMMAND_STATUS_REPORT_INTERVAL,
+ HddsConfigKeys.HDDS_COMMAND_STATUS_REPORT_INTERVAL_DEFAULT,
+ TimeUnit.MILLISECONDS);
+ }
+ return cmdStatusReportInterval;
+ }
+
+ @Override
+ protected CommandStatusReportsProto getReport() {
+ Map<Long, CommandStatus> map = this.getContext()
+ .getCommandStatusMap();
+ Iterator<Long> iterator = map.keySet().iterator();
+ CommandStatusReportsProto.Builder builder = CommandStatusReportsProto
+ .newBuilder();
+
+ iterator.forEachRemaining(key -> {
+ CommandStatus cmdStatus = map.get(key);
+ builder.addCmdStatus(cmdStatus.getProtoBufMessage());
+ // If status is still pending then don't remove it from map as
+ // CommandHandler will change its status when it works on this command.
+ if (!cmdStatus.getStatus().equals(Status.PENDING)) {
+ map.remove(key);
+ }
+ });
+ return builder.build();
+ }
+}
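Note that getReport() above removes entries from the map while iterating
over its key set. That is only safe because StateContext backs the map
with a ConcurrentHashMap, whose iterators are weakly consistent and never
throw ConcurrentModificationException; with a plain HashMap the same loop
would fail. A small standalone illustration (class and variable names here
are mine, not from the patch):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class WeaklyConsistentRemovalDemo {
  public static void main(String[] args) {
    Map<Long, String> statuses = new ConcurrentHashMap<>();
    statuses.put(1L, "EXECUTED");
    statuses.put(2L, "PENDING");
    statuses.put(3L, "FAILED");

    // Report every status, but keep PENDING entries for the next round,
    // mirroring the keep-if-pending logic in getReport().
    statuses.keySet().iterator().forEachRemaining(key -> {
      if (!"PENDING".equals(statuses.get(key))) {
        statuses.remove(key);
      }
    });

    System.out.println(statuses); // only {2=PENDING} survives
  }
}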
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java
index 4ff47a0..105f073 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisher.java
@@ -93,4 +93,13 @@ public abstract class ReportPublisher<T extends GeneratedMessage>
*/
protected abstract T getReport();
+ /**
+ * Returns {@link StateContext}.
+ *
+ * @return the StateContext
+ */
+ protected StateContext getContext() {
+ return context;
+ }
+
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisherFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisherFactory.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisherFactory.java
index dc246d9..ea89280 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisherFactory.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/report/ReportPublisherFactory.java
@@ -19,6 +19,8 @@ package org.apache.hadoop.ozone.container.common.report;
import com.google.protobuf.GeneratedMessage;
import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.CommandStatusReportsProto;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
import org.apache.hadoop.hdds.protocol.proto
@@ -49,6 +51,8 @@ public class ReportPublisherFactory {
report2publisher.put(NodeReportProto.class, NodeReportPublisher.class);
report2publisher.put(ContainerReportsProto.class,
ContainerReportPublisher.class);
+ report2publisher.put(CommandStatusReportsProto.class,
+ CommandStatusReportPublisher.class);
}
/**
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
index 245d76f..69a243e 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
@@ -21,6 +21,7 @@ import com.google.common.util.concurrent.ThreadFactoryBuilder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.CommandStatusReportsProto;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
import org.apache.hadoop.hdds.protocol.proto
@@ -107,6 +108,7 @@ public class DatanodeStateMachine implements Closeable {
.setStateContext(context)
.addPublisherFor(NodeReportProto.class)
.addPublisherFor(ContainerReportsProto.class)
+ .addPublisherFor(CommandStatusReportsProto.class)
.build();
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java
index 98eb7a0..7ed30f8 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java
@@ -17,12 +17,17 @@
package org.apache.hadoop.ozone.container.common.statemachine;
import com.google.protobuf.GeneratedMessage;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.CommandStatus.Status;
import org.apache.hadoop.ozone.container.common.states.DatanodeState;
import org.apache.hadoop.ozone.container.common.states.datanode
.InitDatanodeState;
import org.apache.hadoop.ozone.container.common.states.datanode
.RunningDatanodeState;
+import org.apache.hadoop.ozone.protocol.commands.CommandStatus;
+import org.apache.hadoop.ozone.protocol.commands.CommandStatus.CommandStatusBuilder;
import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -48,6 +53,7 @@ public class StateContext {
static final Logger LOG =
LoggerFactory.getLogger(StateContext.class);
private final Queue<SCMCommand> commandQueue;
+ private final Map<Long, CommandStatus> cmdStatusMap;
private final Lock lock;
private final DatanodeStateMachine parent;
private final AtomicLong stateExecutionCount;
@@ -68,6 +74,7 @@ public class StateContext {
this.state = state;
this.parent = parent;
commandQueue = new LinkedList<>();
+ cmdStatusMap = new ConcurrentHashMap<>();
reports = new LinkedList<>();
lock = new ReentrantLock();
stateExecutionCount = new AtomicLong(0);
@@ -269,6 +276,7 @@ public class StateContext {
} finally {
lock.unlock();
}
+ this.addCmdStatus(command);
}
/**
@@ -279,4 +287,66 @@ public class StateContext {
return stateExecutionCount.get();
}
+ /**
+ * Returns the {@link CommandStatus} for the given command id, or null if
+ * none is tracked.
+ *
+ * @return {@link CommandStatus} or null.
+ */
+ public CommandStatus getCmdStatus(Long key) {
+ return cmdStatusMap.get(key);
+ }
+
+ /**
+ * Adds a {@link CommandStatus} to the State Machine.
+ *
+ * @param status - {@link CommandStatus}.
+ */
+ public void addCmdStatus(Long key, CommandStatus status) {
+ cmdStatusMap.put(key, status);
+ }
+
+ /**
+ * Adds a {@link CommandStatus} to the State Machine for given SCMCommand.
+ *
+ * @param cmd - {@link SCMCommand}.
+ */
+ public void addCmdStatus(SCMCommand cmd) {
+ this.addCmdStatus(cmd.getCmdId(),
+ CommandStatusBuilder.newBuilder()
+ .setCmdId(cmd.getCmdId())
+ .setStatus(Status.PENDING)
+ .setType(cmd.getType())
+ .build());
+ }
+
+ /**
+ * Get map holding all {@link CommandStatus} objects.
+ *
+ */
+ public Map<Long, CommandStatus> getCommandStatusMap() {
+ return cmdStatusMap;
+ }
+
+ /**
+ * Remove object from cache in StateContext#cmdStatusMap.
+ *
+ */
+ public void removeCommandStatus(Long cmdId) {
+ cmdStatusMap.remove(cmdId);
+ }
+
+ /**
+ * Updates the status of a pending command.
+ * @param cmdId command id
+ * @param cmdExecuted true if the command executed successfully, else false
+ * @return true if the command status was updated, else false.
+ */
+ public boolean updateCommandStatus(Long cmdId, boolean cmdExecuted) {
+ if(cmdStatusMap.containsKey(cmdId)) {
+ cmdStatusMap.get(cmdId)
+ .setStatus(cmdExecuted ? Status.EXECUTED : Status.FAILED);
+ return true;
+ }
+ return false;
+ }
}
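Taken together, the StateContext additions define a small lifecycle: a
command is registered as PENDING when it is queued, a command handler
later flips it to EXECUTED or FAILED, and the report publisher drains
everything that is no longer PENDING. A condensed, hypothetical model of
just those transitions (names here are stand-ins, not the patched API):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class CommandStatusLifecycleSketch {
  enum Status { PENDING, EXECUTED, FAILED }

  private final Map<Long, Status> cmdStatusMap = new ConcurrentHashMap<>();

  // Mirrors StateContext#addCmdStatus(SCMCommand): queued commands start
  // out PENDING.
  public void onCommandQueued(long cmdId) {
    cmdStatusMap.put(cmdId, Status.PENDING);
  }

  // Mirrors StateContext#updateCommandStatus(Long, boolean): returns false
  // when no status is tracked for cmdId, which is the case the handlers'
  // "not found" debug branch logs.
  public boolean onCommandFinished(long cmdId, boolean executed) {
    return cmdStatusMap.computeIfPresent(cmdId,
        (id, s) -> executed ? Status.EXECUTED : Status.FAILED) != null;
  }

  public Status get(long cmdId) {
    return cmdStatusMap.get(cmdId);
  }
}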
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java
index 45f2bbd..f58cbae 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java
@@ -41,6 +41,7 @@ public class CloseContainerCommandHandler implements CommandHandler {
LoggerFactory.getLogger(CloseContainerCommandHandler.class);
private int invocationCount;
private long totalTime;
+ private boolean cmdExecuted;
/**
* Constructs a ContainerReport handler.
@@ -61,6 +62,7 @@ public class CloseContainerCommandHandler implements CommandHandler {
StateContext context, SCMConnectionManager connectionManager) {
LOG.debug("Processing Close Container command.");
invocationCount++;
+ cmdExecuted = false;
long startTime = Time.monotonicNow();
// TODO: define this as INVALID_CONTAINER_ID in HddsConsts.java (TBA)
long containerID = -1;
@@ -88,10 +90,11 @@ public class CloseContainerCommandHandler implements CommandHandler {
// submit the close container request for the XceiverServer to handle
container.submitContainerRequest(
request.build(), replicationType);
-
+ cmdExecuted = true;
} catch (Exception e) {
LOG.error("Can't close container " + containerID, e);
} finally {
+ updateCommandStatus(context, command, cmdExecuted, LOG);
long endTime = Time.monotonicNow();
totalTime += endTime - startTime;
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CommandHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CommandHandler.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CommandHandler.java
index 60e2dc4..2016419 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CommandHandler.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CommandHandler.java
@@ -24,6 +24,7 @@ import org.apache.hadoop.ozone.container.common.statemachine
import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
+import org.slf4j.Logger;
/**
* Generic interface for handlers.
@@ -58,4 +59,14 @@ public interface CommandHandler {
*/
long getAverageRunTime();
+ /**
+ * Default implementation for updating command status.
+ */
+ default void updateCommandStatus(StateContext context, SCMCommand command,
+ boolean cmdExecuted, Logger log) {
+ if (!context.updateCommandStatus(command.getCmdId(), cmdExecuted)) {
+ log.debug("{} with cmdId:{} not found.", command.getType(),
+ command.getCmdId());
+ }
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
index c3d1596..9640f93 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/DeleteBlocksCommandHandler.java
@@ -21,7 +21,8 @@ import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMCommandProto;
-import org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.hdds.scm.container.common.helpers
+ .StorageContainerException;
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.ContainerBlocksDeletionACKProto;
@@ -54,7 +55,8 @@ import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.List;
-import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.Result.CONTAINER_NOT_FOUND;
+import static org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos
+ .Result.CONTAINER_NOT_FOUND;
/**
* Handle block deletion commands.
@@ -68,6 +70,7 @@ public class DeleteBlocksCommandHandler implements CommandHandler {
private final Configuration conf;
private int invocationCount;
private long totalTime;
+ private boolean cmdExecuted;
public DeleteBlocksCommandHandler(ContainerSet cset,
Configuration conf) {
@@ -78,93 +81,98 @@ public class DeleteBlocksCommandHandler implements CommandHandler {
@Override
public void handle(SCMCommand command, OzoneContainer container,
StateContext context, SCMConnectionManager connectionManager) {
- if (command.getType() != SCMCommandProto.Type.deleteBlocksCommand) {
- LOG.warn("Skipping handling command, expected command "
- + "type {} but found {}",
- SCMCommandProto.Type.deleteBlocksCommand, command.getType());
- return;
- }
- LOG.debug("Processing block deletion command.");
- invocationCount++;
+ cmdExecuted = false;
long startTime = Time.monotonicNow();
-
- // move blocks to deleting state.
- // this is a metadata update, the actual deletion happens in another
- // recycling thread.
- DeleteBlocksCommand cmd = (DeleteBlocksCommand) command;
- List<DeletedBlocksTransaction> containerBlocks = cmd.blocksTobeDeleted();
-
-
- DeletedContainerBlocksSummary summary =
- DeletedContainerBlocksSummary.getFrom(containerBlocks);
- LOG.info("Start to delete container blocks, TXIDs={}, "
- + "numOfContainers={}, numOfBlocks={}",
- summary.getTxIDSummary(),
- summary.getNumOfContainers(),
- summary.getNumOfBlocks());
-
- ContainerBlocksDeletionACKProto.Builder resultBuilder =
- ContainerBlocksDeletionACKProto.newBuilder();
- containerBlocks.forEach(entry -> {
- DeleteBlockTransactionResult.Builder txResultBuilder =
- DeleteBlockTransactionResult.newBuilder();
- txResultBuilder.setTxID(entry.getTxID());
- try {
- long containerId = entry.getContainerID();
- Container cont = containerSet.getContainer(containerId);
- if(cont == null) {
- throw new StorageContainerException("Unable to find the container "
- + containerId, CONTAINER_NOT_FOUND);
- }
- ContainerProtos.ContainerType containerType = cont.getContainerType();
- switch (containerType) {
- case KeyValueContainer:
- KeyValueContainerData containerData = (KeyValueContainerData)
- cont.getContainerData();
- deleteKeyValueContainerBlocks(containerData, entry);
- txResultBuilder.setSuccess(true);
- break;
- default:
- LOG.error("Delete Blocks Command Handler is not implemented for " +
- "containerType {}", containerType);
- }
- } catch (IOException e) {
- LOG.warn("Failed to delete blocks for container={}, TXID={}",
- entry.getContainerID(), entry.getTxID(), e);
- txResultBuilder.setSuccess(false);
+ try {
+ if (command.getType() != SCMCommandProto.Type.deleteBlocksCommand) {
+ LOG.warn("Skipping handling command, expected command "
+ + "type {} but found {}",
+ SCMCommandProto.Type.deleteBlocksCommand, command.getType());
+ return;
}
- resultBuilder.addResults(txResultBuilder.build());
- });
- ContainerBlocksDeletionACKProto blockDeletionACK = resultBuilder.build();
-
- // Send ACK back to SCM as long as meta updated
- // TODO Or we should wait until the blocks are actually deleted?
- if (!containerBlocks.isEmpty()) {
- for (EndpointStateMachine endPoint : connectionManager.getValues()) {
+ LOG.debug("Processing block deletion command.");
+ invocationCount++;
+
+ // move blocks to deleting state.
+ // this is a metadata update, the actual deletion happens in another
+ // recycling thread.
+ DeleteBlocksCommand cmd = (DeleteBlocksCommand) command;
+ List<DeletedBlocksTransaction> containerBlocks = cmd.blocksTobeDeleted();
+
+ DeletedContainerBlocksSummary summary =
+ DeletedContainerBlocksSummary.getFrom(containerBlocks);
+ LOG.info("Start to delete container blocks, TXIDs={}, "
+ + "numOfContainers={}, numOfBlocks={}",
+ summary.getTxIDSummary(),
+ summary.getNumOfContainers(),
+ summary.getNumOfBlocks());
+
+ ContainerBlocksDeletionACKProto.Builder resultBuilder =
+ ContainerBlocksDeletionACKProto.newBuilder();
+ containerBlocks.forEach(entry -> {
+ DeleteBlockTransactionResult.Builder txResultBuilder =
+ DeleteBlockTransactionResult.newBuilder();
+ txResultBuilder.setTxID(entry.getTxID());
try {
- if (LOG.isDebugEnabled()) {
- LOG.debug("Sending following block deletion ACK to SCM");
- for (DeleteBlockTransactionResult result :
- blockDeletionACK.getResultsList()) {
- LOG.debug(result.getTxID() + " : " + result.getSuccess());
- }
+ long containerId = entry.getContainerID();
+ Container cont = containerSet.getContainer(containerId);
+ if (cont == null) {
+ throw new StorageContainerException("Unable to find the container "
+ + containerId, CONTAINER_NOT_FOUND);
+ }
+ ContainerProtos.ContainerType containerType = cont.getContainerType();
+ switch (containerType) {
+ case KeyValueContainer:
+ KeyValueContainerData containerData = (KeyValueContainerData)
+ cont.getContainerData();
+ deleteKeyValueContainerBlocks(containerData, entry);
+ txResultBuilder.setSuccess(true);
+ break;
+ default:
+ LOG.error(
+ "Delete Blocks Command Handler is not implemented for " +
+ "containerType {}", containerType);
}
- endPoint.getEndPoint()
- .sendContainerBlocksDeletionACK(blockDeletionACK);
} catch (IOException e) {
- LOG.error("Unable to send block deletion ACK to SCM {}",
- endPoint.getAddress().toString(), e);
+ LOG.warn("Failed to delete blocks for container={}, TXID={}",
+ entry.getContainerID(), entry.getTxID(), e);
+ txResultBuilder.setSuccess(false);
+ }
+ resultBuilder.addResults(txResultBuilder.build());
+ });
+ ContainerBlocksDeletionACKProto blockDeletionACK = resultBuilder.build();
+
+ // Send ACK back to SCM as long as meta updated
+ // TODO Or we should wait until the blocks are actually deleted?
+ if (!containerBlocks.isEmpty()) {
+ for (EndpointStateMachine endPoint : connectionManager.getValues()) {
+ try {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Sending following block deletion ACK to SCM");
+ for (DeleteBlockTransactionResult result :
+ blockDeletionACK.getResultsList()) {
+ LOG.debug(result.getTxID() + " : " + result.getSuccess());
+ }
+ }
+ endPoint.getEndPoint()
+ .sendContainerBlocksDeletionACK(blockDeletionACK);
+ } catch (IOException e) {
+ LOG.error("Unable to send block deletion ACK to SCM {}",
+ endPoint.getAddress().toString(), e);
+ }
}
}
+ cmdExecuted = true;
+ } finally {
+ updateCommandStatus(context, command, cmdExecuted, LOG);
+ long endTime = Time.monotonicNow();
+ totalTime += endTime - startTime;
}
-
- long endTime = Time.monotonicNow();
- totalTime += endTime - startTime;
}
/**
- * Move a bunch of blocks from a container to deleting state.
- * This is a meta update, the actual deletes happen in async mode.
+ * Move a bunch of blocks from a container to deleting state. This is a meta
+ * update; the actual deletes happen in async mode.
*
* @param containerData - KeyValueContainerData
* @param delTX a block deletion transaction.
@@ -222,7 +230,7 @@ public class DeleteBlocksCommandHandler implements CommandHandler {
}
} else {
LOG.debug("Block {} not found or already under deletion in"
- + " container {}, skip deleting it.", blk, containerId);
+ + " container {}, skip deleting it.", blk, containerId);
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ReplicateContainerCommandHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ReplicateContainerCommandHandler.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ReplicateContainerCommandHandler.java
index b4e83b7..fe1d4e8 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ReplicateContainerCommandHandler.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ReplicateContainerCommandHandler.java
@@ -39,12 +39,17 @@ public class ReplicateContainerCommandHandler implements CommandHandler {
private int invocationCount;
private long totalTime;
+ private boolean cmdExecuted;
@Override
public void handle(SCMCommand command, OzoneContainer container,
StateContext context, SCMConnectionManager connectionManager) {
LOG.warn("Replicate command is not yet handled");
-
+ try {
+ cmdExecuted = true;
+ } finally {
+ updateCommandStatus(context, command, cmdExecuted, LOG);
+ }
}
@Override
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CloseContainerCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CloseContainerCommand.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CloseContainerCommand.java
index c7d8df5..6b7c22c 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CloseContainerCommand.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CloseContainerCommand.java
@@ -1,19 +1,18 @@
/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
* <p>
* http://www.apache.org/licenses/LICENSE-2.0
* <p>
* Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
*/
package org.apache.hadoop.ozone.protocol.commands;
@@ -24,7 +23,6 @@ import org.apache.hadoop.hdds.protocol.proto
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.CloseContainerCommandProto;
-
/**
* Asks datanode to close a container.
*/
@@ -36,6 +34,15 @@ public class CloseContainerCommand
public CloseContainerCommand(long containerID,
HddsProtos.ReplicationType replicationType) {
+ super();
+ this.containerID = containerID;
+ this.replicationType = replicationType;
+ }
+
+ // Should be called only for protobuf conversion
+ private CloseContainerCommand(long containerID,
+ HddsProtos.ReplicationType replicationType, long cmdId) {
+ super(cmdId);
this.containerID = containerID;
this.replicationType = replicationType;
}
@@ -63,6 +70,7 @@ public class CloseContainerCommand
public CloseContainerCommandProto getProto() {
return CloseContainerCommandProto.newBuilder()
.setContainerID(containerID)
+ .setCmdId(getCmdId())
.setReplicationType(replicationType).build();
}
@@ -70,8 +78,8 @@ public class CloseContainerCommand
CloseContainerCommandProto closeContainerProto) {
Preconditions.checkNotNull(closeContainerProto);
return new CloseContainerCommand(closeContainerProto.getContainerID(),
- closeContainerProto.getReplicationType());
-
+ closeContainerProto.getReplicationType(), closeContainerProto
+ .getCmdId());
}
public long getContainerID() {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CommandStatus.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CommandStatus.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CommandStatus.java
new file mode 100644
index 0000000..bf99700
--- /dev/null
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CommandStatus.java
@@ -0,0 +1,141 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.ozone.protocol.commands;
+
+import org.apache.hadoop.hdds.protocol.proto
+ .StorageContainerDatanodeProtocolProtos;
+import org.apache.hadoop.hdds.protocol.proto
+ .StorageContainerDatanodeProtocolProtos.CommandStatus.Status;
+import org.apache.hadoop.hdds.protocol.proto
+ .StorageContainerDatanodeProtocolProtos.SCMCommandProto;
+import org.apache.hadoop.hdds.protocol.proto
+ .StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type;
+
+/**
+ * A class that is used to communicate status of datanode commands.
+ */
+public class CommandStatus {
+
+ private SCMCommandProto.Type type;
+ private Long cmdId;
+ private Status status;
+ private String msg;
+
+ public Type getType() {
+ return type;
+ }
+
+ public Long getCmdId() {
+ return cmdId;
+ }
+
+ public Status getStatus() {
+ return status;
+ }
+
+ public String getMsg() {
+ return msg;
+ }
+
+ /**
+ * Allows the status to be changed after the CommandStatus is initialized.
+ *
+ * @param status the new status of the command
+ */
+ public void setStatus(Status status) {
+ this.status = status;
+ }
+
+ /**
+ * Returns a CommandStatus from the protocol buffers.
+ *
+ * @param cmdStatusProto - protoBuf Message
+ * @return CommandStatus
+ */
+ public CommandStatus getFromProtoBuf(
+ StorageContainerDatanodeProtocolProtos.CommandStatus cmdStatusProto) {
+ return CommandStatusBuilder.newBuilder()
+ .setCmdId(cmdStatusProto.getCmdId())
+ .setStatus(cmdStatusProto.getStatus())
+ .setType(cmdStatusProto.getType())
+ .setMsg(cmdStatusProto.getMsg()).build();
+ }
+ /**
+ * Returns a CommandStatus from the protocol buffers.
+ *
+ * @return StorageContainerDatanodeProtocolProtos.CommandStatus
+ */
+ public StorageContainerDatanodeProtocolProtos.CommandStatus
+ getProtoBufMessage() {
+ StorageContainerDatanodeProtocolProtos.CommandStatus.Builder builder =
+ StorageContainerDatanodeProtocolProtos.CommandStatus.newBuilder()
+ .setCmdId(this.getCmdId())
+ .setStatus(this.getStatus())
+ .setType(this.getType());
+ if (this.getMsg() != null) {
+ builder.setMsg(this.getMsg());
+ }
+ return builder.build();
+ }
+
+ /**
+ * Builder class for CommandStatus.
+ */
+ public static final class CommandStatusBuilder {
+
+ private SCMCommandProto.Type type;
+ private Long cmdId;
+ private StorageContainerDatanodeProtocolProtos.CommandStatus.Status status;
+ private String msg;
+
+ private CommandStatusBuilder() {
+ }
+
+ public static CommandStatusBuilder newBuilder() {
+ return new CommandStatusBuilder();
+ }
+
+ public CommandStatusBuilder setType(Type type) {
+ this.type = type;
+ return this;
+ }
+
+ public CommandStatusBuilder setCmdId(Long cmdId) {
+ this.cmdId = cmdId;
+ return this;
+ }
+
+ public CommandStatusBuilder setStatus(Status status) {
+ this.status = status;
+ return this;
+ }
+
+ public CommandStatusBuilder setMsg(String msg) {
+ this.msg = msg;
+ return this;
+ }
+
+ public CommandStatus build() {
+ CommandStatus commandStatus = new CommandStatus();
+ commandStatus.type = this.type;
+ commandStatus.msg = this.msg;
+ commandStatus.status = this.status;
+ commandStatus.cmdId = this.cmdId;
+ return commandStatus;
+ }
+ }
+}
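For illustration, a minimal sketch of how datanode-side code might record command progress with the builder above; the names cmd (an SCMCommand) and statusMap are assumptions standing in for whatever the caller actually holds, not part of this patch:

  // Hypothetical bookkeeping with CommandStatusBuilder; 'cmd' and
  // 'statusMap' are illustrative placeholders.
  CommandStatus commandStatus = CommandStatus.CommandStatusBuilder.newBuilder()
      .setCmdId(cmd.getCmdId())
      .setType(cmd.getType())
      .setStatus(Status.PENDING)
      .build();
  statusMap.put(commandStatus.getCmdId(), commandStatus);
  // Later, once the command has been processed:
  commandStatus.setStatus(Status.EXECUTED);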
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/DeleteBlocksCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/DeleteBlocksCommand.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/DeleteBlocksCommand.java
index 4fa33f6..46af794 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/DeleteBlocksCommand.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/DeleteBlocksCommand.java
@@ -7,7 +7,7 @@
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
- * http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -36,6 +36,14 @@ public class DeleteBlocksCommand extends
public DeleteBlocksCommand(List<DeletedBlocksTransaction> blocks) {
+ super();
+ this.blocksTobeDeleted = blocks;
+ }
+
+ // Should be called only for protobuf conversion
+ private DeleteBlocksCommand(List<DeletedBlocksTransaction> blocks,
+ long cmdId) {
+ super(cmdId);
this.blocksTobeDeleted = blocks;
}
@@ -56,11 +64,12 @@ public class DeleteBlocksCommand extends
public static DeleteBlocksCommand getFromProtobuf(
DeleteBlocksCommandProto deleteBlocksProto) {
return new DeleteBlocksCommand(deleteBlocksProto
- .getDeletedBlocksTransactionsList());
+ .getDeletedBlocksTransactionsList(), deleteBlocksProto.getCmdId());
}
public DeleteBlocksCommandProto getProto() {
return DeleteBlocksCommandProto.newBuilder()
+ .setCmdId(getCmdId())
.addAllDeletedBlocksTransactions(blocksTobeDeleted).build();
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReplicateContainerCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReplicateContainerCommand.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReplicateContainerCommand.java
index 834318b..e860c93 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReplicateContainerCommand.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReplicateContainerCommand.java
@@ -30,7 +30,6 @@ import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMCommandProto;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type;
-import org.apache.hadoop.hdds.scm.container.ContainerID;
import com.google.common.base.Preconditions;
@@ -41,11 +40,19 @@ public class ReplicateContainerCommand
extends SCMCommand<ReplicateContainerCommandProto> {
private final long containerID;
-
private final List<DatanodeDetails> sourceDatanodes;
public ReplicateContainerCommand(long containerID,
List<DatanodeDetails> sourceDatanodes) {
+ super();
+ this.containerID = containerID;
+ this.sourceDatanodes = sourceDatanodes;
+ }
+
+ // Should be called only for protobuf conversion
+ public ReplicateContainerCommand(long containerID,
+ List<DatanodeDetails> sourceDatanodes, long cmdId) {
+ super(cmdId);
this.containerID = containerID;
this.sourceDatanodes = sourceDatanodes;
}
@@ -62,6 +69,7 @@ public class ReplicateContainerCommand
public ReplicateContainerCommandProto getProto() {
Builder builder = ReplicateContainerCommandProto.newBuilder()
+ .setCmdId(getCmdId())
.setContainerID(containerID);
for (DatanodeDetails dd : sourceDatanodes) {
builder.addSources(dd.getProtoBufMessage());
@@ -75,12 +83,12 @@ public class ReplicateContainerCommand
List<DatanodeDetails> datanodeDetails =
protoMessage.getSourcesList()
- .stream()
- .map(DatanodeDetails::getFromProtoBuf)
- .collect(Collectors.toList());
+ .stream()
+ .map(DatanodeDetails::getFromProtoBuf)
+ .collect(Collectors.toList());
return new ReplicateContainerCommand(protoMessage.getContainerID(),
- datanodeDetails);
+ datanodeDetails, protoMessage.getCmdId());
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReregisterCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReregisterCommand.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReregisterCommand.java
index 953e31a..d557104 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReregisterCommand.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/ReregisterCommand.java
@@ -49,6 +49,16 @@ public class ReregisterCommand extends
return getProto().toByteArray();
}
+ /**
+ * Not implemented for ReregisterCommand.
+ *
+ * @return 0, as ReregisterCommand does not carry a cmdId.
+ */
+ @Override
+ public long getCmdId() {
+ return 0;
+ }
+
public ReregisterCommandProto getProto() {
return ReregisterCommandProto
.newBuilder()
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/SCMCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/SCMCommand.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/SCMCommand.java
index 35ca802..6cda591 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/SCMCommand.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/SCMCommand.java
@@ -18,6 +18,7 @@
package org.apache.hadoop.ozone.protocol.commands;
import com.google.protobuf.GeneratedMessage;
+import org.apache.hadoop.hdds.HddsIdFactory;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMCommandProto;
@@ -27,6 +28,15 @@ import org.apache.hadoop.hdds.protocol.proto
* @param <T>
*/
public abstract class SCMCommand<T extends GeneratedMessage> {
+ private long cmdId;
+
+ SCMCommand() {
+ this.cmdId = HddsIdFactory.getLongId();
+ }
+
+ SCMCommand(long cmdId) {
+ this.cmdId = cmdId;
+ }
/**
* Returns the type of this command.
* @return Type
@@ -38,4 +48,13 @@ public abstract class SCMCommand<T extends GeneratedMessage> {
* @return A protobuf message.
*/
public abstract byte[] getProtoBufMessage();
+
+ /**
+ * Gets the commandId of this object.
+ * @return the command id as a long.
+ */
+ public long getCmdId() {
+ return cmdId;
+ }
+
}
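As a quick sanity check of the id semantics introduced here, the following fragment uses only the CloseContainerCommand API shown earlier in this patch; it demonstrates that the cmdId assigned by HddsIdFactory survives a protobuf round trip:

  CloseContainerCommand original =
      new CloseContainerCommand(1L, HddsProtos.ReplicationType.RATIS);
  CloseContainerCommand restored =
      CloseContainerCommand.getFromProtobuf(original.getProto());
  // Same wire id on both sides of the conversion.
  assert original.getCmdId() == restored.getCmdId();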
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto b/hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
index 54230c1..4238389 100644
--- a/hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
+++ b/hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
@@ -80,6 +80,7 @@ message SCMHeartbeatRequestProto {
optional NodeReportProto nodeReport = 2;
optional ContainerReportsProto containerReport = 3;
optional ContainerActionsProto containerActions = 4;
+ optional CommandStatusReportsProto commandStatusReport = 5;
}
/*
@@ -127,6 +128,22 @@ message ContainerReportsProto {
repeated ContainerInfo reports = 1;
}
+message CommandStatusReportsProto {
+ repeated CommandStatus cmdStatus = 1;
+}
+
+message CommandStatus {
+ enum Status {
+ PENDING = 1;
+ EXECUTED = 2;
+ FAILED = 3;
+ }
+ required int64 cmdId = 1;
+ required Status status = 2 [default = PENDING];
+ required SCMCommandProto.Type type = 3;
+ optional string msg = 4;
+}
+
message ContainerActionsProto {
repeated ContainerAction containerActions = 1;
}
@@ -193,6 +210,7 @@ message ReregisterCommandProto {}
// HB response from SCM, contains a list of block deletion transactions.
message DeleteBlocksCommandProto {
repeated DeletedBlocksTransaction deletedBlocksTransactions = 1;
+ required int64 cmdId = 3;
}
// The deleted blocks which are stored in deletedBlock.db of scm.
@@ -226,6 +244,7 @@ This command asks the datanode to close a specific container.
message CloseContainerCommandProto {
required int64 containerID = 1;
required hadoop.hdds.ReplicationType replicationType = 2;
+ required int64 cmdId = 3;
}
/**
@@ -233,6 +252,7 @@ This command asks the datanode to delete a specific container.
*/
message DeleteContainerCommandProto {
required int64 containerID = 1;
+ required int64 cmdId = 2;
}
/**
@@ -241,6 +261,7 @@ This command asks the datanode to replicate a container from specific sources.
message ReplicateContainerCommandProto {
required int64 containerID = 1;
repeated DatanodeDetailsProto sources = 2;
+ required int64 cmdId = 3;
}
/**
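Since cmdId, status, and type are all required fields, build() will throw unless each one is set. A minimal Java-side construction, mirroring CommandStatus#getProtoBufMessage() earlier in this patch:

  StorageContainerDatanodeProtocolProtos.CommandStatus cmdStatus =
      StorageContainerDatanodeProtocolProtos.CommandStatus.newBuilder()
          .setCmdId(1L)
          .setType(Type.closeContainerCommand)
          .setStatus(Status.PENDING) // matches the declared default
          .build();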
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ScmTestMock.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ScmTestMock.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ScmTestMock.java
index 8f4b0e3..fb8e7c1 100644
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ScmTestMock.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/ScmTestMock.java
@@ -18,6 +18,8 @@ package org.apache.hadoop.ozone.container.common;
import com.google.common.base.Preconditions;
import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.CommandStatus;
import org.apache.hadoop.hdds.scm.VersionInfo;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.DatanodeDetailsProto;
@@ -59,6 +61,9 @@ public class ScmTestMock implements StorageContainerDatanodeProtocol {
private Map<DatanodeDetails, Map<String, ContainerInfo>> nodeContainers =
new HashMap();
private Map<DatanodeDetails, NodeReportProto> nodeReports = new HashMap<>();
+ private AtomicInteger commandStatusReport = new AtomicInteger(0);
+ private List<CommandStatus> cmdStatusList = new LinkedList<>();
+ private List<SCMCommandProto> scmCommandRequests = new LinkedList<>();
/**
* Returns the number of heartbeats made to this class.
*
@@ -180,10 +185,12 @@ public class ScmTestMock implements StorageContainerDatanodeProtocol {
sendHeartbeat(SCMHeartbeatRequestProto heartbeat) throws IOException {
rpcCount.incrementAndGet();
heartbeatCount.incrementAndGet();
+ if (heartbeat.hasCommandStatusReport()) {
+ cmdStatusList.addAll(heartbeat.getCommandStatusReport().getCmdStatusList());
+ commandStatusReport.incrementAndGet();
+ }
sleepIfNeeded();
- List<SCMCommandProto>
- cmdResponses = new LinkedList<>();
- return SCMHeartbeatResponseProto.newBuilder().addAllCommands(cmdResponses)
+ return SCMHeartbeatResponseProto.newBuilder().addAllCommands(scmCommandRequests)
.setDatanodeUUID(heartbeat.getDatanodeDetails().getUuid())
.build();
}
@@ -302,4 +309,24 @@ public class ScmTestMock implements StorageContainerDatanodeProtocol {
nodeContainers.clear();
}
+
+ public int getCommandStatusReportCount() {
+ return commandStatusReport.get();
+ }
+
+ public List<CommandStatus> getCmdStatusList() {
+ return cmdStatusList;
+ }
+
+ public List<SCMCommandProto> getScmCommandRequests() {
+ return scmCommandRequests;
+ }
+
+ public void clearScmCommandRequests() {
+ scmCommandRequests.clear();
+ }
+
+ public void addScmCommandRequest(SCMCommandProto scmCmd) {
+ scmCommandRequests.add(scmCmd);
+ }
}
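A sketch of the intended test flow with the extended mock; closeCommand is any SCMCommandProto (for example, one built as in TestEndPoint further down) and is an assumption here:

  ScmTestMock mock = new ScmTestMock();
  mock.addScmCommandRequest(closeCommand); // returned in the next heartbeat response
  // ... the datanode heartbeats, eventually including a CommandStatusReportsProto ...
  Assert.assertEquals(1, mock.getCommandStatusReportCount());
  Assert.assertFalse(mock.getCmdStatusList().isEmpty());
  mock.clearScmCommandRequests();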
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
index 5fd9cf6..026e7aa 100644
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
@@ -20,18 +20,27 @@ package org.apache.hadoop.ozone.container.common.report;
import com.google.common.util.concurrent.ThreadFactoryBuilder;
import com.google.protobuf.Descriptors;
import com.google.protobuf.GeneratedMessage;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.HddsIdFactory;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.CommandStatus.Status;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.NodeReportProto;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.SCMHeartbeatRequestProto;
import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
+import org.apache.hadoop.ozone.protocol.commands.CommandStatus;
import org.apache.hadoop.util.concurrent.HadoopExecutors;
import org.junit.Assert;
+import org.junit.BeforeClass;
import org.junit.Test;
import org.mockito.Mockito;
@@ -42,12 +51,20 @@ import java.util.concurrent.TimeUnit;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
/**
* Test cases to test {@link ReportPublisher}.
*/
public class TestReportPublisher {
+ private static Configuration config;
+
+ @BeforeClass
+ public static void setup() {
+ config = new OzoneConfiguration();
+ }
+
/**
* Dummy report publisher for testing.
*/
@@ -93,9 +110,9 @@ public class TestReportPublisher {
.setNameFormat("Unit test ReportManager Thread - %d").build());
publisher.init(dummyContext, executorService);
Thread.sleep(150);
- Assert.assertEquals(1, ((DummyReportPublisher)publisher).getReportCount);
+ Assert.assertEquals(1, ((DummyReportPublisher) publisher).getReportCount);
Thread.sleep(150);
- Assert.assertEquals(2, ((DummyReportPublisher)publisher).getReportCount);
+ Assert.assertEquals(2, ((DummyReportPublisher) publisher).getReportCount);
executorService.shutdown();
}
@@ -110,12 +127,58 @@ public class TestReportPublisher {
publisher.init(dummyContext, executorService);
Thread.sleep(150);
executorService.shutdown();
- Assert.assertEquals(1, ((DummyReportPublisher)publisher).getReportCount);
+ Assert.assertEquals(1, ((DummyReportPublisher) publisher).getReportCount);
verify(dummyContext, times(1)).addReport(null);
}
@Test
+ public void testCommandStatusPublisher() throws InterruptedException {
+ StateContext dummyContext = Mockito.mock(StateContext.class);
+ ReportPublisher publisher = new CommandStatusReportPublisher();
+ final Map<Long, CommandStatus> cmdStatusMap = new ConcurrentHashMap<>();
+ when(dummyContext.getCommandStatusMap()).thenReturn(cmdStatusMap);
+ publisher.setConf(config);
+
+ ScheduledExecutorService executorService = HadoopExecutors
+ .newScheduledThreadPool(1,
+ new ThreadFactoryBuilder().setDaemon(true)
+ .setNameFormat("Unit test ReportManager Thread - %d").build());
+ publisher.init(dummyContext, executorService);
+ Assert.assertEquals(0,
+ ((CommandStatusReportPublisher) publisher).getReport()
+ .getCmdStatusCount());
+
+ // Insert two status objects into the state context map and get the report.
+ CommandStatus obj1 = CommandStatus.CommandStatusBuilder.newBuilder()
+ .setCmdId(HddsIdFactory.getLongId())
+ .setType(Type.deleteBlocksCommand)
+ .setStatus(Status.PENDING)
+ .build();
+ CommandStatus obj2 = CommandStatus.CommandStatusBuilder.newBuilder()
+ .setCmdId(HddsIdFactory.getLongId())
+ .setType(Type.closeContainerCommand)
+ .setStatus(Status.EXECUTED)
+ .build();
+ cmdStatusMap.put(obj1.getCmdId(), obj1);
+ cmdStatusMap.put(obj2.getCmdId(), obj2);
+ Assert.assertEquals("Should publish report with 2 status objects", 2,
+ ((CommandStatusReportPublisher) publisher).getReport()
+ .getCmdStatusCount());
+ Assert.assertEquals(
+ "Next report should have 1 status object, as command status "
+ + "objects in PENDING state are reported again",
+ 1, ((CommandStatusReportPublisher) publisher).getReport()
+ .getCmdStatusCount());
+ Assert.assertTrue(
+ "The remaining status object should still be in PENDING state",
+ ((CommandStatusReportPublisher) publisher).getReport()
+ .getCmdStatusList().get(0).getStatus().equals(Status.PENDING));
+ executorService.shutdown();
+ }
+
+ @Test
public void testAddingReportToHeartbeat() {
Configuration conf = new OzoneConfiguration();
ReportPublisherFactory factory = new ReportPublisherFactory(conf);
@@ -168,10 +231,10 @@ public class TestReportPublisher {
* Adds the report to heartbeat.
*
* @param requestBuilder builder to which the report has to be added.
- * @param report the report to be added.
+ * @param report the report to be added.
*/
- private static void addReport(SCMHeartbeatRequestProto.Builder requestBuilder,
- GeneratedMessage report) {
+ private static void addReport(SCMHeartbeatRequestProto.Builder
+ requestBuilder, GeneratedMessage report) {
String reportName = report.getDescriptorForType().getFullName();
for (Descriptors.FieldDescriptor descriptor :
SCMHeartbeatRequestProto.getDescriptor().getFields()) {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
index 0afd675..485b3f5 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/events/SCMEvents.java
@@ -21,8 +21,12 @@ package org.apache.hadoop.hdds.scm.events;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.container.ContainerID;
-import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.ContainerReportFromDatanode;
-import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.NodeReportFromDatanode;
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
+ .CommandStatusReportFromDatanode;
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
+ .ContainerReportFromDatanode;
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
+ .NodeReportFromDatanode;
import org.apache.hadoop.hdds.server.events.Event;
import org.apache.hadoop.hdds.server.events.TypedEvent;
@@ -34,47 +38,54 @@ import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
public final class SCMEvents {
/**
- * NodeReports are sent out by Datanodes. This report is
- * received by SCMDatanodeHeartbeatDispatcher and NodeReport Event is
- * generated.
+ * NodeReports are sent out by Datanodes. This report is received by
+ * SCMDatanodeHeartbeatDispatcher and NodeReport Event is generated.
*/
public static final TypedEvent<NodeReportFromDatanode> NODE_REPORT =
new TypedEvent<>(NodeReportFromDatanode.class, "Node_Report");
/**
- * ContainerReports are send out by Datanodes. This report
- * is received by SCMDatanodeHeartbeatDispatcher and Container_Report Event
- * i generated.
+ * ContainerReports are sent out by Datanodes. This report is received by
+ * SCMDatanodeHeartbeatDispatcher and a Container_Report Event
+ * is generated.
*/
public static final TypedEvent<ContainerReportFromDatanode> CONTAINER_REPORT =
new TypedEvent<>(ContainerReportFromDatanode.class, "Container_Report");
/**
+ * A command status report will be sent by datanodes. This report is received
+ * by SCMDatanodeHeartbeatDispatcher and a Cmd_Status_Report event is generated.
+ */
+ public static final TypedEvent<CommandStatusReportFromDatanode>
+ CMD_STATUS_REPORT =
+ new TypedEvent<>(CommandStatusReportFromDatanode.class,
+ "Cmd_Status_Report");
+
+ /**
* Whenever a command for the Datanode needs to be issued by any component
- * inside SCM, a Datanode_Command event is generated. NodeManager listens
- * to these events and dispatches them to Datanode for further processing.
+ * inside SCM, a Datanode_Command event is generated. NodeManager listens to
+ * these events and dispatches them to Datanode for further processing.
*/
public static final Event<CommandForDatanode> DATANODE_COMMAND =
new TypedEvent<>(CommandForDatanode.class, "Datanode_Command");
/**
- * A Close Container Event can be triggered under many condition.
- * Some of them are:
- * 1. A Container is full, then we stop writing further information to
- * that container. DN's let SCM know that current state and sends a
- * informational message that allows SCM to close the container.
- *
- * 2. If a pipeline is open; for example Ratis; if a single node fails,
- * we will proactively close these containers.
- *
- * Once a command is dispatched to DN, we will also listen to updates from
- * the datanode which lets us know that this command completed or timed out.
+ * A Close Container Event can be triggered under many conditions. Some of
+ * them are: 1. A container is full, so we stop writing further information
+ * to that container. DNs let SCM know the current state and send an
+ * informational message that allows SCM to close the container.
+ * <p>
+ * 2. If a pipeline is open; for example Ratis; if a single node fails, we
+ * will proactively close these containers.
+ * <p>
+ * Once a command is dispatched to DN, we will also listen to updates from the
+ * datanode which lets us know that this command completed or timed out.
*/
public static final TypedEvent<ContainerID> CLOSE_CONTAINER =
new TypedEvent<>(ContainerID.class, "Close_Container");
/**
- * This event will be triggered whenever a new datanode is
- * registered with SCM.
+ * This event will be triggered whenever a new datanode is registered with
+ * SCM.
*/
public static final TypedEvent<DatanodeDetails> NEW_NODE =
new TypedEvent<>(DatanodeDetails.class, "New_Node");
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
index 4cfa98f..2461d37 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
@@ -19,6 +19,8 @@ package org.apache.hadoop.hdds.scm.server;
import com.google.common.base.Preconditions;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.CommandStatusReportsProto;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
import org.apache.hadoop.hdds.protocol.proto
@@ -37,7 +39,7 @@ import java.util.List;
import static org.apache.hadoop.hdds.scm.events.SCMEvents.CONTAINER_REPORT;
import static org.apache.hadoop.hdds.scm.events.SCMEvents.NODE_REPORT;
-
+import static org.apache.hadoop.hdds.scm.events.SCMEvents.CMD_STATUS_REPORT;
/**
* This class is responsible for dispatching heartbeat from datanode to
* appropriate EventHandler at SCM.
@@ -86,6 +88,13 @@ public final class SCMDatanodeHeartbeatDispatcher {
heartbeat.getContainerReport()));
}
+
+ if (heartbeat.hasCommandStatusReport()) {
+ eventPublisher.fireEvent(CMD_STATUS_REPORT,
+ new CommandStatusReportFromDatanode(datanodeDetails,
+ heartbeat.getCommandStatusReport()));
+ }
+
return commands;
}
@@ -136,4 +145,16 @@ public final class SCMDatanodeHeartbeatDispatcher {
}
}
+ /**
+ * Command status report event payload with origin.
+ */
+ public static class CommandStatusReportFromDatanode
+ extends ReportFromDatanode<CommandStatusReportsProto> {
+
+ public CommandStatusReportFromDatanode(DatanodeDetails datanodeDetails,
+ CommandStatusReportsProto report) {
+ super(datanodeDetails, report);
+ }
+ }
+
}
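The new payload type plugs into the same event-publishing path as the existing reports. For example, a test can fire it directly; datanodeDetails and eventPublisher are assumed to be in scope, as in TestSCMDatanodeHeartbeatDispatcher below:

  CommandStatusReportFromDatanode payload =
      new CommandStatusReportFromDatanode(datanodeDetails,
          CommandStatusReportsProto.getDefaultInstance());
  eventPublisher.fireEvent(SCMEvents.CMD_STATUS_REPORT, payload);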
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java
index 042e3cc..1b79ebf 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestSCMDatanodeHeartbeatDispatcher.java
@@ -21,6 +21,10 @@ import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.CommandStatusReportsProto;
+import org.apache.hadoop.hdds.scm.server.
+ SCMDatanodeHeartbeatDispatcher.CommandStatusReportFromDatanode;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.ContainerReportsProto;
import org.apache.hadoop.hdds.protocol.proto
@@ -42,6 +46,7 @@ import org.mockito.Mockito;
import static org.apache.hadoop.hdds.scm.events.SCMEvents.CONTAINER_REPORT;
import static org.apache.hadoop.hdds.scm.events.SCMEvents.NODE_REPORT;
+import static org.apache.hadoop.hdds.scm.events.SCMEvents.CMD_STATUS_REPORT;
/**
* This class tests the behavior of SCMDatanodeHeartbeatDispatcher.
@@ -91,6 +96,8 @@ public class TestSCMDatanodeHeartbeatDispatcher {
ContainerReportsProto containerReport =
ContainerReportsProto.getDefaultInstance();
+ CommandStatusReportsProto commandStatusReport =
+ CommandStatusReportsProto.getDefaultInstance();
SCMDatanodeHeartbeatDispatcher dispatcher =
new SCMDatanodeHeartbeatDispatcher(Mockito.mock(NodeManager.class),
@@ -98,9 +105,18 @@ public class TestSCMDatanodeHeartbeatDispatcher {
@Override
public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void fireEvent(
EVENT_TYPE event, PAYLOAD payload) {
- Assert.assertEquals(event, CONTAINER_REPORT);
- Assert.assertEquals(containerReport,
- ((ContainerReportFromDatanode)payload).getReport());
+ Assert.assertTrue(
+ event.equals(CONTAINER_REPORT)
+ || event.equals(CMD_STATUS_REPORT));
+
+ if (payload instanceof ContainerReportFromDatanode) {
+ Assert.assertEquals(containerReport,
+ ((ContainerReportFromDatanode) payload).getReport());
+ }
+ if (payload instanceof CommandStatusReportFromDatanode) {
+ Assert.assertEquals(commandStatusReport,
+ ((CommandStatusReportFromDatanode) payload).getReport());
+ }
eventReceived.incrementAndGet();
}
});
@@ -111,9 +127,10 @@ public class TestSCMDatanodeHeartbeatDispatcher {
SCMHeartbeatRequestProto.newBuilder()
.setDatanodeDetails(datanodeDetails.getProtoBufMessage())
.setContainerReport(containerReport)
+ .setCommandStatusReport(commandStatusReport)
.build();
dispatcher.dispatch(heartbeat);
- Assert.assertEquals(1, eventReceived.get());
+ Assert.assertEquals(2, eventReceived.get());
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f89e2659/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
index 9db9e80..be8bd87 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
@@ -16,12 +16,29 @@
*/
package org.apache.hadoop.ozone.container.common;
+import java.util.Map;
import org.apache.commons.codec.digest.DigestUtils;
import org.apache.commons.lang3.RandomUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.CloseContainerCommandProto;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.CommandStatus.Status;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.DeleteBlocksCommandProto;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.ReplicateContainerCommandProto;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.SCMCommandProto;
+import org.apache.hadoop.hdds.protocol.proto.
+ StorageContainerDatanodeProtocolProtos.SCMCommandProto.Type;
import org.apache.hadoop.hdds.scm.TestUtils;
import org.apache.hadoop.hdds.scm.VersionInfo;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
@@ -54,6 +71,7 @@ import org.apache.hadoop.ozone.container.common.states.endpoint
import org.apache.hadoop.ozone.container.common.states.endpoint
.VersionEndpointTask;
import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
+import org.apache.hadoop.ozone.protocol.commands.CommandStatus;
import org.apache.hadoop.test.PathUtils;
import org.apache.hadoop.util.Time;
import org.junit.AfterClass;
@@ -74,6 +92,9 @@ import static org.apache.hadoop.ozone.container.common.ContainerTestUtils
.createEndpoint;
import static org.hamcrest.Matchers.lessThanOrEqualTo;
import static org.mockito.Mockito.when;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
/**
* Tests the endpoints.
@@ -83,6 +104,7 @@ public class TestEndPoint {
private static RPC.Server scmServer;
private static ScmTestMock scmServerImpl;
private static File testDir;
+ private static Configuration config;
@AfterClass
public static void tearDown() throws Exception {
@@ -99,6 +121,12 @@ public class TestEndPoint {
scmServer = SCMTestUtils.startScmRpcServer(SCMTestUtils.getConf(),
scmServerImpl, serverAddress, 10);
testDir = PathUtils.getTestDir(TestEndPoint.class);
+ config = SCMTestUtils.getConf();
+ config.set(DFS_DATANODE_DATA_DIR_KEY, testDir.getAbsolutePath());
+ config.set(OZONE_METADATA_DIRS, testDir.getAbsolutePath());
+ config
+ .setBoolean(OzoneConfigKeys.DFS_CONTAINER_RATIS_IPC_RANDOM_PORT, true);
+ config.set(HddsConfigKeys.HDDS_COMMAND_STATUS_REPORT_INTERVAL, "1s");
}
@Test
@@ -312,7 +340,87 @@ public class TestEndPoint {
}
}
- private void heartbeatTaskHelper(InetSocketAddress scmAddress,
+ @Test
+ public void testHeartbeatWithCommandStatusReport() throws Exception {
+ DatanodeDetails dataNode = getDatanodeDetails();
+ try (EndpointStateMachine rpcEndPoint =
+ createEndpoint(SCMTestUtils.getConf(),
+ serverAddress, 1000)) {
+ String storageId = UUID.randomUUID().toString();
+ // Add some scmCommands for heartbeat response
+ addScmCommands();
+
+ SCMHeartbeatRequestProto request = SCMHeartbeatRequestProto.newBuilder()
+ .setDatanodeDetails(dataNode.getProtoBufMessage())
+ .setNodeReport(TestUtils.createNodeReport(
+ getStorageReports(storageId)))
+ .build();
+
+ SCMHeartbeatResponseProto responseProto = rpcEndPoint.getEndPoint()
+ .sendHeartbeat(request);
+ assertNotNull(responseProto);
+ assertEquals(3, responseProto.getCommandsCount());
+ assertEquals(0, scmServerImpl.getCommandStatusReportCount());
+
+ // Send heartbeat again from heartbeat endpoint task
+ final StateContext stateContext = heartbeatTaskHelper(serverAddress, 3000);
+ Map<Long, CommandStatus> map = stateContext.getCommandStatusMap();
+ assertNotNull(map);
+ assertEquals("Should have 3 objects", 3, map.size());
+ assertTrue(map.containsKey(Long.valueOf(1)));
+ assertTrue(map.containsKey(Long.valueOf(2)));
+ assertTrue(map.containsKey(Long.valueOf(3)));
+ assertTrue(map.get(Long.valueOf(1)).getType()
+ .equals(Type.closeContainerCommand));
+ assertTrue(map.get(Long.valueOf(2)).getType()
+ .equals(Type.replicateContainerCommand));
+ assertTrue(
+ map.get(Long.valueOf(3)).getType().equals(Type.deleteBlocksCommand));
+ assertTrue(map.get(Long.valueOf(1)).getStatus().equals(Status.PENDING));
+ assertTrue(map.get(Long.valueOf(2)).getStatus().equals(Status.PENDING));
+ assertTrue(map.get(Long.valueOf(3)).getStatus().equals(Status.PENDING));
+
+ scmServerImpl.clearScmCommandRequests();
+ }
+ }
+
+ private void addScmCommands() {
+ SCMCommandProto closeCommand = SCMCommandProto.newBuilder()
+ .setCloseContainerCommandProto(
+ CloseContainerCommandProto.newBuilder().setCmdId(1)
+ .setContainerID(1)
+ .setReplicationType(ReplicationType.RATIS)
+ .build())
+ .setCommandType(Type.closeContainerCommand)
+ .build();
+ SCMCommandProto replicationCommand = SCMCommandProto.newBuilder()
+ .setReplicateContainerCommandProto(
+ ReplicateContainerCommandProto.newBuilder()
+ .setCmdId(2)
+ .setContainerID(2)
+ .build())
+ .setCommandType(Type.replicateContainerCommand)
+ .build();
+ SCMCommandProto deleteBlockCommand = SCMCommandProto.newBuilder()
+ .setDeleteBlocksCommandProto(
+ DeleteBlocksCommandProto.newBuilder()
+ .setCmdId(3)
+ .addDeletedBlocksTransactions(
+ DeletedBlocksTransaction.newBuilder()
+ .setContainerID(45)
+ .setCount(1)
+ .setTxID(23)
+ .build())
+ .build())
+ .setCommandType(Type.deleteBlocksCommand)
+ .build();
+ scmServerImpl.addScmCommandRequest(closeCommand);
+ scmServerImpl.addScmCommandRequest(deleteBlockCommand);
+ scmServerImpl.addScmCommandRequest(replicationCommand);
+ }
+
+ private StateContext heartbeatTaskHelper(InetSocketAddress scmAddress,
int rpcTimeout) throws Exception {
Configuration conf = SCMTestUtils.getConf();
conf.set(DFS_DATANODE_DATA_DIR_KEY, testDir.getAbsolutePath());
@@ -344,6 +452,7 @@ public class TestEndPoint {
Assert.assertEquals(EndpointStateMachine.EndPointStates.HEARTBEAT,
rpcEndPoint.getState());
+ return stateContext;
}
}
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[38/50] hadoop git commit: HDDS-241. Handle Volume in inconsistent
state. Contributed by Hanisha Koneru.
Posted by zh...@apache.org.
HDDS-241. Handle Volume in inconsistent state. Contributed by Hanisha Koneru.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d5d44473
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d5d44473
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d5d44473
Branch: refs/heads/HDFS-13572
Commit: d5d444732bf5c3f3cfc681f8d87e0681a7471f2f
Parents: 1af87df
Author: Xiaoyu Yao <xy...@apache.org>
Authored: Wed Jul 18 09:38:43 2018 -0700
Committer: Xiaoyu Yao <xy...@apache.org>
Committed: Wed Jul 18 09:38:43 2018 -0700
----------------------------------------------------------------------
.../container/common/volume/HddsVolume.java | 45 +++++++++--
.../container/common/volume/VolumeSet.java | 14 +++-
.../container/common/volume/TestVolumeSet.java | 78 +++++++++++++++++---
.../container/ozoneimpl/TestOzoneContainer.java | 18 ++++-
4 files changed, 129 insertions(+), 26 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d5d44473/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolume.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolume.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolume.java
index 1e71494..6468720 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolume.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/HddsVolume.java
@@ -42,6 +42,18 @@ import java.util.Properties;
* HddsVolume represents a volume in a datanode. {@link VolumeSet} maintains a
* list of HddsVolumes, one for each volume in the Datanode.
* {@link VolumeInfo} is encompassed by this class.
+ *
+ * The disk layout per volume is as follows:
+ * ../hdds/VERSION
+ * ../hdds/<<scmUuid>>/current/<<containerDir>>/<<containerID>>/metadata
+ * ../hdds/<<scmUuid>>/current/<<containerDir>>/<<containerID>>/<<dataDir>>
+ *
+ * Each hdds volume has its own VERSION file. The hdds volume will have one
+ * scmUuid directory for each SCM it is a part of (currently only one SCM is
+ * supported).
+ *
+ * During DN startup, if the VERSION file exists, we verify that the
+ * clusterID in the version file matches the clusterID from SCM.
*/
public final class HddsVolume {
@@ -108,11 +120,6 @@ public final class HddsVolume {
}
private HddsVolume(Builder b) throws IOException {
- Preconditions.checkNotNull(b.volumeRootStr,
- "Volume root dir cannot be null");
- Preconditions.checkNotNull(b.datanodeUuid, "DatanodeUUID cannot be null");
- Preconditions.checkNotNull(b.conf, "Configuration cannot be null");
-
StorageLocation location = StorageLocation.parse(b.volumeRootStr);
hddsRootDir = new File(location.getUri().getPath(), HDDS_VOLUME_DIR);
this.state = VolumeState.NOT_INITIALIZED;
@@ -162,6 +169,10 @@ public final class HddsVolume {
readVersionFile();
setState(VolumeState.NORMAL);
break;
+ case INCONSISTENT:
+ // Volume Root is in an inconsistent state. Skip loading this volume.
+ throw new IOException("Volume is in an " + VolumeState.INCONSISTENT +
+ " state. Skipped loading volume: " + hddsRootDir.getPath());
default:
throw new IOException("Unrecognized initial state : " +
intialVolumeState + " of volume : " + hddsRootDir);
@@ -170,11 +181,23 @@ public final class HddsVolume {
private VolumeState analyzeVolumeState() {
if (!hddsRootDir.exists()) {
+ // Volume Root does not exist.
return VolumeState.NON_EXISTENT;
}
- if (!getVersionFile().exists()) {
+ if (!hddsRootDir.isDirectory()) {
+ // Volume Root exists but is not a directory.
+ return VolumeState.INCONSISTENT;
+ }
+ File[] files = hddsRootDir.listFiles();
+ if (files == null || files.length == 0) {
+ // Volume Root exists and is empty.
return VolumeState.NOT_FORMATTED;
}
+ if (!getVersionFile().exists()) {
+ // Volume Root is non empty but VERSION file does not exist.
+ return VolumeState.INCONSISTENT;
+ }
+ // Volume Root and VERSION file exist.
return VolumeState.NOT_INITIALIZED;
}
@@ -321,11 +344,21 @@ public final class HddsVolume {
/**
* VolumeState represents the different states a HddsVolume can be in.
+ * NORMAL => Volume can be used for storage
+ * FAILED => Volume has failed and can no longer be used for
+ * storing containers.
+ * NON_EXISTENT => Volume Root dir does not exist
+ * INCONSISTENT => Volume Root dir is not empty but VERSION file is
+ * missing or Volume Root dir is not a directory
+ * NOT_FORMATTED => Volume Root exists but not formatted (no VERSION file)
+ * NOT_INITIALIZED => VERSION file exists but has not been verified for
+ * correctness.
*/
public enum VolumeState {
NORMAL,
FAILED,
NON_EXISTENT,
+ INCONSISTENT,
NOT_FORMATTED,
NOT_INITIALIZED
}
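Call sites that construct an HddsVolume directly now have to expect an IOException for an INCONSISTENT root. A defensive sketch follows; the Builder method names are inferred from the fields referenced in the constructor above and should be treated as assumptions, as should conf, datanodeUuid, and LOG:

  try {
    HddsVolume volume = new HddsVolume.Builder("/data/disk1")
        .conf(conf)
        .datanodeUuid(datanodeUuid)
        .build();
  } catch (IOException e) {
    // INCONSISTENT or unrecognized volume root: skip this volume
    // rather than failing the whole datanode.
    LOG.error("Skipping volume /data/disk1", e);
  }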
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d5d44473/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeSet.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeSet.java b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeSet.java
index 692a9d1..2dd4763 100644
--- a/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeSet.java
+++ b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/volume/VolumeSet.java
@@ -202,18 +202,19 @@ public class VolumeSet {
// Add a volume to VolumeSet
- public void addVolume(String dataDir) throws IOException {
- addVolume(dataDir, StorageType.DEFAULT);
+ public boolean addVolume(String dataDir) {
+ return addVolume(dataDir, StorageType.DEFAULT);
}
// Add a volume to VolumeSet
- public void addVolume(String volumeRoot, StorageType storageType)
- throws IOException {
+ public boolean addVolume(String volumeRoot, StorageType storageType) {
String hddsRoot = HddsVolumeUtil.getHddsRoot(volumeRoot);
+ boolean success;
try (AutoCloseableLock lock = volumeSetLock.acquire()) {
if (volumeMap.containsKey(hddsRoot)) {
LOG.warn("Volume : {} already exists in VolumeMap", hddsRoot);
+ success = false;
} else {
if (failedVolumeMap.containsKey(hddsRoot)) {
failedVolumeMap.remove(hddsRoot);
@@ -225,8 +226,13 @@ public class VolumeSet {
LOG.info("Added Volume : {} to VolumeSet",
hddsVolume.getHddsRootDir().getPath());
+ success = true;
}
+ } catch (IOException ex) {
+ LOG.error("Failed to add volume " + volumeRoot + " to VolumeSet", ex);
+ success = false;
}
+ return success;
}
// Mark a volume as failed
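With the void-to-boolean change above, callers can branch on the result instead of catching IOException. A minimal sketch (the path is illustrative):

  if (!volumeSet.addVolume("/data/disk3")) {
    // Duplicate, inconsistent, or otherwise failed volume; the cause
    // has already been logged inside VolumeSet#addVolume.
  }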
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d5d44473/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/volume/TestVolumeSet.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/volume/TestVolumeSet.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/volume/TestVolumeSet.java
index 41f75bd..4f75b9a 100644
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/volume/TestVolumeSet.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/volume/TestVolumeSet.java
@@ -18,22 +18,30 @@
package org.apache.hadoop.ozone.container.common.volume;
+import java.io.IOException;
+import org.apache.commons.io.FileUtils;
import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
-import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.ozone.container.common.utils.HddsVolumeUtil;
import org.apache.hadoop.test.GenericTestUtils.LogCapturer;
+
+import static org.apache.hadoop.ozone.container.common.volume.HddsVolume
+ .HDDS_VOLUME_DIR;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
+
+import org.junit.After;
import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;
+import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
@@ -69,6 +77,28 @@ public class TestVolumeSet {
initializeVolumeSet();
}
+ @After
+ public void shutdown() throws IOException {
+ // Delete the hdds volume root dir
+ List<HddsVolume> volumes = new ArrayList<>();
+ volumes.addAll(volumeSet.getVolumesList());
+ volumes.addAll(volumeSet.getFailedVolumesList());
+
+ for (HddsVolume volume : volumes) {
+ FileUtils.deleteDirectory(volume.getHddsRootDir());
+ }
+ }
+
+ private boolean checkVolumeExistsInVolumeSet(String volume) {
+ for (HddsVolume hddsVolume : volumeSet.getVolumesList()) {
+ if (hddsVolume.getHddsRootDir().getPath().equals(
+ HddsVolumeUtil.getHddsRoot(volume))) {
+ return true;
+ }
+ }
+ return false;
+ }
+
@Test
public void testVolumeSetInitialization() throws Exception {
@@ -84,14 +114,18 @@ public class TestVolumeSet {
}
@Test
- public void testAddVolume() throws Exception {
+ public void testAddVolume() {
assertEquals(2, volumeSet.getVolumesList().size());
// Add a volume to VolumeSet
String volume3 = baseDir + "disk3";
- volumeSet.addVolume(volume3);
+ boolean success = volumeSet.addVolume(volume3);
+ assertTrue(success);
assertEquals(3, volumeSet.getVolumesList().size());
assertTrue("AddVolume did not add requested volume to VolumeSet",
checkVolumeExistsInVolumeSet(volume3));
@@ -122,7 +156,6 @@ public class TestVolumeSet {
@Test
public void testRemoveVolume() throws Exception {
- List<HddsVolume> volumesList = volumeSet.getVolumesList();
assertEquals(2, volumeSet.getVolumesList().size());
// Remove a volume from VolumeSet
@@ -141,13 +174,34 @@ public class TestVolumeSet {
+ expectedLogMessage, logs.getOutput().contains(expectedLogMessage));
}
- private boolean checkVolumeExistsInVolumeSet(String volume) {
- for (HddsVolume hddsVolume : volumeSet.getVolumesList()) {
- if (hddsVolume.getHddsRootDir().getPath().equals(
- HddsVolumeUtil.getHddsRoot(volume))) {
- return true;
- }
- }
- return false;
+ @Test
+ public void testVolumeInInconsistentState() throws Exception {
+ assertEquals(2, volumeSet.getVolumesList().size());
+
+ // Add a volume to VolumeSet
+ String volume3 = baseDir + "disk3";
+
+ // Create the root volume dir and create a sub-directory within it.
+ File newVolume = new File(volume3, HDDS_VOLUME_DIR);
+ newVolume.mkdirs();
+ assertTrue("Failed to create new volume root", newVolume.exists());
+ File dataDir = new File(newVolume, "chunks");
+ dataDir.mkdirs();
+ assertTrue(dataDir.exists());
+
+ // The new volume is in an inconsistent state as the root dir is
+ // non-empty but the version file does not exist. Add Volume should
+ // return false.
+ boolean success = volumeSet.addVolume(volume3);
+
+ assertFalse(success);
+ assertEquals(2, volumeSet.getVolumesList().size());
+ assertTrue("AddVolume should fail for an inconsistent volume",
+ !checkVolumeExistsInVolumeSet(volume3));
+
+ // Delete volume3
+ File volume = new File(volume3);
+ FileUtils.deleteDirectory(volume);
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d5d44473/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java
index 27c6528..284ffa3 100644
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
import org.apache.hadoop.hdds.scm.ScmConfigKeys;
import org.apache.hadoop.ozone.OzoneConfigKeys;
import org.apache.hadoop.ozone.container.common.impl.ContainerSet;
+import org.apache.hadoop.ozone.container.common.volume.HddsVolume;
import org.apache.hadoop.ozone.container.common.volume.RoundRobinVolumeChoosingPolicy;
import org.apache.hadoop.ozone.container.common.volume.VolumeSet;
import org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer;
@@ -60,21 +61,30 @@ public class TestOzoneContainer {
public void setUp() throws Exception {
conf = new OzoneConfiguration();
conf.set(ScmConfigKeys.HDDS_DATANODE_DIR_KEY, folder.getRoot()
- .getAbsolutePath() + "," + folder.newFolder().getAbsolutePath());
+ .getAbsolutePath());
conf.set(OzoneConfigKeys.OZONE_METADATA_DIRS, folder.newFolder().getAbsolutePath());
+ }
+
+ @Test
+ public void testBuildContainerMap() throws Exception {
volumeSet = new VolumeSet(datanodeDetails.getUuidString(), conf);
volumeChoosingPolicy = new RoundRobinVolumeChoosingPolicy();
+ // Format the volumes
+ for (HddsVolume volume : volumeSet.getVolumesList()) {
+ volume.format(UUID.randomUUID().toString());
+ }
+
+ // Add containers to disk
for (int i=0; i<10; i++) {
keyValueContainerData = new KeyValueContainerData(i, 1);
keyValueContainer = new KeyValueContainer(
keyValueContainerData, conf);
keyValueContainer.create(volumeSet, volumeChoosingPolicy, scmId);
}
- }
- @Test
- public void testBuildContainerMap() throws Exception {
+ // When OzoneContainer is started, the containers from disk should be
+ // loaded into the containerSet.
OzoneContainer ozoneContainer = new
OzoneContainer(datanodeDetails, conf);
ContainerSet containerset = ozoneContainer.getContainerSet();
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[18/50] hadoop git commit: HDDS-232. Parallel unit test execution for
HDDS/Ozone. Contributed by Arpit Agarwal.
Posted by zh...@apache.org.
HDDS-232. Parallel unit test execution for HDDS/Ozone. Contributed by Arpit Agarwal.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d1850720
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d1850720
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d1850720
Branch: refs/heads/HDFS-13572
Commit: d18507209e268aa5be0d3e56cec23de24107e7d9
Parents: 1fe5b93
Author: Nanda kumar <na...@apache.org>
Authored: Fri Jul 13 19:50:52 2018 +0530
Committer: Nanda kumar <na...@apache.org>
Committed: Fri Jul 13 19:50:52 2018 +0530
----------------------------------------------------------------------
.../common/report/TestReportPublisher.java | 2 +-
hadoop-hdds/pom.xml | 49 ++++++++++++++++++++
hadoop-ozone/pom.xml | 49 ++++++++++++++++++++
3 files changed, 99 insertions(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1850720/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
index 026e7aa..d4db55b 100644
--- a/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
+++ b/hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/report/TestReportPublisher.java
@@ -111,7 +111,7 @@ public class TestReportPublisher {
publisher.init(dummyContext, executorService);
Thread.sleep(150);
Assert.assertEquals(1, ((DummyReportPublisher) publisher).getReportCount);
- Thread.sleep(150);
+ Thread.sleep(100);
Assert.assertEquals(2, ((DummyReportPublisher) publisher).getReportCount);
executorService.shutdown();
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1850720/hadoop-hdds/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdds/pom.xml b/hadoop-hdds/pom.xml
index 573803b..09fac33 100644
--- a/hadoop-hdds/pom.xml
+++ b/hadoop-hdds/pom.xml
@@ -116,4 +116,53 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
</plugin>
</plugins>
</build>
+
+ <profiles>
+ <profile>
+ <id>parallel-tests</id>
+ <build>
+ <plugins>
+ <plugin>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-maven-plugins</artifactId>
+ <executions>
+ <execution>
+ <id>parallel-tests-createdir</id>
+ <goals>
+ <goal>parallel-tests-createdir</goal>
+ </goals>
+ </execution>
+ </executions>
+ </plugin>
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-surefire-plugin</artifactId>
+ <configuration>
+ <forkCount>${testsThreadCount}</forkCount>
+ <reuseForks>false</reuseForks>
+ <argLine>${maven-surefire-plugin.argLine} -DminiClusterDedicatedDirs=true</argLine>
+ <systemPropertyVariables>
+ <testsThreadCount>${testsThreadCount}</testsThreadCount>
+ <test.build.data>${test.build.data}/${surefire.forkNumber}</test.build.data>
+ <test.build.dir>${test.build.dir}/${surefire.forkNumber}</test.build.dir>
+ <hadoop.tmp.dir>${hadoop.tmp.dir}/${surefire.forkNumber}</hadoop.tmp.dir>
+
+ <!-- This is intentionally the same directory for all JUnit -->
+ <!-- forks, for use in the very rare situation that -->
+ <!-- concurrent tests need to coordinate, such as using lock -->
+ <!-- files. -->
+ <test.build.shared.data>${test.build.data}</test.build.shared.data>
+
+ <!-- Due to a Maven quirk, setting this to just -->
+ <!-- surefire.forkNumber won't do the parameter substitution. -->
+ <!-- Putting a prefix in front of it like "fork-" makes it -->
+ <!-- work. -->
+ <test.unique.fork.id>fork-${surefire.forkNumber}</test.unique.fork.id>
+ </systemPropertyVariables>
+ </configuration>
+ </plugin>
+ </plugins>
+ </build>
+ </profile>
+ </profiles>
</project>
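
Usage note: this mirrors the long-standing parallel-tests profile in hadoop-common (the identical profile is added to hadoop-ozone/pom.xml just below), so assuming the parent pom supplies the usual testsThreadCount default of 4, it is activated the same way, e.g. mvn test -Pparallel-tests -DtestsThreadCount=8. Each surefire fork then works under its own test.build.data, test.build.dir and hadoop.tmp.dir keyed by ${surefire.forkNumber}, which is what lets the HDDS/Ozone unit tests run concurrently without colliding on miniCluster directories.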
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1850720/hadoop-ozone/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-ozone/pom.xml b/hadoop-ozone/pom.xml
index b655088..e82a3d8 100644
--- a/hadoop-ozone/pom.xml
+++ b/hadoop-ozone/pom.xml
@@ -178,4 +178,53 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
</plugin>
</plugins>
</build>
+
+ <profiles>
+ <profile>
+ <id>parallel-tests</id>
+ <build>
+ <plugins>
+ <plugin>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-maven-plugins</artifactId>
+ <executions>
+ <execution>
+ <id>parallel-tests-createdir</id>
+ <goals>
+ <goal>parallel-tests-createdir</goal>
+ </goals>
+ </execution>
+ </executions>
+ </plugin>
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-surefire-plugin</artifactId>
+ <configuration>
+ <forkCount>${testsThreadCount}</forkCount>
+ <reuseForks>false</reuseForks>
+ <argLine>${maven-surefire-plugin.argLine} -DminiClusterDedicatedDirs=true</argLine>
+ <systemPropertyVariables>
+ <testsThreadCount>${testsThreadCount}</testsThreadCount>
+ <test.build.data>${test.build.data}/${surefire.forkNumber}</test.build.data>
+ <test.build.dir>${test.build.dir}/${surefire.forkNumber}</test.build.dir>
+ <hadoop.tmp.dir>${hadoop.tmp.dir}/${surefire.forkNumber}</hadoop.tmp.dir>
+
+ <!-- This is intentionally the same directory for all JUnit -->
+ <!-- forks, for use in the very rare situation that -->
+ <!-- concurrent tests need to coordinate, such as using lock -->
+ <!-- files. -->
+ <test.build.shared.data>${test.build.data}</test.build.shared.data>
+
+ <!-- Due to a Maven quirk, setting this to just -->
+ <!-- surefire.forkNumber won't do the parameter substitution. -->
+ <!-- Putting a prefix in front of it like "fork-" makes it -->
+ <!-- work. -->
+ <test.unique.fork.id>fork-${surefire.forkNumber}</test.unique.fork.id>
+ </systemPropertyVariables>
+ </configuration>
+ </plugin>
+ </plugins>
+ </build>
+ </profile>
+ </profiles>
</project>
[08/50] hadoop git commit: HDFS-12837. Intermittent failure in
TestReencryptionWithKMS.
HDFS-12837. Intermittent failure in TestReencryptionWithKMS.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b37074be
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b37074be
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b37074be
Branch: refs/heads/HDFS-13572
Commit: b37074be5ab35c238e18bb9c3b89db6d7f8d0986
Parents: 632aca5
Author: Xiao Chen <xi...@apache.org>
Authored: Wed Jul 11 20:54:37 2018 -0700
Committer: Xiao Chen <xi...@apache.org>
Committed: Wed Jul 11 21:03:19 2018 -0700
----------------------------------------------------------------------
.../server/namenode/ReencryptionHandler.java | 4 +-
.../hdfs/server/namenode/TestReencryption.java | 61 +++++++++++---------
2 files changed, 37 insertions(+), 28 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b37074be/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
index 5b52c82..b92fe9f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ReencryptionHandler.java
@@ -616,7 +616,9 @@ public class ReencryptionHandler implements Runnable {
while (shouldPauseForTesting) {
LOG.info("Sleeping in the re-encrypt handler for unit test.");
synchronized (reencryptionHandler) {
- reencryptionHandler.wait(30000);
+ if (shouldPauseForTesting) {
+ reencryptionHandler.wait(30000);
+ }
}
LOG.info("Continuing re-encrypt handler after pausing.");
}
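
The inner if may look redundant under the outer while, but it closes a real window: the test resumes the handler by clearing shouldPauseForTesting and notifying under the same monitor, and without the re-check the handler could pass the outer test, enter the monitor after the notify had already fired, and sit out the full wait(30000). A stand-alone sketch of the pattern with illustrative names (not the actual ReencryptionHandler fields):

    // Double-checked pause/resume handshake (sketch, assumed field names).
    private volatile boolean shouldPause = true;
    private final Object lock = new Object();

    void awaitResume() throws InterruptedException {
      while (shouldPause) {
        synchronized (lock) {
          if (shouldPause) {    // re-check under the monitor: a notify that
            lock.wait(30000);   // landed just before we entered cannot now
          }                     // strand us in a futile 30 s wait
        }
      }
    }

    void resume() {
      synchronized (lock) {
        shouldPause = false;    // clear the flag and wake the handler
        lock.notifyAll();       // while holding the same monitor
      }
    }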
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b37074be/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReencryption.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReencryption.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReencryption.java
index 5409f0d..5d34d3c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReencryption.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReencryption.java
@@ -68,6 +68,7 @@ import static org.apache.hadoop.test.GenericTestUtils.assertExceptionContains;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
@@ -207,8 +208,7 @@ public class TestReencryption {
ZoneReencryptionStatus zs = it.next();
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertNotEquals(fei0.getEzKeyVersionName(), zs.getEzKeyVersionName());
assertEquals(fei1.getEzKeyVersionName(), zs.getEzKeyVersionName());
assertEquals(10, zs.getFilesReencrypted());
@@ -600,14 +600,27 @@ public class TestReencryption {
final ZoneReencryptionStatus zs = it.next();
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
if (fei != null) {
assertNotEquals(fei.getEzKeyVersionName(), zs.getEzKeyVersionName());
}
assertEquals(expectedFiles, zs.getFilesReencrypted());
}
+ /**
+ * Verify the zone status' completion time is larger than 0, and is no less
+ * than submission time.
+ */
+ private void verifyZoneCompletionTime(final ZoneReencryptionStatus zs) {
+ assertNotNull(zs);
+ assertTrue("Completion time should be positive. " + zs.getCompletionTime(),
+ zs.getCompletionTime() > 0);
+ assertTrue("Completion time " + zs.getCompletionTime()
+ + " should be no less than submission time "
+ + zs.getSubmissionTime(),
+ zs.getCompletionTime() >= zs.getSubmissionTime());
+ }
+
@Test
public void testReencryptLoadedFromFsimage() throws Exception {
/*
@@ -1476,7 +1489,7 @@ public class TestReencryption {
}
@Override
- public void reencryptEncryptedKeys() throws IOException {
+ public synchronized void reencryptEncryptedKeys() throws IOException {
if (exceptionCount > 0) {
exceptionCount--;
try {
@@ -1537,8 +1550,7 @@ public class TestReencryption {
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
assertTrue(zs.isCanceled());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(0, zs.getFilesReencrypted());
assertTrue(getUpdater().isRunning());
@@ -1560,8 +1572,7 @@ public class TestReencryption {
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
assertFalse(zs.isCanceled());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(10, zs.getFilesReencrypted());
}
@@ -1579,8 +1590,7 @@ public class TestReencryption {
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
assertTrue(zs.isCanceled());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(0, zs.getFilesReencrypted());
// verify re-encryption works after restart.
@@ -1592,8 +1602,7 @@ public class TestReencryption {
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
assertFalse(zs.isCanceled());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(10, zs.getFilesReencrypted());
}
@@ -1679,8 +1688,7 @@ public class TestReencryption {
ZoneReencryptionStatus zs = it.next();
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(10, zs.getFilesReencrypted());
}
@@ -1736,7 +1744,7 @@ public class TestReencryption {
}
@Override
- public void reencryptEncryptedKeys() throws IOException {
+ public synchronized void reencryptEncryptedKeys() throws IOException {
if (exceptionCount > 0) {
--exceptionCount;
throw new IOException("Injected KMS failure");
@@ -1772,8 +1780,7 @@ public class TestReencryption {
ZoneReencryptionStatus zs = it.next();
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(5, zs.getFilesReencrypted());
assertEquals(5, zs.getNumReencryptionFailures());
}
@@ -1788,7 +1795,8 @@ public class TestReencryption {
}
@Override
- public void reencryptUpdaterProcessOneTask() throws IOException {
+ public synchronized void reencryptUpdaterProcessOneTask()
+ throws IOException {
if (exceptionCount > 0) {
--exceptionCount;
throw new IOException("Injected process task failure");
@@ -1824,8 +1832,7 @@ public class TestReencryption {
ZoneReencryptionStatus zs = it.next();
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(5, zs.getFilesReencrypted());
assertEquals(1, zs.getNumReencryptionFailures());
}
@@ -1841,7 +1848,8 @@ public class TestReencryption {
}
@Override
- public void reencryptUpdaterProcessCheckpoint() throws IOException {
+ public synchronized void reencryptUpdaterProcessCheckpoint()
+ throws IOException {
if (exceptionCount > 0) {
--exceptionCount;
throw new IOException("Injected process checkpoint failure");
@@ -1877,8 +1885,7 @@ public class TestReencryption {
ZoneReencryptionStatus zs = it.next();
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(10, zs.getFilesReencrypted());
assertEquals(1, zs.getNumReencryptionFailures());
}
@@ -1893,7 +1900,8 @@ public class TestReencryption {
}
@Override
- public void reencryptUpdaterProcessOneTask() throws IOException {
+ public synchronized void reencryptUpdaterProcessOneTask()
+ throws IOException {
if (exceptionCount > 0) {
--exceptionCount;
throw new RetriableException("Injected process task failure");
@@ -1930,8 +1938,7 @@ public class TestReencryption {
ZoneReencryptionStatus zs = it.next();
assertEquals(zone.toString(), zs.getZoneName());
assertEquals(ZoneReencryptionStatus.State.Completed, zs.getState());
- assertTrue(zs.getCompletionTime() > 0);
- assertTrue(zs.getCompletionTime() > zs.getSubmissionTime());
+ verifyZoneCompletionTime(zs);
assertEquals(10, zs.getFilesReencrypted());
assertEquals(0, zs.getNumReencryptionFailures());
}
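
Two details in the refactor above are worth calling out. Folding the repeated assertion pairs into verifyZoneCompletionTime also relaxes the old strict getCompletionTime() > getSubmissionTime() check to >=, since on a fast run a zone can complete in the same millisecond it was submitted, and the new assertion messages print both timestamps so an intermittent failure is diagnosable from the log. Separately, marking the fault-injection overrides synchronized serializes access to the exceptionCount counters that would otherwise be decremented from multiple re-encryption threads.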
[05/50] hadoop git commit: HDFS-13729. Fix broken links to RBF
documentation. Contributed by Gabor Bota.
HDFS-13729. Fix broken links to RBF documentation. Contributed by Gabor Bota.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/418cc7f3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/418cc7f3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/418cc7f3
Branch: refs/heads/HDFS-13572
Commit: 418cc7f3aeabedc57c94aa9d4c4248c1476ac90e
Parents: 162228e
Author: Akira Ajisaka <aa...@apache.org>
Authored: Wed Jul 11 14:46:43 2018 -0400
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Wed Jul 11 14:46:43 2018 -0400
----------------------------------------------------------------------
.../hadoop-hdfs/src/site/markdown/HDFSCommands.md | 4 ++--
.../hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md | 2 +-
hadoop-project/src/site/markdown/index.md.vm | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/418cc7f3/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index 9ed69bf..391b71b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -420,7 +420,7 @@ Runs a HDFS dfsadmin client.
Usage: `hdfs dfsrouter`
-Runs the DFS router. See [Router](./HDFSRouterFederation.html#Router) for more info.
+Runs the DFS router. See [Router](../hadoop-hdfs-rbf/HDFSRouterFederation.html#Router) for more info.
### `dfsrouteradmin`
@@ -449,7 +449,7 @@ Usage:
| `-nameservice` `disable` `enable` *nameservice* | Disable/enable a name service from the federation. If disabled, requests will not go to that name service. |
| `-getDisabledNameservices` | Get the name services that are disabled in the federation. |
-The commands for managing Router-based federation. See [Mount table management](./HDFSRouterFederation.html#Mount_table_management) for more info.
+The commands for managing Router-based federation. See [Mount table management](../hadoop-hdfs-rbf/HDFSRouterFederation.html#Mount_table_management) for more info.
### `diskbalancer`
http://git-wip-us.apache.org/repos/asf/hadoop/blob/418cc7f3/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
index 01e7076..b8d5321 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
@@ -38,7 +38,7 @@ is limited to creating a *read-only image* of a remote namespace that implements
to serve the image. Specifically, reads from a snapshot of a remote namespace are
supported. Adding a remote namespace to an existing/running namenode, refreshing the
remote snapshot, unmounting, and writes are not available in this release. One
-can use [ViewFs](./ViewFs.html) and [RBF](HDFSRouterFederation.html) to
+can use [ViewFs](./ViewFs.html) and [RBF](../hadoop-hdfs-rbf/HDFSRouterFederation.html) to
integrate namespaces with `PROVIDED` storage into an existing deployment.
Creating HDFS Clusters with `PROVIDED` Storage
http://git-wip-us.apache.org/repos/asf/hadoop/blob/418cc7f3/hadoop-project/src/site/markdown/index.md.vm
----------------------------------------------------------------------
diff --git a/hadoop-project/src/site/markdown/index.md.vm b/hadoop-project/src/site/markdown/index.md.vm
index 8b9cfda..438145a 100644
--- a/hadoop-project/src/site/markdown/index.md.vm
+++ b/hadoop-project/src/site/markdown/index.md.vm
@@ -225,7 +225,7 @@ cluster for existing HDFS clients.
See [HDFS-10467](https://issues.apache.org/jira/browse/HDFS-10467) and the
HDFS Router-based Federation
-[documentation](./hadoop-project-dist/hadoop-hdfs/HDFSRouterFederation.html) for
+[documentation](./hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html) for
more details.
API-based configuration of Capacity Scheduler queue configuration
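
Context for the three edits: HDFSRouterFederation.md moved to the hadoop-hdfs-rbf module when the RBF code was split out, so same-directory links such as ./HDFSRouterFederation.html no longer resolved in the rendered site; rewriting them as ../hadoop-hdfs-rbf/... (and hadoop-hdfs-rbf under hadoop-project-dist) points them at where the generated page now lives.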
[36/50] hadoop git commit: Fix potential FSImage corruption.
Contributed by Ekanth Sethuramalingam & Arpit Agarwal.
Fix potential FSImage corruption. Contributed by Ekanth Sethuramalingam & Arpit Agarwal.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0a1e922f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0a1e922f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0a1e922f
Branch: refs/heads/HDFS-13572
Commit: 0a1e922f3d8eca4e852be57124ec1bcafaadb289
Parents: d215357
Author: Konstantin V Shvachko <sh...@apache.org>
Authored: Mon Jul 16 18:20:24 2018 -0700
Committer: Konstantin V Shvachko <sh...@apache.org>
Committed: Mon Jul 16 18:24:18 2018 -0700
----------------------------------------------------------------------
.../server/namenode/AclEntryStatusFormat.java | 6 +-
.../namenode/INodeWithAdditionalFields.java | 4 +-
.../hdfs/server/namenode/XAttrFormat.java | 67 +++++++++++++-------
3 files changed, 49 insertions(+), 28 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0a1e922f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclEntryStatusFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclEntryStatusFormat.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclEntryStatusFormat.java
index 82aa214..2c5b23b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclEntryStatusFormat.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclEntryStatusFormat.java
@@ -38,7 +38,8 @@ import com.google.common.collect.ImmutableList;
* [1:3) -- the type of the entry (AclEntryType) <br>
* [3:6) -- the permission of the entry (FsAction) <br>
* [6:7) -- A flag to indicate whether Named entry or not <br>
- * [7:32) -- the name of the entry, which is an ID that points to a <br>
+ * [7:8) -- Reserved <br>
+ * [8:32) -- the name of the entry, which is an ID that points to a <br>
* string in the StringTableSection. <br>
*/
public enum AclEntryStatusFormat {
@@ -47,7 +48,8 @@ public enum AclEntryStatusFormat {
TYPE(SCOPE.BITS, 2),
PERMISSION(TYPE.BITS, 3),
NAMED_ENTRY_CHECK(PERMISSION.BITS, 1),
- NAME(NAMED_ENTRY_CHECK.BITS, 25);
+ RESERVED(NAMED_ENTRY_CHECK.BITS, 1),
+ NAME(RESERVED.BITS, 24);
private final LongBitFormat BITS;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0a1e922f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeWithAdditionalFields.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeWithAdditionalFields.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeWithAdditionalFields.java
index 9adcc3e..84d99e4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeWithAdditionalFields.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeWithAdditionalFields.java
@@ -35,8 +35,8 @@ public abstract class INodeWithAdditionalFields extends INode
implements LinkedElement {
enum PermissionStatusFormat {
MODE(null, 16),
- GROUP(MODE.BITS, 25),
- USER(GROUP.BITS, 23);
+ GROUP(MODE.BITS, 24),
+ USER(GROUP.BITS, 24);
final LongBitFormat BITS;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0a1e922f/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrFormat.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrFormat.java
index 7e704d0..f9f06db 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrFormat.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrFormat.java
@@ -27,25 +27,56 @@ import org.apache.hadoop.hdfs.XAttrHelper;
import com.google.common.base.Preconditions;
import com.google.common.primitives.Ints;
+import org.apache.hadoop.hdfs.util.LongBitFormat;
/**
* Class to pack XAttrs into byte[].<br>
* For each XAttr:<br>
* The first 4 bytes represents XAttr namespace and name<br>
* [0:3) - XAttr namespace<br>
- * [3:32) - The name of the entry, which is an ID that points to a
+ * [3:8) - Reserved<br>
+ * [8:32) - The name of the entry, which is an ID that points to a
* string in map<br>
* The following two bytes represents the length of XAttr value<br>
* The remaining bytes is the XAttr value<br>
*/
class XAttrFormat {
- private static final int XATTR_NAMESPACE_MASK = (1 << 3) - 1;
- private static final int XATTR_NAMESPACE_OFFSET = 29;
- private static final int XATTR_NAME_MASK = (1 << 29) - 1;
- private static final int XATTR_NAME_ID_MAX = 1 << 29;
+ private enum XAttrStatusFormat {
+
+ NAMESPACE(null, 3),
+ RESERVED(NAMESPACE.BITS, 5),
+ NAME(RESERVED.BITS, 24);
+
+ private final LongBitFormat BITS;
+
+ XAttrStatusFormat(LongBitFormat previous, int length) {
+ BITS = new LongBitFormat(name(), previous, length, 0);
+ }
+
+ static XAttr.NameSpace getNamespace(int xattrStatus) {
+ int ordinal = (int) NAMESPACE.BITS.retrieve(xattrStatus);
+ return XAttr.NameSpace.values()[ordinal];
+ }
+
+ static String getName(int xattrStatus) {
+ int id = (int) NAME.BITS.retrieve(xattrStatus);
+ return XAttrStorage.getName(id);
+ }
+
+ static int toInt(XAttr.NameSpace namespace, String name) {
+ long xattrStatusInt = 0;
+
+ xattrStatusInt = NAMESPACE.BITS
+ .combine(namespace.ordinal(), xattrStatusInt);
+ int nid = XAttrStorage.getNameSerialNumber(name);
+ xattrStatusInt = NAME.BITS
+ .combine(nid, xattrStatusInt);
+
+ return (int) xattrStatusInt;
+ }
+ }
+
private static final int XATTR_VALUE_LEN_MAX = 1 << 16;
- private static final XAttr.NameSpace[] XATTR_NAMESPACE_VALUES =
- XAttr.NameSpace.values();
/**
* Unpack byte[] to XAttrs.
@@ -64,10 +95,8 @@ class XAttrFormat {
int v = Ints.fromBytes(attrs[i], attrs[i + 1],
attrs[i + 2], attrs[i + 3]);
i += 4;
- int ns = (v >> XATTR_NAMESPACE_OFFSET) & XATTR_NAMESPACE_MASK;
- int nid = v & XATTR_NAME_MASK;
- builder.setNameSpace(XATTR_NAMESPACE_VALUES[ns]);
- builder.setName(XAttrStorage.getName(nid));
+ builder.setNameSpace(XAttrStatusFormat.getNamespace(v));
+ builder.setName(XAttrStatusFormat.getName(v));
int vlen = ((0xff & attrs[i]) << 8) | (0xff & attrs[i + 1]);
i += 2;
if (vlen > 0) {
@@ -100,10 +129,8 @@ class XAttrFormat {
int v = Ints.fromBytes(attrs[i], attrs[i + 1],
attrs[i + 2], attrs[i + 3]);
i += 4;
- int ns = (v >> XATTR_NAMESPACE_OFFSET) & XATTR_NAMESPACE_MASK;
- int nid = v & XATTR_NAME_MASK;
- XAttr.NameSpace namespace = XATTR_NAMESPACE_VALUES[ns];
- String name = XAttrStorage.getName(nid);
+ XAttr.NameSpace namespace = XAttrStatusFormat.getNamespace(v);
+ String name = XAttrStatusFormat.getName(v);
int vlen = ((0xff & attrs[i]) << 8) | (0xff & attrs[i + 1]);
i += 2;
if (xAttr.getNameSpace() == namespace &&
@@ -134,15 +161,7 @@ class XAttrFormat {
ByteArrayOutputStream out = new ByteArrayOutputStream();
try {
for (XAttr a : xAttrs) {
- int nsOrd = a.getNameSpace().ordinal();
- Preconditions.checkArgument(nsOrd < 8, "Too many namespaces.");
- int nid = XAttrStorage.getNameSerialNumber(a.getName());
- Preconditions.checkArgument(nid < XATTR_NAME_ID_MAX,
- "Too large serial number of the xattr name");
-
- // big-endian
- int v = ((nsOrd & XATTR_NAMESPACE_MASK) << XATTR_NAMESPACE_OFFSET)
- | (nid & XATTR_NAME_MASK);
+ int v = XAttrStatusFormat.toInt(a.getNameSpace(), a.getName());
out.write(Ints.toByteArray(v));
int vlen = a.getValue() == null ? 0 : a.getValue().length;
Preconditions.checkArgument(vlen < XATTR_VALUE_LEN_MAX,
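
The common thread across the three files is width consistency: the ACL entry name, the permission-status USER and GROUP ids, and the XAttr name id all become 24-bit references into the StringTableSection, with the freed bits explicitly reserved; for PermissionStatusFormat, 16 + 24 + 24 now fills the 64-bit long exactly. A stand-alone sketch of the resulting 4-byte XAttr header, using plain shifts and masks instead of the LongBitFormat plumbing in the diff:

    // Layout per the class comment: [0:3) namespace, [3:8) reserved,
    // [8:32) name id, counting bits from the most significant end.
    static int packXAttrHeader(int nsOrdinal, int nameId) {
      if (nsOrdinal >= (1 << 3) || nameId >= (1 << 24)) {
        throw new IllegalArgumentException("field out of range");
      }
      return (nsOrdinal << 29) | nameId;  // bits 28..24 stay zero (reserved)
    }
    static int namespaceOrdinal(int v) { return (v >>> 29) & 0x7; }
    static int nameId(int v)           { return v & 0xFFFFFF; }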
[17/50] hadoop git commit: HDDS-253. SCMBlockDeletingService should
publish events for delete blocks to EventQueue. Contributed by Lokesh Jain.
HDDS-253. SCMBlockDeletingService should publish events for delete blocks to EventQueue. Contributed by Lokesh Jain.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1fe5b938
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1fe5b938
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1fe5b938
Branch: refs/heads/HDFS-13572
Commit: 1fe5b938435ab49e40cffa66f4dd16ddf1592405
Parents: 3f3f722
Author: Nanda kumar <na...@apache.org>
Authored: Fri Jul 13 17:18:42 2018 +0530
Committer: Nanda kumar <na...@apache.org>
Committed: Fri Jul 13 17:18:42 2018 +0530
----------------------------------------------------------------------
.../apache/hadoop/hdds/scm/block/BlockManagerImpl.java | 10 ++++++----
.../hadoop/hdds/scm/block/SCMBlockDeletingService.java | 13 +++++++++----
.../hdds/scm/server/StorageContainerManager.java | 2 +-
.../apache/hadoop/hdds/scm/block/TestBlockManager.java | 2 +-
.../apache/hadoop/ozone/scm/TestContainerSQLCli.java | 3 +--
5 files changed, 18 insertions(+), 12 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1fe5b938/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
index 953f71e..6825ca4 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
@@ -28,6 +28,7 @@ import org.apache.hadoop.hdds.scm.node.NodeManager;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationType;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
import org.apache.hadoop.metrics2.util.MBeans;
import org.apache.hadoop.ozone.OzoneConsts;
import org.apache.hadoop.hdds.client.BlockID;
@@ -87,10 +88,12 @@ public class BlockManagerImpl implements BlockManager, BlockmanagerMXBean {
* @param conf - configuration.
* @param nodeManager - node manager.
* @param containerManager - container manager.
+ * @param eventPublisher - event publisher.
* @throws IOException
*/
public BlockManagerImpl(final Configuration conf,
- final NodeManager nodeManager, final Mapping containerManager)
+ final NodeManager nodeManager, final Mapping containerManager,
+ EventPublisher eventPublisher)
throws IOException {
this.nodeManager = nodeManager;
this.containerManager = containerManager;
@@ -120,9 +123,8 @@ public class BlockManagerImpl implements BlockManager, BlockmanagerMXBean {
OZONE_BLOCK_DELETING_SERVICE_TIMEOUT_DEFAULT,
TimeUnit.MILLISECONDS);
blockDeletingService =
- new SCMBlockDeletingService(
- deletedBlockLog, containerManager, nodeManager, svcInterval,
- serviceTimeout, conf);
+ new SCMBlockDeletingService(deletedBlockLog, containerManager,
+ nodeManager, eventPublisher, svcInterval, serviceTimeout, conf);
}
/**
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1fe5b938/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java
index 2c555e0..6f65fdd 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java
@@ -20,11 +20,14 @@ import com.google.common.annotations.VisibleForTesting;
import com.google.common.base.Preconditions;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.scm.container.Mapping;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
import org.apache.hadoop.hdds.scm.node.NodeManager;
import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
import org.apache.hadoop.hdds.protocol.proto
.StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
import org.apache.hadoop.ozone.protocol.commands.DeleteBlocksCommand;
import org.apache.hadoop.util.Time;
import org.apache.hadoop.utils.BackgroundService;
@@ -61,6 +64,7 @@ public class SCMBlockDeletingService extends BackgroundService {
private final DeletedBlockLog deletedBlockLog;
private final Mapping mappingService;
private final NodeManager nodeManager;
+ private final EventPublisher eventPublisher;
// Block delete limit size is dynamically calculated based on container
// delete limit size (ozone.block.deleting.container.limit.per.interval)
@@ -76,13 +80,14 @@ public class SCMBlockDeletingService extends BackgroundService {
private int blockDeleteLimitSize;
public SCMBlockDeletingService(DeletedBlockLog deletedBlockLog,
- Mapping mapper, NodeManager nodeManager,
- long interval, long serviceTimeout, Configuration conf) {
+ Mapping mapper, NodeManager nodeManager, EventPublisher eventPublisher,
+ long interval, long serviceTimeout, Configuration conf) {
super("SCMBlockDeletingService", interval, TimeUnit.MILLISECONDS,
BLOCK_DELETING_SERVICE_CORE_POOL_SIZE, serviceTimeout);
this.deletedBlockLog = deletedBlockLog;
this.mappingService = mapper;
this.nodeManager = nodeManager;
+ this.eventPublisher = eventPublisher;
int containerLimit = conf.getInt(
OZONE_BLOCK_DELETING_CONTAINER_LIMIT_PER_INTERVAL,
@@ -145,8 +150,8 @@ public class SCMBlockDeletingService extends BackgroundService {
// We should stop caching new commands if num of un-processed
// command is bigger than a limit, e.g 50. In case datanode goes
// offline for sometime, the cached commands be flooded.
- nodeManager.addDatanodeCommand(dnId,
- new DeleteBlocksCommand(dnTXs));
+ eventPublisher.fireEvent(SCMEvents.DATANODE_COMMAND,
+ new CommandForDatanode<>(dnId, new DeleteBlocksCommand(dnTXs)));
LOG.debug(
"Added delete block command for datanode {} in the queue,"
+ " number of delete block transactions: {}, TxID list: {}",
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1fe5b938/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
index 5f511ee..f37a0ed 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
@@ -181,7 +181,7 @@ public final class StorageContainerManager extends ServiceRuntimeInfoImpl
scmContainerManager = new ContainerMapping(
conf, getScmNodeManager(), cacheSize);
scmBlockManager = new BlockManagerImpl(
- conf, getScmNodeManager(), scmContainerManager);
+ conf, getScmNodeManager(), scmContainerManager, eventQueue);
Node2ContainerMap node2ContainerMap = new Node2ContainerMap();
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1fe5b938/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java
index 9fbb9fa..06e7420 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java
@@ -74,7 +74,7 @@ public class TestBlockManager {
}
nodeManager = new MockNodeManager(true, 10);
mapping = new ContainerMapping(conf, nodeManager, 128);
- blockManager = new BlockManagerImpl(conf, nodeManager, mapping);
+ blockManager = new BlockManagerImpl(conf, nodeManager, mapping, null);
if(conf.getBoolean(ScmConfigKeys.DFS_CONTAINER_RATIS_ENABLED_KEY,
ScmConfigKeys.DFS_CONTAINER_RATIS_ENABLED_DEFAULT)){
factor = HddsProtos.ReplicationFactor.THREE;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1fe5b938/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java
index 1a1f37c..a878627 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java
@@ -17,7 +17,6 @@
*/
package org.apache.hadoop.ozone.scm;
-import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.node.NodeManager;
import org.apache.hadoop.ozone.MiniOzoneCluster;
import org.apache.hadoop.ozone.OzoneConfigKeys;
@@ -117,7 +116,7 @@ public class TestContainerSQLCli {
nodeManager = cluster.getStorageContainerManager().getScmNodeManager();
mapping = new ContainerMapping(conf, nodeManager, 128);
- blockManager = new BlockManagerImpl(conf, nodeManager, mapping);
+ blockManager = new BlockManagerImpl(conf, nodeManager, mapping, null);
// blockManager.allocateBlock() will create containers if there is none
// stored in levelDB. The number of containers to create is the value of
[24/50] hadoop git commit: HDDS-254. Fix
TestStorageContainerManager#testBlockDeletingThrottling. Contributed by
Lokesh Jain
HDDS-254. Fix TestStorageContainerManager#testBlockDeletingThrottling. Contributed by Lokesh Jain
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5074ca93
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5074ca93
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5074ca93
Branch: refs/heads/HDFS-13572
Commit: 5074ca93afb4fbd1c367852ba55d1e89b38a2133
Parents: 0927bc4
Author: Bharat Viswanadham <bh...@apache.org>
Authored: Sun Jul 15 10:47:20 2018 -0700
Committer: Bharat Viswanadham <bh...@apache.org>
Committed: Sun Jul 15 10:47:20 2018 -0700
----------------------------------------------------------------------
.../test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5074ca93/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
index b3137bf..3ef74b0 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
@@ -392,7 +392,7 @@ public final class MiniOzoneClusterImpl implements MiniOzoneCluster {
private void configureSCMheartbeat() {
if (hbInterval.isPresent()) {
- conf.getTimeDuration(ScmConfigKeys.OZONE_SCM_HEARTBEAT_INTERVAL,
+ conf.setTimeDuration(ScmConfigKeys.OZONE_SCM_HEARTBEAT_INTERVAL,
hbInterval.get(), TimeUnit.MILLISECONDS);
} else {
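
The one-character diff is the whole bug: conf.getTimeDuration merely reads a value (here discarding it), so a builder-supplied heartbeat interval was never written into the cluster configuration and testBlockDeletingThrottling ran with the default SCM heartbeat interval; conf.setTimeDuration actually applies it.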
[04/50] hadoop git commit: HDFS-13723. Occasional "Should be
different group" error in TestRefreshUserMappings#testGroupMappingRefresh.
Contributed by Siyao Meng.
HDFS-13723. Occasional "Should be different group" error in TestRefreshUserMappings#testGroupMappingRefresh. Contributed by Siyao Meng.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/162228e8
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/162228e8
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/162228e8
Branch: refs/heads/HDFS-13572
Commit: 162228e8db937d4bdb9cf61d15ed555f1c96368f
Parents: d36ed94
Author: Wei-Chiu Chuang <we...@apache.org>
Authored: Wed Jul 11 10:02:08 2018 -0700
Committer: Wei-Chiu Chuang <we...@apache.org>
Committed: Wed Jul 11 10:02:08 2018 -0700
----------------------------------------------------------------------
.../java/org/apache/hadoop/security/Groups.java | 5 ++++-
.../hadoop/security/TestRefreshUserMappings.java | 19 +++++++++++++------
2 files changed, 17 insertions(+), 7 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/162228e8/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java
index ad09865..63ec9a5 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java
@@ -73,7 +73,8 @@ import org.slf4j.LoggerFactory;
@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
@InterfaceStability.Evolving
public class Groups {
- private static final Logger LOG = LoggerFactory.getLogger(Groups.class);
+ @VisibleForTesting
+ static final Logger LOG = LoggerFactory.getLogger(Groups.class);
private final GroupMappingServiceProvider impl;
@@ -308,6 +309,7 @@ public class Groups {
*/
@Override
public List<String> load(String user) throws Exception {
+ LOG.debug("GroupCacheLoader - load.");
TraceScope scope = null;
Tracer tracer = Tracer.curThreadTracer();
if (tracer != null) {
@@ -346,6 +348,7 @@ public class Groups {
public ListenableFuture<List<String>> reload(final String key,
List<String> oldValue)
throws Exception {
+ LOG.debug("GroupCacheLoader - reload (async).");
if (!reloadGroupsInBackground) {
return super.reload(key, oldValue);
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/162228e8/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
index f511eb1..0e7dfc3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestRefreshUserMappings.java
@@ -45,6 +45,8 @@ import org.apache.hadoop.hdfs.tools.DFSAdmin;
import org.apache.hadoop.security.authorize.AuthorizationException;
import org.apache.hadoop.security.authorize.DefaultImpersonationProvider;
import org.apache.hadoop.security.authorize.ProxyUsers;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.slf4j.event.Level;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
@@ -93,6 +95,8 @@ public class TestRefreshUserMappings {
FileSystem.setDefaultUri(config, "hdfs://localhost:" + "0");
cluster = new MiniDFSCluster.Builder(config).build();
cluster.waitActive();
+
+ GenericTestUtils.setLogLevel(Groups.LOG, Level.DEBUG);
}
@After
@@ -114,21 +118,24 @@ public class TestRefreshUserMappings {
String [] args = new String[]{"-refreshUserToGroupsMappings"};
Groups groups = Groups.getUserToGroupsMappingService(config);
String user = UserGroupInformation.getCurrentUser().getUserName();
- System.out.println("first attempt:");
+
+ System.out.println("First attempt:");
List<String> g1 = groups.getGroups(user);
String [] str_groups = new String [g1.size()];
g1.toArray(str_groups);
System.out.println(Arrays.toString(str_groups));
- System.out.println("second attempt, should be same:");
+ System.out.println("Second attempt, should be the same:");
List<String> g2 = groups.getGroups(user);
g2.toArray(str_groups);
System.out.println(Arrays.toString(str_groups));
for(int i=0; i<g2.size(); i++) {
assertEquals("Should be same group ", g1.get(i), g2.get(i));
}
+
+ // Test refresh command
admin.run(args);
- System.out.println("third attempt(after refresh command), should be different:");
+ System.out.println("Third attempt(after refresh command), should be different:");
List<String> g3 = groups.getGroups(user);
g3.toArray(str_groups);
System.out.println(Arrays.toString(str_groups));
@@ -137,9 +144,9 @@ public class TestRefreshUserMappings {
g1.get(i).equals(g3.get(i)));
}
- // test time out
- Thread.sleep(groupRefreshTimeoutSec*1100);
- System.out.println("fourth attempt(after timeout), should be different:");
+ // Test timeout
+ Thread.sleep(groupRefreshTimeoutSec * 1500);
+ System.out.println("Fourth attempt(after timeout), should be different:");
List<String> g4 = groups.getGroups(user);
g4.toArray(str_groups);
System.out.println(Arrays.toString(str_groups));
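
On the timeout change: with the test's cache timeout of groupRefreshTimeoutSec seconds (set to 1 in the test setup, outside this hunk, if the usual configuration holds), sleeping groupRefreshTimeoutSec * 1100 ms left only a 10% margin over expiry, which a busy CI host could erase; 1500 ms per second of timeout gives a 50% margin, and the DEBUG level enabled in setUp makes any remaining flake traceable to the Groups cache loader.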
[40/50] hadoop git commit: HDDS-255. Fix TestOzoneConfigurationFields
for missing hdds.command.status.report.interval in config classes.
Contributed by Sandeep Nemuri.
HDDS-255. Fix TestOzoneConfigurationFields for missing hdds.command.status.report.interval in config classes. Contributed by Sandeep Nemuri.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c492eacc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c492eacc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c492eacc
Branch: refs/heads/HDFS-13572
Commit: c492eaccc21bb53d0d40214290b2fa9c493e2955
Parents: 129269f
Author: Xiaoyu Yao <xy...@apache.org>
Authored: Wed Jul 18 11:46:26 2018 -0700
Committer: Xiaoyu Yao <xy...@apache.org>
Committed: Wed Jul 18 11:46:26 2018 -0700
----------------------------------------------------------------------
.../org/apache/hadoop/ozone/TestOzoneConfigurationFields.java | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/c492eacc/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestOzoneConfigurationFields.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestOzoneConfigurationFields.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestOzoneConfigurationFields.java
index 717bb68..909cddf 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestOzoneConfigurationFields.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestOzoneConfigurationFields.java
@@ -18,6 +18,7 @@
package org.apache.hadoop.ozone;
import org.apache.hadoop.conf.TestConfigurationFieldsBase;
+import org.apache.hadoop.hdds.HddsConfigKeys;
import org.apache.hadoop.ozone.om.OMConfigKeys;
import org.apache.hadoop.hdds.scm.ScmConfigKeys;
@@ -31,7 +32,7 @@ public class TestOzoneConfigurationFields extends TestConfigurationFieldsBase {
xmlFilename = new String("ozone-default.xml");
configurationClasses =
new Class[] {OzoneConfigKeys.class, ScmConfigKeys.class,
- OMConfigKeys.class};
+ OMConfigKeys.class, HddsConfigKeys.class};
errorIfMissingConfigProps = true;
errorIfMissingXmlProps = true;
xmlPropsToSkipCompare.add("hadoop.tags.custom");
[37/50] hadoop git commit: HDFS-13733. RBF: Add Web UI configurations
and descriptions to RBF document. Contributed by Takanobu Asanuma.
HDFS-13733. RBF: Add Web UI configurations and descriptions to RBF document. Contributed by Takanobu Asanuma.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1af87df2
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1af87df2
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1af87df2
Branch: refs/heads/HDFS-13572
Commit: 1af87df242c4286474961078d306a5692f85debc
Parents: 0a1e922
Author: Yiqun Lin <yq...@apache.org>
Authored: Tue Jul 17 10:45:08 2018 +0800
Committer: Yiqun Lin <yq...@apache.org>
Committed: Tue Jul 17 10:45:08 2018 +0800
----------------------------------------------------------------------
.../src/site/markdown/HDFSRouterFederation.md | 12 ++++++++++++
1 file changed, 12 insertions(+)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/1af87df2/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
index 73e0f4a..c5bf5e1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
@@ -330,6 +330,18 @@ The administration server to manage the Mount Table.
| dfs.federation.router.admin-bind-host | 0.0.0.0 | The actual address the RPC admin server will bind to. |
| dfs.federation.router.admin.handler.count | 1 | The number of server threads for the router to handle RPC requests from admin. |
+### HTTP Server
+
+The HTTP Server to provide Web UI and the HDFS REST interface ([WebHDFS](../hadoop-hdfs/WebHDFS.html)) for the clients. The default URL is "`http://router_host:50071`".
+
+| Property | Default | Description|
+|:---- |:---- |:---- |
+| dfs.federation.router.http.enable | `true` | If `true`, the HTTP service to handle client requests in the router is enabled. |
+| dfs.federation.router.http-address | 0.0.0.0:50071 | HTTP address that handles the web requests to the Router. |
+| dfs.federation.router.http-bind-host | 0.0.0.0 | The actual address the HTTP server will bind to. |
+| dfs.federation.router.https-address | 0.0.0.0:50072 | HTTPS address that handles the web requests to the Router. |
+| dfs.federation.router.https-bind-host | 0.0.0.0 | The actual address the HTTPS server will bind to. |
+
### State Store
The connection to the State Store and the internal caching at the Router.
[12/50] hadoop git commit: HDDS-234. Add SCM node report handler.
Contributed by Ajay Kumar.
HDDS-234. Add SCM node report handler. Contributed by Ajay Kumar.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/556d9b36
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/556d9b36
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/556d9b36
Branch: refs/heads/HDFS-13572
Commit: 556d9b36be4b0b759646b8f6030c9e693b97bdb8
Parents: 5ee90ef
Author: Anu Engineer <ae...@apache.org>
Authored: Thu Jul 12 12:09:31 2018 -0700
Committer: Anu Engineer <ae...@apache.org>
Committed: Thu Jul 12 12:09:31 2018 -0700
----------------------------------------------------------------------
.../hadoop/hdds/scm/node/NodeManager.java | 9 ++
.../hadoop/hdds/scm/node/NodeReportHandler.java | 19 +++-
.../hadoop/hdds/scm/node/SCMNodeManager.java | 11 +++
.../hdds/scm/container/MockNodeManager.java | 11 +++
.../hdds/scm/node/TestNodeReportHandler.java | 95 ++++++++++++++++++++
.../testutils/ReplicationNodeManagerMock.java | 10 +++
6 files changed, 152 insertions(+), 3 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556d9b36/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
index 5e2969d..deb1628 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeManager.java
@@ -17,6 +17,7 @@
*/
package org.apache.hadoop.hdds.scm.node;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.NodeReportProto;
import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeStat;
import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
@@ -138,4 +139,12 @@ public interface NodeManager extends StorageContainerNodeProtocol,
* @param command
*/
void addDatanodeCommand(UUID dnId, SCMCommand command);
+
+ /**
+ * Process node report.
+ *
+ * @param dnUuid
+ * @param nodeReport
+ */
+ void processNodeReport(UUID dnUuid, NodeReportProto nodeReport);
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556d9b36/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
index aa78d53..331bfed 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
@@ -7,7 +7,7 @@
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
- * http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
@@ -18,25 +18,38 @@
package org.apache.hadoop.hdds.scm.node;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
.NodeReportFromDatanode;
import org.apache.hadoop.hdds.server.events.EventHandler;
import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
/**
* Handles Node Reports from datanode.
*/
public class NodeReportHandler implements EventHandler<NodeReportFromDatanode> {
+ private static final Logger LOGGER = LoggerFactory
+ .getLogger(NodeReportHandler.class);
private final NodeManager nodeManager;
public NodeReportHandler(NodeManager nodeManager) {
+ Preconditions.checkNotNull(nodeManager);
this.nodeManager = nodeManager;
}
@Override
public void onMessage(NodeReportFromDatanode nodeReportFromDatanode,
- EventPublisher publisher) {
- //TODO: process node report.
+ EventPublisher publisher) {
+ Preconditions.checkNotNull(nodeReportFromDatanode);
+ DatanodeDetails dn = nodeReportFromDatanode.getDatanodeDetails();
+ Preconditions.checkNotNull(dn, "NodeReport is "
+ + "missing DatanodeDetails.");
+ LOGGER.trace("Processing node report for dn: {}", dn);
+ nodeManager
+ .processNodeReport(dn.getUuid(), nodeReportFromDatanode.getReport());
}
}
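
NodeReportHandler only takes effect once subscribed; the real hookup belongs in StorageContainerManager's event-queue setup, which this hunk does not show. A one-line sketch of the assumed subscription (the NODE_REPORT event name is an assumption, mirroring the DATANODE_COMMAND wiring earlier in this digest):

    // Assumed wiring: node reports from the dispatcher flow into the handler.
    eventQueue.addHandler(SCMEvents.NODE_REPORT,
        new NodeReportHandler(scmNodeManager));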
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556d9b36/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
index 2ba8067..7370b07 100644
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
+++ b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/SCMNodeManager.java
@@ -423,6 +423,17 @@ public class SCMNodeManager
}
/**
+ * Process node report.
+ *
+ * @param dnUuid
+ * @param nodeReport
+ */
+ @Override
+ public void processNodeReport(UUID dnUuid, NodeReportProto nodeReport) {
+ this.updateNodeStat(dnUuid, nodeReport);
+ }
+
+ /**
* Returns the aggregated node stats.
* @return the aggregated node stats.
*/
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556d9b36/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
index 5e83c28..593b780 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/MockNodeManager.java
@@ -295,6 +295,17 @@ public class MockNodeManager implements NodeManager {
}
}
+ /**
+ * Empty implementation for processNodeReport.
+ *
+ * @param dnUuid
+ * @param nodeReport
+ */
+ @Override
+ public void processNodeReport(UUID dnUuid, NodeReportProto nodeReport) {
+ // do nothing
+ }
+
// Returns the number of commands that is queued to this node manager.
public int getCommandCount(DatanodeDetails dd) {
List<SCMCommand> list = commandMap.get(dd.getUuid());
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556d9b36/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeReportHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeReportHandler.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeReportHandler.java
new file mode 100644
index 0000000..3cbde4b
--- /dev/null
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestNodeReportHandler.java
@@ -0,0 +1,95 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hdds.scm.node;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.UUID;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.NodeReportProto;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.StorageReportProto;
+import org.apache.hadoop.hdds.scm.TestUtils;
+import org.apache.hadoop.hdds.scm.container.placement.metrics.SCMNodeMetric;
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.NodeReportFromDatanode;
+import org.apache.hadoop.hdds.server.events.Event;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.hdds.server.events.EventQueue;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class TestNodeReportHandler implements EventPublisher {
+
+ private static final Logger LOG = LoggerFactory
+ .getLogger(TestNodeReportHandler.class);
+ private NodeReportHandler nodeReportHandler;
+ private SCMNodeManager nodeManager;
+ private String storagePath = GenericTestUtils.getRandomizedTempPath()
+ .concat("/" + UUID.randomUUID().toString());
+
+ @Before
+ public void resetEventCollector() throws IOException {
+ OzoneConfiguration conf = new OzoneConfiguration();
+ nodeManager = new SCMNodeManager(conf, "cluster1", null, new EventQueue());
+ nodeReportHandler = new NodeReportHandler(nodeManager);
+ }
+
+ @Test
+ public void testNodeReport() throws IOException {
+ DatanodeDetails dn = TestUtils.getDatanodeDetails();
+ List<StorageReportProto> reports =
+ TestUtils.createStorageReport(100, 10, 90, storagePath, null,
+ dn.getUuid().toString(), 1);
+
+ nodeReportHandler.onMessage(
+ getNodeReport(dn, reports), this);
+ SCMNodeMetric nodeMetric = nodeManager.getNodeStat(dn);
+
+ Assert.assertTrue(nodeMetric.get().getCapacity().get() == 100);
+ Assert.assertTrue(nodeMetric.get().getRemaining().get() == 90);
+ Assert.assertTrue(nodeMetric.get().getScmUsed().get() == 10);
+
+ reports =
+ TestUtils.createStorageReport(100, 10, 90, storagePath, null,
+ dn.getUuid().toString(), 2);
+ nodeReportHandler.onMessage(
+ getNodeReport(dn, reports), this);
+ nodeMetric = nodeManager.getNodeStat(dn);
+
+ Assert.assertTrue(nodeMetric.get().getCapacity().get() == 200);
+ Assert.assertTrue(nodeMetric.get().getRemaining().get() == 180);
+ Assert.assertTrue(nodeMetric.get().getScmUsed().get() == 20);
+
+ }
+
+ private NodeReportFromDatanode getNodeReport(DatanodeDetails dn,
+ List<StorageReportProto> reports) {
+ NodeReportProto nodeReportProto = TestUtils.createNodeReport(reports);
+ return new NodeReportFromDatanode(dn, nodeReportProto);
+ }
+
+ @Override
+ public <PAYLOAD, EVENT_TYPE extends Event<PAYLOAD>> void fireEvent(
+ EVENT_TYPE event, PAYLOAD payload) {
+ LOG.info("Event is published: {}", payload);
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/556d9b36/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
index 2d27d71..a0249aa 100644
--- a/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
+++ b/hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/ozone/container/testutils/ReplicationNodeManagerMock.java
@@ -289,6 +289,16 @@ public class ReplicationNodeManagerMock implements NodeManager {
this.commandQueue.addCommand(dnId, command);
}
+ /**
+ * Empty implementation for processNodeReport.
+ * @param dnUuid
+ * @param nodeReport
+ */
+ @Override
+ public void processNodeReport(UUID dnUuid, NodeReportProto nodeReport) {
+ // do nothing.
+ }
+
@Override
public void onMessage(CommandForDatanode commandForDatanode,
EventPublisher publisher) {
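End to end, this patch completes a typed-event path: the heartbeat dispatcher extracts a NodeReportFromDatanode, publishes it, and NodeReportHandler forwards it to NodeManager.processNodeReport(). A minimal wiring sketch, assuming the event-type constant is SCMEvents.NODE_REPORT (that constant name is an assumption; the other names mirror the classes in the diff):

    OzoneConfiguration conf = new OzoneConfiguration();
    EventQueue eventQueue = new EventQueue();
    SCMNodeManager nodeManager =
        new SCMNodeManager(conf, "cluster1", null, eventQueue);
    // Register the handler on the SCM event queue.
    eventQueue.addHandler(SCMEvents.NODE_REPORT,   // assumed event constant
        new NodeReportHandler(nodeManager));
    // The heartbeat dispatcher then publishes each extracted report:
    eventQueue.fireEvent(SCMEvents.NODE_REPORT,
        new NodeReportFromDatanode(datanodeDetails, nodeReportProto));
    // NodeReportHandler.onMessage() delegates to
    // nodeManager.processNodeReport(dn.getUuid(), report), which updates
    // the per-node stats behind SCMNodeMetric.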
[20/50] hadoop git commit: HADOOP-15531. Use commons-text instead of commons-lang in some classes to fix deprecation warnings. Contributed by Takanobu Asanuma.
Posted by zh...@apache.org.
HADOOP-15531. Use commons-text instead of commons-lang in some classes to fix deprecation warnings. Contributed by Takanobu Asanuma.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/88625f5c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/88625f5c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/88625f5c
Branch: refs/heads/HDFS-13572
Commit: 88625f5cd90766136a9ebd76a8d84b45a37e6c99
Parents: 17118f4
Author: Akira Ajisaka <aa...@apache.org>
Authored: Fri Jul 13 11:42:12 2018 -0400
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Fri Jul 13 11:42:12 2018 -0400
----------------------------------------------------------------------
hadoop-client-modules/hadoop-client-minicluster/pom.xml | 4 ++++
hadoop-common-project/hadoop-common/pom.xml | 5 +++++
.../org/apache/hadoop/conf/ReconfigurationServlet.java | 2 +-
.../hdfs/qjournal/server/GetJournalEditServlet.java | 2 +-
.../hadoop/hdfs/server/diskbalancer/command/Command.java | 6 +++---
.../hdfs/server/diskbalancer/command/PlanCommand.java | 4 ++--
.../hdfs/server/diskbalancer/command/ReportCommand.java | 10 +++++-----
.../apache/hadoop/hdfs/server/namenode/FSNamesystem.java | 2 +-
.../java/org/apache/hadoop/hdfs/tools/CacheAdmin.java | 2 +-
.../java/org/apache/hadoop/hdfs/TestDecommission.java | 4 ++--
.../java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java | 4 ++--
.../apache/hadoop/mapreduce/v2/app/webapp/TaskPage.java | 2 +-
.../apache/hadoop/mapreduce/v2/app/webapp/TasksBlock.java | 2 +-
.../apache/hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java | 2 +-
.../apache/hadoop/mapreduce/v2/hs/webapp/HsTaskPage.java | 2 +-
hadoop-project/pom.xml | 5 +++++
.../java/org/apache/hadoop/yarn/client/cli/TopCLI.java | 3 ++-
.../src/main/java/org/apache/hadoop/yarn/state/Graph.java | 2 +-
.../org/apache/hadoop/yarn/webapp/hamlet/HamletImpl.java | 2 +-
.../org/apache/hadoop/yarn/webapp/hamlet2/HamletImpl.java | 2 +-
.../java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java | 2 +-
.../java/org/apache/hadoop/yarn/webapp/view/TextView.java | 2 +-
.../apache/hadoop/yarn/server/webapp/AppAttemptBlock.java | 2 +-
.../org/apache/hadoop/yarn/server/webapp/AppBlock.java | 2 +-
.../org/apache/hadoop/yarn/server/webapp/AppsBlock.java | 2 +-
.../resourcemanager/webapp/FairSchedulerAppsBlock.java | 2 +-
.../server/resourcemanager/webapp/RMAppAttemptBlock.java | 2 +-
.../yarn/server/resourcemanager/webapp/RMAppBlock.java | 2 +-
.../yarn/server/resourcemanager/webapp/RMAppsBlock.java | 2 +-
.../hadoop/yarn/server/router/webapp/AppsBlock.java | 4 ++--
30 files changed, 52 insertions(+), 37 deletions(-)
----------------------------------------------------------------------
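The mapping behind most of these hunks is mechanical; a minimal before/after sketch, assuming commons-text 1.4 on the classpath (the version pinned in hadoop-project/pom.xml below):

    // Deprecated commons-lang3 forms replaced in this commit:
    //   org.apache.commons.lang3.StringEscapeUtils.escapeHtml4(s)
    //   org.apache.commons.lang3.text.StrBuilder
    // commons-text equivalents:
    import org.apache.commons.text.StringEscapeUtils;
    import org.apache.commons.text.TextStringBuilder;

    public class EscapeDemo {
      public static void main(String[] args) {
        String escaped = StringEscapeUtils.escapeHtml4("<b>a & b</b>");
        TextStringBuilder sb = new TextStringBuilder();
        sb.appendln(escaped);  // appendln() carries over from StrBuilder
        System.out.print(sb);  // prints &lt;b&gt;a &amp; b&lt;/b&gt;
      }
    }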
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-client-modules/hadoop-client-minicluster/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-client-modules/hadoop-client-minicluster/pom.xml b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
index 490281a..ea8d680 100644
--- a/hadoop-client-modules/hadoop-client-minicluster/pom.xml
+++ b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
@@ -171,6 +171,10 @@
<artifactId>commons-lang3</artifactId>
</exclusion>
<exclusion>
+ <groupId>org.apache.commons</groupId>
+ <artifactId>commons-text</artifactId>
+ </exclusion>
+ <exclusion>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
</exclusion>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-common-project/hadoop-common/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/pom.xml b/hadoop-common-project/hadoop-common/pom.xml
index 67a5a54..42554da 100644
--- a/hadoop-common-project/hadoop-common/pom.xml
+++ b/hadoop-common-project/hadoop-common/pom.xml
@@ -172,6 +172,11 @@
<scope>compile</scope>
</dependency>
<dependency>
+ <groupId>org.apache.commons</groupId>
+ <artifactId>commons-text</artifactId>
+ <scope>compile</scope>
+ </dependency>
+ <dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<scope>compile</scope>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurationServlet.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurationServlet.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurationServlet.java
index c5bdf4e..ef4eac6 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurationServlet.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurationServlet.java
@@ -18,7 +18,7 @@
package org.apache.hadoop.conf;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import java.util.Collection;
import java.util.Enumeration;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java
index 64ac11c..e967527 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/GetJournalEditServlet.java
@@ -31,7 +31,7 @@ import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.classification.InterfaceAudience;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java
index 968a5a7..eddef33 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/Command.java
@@ -27,7 +27,7 @@ import com.google.common.collect.Lists;
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.Option;
import org.apache.commons.lang3.StringUtils;
-import org.apache.commons.lang3.text.StrBuilder;
+import org.apache.commons.text.TextStringBuilder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.CommonConfigurationKeys;
@@ -491,7 +491,7 @@ public abstract class Command extends Configured implements Closeable {
/**
* Put output line to log and string buffer.
* */
- protected void recordOutput(final StrBuilder result,
+ protected void recordOutput(final TextStringBuilder result,
final String outputLine) {
LOG.info(outputLine);
result.appendln(outputLine);
@@ -501,7 +501,7 @@ public abstract class Command extends Configured implements Closeable {
* Parse top number of nodes to be processed.
* @return top number of nodes to be processed.
*/
- protected int parseTopNodes(final CommandLine cmd, final StrBuilder result)
+ protected int parseTopNodes(final CommandLine cmd, final TextStringBuilder result)
throws IllegalArgumentException {
String outputLine = "";
int nodes = 0;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/PlanCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/PlanCommand.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/PlanCommand.java
index 90cc0c4..dab9559 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/PlanCommand.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/PlanCommand.java
@@ -23,7 +23,7 @@ import com.google.common.base.Throwables;
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.lang3.StringUtils;
-import org.apache.commons.lang3.text.StrBuilder;
+import org.apache.commons.text.TextStringBuilder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
@@ -89,7 +89,7 @@ public class PlanCommand extends Command {
*/
@Override
public void execute(CommandLine cmd) throws Exception {
- StrBuilder result = new StrBuilder();
+ TextStringBuilder result = new TextStringBuilder();
String outputLine = "";
LOG.debug("Processing Plan Command.");
Preconditions.checkState(cmd.hasOption(DiskBalancerCLI.PLAN));
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/ReportCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/ReportCommand.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/ReportCommand.java
index 5f4e0f7..4f75aff 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/ReportCommand.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/command/ReportCommand.java
@@ -25,7 +25,7 @@ import java.util.ListIterator;
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.lang3.StringUtils;
-import org.apache.commons.lang3.text.StrBuilder;
+import org.apache.commons.text.TextStringBuilder;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.diskbalancer.DiskBalancerException;
import org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerDataNode;
@@ -67,7 +67,7 @@ public class ReportCommand extends Command {
@Override
public void execute(CommandLine cmd) throws Exception {
- StrBuilder result = new StrBuilder();
+ TextStringBuilder result = new TextStringBuilder();
String outputLine = "Processing report command";
recordOutput(result, outputLine);
@@ -99,7 +99,7 @@ public class ReportCommand extends Command {
getPrintStream().println(result.toString());
}
- private void handleTopReport(final CommandLine cmd, final StrBuilder result,
+ private void handleTopReport(final CommandLine cmd, final TextStringBuilder result,
final String nodeFormat) throws IllegalArgumentException {
Collections.sort(getCluster().getNodes(), Collections.reverseOrder());
@@ -131,7 +131,7 @@ public class ReportCommand extends Command {
}
}
- private void handleNodeReport(final CommandLine cmd, StrBuilder result,
+ private void handleNodeReport(final CommandLine cmd, TextStringBuilder result,
final String nodeFormat, final String volumeFormat) throws Exception {
String outputLine = "";
/*
@@ -175,7 +175,7 @@ public class ReportCommand extends Command {
/**
* Put node report lines to string buffer.
*/
- private void recordNodeReport(StrBuilder result, DiskBalancerDataNode dbdn,
+ private void recordNodeReport(TextStringBuilder result, DiskBalancerDataNode dbdn,
final String nodeFormat, final String volumeFormat) throws Exception {
final String trueStr = "True";
final String falseStr = "False";
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index f94f6d0..66bc567 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -17,7 +17,7 @@
*/
package org.apache.hadoop.hdfs.server.namenode;
-import static org.apache.commons.lang3.StringEscapeUtils.escapeJava;
+import static org.apache.commons.text.StringEscapeUtils.escapeJava;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_TRASH_INTERVAL_DEFAULT;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_TRASH_INTERVAL_KEY;
import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_CALLER_CONTEXT_ENABLED_DEFAULT;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
index 9781ea1..9e7a3cb 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
@@ -22,7 +22,7 @@ import java.util.EnumSet;
import java.util.LinkedList;
import java.util.List;
-import org.apache.commons.lang3.text.WordUtils;
+import org.apache.commons.text.WordUtils;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
index 42b4257..bd266ed 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
@@ -38,7 +38,7 @@ import java.util.concurrent.atomic.AtomicBoolean;
import com.google.common.base.Supplier;
import com.google.common.collect.Lists;
-import org.apache.commons.lang3.text.StrBuilder;
+import org.apache.commons.text.TextStringBuilder;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.fs.FSDataOutputStream;
@@ -661,7 +661,7 @@ public class TestDecommission extends AdminStatesBaseTest {
}
private static String scanIntoString(final ByteArrayOutputStream baos) {
- final StrBuilder sb = new StrBuilder();
+ final TextStringBuilder sb = new TextStringBuilder();
final Scanner scanner = new Scanner(baos.toString());
while (scanner.hasNextLine()) {
sb.appendln(scanner.nextLine());
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
index 1245247..badb81b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSAdmin.java
@@ -27,7 +27,7 @@ import com.google.common.base.Supplier;
import com.google.common.collect.Lists;
import org.apache.commons.io.FileUtils;
-import org.apache.commons.lang3.text.StrBuilder;
+import org.apache.commons.text.TextStringBuilder;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
@@ -518,7 +518,7 @@ public class TestDFSAdmin {
}
private static String scanIntoString(final ByteArrayOutputStream baos) {
- final StrBuilder sb = new StrBuilder();
+ final TextStringBuilder sb = new TextStringBuilder();
final Scanner scanner = new Scanner(baos.toString());
while (scanner.hasNextLine()) {
sb.appendln(scanner.nextLine());
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TaskPage.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TaskPage.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TaskPage.java
index 944f65e..4b8cde3 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TaskPage.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TaskPage.java
@@ -27,7 +27,7 @@ import static org.apache.hadoop.yarn.webapp.view.JQueryUI.tableInit;
import java.util.EnumSet;
import java.util.Collection;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.MRConfig;
import org.apache.hadoop.mapreduce.v2.api.records.JobId;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TasksBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TasksBlock.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TasksBlock.java
index a2d8fa9..a6d9f52 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TasksBlock.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/TasksBlock.java
@@ -24,7 +24,7 @@ import static org.apache.hadoop.yarn.util.StringHelper.join;
import static org.apache.hadoop.yarn.webapp.view.JQueryUI.C_PROGRESSBAR;
import static org.apache.hadoop.yarn.webapp.view.JQueryUI.C_PROGRESSBAR_VALUE;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.mapreduce.v2.api.records.TaskType;
import org.apache.hadoop.mapreduce.v2.app.job.Task;
import org.apache.hadoop.mapreduce.v2.app.webapp.dao.TaskInfo;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java
index 216bdce..3f4daf9 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsJobsBlock.java
@@ -21,7 +21,7 @@ package org.apache.hadoop.mapreduce.v2.hs.webapp;
import java.text.SimpleDateFormat;
import java.util.Date;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.MRConfig;
import org.apache.hadoop.mapreduce.v2.app.AppContext;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsTaskPage.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsTaskPage.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsTaskPage.java
index e8e76d1..8defc4f 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsTaskPage.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsTaskPage.java
@@ -29,7 +29,7 @@ import static org.apache.hadoop.yarn.webapp.view.JQueryUI.tableInit;
import java.util.Collection;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.mapreduce.v2.api.records.TaskId;
import org.apache.hadoop.mapreduce.v2.api.records.TaskType;
import org.apache.hadoop.mapreduce.v2.app.job.TaskAttempt;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-project/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 8e28afe..387a3da 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1070,6 +1070,11 @@
<version>3.7</version>
</dependency>
<dependency>
+ <groupId>org.apache.commons</groupId>
+ <artifactId>commons-text</artifactId>
+ <version>1.4</version>
+ </dependency>
+ <dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>${slf4j.version}</version>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java
index b890bee..aed5258 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/TopCLI.java
@@ -867,7 +867,8 @@ public class TopCLI extends YarnCLI {
TimeUnit.MILLISECONDS.toMinutes(uptime)
- TimeUnit.HOURS.toMinutes(TimeUnit.MILLISECONDS.toHours(uptime));
String uptimeStr = String.format("%dd, %d:%d", days, hours, minutes);
- String currentTime = DateFormatUtils.ISO_TIME_NO_T_FORMAT.format(now);
+ String currentTime = DateFormatUtils.ISO_8601_EXTENDED_TIME_FORMAT
+ .format(now);
ret.append(CLEAR_LINE);
ret.append(limitLineLength(String.format(
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/Graph.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/Graph.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/Graph.java
index ab884fa..11e6f86 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/Graph.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/state/Graph.java
@@ -26,7 +26,7 @@ import java.util.HashSet;
import java.util.List;
import java.util.Set;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.classification.InterfaceAudience.Private;
@Private
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletImpl.java
index 1562b1e..b0ff19f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletImpl.java
@@ -28,7 +28,7 @@ import java.util.EnumSet;
import static java.util.EnumSet.*;
import java.util.Iterator;
-import static org.apache.commons.lang3.StringEscapeUtils.*;
+import static org.apache.commons.text.StringEscapeUtils.*;
import static org.apache.hadoop.yarn.webapp.hamlet.HamletImpl.EOpt.*;
import org.apache.hadoop.classification.InterfaceAudience;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletImpl.java
index 1fcab23..1c4db06 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet2/HamletImpl.java
@@ -28,7 +28,7 @@ import java.util.EnumSet;
import static java.util.EnumSet.*;
import java.util.Iterator;
-import static org.apache.commons.lang3.StringEscapeUtils.*;
+import static org.apache.commons.text.StringEscapeUtils.*;
import static org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl.EOpt.*;
import org.apache.hadoop.classification.InterfaceAudience;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java
index 91e5f89..b8e954d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/JQueryUI.java
@@ -18,7 +18,7 @@
package org.apache.hadoop.yarn.webapp.view;
-import static org.apache.commons.lang3.StringEscapeUtils.escapeEcmaScript;
+import static org.apache.commons.text.StringEscapeUtils.escapeEcmaScript;
import static org.apache.hadoop.yarn.util.StringHelper.djoin;
import static org.apache.hadoop.yarn.util.StringHelper.join;
import static org.apache.hadoop.yarn.util.StringHelper.split;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TextView.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TextView.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TextView.java
index e67f960..4b08220 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TextView.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/view/TextView.java
@@ -20,7 +20,7 @@ package org.apache.hadoop.yarn.webapp.view;
import java.io.PrintWriter;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.yarn.webapp.View;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
index 38c79ba..2d53dc9 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppAttemptBlock.java
@@ -25,7 +25,7 @@ import java.security.PrivilegedExceptionAction;
import java.util.Collection;
import java.util.List;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.ApplicationBaseProtocol;
import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptReportRequest;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java
index 3c1018c..0c7a536 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppBlock.java
@@ -28,7 +28,7 @@ import java.util.Collection;
import java.util.List;
import java.util.Map;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.security.UserGroupInformation;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppsBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppsBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppsBlock.java
index 291a572..29843b5 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppsBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/AppsBlock.java
@@ -32,7 +32,7 @@ import java.util.Collection;
import java.util.EnumSet;
import java.util.List;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.commons.lang3.Range;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.util.StringUtils;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java
index 4bc3182..14ad277 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java
@@ -29,7 +29,7 @@ import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.StringUtils;
import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppAttemptBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppAttemptBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppAttemptBlock.java
index 18595de..43a6ac9 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppAttemptBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppAttemptBlock.java
@@ -27,7 +27,7 @@ import java.io.IOException;
import java.util.Collection;
import java.util.List;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptReportRequest;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppBlock.java
index 80d27f7..d260400 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppBlock.java
@@ -25,7 +25,7 @@ import java.util.Collection;
import java.util.List;
import java.util.Set;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationAttemptsRequest;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppsBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppsBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppsBlock.java
index 25b3a4d..b1c0cd9 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppsBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppsBlock.java
@@ -26,7 +26,7 @@ import java.io.IOException;
import java.util.List;
import java.util.Set;
-import org.apache.commons.lang3.StringEscapeUtils;
+import org.apache.commons.text.StringEscapeUtils;
import org.apache.hadoop.util.StringUtils;
import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest;
import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88625f5c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java
index aafc5f6..028bacd 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java
@@ -18,8 +18,8 @@
package org.apache.hadoop.yarn.server.router.webapp;
-import static org.apache.commons.lang3.StringEscapeUtils.escapeHtml4;
-import static org.apache.commons.lang3.StringEscapeUtils.escapeEcmaScript;
+import static org.apache.commons.text.StringEscapeUtils.escapeHtml4;
+import static org.apache.commons.text.StringEscapeUtils.escapeEcmaScript;
import static org.apache.hadoop.yarn.util.StringHelper.join;
import static org.apache.hadoop.yarn.webapp.view.JQueryUI.C_PROGRESSBAR;
import static org.apache.hadoop.yarn.webapp.view.JQueryUI.C_PROGRESSBAR_VALUE;
[49/50] hadoop git commit: HDFS-13743. RBF: Router throws NullPointerException due to the invalid initialization of MountTableResolver. Contributed by Takanobu Asanuma.
Posted by zh...@apache.org.
HDFS-13743. RBF: Router throws NullPointerException due to the invalid initialization of MountTableResolver. Contributed by Takanobu Asanuma.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/7b25fb94
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/7b25fb94
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/7b25fb94
Branch: refs/heads/HDFS-13572
Commit: 7b25fb949bf6f02df997beeca7df46c9e84c8d96
Parents: e6873df
Author: Yiqun Lin <yq...@apache.org>
Authored: Fri Jul 20 17:28:57 2018 +0800
Committer: Yiqun Lin <yq...@apache.org>
Committed: Fri Jul 20 17:28:57 2018 +0800
----------------------------------------------------------------------
.../federation/resolver/MountTableResolver.java | 28 +++++--
.../TestInitializeMountTableResolver.java | 82 ++++++++++++++++++++
2 files changed, 102 insertions(+), 8 deletions(-)
----------------------------------------------------------------------
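Condensed, the fix resolves the default nameservice in this order: the explicit router key, then the nameservice ID of the local node, then the first entry of dfs.nameservices, and finally the empty string instead of a NullPointerException. A short paraphrase of the new initDefaultNameService(), reusing the constants the diff imports (illustrative, not the literal method body):

    private void initDefaultNameService(Configuration conf) {
      // 1. DFS_ROUTER_DEFAULT_NAMESERVICE if set, else
      // 2. the nameservice ID DFSUtil resolves for this node.
      this.defaultNameService = conf.get(
          DFS_ROUTER_DEFAULT_NAMESERVICE,
          DFSUtil.getNamenodeNameServiceId(conf));
      if (defaultNameService == null) {
        // 3. first entry of DFS_NAMESERVICES; 4. else the empty string.
        Collection<String> nsIds = DFSUtilClient.getNameServiceIds(conf);
        this.defaultNameService =
            nsIds.isEmpty() ? "" : nsIds.iterator().next();
      }
    }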
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b25fb94/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
index 3f6efd6..c264de3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java
@@ -17,6 +17,8 @@
*/
package org.apache.hadoop.hdfs.server.federation.resolver;
+import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_NAMESERVICES;
+import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DeprecatedKeys.DFS_NAMESERVICE_ID;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.FEDERATION_MOUNT_TABLE_MAX_CACHE_SIZE;
import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.FEDERATION_MOUNT_TABLE_MAX_CACHE_SIZE_DEFAULT;
@@ -42,7 +44,6 @@ import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
-import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSUtil;
@@ -149,14 +150,25 @@ public class MountTableResolver
* @param conf Configuration for this resolver.
*/
private void initDefaultNameService(Configuration conf) {
- try {
- this.defaultNameService = conf.get(
- DFS_ROUTER_DEFAULT_NAMESERVICE,
- DFSUtil.getNamenodeNameServiceId(conf));
- } catch (HadoopIllegalArgumentException e) {
- LOG.error("Cannot find default name service, setting it to the first");
+ this.defaultNameService = conf.get(
+ DFS_ROUTER_DEFAULT_NAMESERVICE,
+ DFSUtil.getNamenodeNameServiceId(conf));
+
+ if (defaultNameService == null) {
+ LOG.warn(
+ "{} and {} are not set. Falling back to {} as the default name service.",
+ DFS_ROUTER_DEFAULT_NAMESERVICE, DFS_NAMESERVICE_ID, DFS_NAMESERVICES);
Collection<String> nsIds = DFSUtilClient.getNameServiceIds(conf);
- this.defaultNameService = nsIds.iterator().next();
+ if (nsIds.isEmpty()) {
+ this.defaultNameService = "";
+ } else {
+ this.defaultNameService = nsIds.iterator().next();
+ }
+ }
+
+ if (this.defaultNameService.equals("")) {
+ LOG.warn("Default name service is not set.");
+ } else {
LOG.info("Default name service: {}", this.defaultNameService);
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/7b25fb94/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestInitializeMountTableResolver.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestInitializeMountTableResolver.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestInitializeMountTableResolver.java
new file mode 100644
index 0000000..5db7531
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/resolver/TestInitializeMountTableResolver.java
@@ -0,0 +1,82 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.resolver;
+
+import org.apache.hadoop.conf.Configuration;
+import org.junit.Test;
+
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMESERVICE_ID;
+import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DFS_NAMESERVICES;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_DEFAULT_NAMESERVICE;
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Test {@link MountTableResolver} initialization.
+ */
+public class TestInitializeMountTableResolver {
+
+ @Test
+ public void testDefaultNameserviceIsMissing() {
+ Configuration conf = new Configuration();
+ MountTableResolver mountTable = new MountTableResolver(conf);
+ assertEquals("", mountTable.getDefaultNamespace());
+ }
+
+ @Test
+ public void testDefaultNameserviceWithEmptyString() {
+ Configuration conf = new Configuration();
+ conf.set(DFS_ROUTER_DEFAULT_NAMESERVICE, "");
+ MountTableResolver mountTable = new MountTableResolver(conf);
+ assertEquals("", mountTable.getDefaultNamespace());
+ }
+
+ @Test
+ public void testRouterDefaultNameservice() {
+ Configuration conf = new Configuration();
+ conf.set(DFS_ROUTER_DEFAULT_NAMESERVICE, "router_ns"); // this takes priority
+ conf.set(DFS_NAMESERVICE_ID, "ns_id");
+ conf.set(DFS_NAMESERVICES, "nss");
+ MountTableResolver mountTable = new MountTableResolver(conf);
+ assertEquals("router_ns", mountTable.getDefaultNamespace());
+ }
+
+ @Test
+ public void testNameserviceID() {
+ Configuration conf = new Configuration();
+ conf.set(DFS_NAMESERVICE_ID, "ns_id"); // this takes priority
+ conf.set(DFS_NAMESERVICES, "nss");
+ MountTableResolver mountTable = new MountTableResolver(conf);
+ assertEquals("ns_id", mountTable.getDefaultNamespace());
+ }
+
+ @Test
+ public void testSingleNameservices() {
+ Configuration conf = new Configuration();
+ conf.set(DFS_NAMESERVICES, "ns1");
+ MountTableResolver mountTable = new MountTableResolver(conf);
+ assertEquals("ns1", mountTable.getDefaultNamespace());
+ }
+
+ @Test
+ public void testMultipleNameservices() {
+ Configuration conf = new Configuration();
+ conf.set(DFS_NAMESERVICES, "ns1,ns2");
+ MountTableResolver mountTable = new MountTableResolver(conf);
+ assertEquals("ns1", mountTable.getDefaultNamespace());
+ }
+}
\ No newline at end of file
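
Taken together, the new tests pin down the resolution order for the default
namespace: DFS_ROUTER_DEFAULT_NAMESERVICE wins over DFS_NAMESERVICE_ID, which
wins over the first entry of DFS_NAMESERVICES. A minimal usage sketch of that
ordering, assuming the hadoop-hdfs-rbf classes above are on the classpath and
the same package visibility the test relies on (nameservice names are
illustrative):

    Configuration conf = new Configuration();
    conf.set(DFS_NAMESERVICES, "ns1,ns2");           // fallback: first entry wins
    conf.set(DFS_NAMESERVICE_ID, "ns1");             // overrides DFS_NAMESERVICES
    conf.set(DFS_ROUTER_DEFAULT_NAMESERVICE, "ns1"); // highest priority
    MountTableResolver resolver = new MountTableResolver(conf);
    String defaultNs = resolver.getDefaultNamespace(); // "ns1"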
---------------------------------------------------------------------
[26/50] hadoop git commit: HDFS-13475. RBF: Admin cannot enforce
Router enter SafeMode. Contributed by Chao Sun.
Posted by zh...@apache.org.
HDFS-13475. RBF: Admin cannot enforce Router enter SafeMode. Contributed by Chao Sun.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/359ea4e1
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/359ea4e1
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/359ea4e1
Branch: refs/heads/HDFS-13572
Commit: 359ea4e18147af5677c6d88265e26de6b6c72999
Parents: 937ef39
Author: Inigo Goiri <in...@apache.org>
Authored: Mon Jul 16 09:46:21 2018 -0700
Committer: Inigo Goiri <in...@apache.org>
Committed: Mon Jul 16 09:46:21 2018 -0700
----------------------------------------------------------------------
.../hdfs/server/federation/router/Router.java | 7 +++
.../federation/router/RouterAdminServer.java | 32 ++++++++---
.../federation/router/RouterRpcServer.java | 26 +--------
.../router/RouterSafemodeService.java | 44 ++++++++++++---
.../federation/router/TestRouterAdminCLI.java | 7 ++-
.../federation/router/TestRouterSafemode.java | 58 ++++++++++++++++----
6 files changed, 121 insertions(+), 53 deletions(-)
----------------------------------------------------------------------
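
For context, the manual safe mode added here is driven from the Router admin
CLI; a minimal client-side sketch, mirroring testRouterManualSafeMode in the
test changes below (the admin address is a placeholder):

    Configuration conf = new Configuration();
    conf.set(RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_KEY, "routerhost:8111"); // placeholder
    RouterAdmin admin = new RouterAdmin(conf);
    // Manual enter: the Router now stays in safe mode across State Store
    // cache refreshes until an explicit leave.
    int rc = ToolRunner.run(admin, new String[] {"-safemode", "enter"});
    rc = ToolRunner.run(admin, new String[] {"-safemode", "leave"});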
http://git-wip-us.apache.org/repos/asf/hadoop/blob/359ea4e1/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
index df2a448..7e67daa 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java
@@ -665,4 +665,11 @@ public class Router extends CompositeService {
Collection<NamenodeHeartbeatService> getNamenodeHearbeatServices() {
return this.namenodeHeartbeatServices;
}
+
+ /**
+ * Get the Router safe mode service.
+ */
+ RouterSafemodeService getSafemodeService() {
+ return this.safemodeService;
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/359ea4e1/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
index 139dfb8..8e23eca 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAdminServer.java
@@ -24,6 +24,7 @@ import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.Set;
+import com.google.common.base.Preconditions;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;
@@ -272,23 +273,37 @@ public class RouterAdminServer extends AbstractService
@Override
public EnterSafeModeResponse enterSafeMode(EnterSafeModeRequest request)
throws IOException {
- this.router.updateRouterState(RouterServiceState.SAFEMODE);
- this.router.getRpcServer().setSafeMode(true);
- return EnterSafeModeResponse.newInstance(verifySafeMode(true));
+ boolean success = false;
+ RouterSafemodeService safeModeService = this.router.getSafemodeService();
+ if (safeModeService != null) {
+ this.router.updateRouterState(RouterServiceState.SAFEMODE);
+ safeModeService.setManualSafeMode(true);
+ success = verifySafeMode(true);
+ }
+ return EnterSafeModeResponse.newInstance(success);
}
@Override
public LeaveSafeModeResponse leaveSafeMode(LeaveSafeModeRequest request)
throws IOException {
- this.router.updateRouterState(RouterServiceState.RUNNING);
- this.router.getRpcServer().setSafeMode(false);
- return LeaveSafeModeResponse.newInstance(verifySafeMode(false));
+ boolean success = false;
+ RouterSafemodeService safeModeService = this.router.getSafemodeService();
+ if (safeModeService != null) {
+ this.router.updateRouterState(RouterServiceState.RUNNING);
+ safeModeService.setManualSafeMode(false);
+ success = verifySafeMode(false);
+ }
+ return LeaveSafeModeResponse.newInstance(success);
}
@Override
public GetSafeModeResponse getSafeMode(GetSafeModeRequest request)
throws IOException {
- boolean isInSafeMode = this.router.getRpcServer().isInSafeMode();
+ boolean isInSafeMode = false;
+ RouterSafemodeService safeModeService = this.router.getSafemodeService();
+ if (safeModeService != null) {
+ isInSafeMode = safeModeService.isInSafeMode();
+ }
return GetSafeModeResponse.newInstance(isInSafeMode);
}
@@ -298,7 +313,8 @@ public class RouterAdminServer extends AbstractService
* @return
*/
private boolean verifySafeMode(boolean isInSafeMode) {
- boolean serverInSafeMode = this.router.getRpcServer().isInSafeMode();
+ Preconditions.checkNotNull(this.router.getSafemodeService());
+ boolean serverInSafeMode = this.router.getSafemodeService().isInSafeMode();
RouterServiceState currentState = this.router.getRouterState();
return (isInSafeMode && currentState == RouterServiceState.SAFEMODE
http://git-wip-us.apache.org/repos/asf/hadoop/blob/359ea4e1/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
index 7031af7..027db8a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java
@@ -193,9 +193,6 @@ public class RouterRpcServer extends AbstractService
/** Interface to map global name space to HDFS subcluster name spaces. */
private final FileSubclusterResolver subclusterResolver;
- /** If we are in safe mode, fail requests as if a standby NN. */
- private volatile boolean safeMode;
-
/** Category of the operation that a thread is executing. */
private final ThreadLocal<OperationCategory> opCategory = new ThreadLocal<>();
@@ -456,7 +453,8 @@ public class RouterRpcServer extends AbstractService
return;
}
- if (safeMode) {
+ RouterSafemodeService safemodeService = router.getSafemodeService();
+ if (safemodeService != null && safemodeService.isInSafeMode()) {
// Throw standby exception, router is not available
if (rpcMonitor != null) {
rpcMonitor.routerFailureSafemode();
@@ -466,26 +464,6 @@ public class RouterRpcServer extends AbstractService
}
}
- /**
- * In safe mode all RPC requests will fail and return a standby exception.
- * The client will try another Router, similar to the client retry logic for
- * HA.
- *
- * @param mode True if enabled, False if disabled.
- */
- public void setSafeMode(boolean mode) {
- this.safeMode = mode;
- }
-
- /**
- * Check if the Router is in safe mode and cannot serve RPC calls.
- *
- * @return If the Router is in safe mode.
- */
- public boolean isInSafeMode() {
- return this.safeMode;
- }
-
@Override // ClientProtocol
public Token<DelegationTokenIdentifier> getDelegationToken(Text renewer)
throws IOException {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/359ea4e1/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterSafemodeService.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterSafemodeService.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterSafemodeService.java
index 5dfb356..877e1d4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterSafemodeService.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterSafemodeService.java
@@ -42,6 +42,23 @@ public class RouterSafemodeService extends PeriodicService {
/** Router to manage safe mode. */
private final Router router;
+ /**
+ * If we are in safe mode, fail requests as if this were a standby NN.
+ * The Router can enter safe mode in two different ways:
+ * 1. upon startup: the Router enters this mode after the service starts,
+ * and exits after a certain time threshold;
+ * 2. via admin command: the Router enters this mode via the admin command:
+ * dfsrouteradmin -safemode enter
+ * and exits after the admin command:
+ * dfsrouteradmin -safemode leave
+ */
+
+ /** Whether the Router is in safe mode. */
+ private volatile boolean safeMode;
+
+ /** Whether the Router safe mode is set manually (i.e., via the Router admin). */
+ private volatile boolean isSafeModeSetManually;
+
/** Interval in ms to wait post startup before allowing RPC requests. */
private long startupInterval;
/** Interval in ms after which the State Store cache is too stale. */
@@ -64,13 +81,28 @@ public class RouterSafemodeService extends PeriodicService {
}
/**
+ * Return whether the current Router is in safe mode.
+ */
+ boolean isInSafeMode() {
+ return this.safeMode;
+ }
+
+ /**
+ * Set the flag to indicate that the safe mode for this Router is set manually
+ * via the Router admin command.
+ */
+ void setManualSafeMode(boolean mode) {
+ this.safeMode = mode;
+ this.isSafeModeSetManually = mode;
+ }
+
+ /**
* Enter safe mode.
*/
private void enter() {
LOG.info("Entering safe mode");
enterSafeModeTime = now();
- RouterRpcServer rpcServer = router.getRpcServer();
- rpcServer.setSafeMode(true);
+ safeMode = true;
router.updateRouterState(RouterServiceState.SAFEMODE);
}
@@ -87,8 +119,7 @@ public class RouterSafemodeService extends PeriodicService {
} else {
routerMetrics.setSafeModeTime(timeInSafemode);
}
- RouterRpcServer rpcServer = router.getRpcServer();
- rpcServer.setSafeMode(false);
+ safeMode = false;
router.updateRouterState(RouterServiceState.RUNNING);
}
@@ -131,17 +162,16 @@ public class RouterSafemodeService extends PeriodicService {
this.startupInterval - delta);
return;
}
- RouterRpcServer rpcServer = router.getRpcServer();
StateStoreService stateStore = router.getStateStore();
long cacheUpdateTime = stateStore.getCacheUpdateTime();
boolean isCacheStale = (now - cacheUpdateTime) > this.staleInterval;
// Always update to indicate our cache was updated
if (isCacheStale) {
- if (!rpcServer.isInSafeMode()) {
+ if (!safeMode) {
enter();
}
- } else if (rpcServer.isInSafeMode()) {
+ } else if (safeMode && !isSafeModeSetManually) {
// Cache recently updated, leave safe mode
leave();
}
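
Distilling the RouterSafemodeService changes above into a self-contained toy
model (not Hadoop code; it only makes the manual-override semantics explicit):

    /** Toy model of the safe mode decisions in RouterSafemodeService. */
    class SafemodeModel {
      volatile boolean safeMode = true;     // a Router starts in safe mode
      volatile boolean setManually = false;

      /** Admin path: dfsrouteradmin -safemode enter/leave. */
      void adminSet(boolean mode) {
        safeMode = mode;
        setManually = mode;
      }

      /** Periodic check driven by State Store cache freshness. */
      void periodicCheck(boolean cacheStale) {
        if (cacheStale) {
          if (!safeMode) {
            safeMode = true;                // enter()
          }
        } else if (safeMode && !setManually) {
          safeMode = false;                 // leave(); skipped if set manually
        }
      }
    }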
http://git-wip-us.apache.org/repos/asf/hadoop/blob/359ea4e1/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
index 7e04e61..5207f00 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdminCLI.java
@@ -82,6 +82,7 @@ public class TestRouterAdminCLI {
.stateStore()
.admin()
.rpc()
+ .safemode()
.build();
cluster.addRouterOverrides(conf);
@@ -501,13 +502,13 @@ public class TestRouterAdminCLI {
public void testManageSafeMode() throws Exception {
// ensure the Router become RUNNING state
waitState(RouterServiceState.RUNNING);
- assertFalse(routerContext.getRouter().getRpcServer().isInSafeMode());
+ assertFalse(routerContext.getRouter().getSafemodeService().isInSafeMode());
assertEquals(0, ToolRunner.run(admin,
new String[] {"-safemode", "enter"}));
// verify state
assertEquals(RouterServiceState.SAFEMODE,
routerContext.getRouter().getRouterState());
- assertTrue(routerContext.getRouter().getRpcServer().isInSafeMode());
+ assertTrue(routerContext.getRouter().getSafemodeService().isInSafeMode());
System.setOut(new PrintStream(out));
assertEquals(0, ToolRunner.run(admin,
@@ -519,7 +520,7 @@ public class TestRouterAdminCLI {
// verify state
assertEquals(RouterServiceState.RUNNING,
routerContext.getRouter().getRouterState());
- assertFalse(routerContext.getRouter().getRpcServer().isInSafeMode());
+ assertFalse(routerContext.getRouter().getSafemodeService().isInSafeMode());
out.reset();
assertEquals(0, ToolRunner.run(admin,
http://git-wip-us.apache.org/repos/asf/hadoop/blob/359ea4e1/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterSafemode.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterSafemode.java b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterSafemode.java
index f16ceb5..9c1aeb2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterSafemode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterSafemode.java
@@ -28,14 +28,17 @@ import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
import java.io.IOException;
+import java.net.InetSocketAddress;
import java.net.URISyntaxException;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
+import org.apache.hadoop.hdfs.tools.federation.RouterAdmin;
import org.apache.hadoop.ipc.StandbyException;
import org.apache.hadoop.service.Service.STATE;
import org.apache.hadoop.util.Time;
+import org.apache.hadoop.util.ToolRunner;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
@@ -60,12 +63,12 @@ public class TestRouterSafemode {
// 2 sec startup standby
conf.setTimeDuration(DFS_ROUTER_SAFEMODE_EXTENSION,
TimeUnit.SECONDS.toMillis(2), TimeUnit.MILLISECONDS);
- // 1 sec cache refresh
+ // 200 ms cache refresh
conf.setTimeDuration(DFS_ROUTER_CACHE_TIME_TO_LIVE_MS,
- TimeUnit.SECONDS.toMillis(1), TimeUnit.MILLISECONDS);
- // 2 sec post cache update before entering safemode (2 intervals)
+ 200, TimeUnit.MILLISECONDS);
+ // 1 sec post cache update before entering safemode (2 intervals)
conf.setTimeDuration(DFS_ROUTER_SAFEMODE_EXPIRATION,
- TimeUnit.SECONDS.toMillis(2), TimeUnit.MILLISECONDS);
+ TimeUnit.SECONDS.toMillis(1), TimeUnit.MILLISECONDS);
conf.set(RBFConfigKeys.DFS_ROUTER_RPC_BIND_HOST_KEY, "0.0.0.0");
conf.set(RBFConfigKeys.DFS_ROUTER_RPC_ADDRESS_KEY, "127.0.0.1:0");
@@ -77,6 +80,7 @@ public class TestRouterSafemode {
// RPC + State Store + Safe Mode only
conf = new RouterConfigBuilder(conf)
.rpc()
+ .admin()
.safemode()
.stateStore()
.metrics()
@@ -118,7 +122,7 @@ public class TestRouterSafemode {
public void testRouterExitSafemode()
throws InterruptedException, IllegalStateException, IOException {
- assertTrue(router.getRpcServer().isInSafeMode());
+ assertTrue(router.getSafemodeService().isInSafeMode());
verifyRouter(RouterServiceState.SAFEMODE);
// Wait for initial time in milliseconds
@@ -129,7 +133,7 @@ public class TestRouterSafemode {
TimeUnit.SECONDS.toMillis(1), TimeUnit.MILLISECONDS);
Thread.sleep(interval);
- assertFalse(router.getRpcServer().isInSafeMode());
+ assertFalse(router.getSafemodeService().isInSafeMode());
verifyRouter(RouterServiceState.RUNNING);
}
@@ -138,7 +142,7 @@ public class TestRouterSafemode {
throws IllegalStateException, IOException, InterruptedException {
// Verify starting state
- assertTrue(router.getRpcServer().isInSafeMode());
+ assertTrue(router.getSafemodeService().isInSafeMode());
verifyRouter(RouterServiceState.SAFEMODE);
// We should be in safe mode for DFS_ROUTER_SAFEMODE_EXTENSION time
@@ -157,7 +161,7 @@ public class TestRouterSafemode {
Thread.sleep(interval1);
// Running
- assertFalse(router.getRpcServer().isInSafeMode());
+ assertFalse(router.getSafemodeService().isInSafeMode());
verifyRouter(RouterServiceState.RUNNING);
// Disable cache
@@ -167,12 +171,12 @@ public class TestRouterSafemode {
long interval2 =
conf.getTimeDuration(DFS_ROUTER_SAFEMODE_EXPIRATION,
TimeUnit.SECONDS.toMillis(2), TimeUnit.MILLISECONDS) +
- conf.getTimeDuration(DFS_ROUTER_CACHE_TIME_TO_LIVE_MS,
+ 2 * conf.getTimeDuration(DFS_ROUTER_CACHE_TIME_TO_LIVE_MS,
TimeUnit.SECONDS.toMillis(1), TimeUnit.MILLISECONDS);
Thread.sleep(interval2);
// Safemode
- assertTrue(router.getRpcServer().isInSafeMode());
+ assertTrue(router.getSafemodeService().isInSafeMode());
verifyRouter(RouterServiceState.SAFEMODE);
}
@@ -180,7 +184,7 @@ public class TestRouterSafemode {
public void testRouterRpcSafeMode()
throws IllegalStateException, IOException {
- assertTrue(router.getRpcServer().isInSafeMode());
+ assertTrue(router.getSafemodeService().isInSafeMode());
verifyRouter(RouterServiceState.SAFEMODE);
// If the Router is in Safe Mode, we should get a SafeModeException
@@ -194,6 +198,38 @@ public class TestRouterSafemode {
assertTrue("We should have thrown a safe mode exception", exception);
}
+ @Test
+ public void testRouterManualSafeMode() throws Exception {
+ InetSocketAddress adminAddr = router.getAdminServerAddress();
+ conf.setSocketAddr(RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_KEY, adminAddr);
+ RouterAdmin admin = new RouterAdmin(conf);
+
+ assertTrue(router.getSafemodeService().isInSafeMode());
+ verifyRouter(RouterServiceState.SAFEMODE);
+
+ // Wait until the Router exits startup safe mode
+ long interval = conf.getTimeDuration(DFS_ROUTER_SAFEMODE_EXTENSION,
+ TimeUnit.SECONDS.toMillis(2), TimeUnit.MILLISECONDS) + 300;
+ Thread.sleep(interval);
+ verifyRouter(RouterServiceState.RUNNING);
+
+ // Now enter safe mode via Router admin command - it should work
+ assertEquals(0, ToolRunner.run(admin, new String[] {"-safemode", "enter"}));
+ verifyRouter(RouterServiceState.SAFEMODE);
+
+ // Wait for the update interval of the safe mode service; it should still
+ // be in safe mode.
+ interval = 2 * conf.getTimeDuration(
+ DFS_ROUTER_CACHE_TIME_TO_LIVE_MS, TimeUnit.SECONDS.toMillis(1),
+ TimeUnit.MILLISECONDS);
+ Thread.sleep(interval);
+ verifyRouter(RouterServiceState.SAFEMODE);
+
+ // Exit safe mode via admin command
+ assertEquals(0, ToolRunner.run(admin, new String[] {"-safemode", "leave"}));
+ verifyRouter(RouterServiceState.RUNNING);
+ }
+
private void verifyRouter(RouterServiceState status)
throws IllegalStateException, IOException {
assertEquals(status, router.getRouterState());
---------------------------------------------------------------------
[34/50] hadoop git commit: YARN-8299. Added CLI and REST API for
query container status. Contributed by Chandni Singh
Posted by zh...@apache.org.
YARN-8299. Added CLI and REST API for query container status.
Contributed by Chandni Singh
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/121865c3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/121865c3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/121865c3
Branch: refs/heads/HDFS-13572
Commit: 121865c3f96166e2190ed54b433ebcf8d053b91c
Parents: efb4e27
Author: Eric Yang <ey...@apache.org>
Authored: Mon Jul 16 17:41:23 2018 -0400
Committer: Eric Yang <ey...@apache.org>
Committed: Mon Jul 16 17:41:23 2018 -0400
----------------------------------------------------------------------
.../yarn/service/client/ApiServiceClient.java | 74 ++++++---
.../hadoop/yarn/service/webapp/ApiServer.java | 67 ++++++--
.../hadoop/yarn/service/ClientAMProtocol.java | 5 +
.../hadoop/yarn/service/ClientAMService.java | 14 ++
.../yarn/service/client/ServiceClient.java | 47 ++++++
.../component/instance/ComponentInstance.java | 41 ++++-
.../yarn/service/conf/RestApiConstants.java | 5 +-
.../pb/client/ClientAMProtocolPBClientImpl.java | 13 ++
.../service/ClientAMProtocolPBServiceImpl.java | 13 ++
.../hadoop/yarn/service/utils/FilterUtils.java | 81 ++++++++++
.../yarn/service/utils/ServiceApiUtil.java | 9 ++
.../src/main/proto/ClientAMProtocol.proto | 12 ++
.../yarn/service/MockRunningServiceContext.java | 154 +++++++++++++++++++
.../yarn/service/client/TestServiceCLI.java | 25 ++-
.../yarn/service/client/TestServiceClient.java | 54 ++++++-
.../yarn/service/component/TestComponent.java | 133 +---------------
.../instance/TestComponentInstance.java | 46 +++---
.../yarn/service/utils/TestFilterUtils.java | 102 ++++++++++++
.../hadoop/yarn/client/cli/ApplicationCLI.java | 68 +++++++-
.../hadoop/yarn/client/cli/TestYarnCLI.java | 6 +-
.../hadoop/yarn/client/api/AppAdminClient.java | 6 +
21 files changed, 773 insertions(+), 202 deletions(-)
----------------------------------------------------------------------
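
The REST side of this patch exposes component instances as a GET on the
service resource, filtered by component name, service version, and container
state. A minimal sketch of the query URI, built the same way the new
ApiServiceClient#getInstancePath builds it (host, port, service name, and
filter values are placeholders; the query parameter names come from the
RestApiConstants changes below):

    import javax.ws.rs.core.UriBuilder;

    String base =
        "http://rmhost:8088/app/v1/services/my-service/component-instances";
    String uri = UriBuilder.fromUri(base)
        .queryParam("componentName", "worker")  // repeatable
        .queryParam("version", "1.0.0")
        .queryParam("containerState", "READY")  // repeatable
        .build().toString();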
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
index 9232fc8..f5162e9 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/client/ApiServiceClient.java
@@ -25,8 +25,10 @@ import java.util.List;
import java.util.Map;
import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.UriBuilder;
import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
import org.apache.commons.lang3.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
@@ -48,10 +50,8 @@ import org.apache.hadoop.yarn.service.api.records.Service;
import org.apache.hadoop.yarn.service.api.records.ServiceState;
import org.apache.hadoop.yarn.service.api.records.ServiceStatus;
import org.apache.hadoop.yarn.service.conf.RestApiConstants;
-import org.apache.hadoop.yarn.service.utils.JsonSerDeser;
import org.apache.hadoop.yarn.service.utils.ServiceApiUtil;
import org.apache.hadoop.yarn.util.RMHAUtils;
-import org.codehaus.jackson.map.PropertyNamingStrategy;
import org.eclipse.jetty.util.UrlEncoded;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -147,11 +147,7 @@ public class ApiServiceClient extends AppAdminClient {
api.append("/");
api.append(appName);
}
- Configuration conf = getConfig();
- if (conf.get("hadoop.http.authentication.type").equalsIgnoreCase("simple")) {
- api.append("?user.name=" + UrlEncoded
- .encodeString(System.getProperty("user.name")));
- }
+ appendUserNameIfRequired(api);
return api.toString();
}
@@ -162,15 +158,27 @@ public class ApiServiceClient extends AppAdminClient {
api.append(url);
api.append("/app/v1/services/").append(appName).append("/")
.append(RestApiConstants.COMP_INSTANCES);
- Configuration conf = getConfig();
- if (conf.get("hadoop.http.authentication.type").equalsIgnoreCase(
- "simple")) {
- api.append("?user.name=" + UrlEncoded
- .encodeString(System.getProperty("user.name")));
- }
+ appendUserNameIfRequired(api);
return api.toString();
}
+ private String getInstancePath(String appName, List<String> components,
+ String version, List<String> containerStates) throws IOException {
+ UriBuilder builder = UriBuilder.fromUri(getInstancesPath(appName));
+ if (components != null && !components.isEmpty()) {
+ components.forEach(compName ->
+ builder.queryParam(RestApiConstants.PARAM_COMP_NAME, compName));
+ }
+ if (!Strings.isNullOrEmpty(version)){
+ builder.queryParam(RestApiConstants.PARAM_VERSION, version);
+ }
+ if (containerStates != null && !containerStates.isEmpty()){
+ containerStates.forEach(state ->
+ builder.queryParam(RestApiConstants.PARAM_CONTAINER_STATE, state));
+ }
+ return builder.build().toString();
+ }
+
private String getComponentsPath(String appName) throws IOException {
Preconditions.checkNotNull(appName);
String url = getRMWebAddress();
@@ -178,13 +186,17 @@ public class ApiServiceClient extends AppAdminClient {
api.append(url);
api.append("/app/v1/services/").append(appName).append("/")
.append(RestApiConstants.COMPONENTS);
+ appendUserNameIfRequired(api);
+ return api.toString();
+ }
+
+ private void appendUserNameIfRequired(StringBuilder builder) {
Configuration conf = getConfig();
if (conf.get("hadoop.http.authentication.type").equalsIgnoreCase(
"simple")) {
- api.append("?user.name=" + UrlEncoded
+ builder.append("?user.name=").append(UrlEncoded
.encodeString(System.getProperty("user.name")));
}
- return api.toString();
}
private Builder getApiClient() throws IOException {
@@ -553,7 +565,7 @@ public class ApiServiceClient extends AppAdminClient {
container.setState(ContainerState.UPGRADING);
toUpgrade[idx++] = container;
}
- String buffer = CONTAINER_JSON_SERDE.toJson(toUpgrade);
+ String buffer = ServiceApiUtil.CONTAINER_JSON_SERDE.toJson(toUpgrade);
ClientResponse response = getApiClient(getInstancesPath(appName))
.put(ClientResponse.class, buffer);
result = processResponse(response);
@@ -577,7 +589,7 @@ public class ApiServiceClient extends AppAdminClient {
component.setState(ComponentState.UPGRADING);
toUpgrade[idx++] = component;
}
- String buffer = COMP_JSON_SERDE.toJson(toUpgrade);
+ String buffer = ServiceApiUtil.COMP_JSON_SERDE.toJson(toUpgrade);
ClientResponse response = getApiClient(getComponentsPath(appName))
.put(ClientResponse.class, buffer);
result = processResponse(response);
@@ -599,11 +611,25 @@ public class ApiServiceClient extends AppAdminClient {
return result;
}
- private static final JsonSerDeser<Container[]> CONTAINER_JSON_SERDE =
- new JsonSerDeser<>(Container[].class,
- PropertyNamingStrategy.CAMEL_CASE_TO_LOWER_CASE_WITH_UNDERSCORES);
-
- private static final JsonSerDeser<Component[]> COMP_JSON_SERDE =
- new JsonSerDeser<>(Component[].class,
- PropertyNamingStrategy.CAMEL_CASE_TO_LOWER_CASE_WITH_UNDERSCORES);
+ @Override
+ public String getInstances(String appName, List<String> components,
+ String version, List<String> containerStates) throws IOException,
+ YarnException {
+ try {
+ String uri = getInstancePath(appName, components, version,
+ containerStates);
+ ClientResponse response = getApiClient(uri).get(ClientResponse.class);
+ if (response.getStatus() != 200) {
+ StringBuilder sb = new StringBuilder();
+ sb.append("Failed: HTTP error code: ");
+ sb.append(response.getStatus());
+ sb.append(" ErrorMsg: ").append(response.getEntity(String.class));
+ return sb.toString();
+ }
+ return response.getEntity(String.class);
+ } catch (Exception e) {
+ LOG.error("Fail to get containers {}", e);
+ }
+ return null;
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java
index 82fadae..4db0ac8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api/src/main/java/org/apache/hadoop/yarn/service/webapp/ApiServer.java
@@ -44,14 +44,7 @@ import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import javax.servlet.http.HttpServletRequest;
-import javax.ws.rs.Consumes;
-import javax.ws.rs.DELETE;
-import javax.ws.rs.GET;
-import javax.ws.rs.POST;
-import javax.ws.rs.PUT;
-import javax.ws.rs.Path;
-import javax.ws.rs.PathParam;
-import javax.ws.rs.Produces;
+import javax.ws.rs.*;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
@@ -61,13 +54,7 @@ import java.io.FileNotFoundException;
import java.io.IOException;
import java.lang.reflect.UndeclaredThrowableException;
import java.security.PrivilegedExceptionAction;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
+import java.util.*;
import java.util.stream.Collectors;
import static org.apache.hadoop.yarn.service.api.records.ServiceState.ACCEPTED;
@@ -582,6 +569,40 @@ public class ApiServer {
return Response.status(Status.NO_CONTENT).build();
}
+ @GET
+ @Path(COMP_INSTANCES_PATH)
+ @Produces({RestApiConstants.MEDIA_TYPE_JSON_UTF8})
+ public Response getComponentInstances(@Context HttpServletRequest request,
+ @PathParam(SERVICE_NAME) String serviceName,
+ @QueryParam(PARAM_COMP_NAME) List<String> componentNames,
+ @QueryParam(PARAM_VERSION) String version,
+ @QueryParam(PARAM_CONTAINER_STATE) List<String> containerStates) {
+ try {
+ UserGroupInformation ugi = getProxyUser(request);
+ LOG.info("GET: component instances for service = {}, compNames in {}, " +
+ "version = {}, containerStates in {}, user = {}", serviceName,
+ Objects.toString(componentNames, "[]"), Objects.toString(version, ""),
+ Objects.toString(containerStates, "[]"), ugi);
+
+ List<ContainerState> containerStatesDe = containerStates.stream().map(
+ ContainerState::valueOf).collect(Collectors.toList());
+
+ return Response.ok(getContainers(ugi, serviceName, componentNames,
+ version, containerStatesDe)).build();
+ } catch (IllegalArgumentException iae) {
+ return formatResponse(Status.BAD_REQUEST, "valid container states are: " +
+ Arrays.toString(ContainerState.values()));
+ } catch (AccessControlException e) {
+ return formatResponse(Response.Status.FORBIDDEN, e.getMessage());
+ } catch (IOException | InterruptedException e) {
+ return formatResponse(Response.Status.INTERNAL_SERVER_ERROR,
+ e.getMessage());
+ } catch (UndeclaredThrowableException e) {
+ return formatResponse(Response.Status.INTERNAL_SERVER_ERROR,
+ e.getCause().getMessage());
+ }
+ }
+
private Response flexService(Service service, UserGroupInformation ugi)
throws IOException, InterruptedException {
String appName = service.getName();
@@ -752,6 +773,22 @@ public class ApiServer {
});
}
+ private Container[] getContainers(UserGroupInformation ugi,
+ String serviceName, List<String> componentNames, String version,
+ List<ContainerState> containerStates) throws IOException,
+ InterruptedException {
+ return ugi.doAs((PrivilegedExceptionAction<Container[]>) () -> {
+ Container[] result;
+ ServiceClient sc = getServiceClient();
+ sc.init(YARN_CONFIG);
+ sc.start();
+ result = sc.getContainers(serviceName, componentNames, version,
+ containerStates);
+ sc.close();
+ return result;
+ });
+ }
+
/**
* Used by negative test case.
*
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMProtocol.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMProtocol.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMProtocol.java
index 45ff98a..652a314 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMProtocol.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMProtocol.java
@@ -23,6 +23,8 @@ import org.apache.hadoop.yarn.proto.ClientAMProtocol.CompInstancesUpgradeRespons
import org.apache.hadoop.yarn.proto.ClientAMProtocol.CompInstancesUpgradeRequestProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.FlexComponentsRequestProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.FlexComponentsResponseProto;
+import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetCompInstancesRequestProto;
+import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetCompInstancesResponseProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetStatusResponseProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetStatusRequestProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.RestartServiceRequestProto;
@@ -55,4 +57,7 @@ public interface ClientAMProtocol {
CompInstancesUpgradeResponseProto upgrade(
CompInstancesUpgradeRequestProto request) throws IOException,
YarnException;
+
+ GetCompInstancesResponseProto getCompInstances(
+ GetCompInstancesRequestProto request) throws IOException, YarnException;
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
index e97c3d6..5bf1833 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/ClientAMService.java
@@ -35,6 +35,8 @@ import org.apache.hadoop.yarn.proto.ClientAMProtocol.CompInstancesUpgradeRespons
import org.apache.hadoop.yarn.proto.ClientAMProtocol.ComponentCountProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.FlexComponentsRequestProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.FlexComponentsResponseProto;
+import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetCompInstancesRequestProto;
+import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetCompInstancesResponseProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetStatusRequestProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetStatusResponseProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.RestartServiceRequestProto;
@@ -43,15 +45,18 @@ import org.apache.hadoop.yarn.proto.ClientAMProtocol.StopRequestProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.StopResponseProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.UpgradeServiceRequestProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.UpgradeServiceResponseProto;
+import org.apache.hadoop.yarn.service.api.records.Container;
import org.apache.hadoop.yarn.service.component.ComponentEvent;
import org.apache.hadoop.yarn.service.component.instance.ComponentInstanceEvent;
import org.apache.hadoop.yarn.service.component.instance.ComponentInstanceEventType;
+import org.apache.hadoop.yarn.service.utils.FilterUtils;
import org.apache.hadoop.yarn.service.utils.ServiceApiUtil;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.net.InetSocketAddress;
+import java.util.List;
import static org.apache.hadoop.yarn.service.component.ComponentEventType.FLEX;
@@ -194,4 +199,13 @@ public class ClientAMService extends AbstractService
}
return CompInstancesUpgradeResponseProto.newBuilder().build();
}
+
+ @Override
+ public GetCompInstancesResponseProto getCompInstances(
+ GetCompInstancesRequestProto request) throws IOException {
+ List<Container> containers = FilterUtils.filterInstances(context, request);
+ return GetCompInstancesResponseProto.newBuilder().setCompInstances(
+ ServiceApiUtil.CONTAINER_JSON_SERDE.toJson(containers.toArray(
+ new Container[containers.size()]))).build();
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
index 699a4e5..4b67998 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java
@@ -57,6 +57,8 @@ import org.apache.hadoop.yarn.ipc.YarnRPC;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.CompInstancesUpgradeRequestProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.ComponentCountProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.FlexComponentsRequestProto;
+import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetCompInstancesRequestProto;
+import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetCompInstancesResponseProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetStatusRequestProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetStatusResponseProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.RestartServiceRequestProto;
@@ -66,6 +68,7 @@ import org.apache.hadoop.yarn.proto.ClientAMProtocol.UpgradeServiceResponseProto
import org.apache.hadoop.yarn.service.ClientAMProtocol;
import org.apache.hadoop.yarn.service.ServiceMaster;
import org.apache.hadoop.yarn.service.api.records.Container;
+import org.apache.hadoop.yarn.service.api.records.ContainerState;
import org.apache.hadoop.yarn.service.api.records.Component;
import org.apache.hadoop.yarn.service.api.records.Service;
import org.apache.hadoop.yarn.service.api.records.ServiceState;
@@ -100,6 +103,7 @@ import java.nio.ByteBuffer;
import java.text.MessageFormat;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
+import java.util.stream.Collectors;
import static org.apache.hadoop.yarn.api.records.YarnApplicationState.*;
import static org.apache.hadoop.yarn.service.conf.YarnServiceConf.*;
@@ -318,6 +322,49 @@ public class ServiceClient extends AppAdminClient implements SliderExitCodes,
}
}
+ @Override
+ public String getInstances(String appName,
+ List<String> components, String version, List<String> containerStates)
+ throws IOException, YarnException {
+ GetCompInstancesResponseProto result = filterContainers(appName, components,
+ version, containerStates);
+ return result.getCompInstances();
+ }
+
+ public Container[] getContainers(String appName, List<String> components,
+ String version, List<ContainerState> containerStates)
+ throws IOException, YarnException {
+ GetCompInstancesResponseProto result = filterContainers(appName, components,
+ version, containerStates != null ? containerStates.stream()
+ .map(Enum::toString).collect(Collectors.toList()) : null);
+
+ return ServiceApiUtil.CONTAINER_JSON_SERDE.fromJson(
+ result.getCompInstances());
+ }
+
+ private GetCompInstancesResponseProto filterContainers(String appName,
+ List<String> components, String version,
+ List<String> containerStates) throws IOException, YarnException {
+ ApplicationReport appReport = yarnClient.getApplicationReport(getAppId(
+ appName));
+ if (StringUtils.isEmpty(appReport.getHost())) {
+ throw new YarnException(appName + " AM hostname is empty.");
+ }
+ ClientAMProtocol proxy = createAMProxy(appName, appReport);
+ GetCompInstancesRequestProto.Builder req = GetCompInstancesRequestProto
+ .newBuilder();
+ if (components != null && !components.isEmpty()) {
+ req.addAllComponentNames(components);
+ }
+ if (version != null) {
+ req.setVersion(version);
+ }
+ if (containerStates != null && !containerStates.isEmpty()){
+ req.addAllContainerStates(containerStates);
+ }
+ return proxy.getCompInstances(req.build());
+ }
+
public int actionUpgrade(Service service, List<Container> compInstances)
throws IOException, YarnException {
ApplicationReport appReport =
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/instance/ComponentInstance.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/instance/ComponentInstance.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/instance/ComponentInstance.java
index 529596d..64f35d3 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/instance/ComponentInstance.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/component/instance/ComponentInstance.java
@@ -97,6 +97,7 @@ public class ComponentInstance implements EventHandler<ComponentInstanceEvent>,
private long containerStartedTime = 0;
// This container object is used for rest API query
private org.apache.hadoop.yarn.service.api.records.Container containerSpec;
+ private String serviceVersion;
private static final StateMachineFactory<ComponentInstance,
@@ -194,6 +195,8 @@ public class ComponentInstance implements EventHandler<ComponentInstanceEvent>,
compInstance.getCompSpec().addContainer(container);
compInstance.containerStartedTime = containerStartTime;
compInstance.component.incRunningContainers();
+ compInstance.serviceVersion = compInstance.scheduler.getApp()
+ .getVersion();
if (compInstance.timelineServiceEnabled) {
compInstance.serviceTimelinePublisher
@@ -210,6 +213,8 @@ public class ComponentInstance implements EventHandler<ComponentInstanceEvent>,
if (compInstance.getState().equals(ComponentInstanceState.UPGRADING)) {
compInstance.component.incContainersReady(false);
compInstance.component.decContainersThatNeedUpgrade();
+ compInstance.serviceVersion = compInstance.component.getUpgradeEvent()
+ .getUpgradeVersion();
ComponentEvent checkState = new ComponentEvent(
compInstance.component.getName(), ComponentEventType.CHECK_STABLE);
compInstance.scheduler.getDispatcher().getEventHandler().handle(
@@ -382,6 +387,30 @@ public class ComponentInstance implements EventHandler<ComponentInstanceEvent>,
}
}
+ /**
+ * Returns the service version that this instance is currently at.
+ */
+ public String getServiceVersion() {
+ this.readLock.lock();
+ try {
+ return this.serviceVersion;
+ } finally {
+ this.readLock.unlock();
+ }
+ }
+
+ /**
+ * Returns the state of the container in the container spec.
+ */
+ public ContainerState getContainerState() {
+ this.readLock.lock();
+ try {
+ return this.containerSpec.getState();
+ } finally {
+ this.readLock.unlock();
+ }
+ }
+
@Override
public void handle(ComponentInstanceEvent event) {
try {
@@ -667,8 +696,16 @@ public class ComponentInstance implements EventHandler<ComponentInstanceEvent>,
return result;
}
- @VisibleForTesting public org.apache.hadoop.yarn.service.api.records
+ /**
+ * Returns the container spec.
+ */
+ public org.apache.hadoop.yarn.service.api.records
.Container getContainerSpec() {
- return containerSpec;
+ readLock.lock();
+ try {
+ return containerSpec;
+ } finally {
+ readLock.unlock();
+ }
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/conf/RestApiConstants.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/conf/RestApiConstants.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/conf/RestApiConstants.java
index 2d7db32..45ad7e4 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/conf/RestApiConstants.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/conf/RestApiConstants.java
@@ -37,11 +37,14 @@ public interface RestApiConstants {
String COMPONENTS = "components";
String COMPONENTS_PATH = SERVICE_PATH + "/" + COMPONENTS;
- // Query param
String SERVICE_NAME = "service_name";
String COMPONENT_NAME = "component_name";
String COMP_INSTANCE_NAME = "component_instance_name";
+ String PARAM_COMP_NAME = "componentName";
+ String PARAM_VERSION = "version";
+ String PARAM_CONTAINER_STATE = "containerState";
+
String MEDIA_TYPE_JSON_UTF8 = MediaType.APPLICATION_JSON + ";charset=utf-8";
Long DEFAULT_UNLIMITED_LIFETIME = -1l;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/impl/pb/client/ClientAMProtocolPBClientImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/impl/pb/client/ClientAMProtocolPBClientImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/impl/pb/client/ClientAMProtocolPBClientImpl.java
index e82181e..49ecd2e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/impl/pb/client/ClientAMProtocolPBClientImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/impl/pb/client/ClientAMProtocolPBClientImpl.java
@@ -34,6 +34,8 @@ import org.apache.hadoop.yarn.proto.ClientAMProtocol.CompInstancesUpgradeRespons
import org.apache.hadoop.yarn.proto.ClientAMProtocol.CompInstancesUpgradeRequestProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.FlexComponentsRequestProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.FlexComponentsResponseProto;
+import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetCompInstancesRequestProto;
+import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetCompInstancesResponseProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetStatusRequestProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetStatusResponseProto;
import org.apache.hadoop.yarn.service.impl.pb.service.ClientAMProtocolPB;
@@ -128,4 +130,15 @@ public class ClientAMProtocolPBClientImpl
}
return null;
}
+
+ @Override
+ public GetCompInstancesResponseProto getCompInstances(
+ GetCompInstancesRequestProto request) throws IOException, YarnException {
+ try {
+ return proxy.getCompInstances(null, request);
+ } catch (ServiceException e) {
+ RPCUtil.unwrapAndThrowException(e);
+ }
+ return null;
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/impl/pb/service/ClientAMProtocolPBServiceImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/impl/pb/service/ClientAMProtocolPBServiceImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/impl/pb/service/ClientAMProtocolPBServiceImpl.java
index 50a678b..eab3f9f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/impl/pb/service/ClientAMProtocolPBServiceImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/impl/pb/service/ClientAMProtocolPBServiceImpl.java
@@ -25,6 +25,8 @@ import org.apache.hadoop.yarn.proto.ClientAMProtocol.CompInstancesUpgradeRequest
import org.apache.hadoop.yarn.proto.ClientAMProtocol.CompInstancesUpgradeResponseProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.FlexComponentsRequestProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.FlexComponentsResponseProto;
+import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetCompInstancesRequestProto;
+import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetCompInstancesResponseProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetStatusRequestProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetStatusResponseProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.RestartServiceRequestProto;
@@ -103,4 +105,15 @@ public class ClientAMProtocolPBServiceImpl implements ClientAMProtocolPB {
throw new ServiceException(e);
}
}
+
+ @Override
+ public GetCompInstancesResponseProto getCompInstances(
+ RpcController controller, GetCompInstancesRequestProto request)
+ throws ServiceException {
+ try {
+ return real.getCompInstances(request);
+ } catch (IOException | YarnException e) {
+ throw new ServiceException(e);
+ }
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/FilterUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/FilterUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/FilterUtils.java
new file mode 100644
index 0000000..10f9fea
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/FilterUtils.java
@@ -0,0 +1,81 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.service.utils;
+
+import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.proto.ClientAMProtocol;
+import org.apache.hadoop.yarn.service.ServiceContext;
+import org.apache.hadoop.yarn.service.api.records.Container;
+import org.apache.hadoop.yarn.service.component.instance.ComponentInstance;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class FilterUtils {
+
+ /**
+ * Returns containers filtered by requested fields.
+ *
+ * @param context service context
+ * @param filterReq filter request
+ */
+ public static List<Container> filterInstances(ServiceContext context,
+ ClientAMProtocol.GetCompInstancesRequestProto filterReq) {
+ List<Container> results = new ArrayList<>();
+ Map<ContainerId, ComponentInstance> instances =
+ context.scheduler.getLiveInstances();
+
+ instances.forEach(((containerId, instance) -> {
+ boolean include = true;
+ if (filterReq.getComponentNamesList() != null &&
+ !filterReq.getComponentNamesList().isEmpty()) {
+ // filter by component name
+ if (!filterReq.getComponentNamesList().contains(
+ instance.getComponent().getName())) {
+ include = false;
+ }
+ }
+
+ if (filterReq.getVersion() != null && !filterReq.getVersion().isEmpty()) {
+ // filter by version
+ String instanceServiceVersion = instance.getServiceVersion();
+ if (instanceServiceVersion == null || !instanceServiceVersion.equals(
+ filterReq.getVersion())) {
+ include = false;
+ }
+ }
+
+ if (filterReq.getContainerStatesList() != null &&
+ !filterReq.getContainerStatesList().isEmpty()) {
+ // filter by state
+ if (!filterReq.getContainerStatesList().contains(
+ instance.getContainerState().toString())) {
+ include = false;
+ }
+ }
+
+ if (include) {
+ results.add(instance.getContainerSpec());
+ }
+ }));
+
+ return results;
+ }
+}
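
A minimal usage sketch for the new filter, pairing it with the
GetCompInstancesRequestProto defined later in this patch (the builder methods
are the protobuf-generated ones for the repeated/optional fields; the filter
values and the ServiceContext instance named context are assumed):

    ClientAMProtocol.GetCompInstancesRequestProto req =
        ClientAMProtocol.GetCompInstancesRequestProto.newBuilder()
            .addComponentNames("worker")    // repeated: match listed components
            .setVersion("1.0.0")            // optional: match service version
            .addContainerStates("READY")    // repeated: match container states
            .build();
    List<Container> matched = FilterUtils.filterInstances(context, req);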
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/ServiceApiUtil.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/ServiceApiUtil.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/ServiceApiUtil.java
index 705e040..447250f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/ServiceApiUtil.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/ServiceApiUtil.java
@@ -72,6 +72,15 @@ public class ServiceApiUtil {
public static JsonSerDeser<Service> jsonSerDeser =
new JsonSerDeser<>(Service.class,
PropertyNamingStrategy.CAMEL_CASE_TO_LOWER_CASE_WITH_UNDERSCORES);
+
+ public static final JsonSerDeser<Container[]> CONTAINER_JSON_SERDE =
+ new JsonSerDeser<>(Container[].class,
+ PropertyNamingStrategy.CAMEL_CASE_TO_LOWER_CASE_WITH_UNDERSCORES);
+
+ public static final JsonSerDeser<Component[]> COMP_JSON_SERDE =
+ new JsonSerDeser<>(Component[].class,
+ PropertyNamingStrategy.CAMEL_CASE_TO_LOWER_CASE_WITH_UNDERSCORES);
+
private static final PatternValidator namePattern
= new PatternValidator("[a-z][a-z0-9-]*");
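
The two new serdes let the AM exchange Container[] and Component[] arrays as JSON strings over the client-AM protocol. A rough round-trip sketch, assuming "containers" was produced by FilterUtils.filterInstances and modulo the checked IOExceptions JsonSerDeser declares:

  String json = ServiceApiUtil.CONTAINER_JSON_SERDE
      .toJson(containers.toArray(new Container[containers.size()]));
  Container[] roundTripped = ServiceApiUtil.CONTAINER_JSON_SERDE.fromJson(json);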
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/proto/ClientAMProtocol.proto
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/proto/ClientAMProtocol.proto b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/proto/ClientAMProtocol.proto
index 91721b0..6166ded 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/proto/ClientAMProtocol.proto
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/proto/ClientAMProtocol.proto
@@ -32,6 +32,8 @@ service ClientAMProtocolService {
returns (RestartServiceResponseProto);
rpc upgrade(CompInstancesUpgradeRequestProto) returns
(CompInstancesUpgradeResponseProto);
+ rpc getCompInstances(GetCompInstancesRequestProto) returns
+ (GetCompInstancesResponseProto);
}
message FlexComponentsRequestProto {
@@ -81,4 +83,14 @@ message CompInstancesUpgradeRequestProto {
}
message CompInstancesUpgradeResponseProto {
+}
+
+message GetCompInstancesRequestProto {
+ repeated string componentNames = 1;
+ optional string version = 2;
+ repeated string containerStates = 3;
+}
+
+message GetCompInstancesResponseProto {
+ optional string compInstances = 1;
}
\ No newline at end of file
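
Note that GetCompInstancesResponseProto carries the matching instances as a single JSON string (the CONTAINER_JSON_SERDE output above) in its compInstances field, rather than as repeated protobuf messages, so the wire format stays independent of the Container record's fields.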
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/MockRunningServiceContext.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/MockRunningServiceContext.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/MockRunningServiceContext.java
new file mode 100644
index 0000000..89888c5
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/MockRunningServiceContext.java
@@ -0,0 +1,154 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.service;
+
+import org.apache.hadoop.registry.client.api.RegistryOperations;
+import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.Container;
+import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.api.records.ContainerStatus;
+import org.apache.hadoop.yarn.api.records.NodeId;
+import org.apache.hadoop.yarn.client.api.NMClient;
+import org.apache.hadoop.yarn.client.api.async.NMClientAsync;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.service.api.records.Service;
+import org.apache.hadoop.yarn.service.component.Component;
+import org.apache.hadoop.yarn.service.component.ComponentEvent;
+import org.apache.hadoop.yarn.service.component.ComponentEventType;
+import org.apache.hadoop.yarn.service.component.instance.ComponentInstance;
+import org.apache.hadoop.yarn.service.component.instance.ComponentInstanceEvent;
+import org.apache.hadoop.yarn.service.component.instance.ComponentInstanceEventType;
+import org.apache.hadoop.yarn.service.containerlaunch.ContainerLaunchService;
+import org.apache.hadoop.yarn.service.registry.YarnRegistryViewForProviders;
+import org.mockito.stubbing.Answer;
+
+import java.io.IOException;
+import java.util.Map;
+
+import static org.mockito.Matchers.anyObject;
+import static org.mockito.Mockito.doNothing;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+/**
+ * Mocked service context for a running service.
+ */
+public class MockRunningServiceContext extends ServiceContext {
+
+ public MockRunningServiceContext(ServiceTestUtils.ServiceFSWatcher fsWatcher,
+ Service serviceDef) throws Exception {
+ super();
+ this.service = serviceDef;
+ this.fs = fsWatcher.getFs();
+
+ ContainerLaunchService mockLaunchService = mock(
+ ContainerLaunchService.class);
+
+ this.scheduler = new ServiceScheduler(this) {
+ @Override
+ protected YarnRegistryViewForProviders
+ createYarnRegistryOperations(
+ ServiceContext context, RegistryOperations registryClient) {
+ return mock(YarnRegistryViewForProviders.class);
+ }
+
+ @Override
+ public NMClientAsync createNMClient() {
+ NMClientAsync nmClientAsync = super.createNMClient();
+ NMClient nmClient = mock(NMClient.class);
+ try {
+ when(nmClient.getContainerStatus(anyObject(), anyObject()))
+ .thenAnswer(
+ (Answer<ContainerStatus>) invocation -> ContainerStatus
+ .newInstance((ContainerId) invocation.getArguments()[0],
+ org.apache.hadoop.yarn.api.records.ContainerState
+ .RUNNING,
+ "", 0));
+ } catch (YarnException | IOException e) {
+ throw new RuntimeException(e);
+ }
+ nmClientAsync.setClient(nmClient);
+ return nmClientAsync;
+ }
+
+ @Override
+ public ContainerLaunchService getContainerLaunchService() {
+ return mockLaunchService;
+ }
+ };
+ this.scheduler.init(fsWatcher.getConf());
+
+ ServiceTestUtils.createServiceManager(this);
+
+ doNothing().when(mockLaunchService).
+ reInitCompInstance(anyObject(), anyObject(), anyObject(), anyObject());
+ stabilizeComponents(this);
+ }
+
+ private void stabilizeComponents(ServiceContext context) {
+
+ ApplicationId appId = ApplicationId.fromString(context.service.getId());
+ ApplicationAttemptId attemptId = ApplicationAttemptId.newInstance(appId, 1);
+ context.attemptId = attemptId;
+ Map<String, Component>
+ componentState = context.scheduler.getAllComponents();
+
+ int counter = 0;
+ for (org.apache.hadoop.yarn.service.api.records.Component componentSpec :
+ context.service.getComponents()) {
+ Component component = new org.apache.hadoop.yarn.service.component.
+ Component(componentSpec, 1L, context);
+ componentState.put(component.getName(), component);
+ component.handle(new ComponentEvent(component.getName(),
+ ComponentEventType.FLEX));
+
+ for (int i = 0; i < componentSpec.getNumberOfContainers(); i++) {
+ counter++;
+ assignNewContainer(attemptId, counter, component);
+ }
+
+ component.handle(new ComponentEvent(component.getName(),
+ ComponentEventType.CHECK_STABLE));
+ }
+ }
+
+ public void assignNewContainer(ApplicationAttemptId attemptId,
+ long containerNum, Component component) {
+
+ Container container = org.apache.hadoop.yarn.api.records.Container
+ .newInstance(ContainerId.newContainerId(attemptId, containerNum),
+ NODE_ID, "localhost", null, null,
+ null);
+ component.handle(new ComponentEvent(component.getName(),
+ ComponentEventType.CONTAINER_ALLOCATED)
+ .setContainer(container).setContainerId(container.getId()));
+ ComponentInstance instance = this.scheduler.getLiveInstances().get(
+ container.getId());
+ ComponentInstanceEvent startEvent = new ComponentInstanceEvent(
+ container.getId(), ComponentInstanceEventType.START);
+ instance.handle(startEvent);
+
+ ComponentInstanceEvent readyEvent = new ComponentInstanceEvent(
+ container.getId(), ComponentInstanceEventType.BECOME_READY);
+ instance.handle(readyEvent);
+ }
+
+ private static final NodeId NODE_ID = NodeId.fromString("localhost:0");
+}
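
The extracted mock context is what the new tests build on. A short usage sketch, as TestFilterUtils does below, assuming the ServiceFSWatcher JUnit rule from ServiceTestUtils:

  @Rule
  public ServiceTestUtils.ServiceFSWatcher rule =
      new ServiceTestUtils.ServiceFSWatcher();

  // Yields a context whose scheduler already holds live, READY instances
  // for every component of the base service definition.
  ServiceContext context = new MockRunningServiceContext(rule,
      TestServiceManager.createBaseDef("service"));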
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceCLI.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceCLI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceCLI.java
index 363fe91..0e047c2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceCLI.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceCLI.java
@@ -166,7 +166,7 @@ public class TestServiceCLI {
checkApp(serviceName, "master", 1L, 1000L, "qname");
}
- @Test (timeout = 180000)
+ @Test
public void testInitiateServiceUpgrade() throws Exception {
String[] args = {"app", "-upgrade", "app-1",
"-initiate", ExampleAppJson.resourceName(ExampleAppJson.APP_JSON),
@@ -185,7 +185,7 @@ public class TestServiceCLI {
Assert.assertEquals(result, 0);
}
- @Test (timeout = 180000)
+ @Test
public void testUpgradeInstances() throws Exception {
conf.set(YARN_APP_ADMIN_CLIENT_PREFIX + DUMMY_APP_TYPE,
DummyServiceClient.class.getName());
@@ -197,7 +197,7 @@ public class TestServiceCLI {
Assert.assertEquals(result, 0);
}
- @Test (timeout = 180000)
+ @Test
public void testUpgradeComponents() throws Exception {
conf.set(YARN_APP_ADMIN_CLIENT_PREFIX + DUMMY_APP_TYPE,
DummyServiceClient.class.getName());
@@ -209,6 +209,18 @@ public class TestServiceCLI {
Assert.assertEquals(result, 0);
}
+ @Test
+ public void testGetInstances() throws Exception {
+ conf.set(YARN_APP_ADMIN_CLIENT_PREFIX + DUMMY_APP_TYPE,
+ DummyServiceClient.class.getName());
+ cli.setConf(conf);
+ String[] args = {"container", "-list", "app-1",
+ "-components", "comp1,comp2",
+ "-appTypes", DUMMY_APP_TYPE};
+ int result = cli.run(ApplicationCLI.preProcessArgs(args));
+ Assert.assertEquals(result, 0);
+ }
+
@Test (timeout = 180000)
public void testEnableFastLaunch() throws Exception {
fs.getFileSystem().create(new Path(basedir.getAbsolutePath(), "test.jar"))
@@ -313,5 +325,12 @@ public class TestServiceCLI {
throws IOException, YarnException {
return 0;
}
+
+ @Override
+ public String getInstances(String appName, List<String> components,
+ String version, List<String> containerStates)
+ throws IOException, YarnException {
+ return "";
+ }
}
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceClient.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceClient.java
index d3664ea..700655c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceClient.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/client/TestServiceClient.java
@@ -18,6 +18,7 @@
package org.apache.hadoop.yarn.service.client;
+import com.google.common.collect.Lists;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest;
import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
@@ -32,8 +33,12 @@ import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.service.ClientAMProtocol;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.CompInstancesUpgradeRequestProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.CompInstancesUpgradeResponseProto;
+import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetCompInstancesRequestProto;
+import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetCompInstancesResponseProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.UpgradeServiceRequestProto;
import org.apache.hadoop.yarn.proto.ClientAMProtocol.UpgradeServiceResponseProto;
+import org.apache.hadoop.yarn.service.MockRunningServiceContext;
+import org.apache.hadoop.yarn.service.ServiceContext;
import org.apache.hadoop.yarn.service.ServiceTestUtils;
import org.apache.hadoop.yarn.service.api.records.Component;
import org.apache.hadoop.yarn.service.api.records.Container;
@@ -41,6 +46,7 @@ import org.apache.hadoop.yarn.service.api.records.Service;
import org.apache.hadoop.yarn.service.api.records.ServiceState;
import org.apache.hadoop.yarn.service.conf.YarnServiceConf;
import org.apache.hadoop.yarn.service.exceptions.ErrorStrings;
+import org.apache.hadoop.yarn.service.utils.FilterUtils;
import org.apache.hadoop.yarn.service.utils.ServiceApiUtil;
import org.junit.Assert;
import org.junit.Rule;
@@ -52,6 +58,7 @@ import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.ArrayList;
+import java.util.List;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
@@ -122,6 +129,26 @@ public class TestServiceClient {
client.stop();
}
+ @Test
+ public void testGetCompInstances() throws Exception {
+ Service service = createService();
+ MockServiceClient client = MockServiceClient.create(rule, service, true);
+
+ //upgrade the service
+ service.setVersion("v2");
+ client.initiateUpgrade(service);
+
+ //add containers to the component that needs to be upgraded.
+ Component comp = service.getComponents().iterator().next();
+ ContainerId containerId = ContainerId.newContainerId(client.attemptId, 1L);
+ comp.addContainer(new Container().id(containerId.toString()));
+
+ Container[] containers = client.getContainers(service.getName(),
+ Lists.newArrayList("compa"), "v1", null);
+ Assert.assertEquals("num containers", 2, containers.length);
+ client.stop();
+ }
+
private Service createService() throws IOException,
YarnException {
Service service = ServiceTestUtils.createExampleApplication();
@@ -137,6 +164,7 @@ public class TestServiceClient {
private final ClientAMProtocol amProxy;
private Object proxyResponse;
private Service service;
+ private ServiceContext context;
private MockServiceClient() {
amProxy = mock(ClientAMProtocol.class);
@@ -147,8 +175,12 @@ public class TestServiceClient {
static MockServiceClient create(ServiceTestUtils.ServiceFSWatcher rule,
Service service, boolean enableUpgrade)
- throws IOException, YarnException {
+ throws Exception {
MockServiceClient client = new MockServiceClient();
+ ApplicationId applicationId = ApplicationId.newInstance(
+ System.currentTimeMillis(), 1);
+ service.setId(applicationId.toString());
+ client.context = new MockRunningServiceContext(rule, service);
YarnClient yarnClient = createMockYarnClient();
ApplicationReport appReport = mock(ApplicationReport.class);
@@ -175,10 +207,28 @@ public class TestServiceClient {
CompInstancesUpgradeRequestProto.class))).thenAnswer(
(Answer<CompInstancesUpgradeResponseProto>) invocation -> {
CompInstancesUpgradeResponseProto response =
- CompInstancesUpgradeResponseProto.newBuilder().build();
+ CompInstancesUpgradeResponseProto.newBuilder().build();
client.proxyResponse = response;
return response;
});
+
+ when(client.amProxy.getCompInstances(Matchers.any(
+ GetCompInstancesRequestProto.class))).thenAnswer(
+ (Answer<GetCompInstancesResponseProto>) invocation -> {
+
+ GetCompInstancesRequestProto req = (GetCompInstancesRequestProto)
+ invocation.getArguments()[0];
+ List<Container> containers = FilterUtils.filterInstances(
+ client.context, req);
+ GetCompInstancesResponseProto response =
+ GetCompInstancesResponseProto.newBuilder().setCompInstances(
+ ServiceApiUtil.CONTAINER_JSON_SERDE.toJson(
+ containers.toArray(new Container[containers.size()])))
+ .build();
+ client.proxyResponse = response;
+ return response;
+ });
+
client.setFileSystem(rule.getFs());
client.setYarnClient(yarnClient);
client.service = service;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/component/TestComponent.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/component/TestComponent.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/component/TestComponent.java
index d7c15ec..d5fb941 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/component/TestComponent.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/component/TestComponent.java
@@ -18,19 +18,10 @@
package org.apache.hadoop.yarn.service.component;
-import org.apache.hadoop.registry.client.api.RegistryOperations;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerExitStatus;
-import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerStatus;
-import org.apache.hadoop.yarn.api.records.NodeId;
-import org.apache.hadoop.yarn.client.api.NMClient;
-import org.apache.hadoop.yarn.client.api.async.NMClientAsync;
-import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.hadoop.yarn.service.ServiceContext;
-import org.apache.hadoop.yarn.service.ServiceScheduler;
import org.apache.hadoop.yarn.service.ServiceTestUtils;
import org.apache.hadoop.yarn.service.TestServiceManager;
import org.apache.hadoop.yarn.service.api.records.ComponentState;
@@ -38,23 +29,15 @@ import org.apache.hadoop.yarn.service.api.records.Service;
import org.apache.hadoop.yarn.service.component.instance.ComponentInstance;
import org.apache.hadoop.yarn.service.component.instance.ComponentInstanceEvent;
import org.apache.hadoop.yarn.service.component.instance.ComponentInstanceEventType;
-
-import org.apache.hadoop.yarn.service.containerlaunch.ContainerLaunchService;
-import org.apache.hadoop.yarn.service.registry.YarnRegistryViewForProviders;
+import org.apache.hadoop.yarn.service.MockRunningServiceContext;
import org.apache.log4j.Logger;
import org.junit.Assert;
import org.junit.Rule;
import org.junit.Test;
-import org.mockito.stubbing.Answer;
-import java.io.IOException;
import java.util.Iterator;
-import java.util.Map;
import static org.apache.hadoop.yarn.service.component.instance.ComponentInstanceEventType.STOP;
-
-import static org.mockito.Matchers.anyObject;
-import static org.mockito.Mockito.doNothing;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
@@ -63,7 +46,6 @@ import static org.mockito.Mockito.when;
*/
public class TestComponent {
- private static final int WAIT_MS_PER_LOOP = 1000;
static final Logger LOG = Logger.getLogger(TestComponent.class);
@Rule
@@ -115,7 +97,7 @@ public class TestComponent {
@Test
public void testContainerCompletedWhenUpgrading() throws Exception {
String serviceName = "testContainerComplete";
- ServiceContext context = createTestContext(rule, serviceName);
+ MockRunningServiceContext context = createTestContext(rule, serviceName);
Component comp = context.scheduler.getAllComponents().entrySet().iterator()
.next().getValue();
@@ -148,7 +130,7 @@ public class TestComponent {
ComponentState.FLEXING, comp.getComponentSpec().getState());
// new container get allocated
- assignNewContainer(context.attemptId, 10, context, comp);
+ context.assignNewContainer(context.attemptId, 10, comp);
// second instance finished upgrading
ComponentInstance instance2 = instanceIter.next();
@@ -174,7 +156,7 @@ public class TestComponent {
serviceName);
TestServiceManager.createDef(serviceName, testService);
- ServiceContext context = createTestContext(rule, testService);
+ ServiceContext context = new MockRunningServiceContext(rule, testService);
for (Component comp : context.scheduler.getAllComponents().values()) {
@@ -225,114 +207,11 @@ public class TestComponent {
return spec;
}
- public static ServiceContext createTestContext(
+ public static MockRunningServiceContext createTestContext(
ServiceTestUtils.ServiceFSWatcher fsWatcher, String serviceName)
throws Exception {
- return createTestContext(fsWatcher,
+ return new MockRunningServiceContext(fsWatcher,
TestServiceManager.createBaseDef(serviceName));
}
-
- public static ServiceContext createTestContext(
- ServiceTestUtils.ServiceFSWatcher fsWatcher, Service serviceDef)
- throws Exception {
- ServiceContext context = new ServiceContext();
- context.service = serviceDef;
- context.fs = fsWatcher.getFs();
-
- ContainerLaunchService mockLaunchService = mock(
- ContainerLaunchService.class);
-
- context.scheduler = new ServiceScheduler(context) {
- @Override protected YarnRegistryViewForProviders
- createYarnRegistryOperations(
- ServiceContext context, RegistryOperations registryClient) {
- return mock(YarnRegistryViewForProviders.class);
- }
-
- @Override public NMClientAsync createNMClient() {
- NMClientAsync nmClientAsync = super.createNMClient();
- NMClient nmClient = mock(NMClient.class);
- try {
- when(nmClient.getContainerStatus(anyObject(), anyObject()))
- .thenAnswer(
- (Answer<ContainerStatus>) invocation -> ContainerStatus
- .newInstance((ContainerId) invocation.getArguments()[0],
- org.apache.hadoop.yarn.api.records.ContainerState
- .RUNNING,
- "", 0));
- } catch (YarnException | IOException e) {
- throw new RuntimeException(e);
- }
- nmClientAsync.setClient(nmClient);
- return nmClientAsync;
- }
-
- @Override public ContainerLaunchService getContainerLaunchService() {
- return mockLaunchService;
- }
- };
- context.scheduler.init(fsWatcher.getConf());
-
- ServiceTestUtils.createServiceManager(context);
-
- doNothing().when(mockLaunchService).
- reInitCompInstance(anyObject(), anyObject(), anyObject(), anyObject());
- stabilizeComponents(context);
-
- return context;
- }
-
- private static void stabilizeComponents(ServiceContext context) {
-
- ApplicationId appId = ApplicationId.fromString(context.service.getId());
- ApplicationAttemptId attemptId = ApplicationAttemptId.newInstance(appId, 1);
- context.attemptId = attemptId;
- Map<String, Component>
- componentState = context.scheduler.getAllComponents();
-
- int counter = 0;
- for (org.apache.hadoop.yarn.service.api.records.Component componentSpec :
- context.service.getComponents()) {
- Component component = new org.apache.hadoop.yarn.service.component.
- Component(componentSpec, 1L, context);
- componentState.put(component.getName(), component);
- component.handle(new ComponentEvent(component.getName(),
- ComponentEventType.FLEX));
-
- for (int i = 0; i < componentSpec.getNumberOfContainers(); i++) {
- counter++;
- assignNewContainer(attemptId, counter, context, component);
- }
-
- component.handle(new ComponentEvent(component.getName(),
- ComponentEventType.CHECK_STABLE));
- }
- }
-
- private static void assignNewContainer(
- ApplicationAttemptId attemptId, long containerNum,
- ServiceContext context, Component component) {
-
-
- Container container = org.apache.hadoop.yarn.api.records.Container
- .newInstance(ContainerId.newContainerId(attemptId, containerNum),
- NODE_ID, "localhost", null, null,
- null);
- component.handle(new ComponentEvent(component.getName(),
- ComponentEventType.CONTAINER_ALLOCATED)
- .setContainer(container).setContainerId(container.getId()));
- ComponentInstance instance = context.scheduler.getLiveInstances().get(
- container.getId());
- ComponentInstanceEvent startEvent = new ComponentInstanceEvent(
- container.getId(), ComponentInstanceEventType.START);
- instance.handle(startEvent);
-
- ComponentInstanceEvent readyEvent = new ComponentInstanceEvent(
- container.getId(), ComponentInstanceEventType.BECOME_READY);
- instance.handle(readyEvent);
- }
-
- private static final NodeId NODE_ID = NodeId.fromString("localhost:0");
-
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/component/instance/TestComponentInstance.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/component/instance/TestComponentInstance.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/component/instance/TestComponentInstance.java
index 26e8c93..0e7816c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/component/instance/TestComponentInstance.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/component/instance/TestComponentInstance.java
@@ -6,9 +6,9 @@
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
- * <p>
- * http://www.apache.org/licenses/LICENSE-2.0
- * <p>
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@@ -60,19 +60,20 @@ import static org.mockito.Mockito.when;
*/
public class TestComponentInstance {
- @Rule public ServiceTestUtils.ServiceFSWatcher rule =
+ @Rule
+ public ServiceTestUtils.ServiceFSWatcher rule =
new ServiceTestUtils.ServiceFSWatcher();
- @Test public void testContainerUpgrade() throws Exception {
+ @Test
+ public void testContainerUpgrade() throws Exception {
ServiceContext context = TestComponent.createTestContext(rule,
"testContainerUpgrade");
- Component component =
- context.scheduler.getAllComponents().entrySet().iterator().next()
- .getValue();
+ Component component = context.scheduler.getAllComponents().entrySet()
+ .iterator().next().getValue();
upgradeComponent(component);
- ComponentInstance instance =
- component.getAllComponentInstances().iterator().next();
+ ComponentInstance instance = component.getAllComponentInstances().iterator()
+ .next();
ComponentInstanceEvent instanceEvent = new ComponentInstanceEvent(
instance.getContainer().getId(), ComponentInstanceEventType.UPGRADE);
instance.handle(instanceEvent);
@@ -82,16 +83,16 @@ public class TestComponentInstance {
containerSpec.getState());
}
- @Test public void testContainerReadyAfterUpgrade() throws Exception {
+ @Test
+ public void testContainerReadyAfterUpgrade() throws Exception {
ServiceContext context = TestComponent.createTestContext(rule,
"testContainerStarted");
- Component component =
- context.scheduler.getAllComponents().entrySet().iterator().next()
- .getValue();
+ Component component = context.scheduler.getAllComponents().entrySet()
+ .iterator().next().getValue();
upgradeComponent(component);
- ComponentInstance instance =
- component.getAllComponentInstances().iterator().next();
+ ComponentInstance instance = component.getAllComponentInstances().iterator()
+ .next();
ComponentInstanceEvent instanceEvent = new ComponentInstanceEvent(
instance.getContainer().getId(), ComponentInstanceEventType.UPGRADE);
@@ -100,9 +101,8 @@ public class TestComponentInstance {
instance.handle(new ComponentInstanceEvent(instance.getContainer().getId(),
ComponentInstanceEventType.BECOME_READY));
Assert.assertEquals("instance not ready", ContainerState.READY,
- instance.getCompSpec()
- .getContainer(instance.getContainer().getId().toString())
- .getState());
+ instance.getCompSpec().getContainer(
+ instance.getContainer().getId().toString()).getState());
}
private void upgradeComponent(Component component) {
@@ -113,9 +113,8 @@ public class TestComponentInstance {
private Component createComponent(ServiceScheduler scheduler,
org.apache.hadoop.yarn.service.api.records.Component.RestartPolicyEnum
- restartPolicy,
- int nSucceededInstances, int nFailedInstances, int totalAsk,
- int componentId) {
+ restartPolicy, int nSucceededInstances, int nFailedInstances,
+ int totalAsk, int componentId) {
assert (nSucceededInstances + nFailedInstances) <= totalAsk;
@@ -214,7 +213,8 @@ public class TestComponentInstance {
return componentInstance;
}
- @Test public void testComponentRestartPolicy() {
+ @Test
+ public void testComponentRestartPolicy() {
Map<String, Component> allComponents = new HashMap<>();
Service mockService = mock(Service.class);
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/utils/TestFilterUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/utils/TestFilterUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/utils/TestFilterUtils.java
new file mode 100644
index 0000000..065c37a
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/utils/TestFilterUtils.java
@@ -0,0 +1,102 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.service.utils;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.yarn.proto.ClientAMProtocol.GetCompInstancesRequestProto;
+import org.apache.hadoop.yarn.service.ServiceContext;
+import org.apache.hadoop.yarn.service.ServiceTestUtils;
+import org.apache.hadoop.yarn.service.TestServiceManager;
+import org.apache.hadoop.yarn.service.api.records.Container;
+import org.apache.hadoop.yarn.service.MockRunningServiceContext;
+import org.apache.hadoop.yarn.service.api.records.ContainerState;
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+
+import java.util.List;
+
+public class TestFilterUtils {
+
+ @Rule
+ public ServiceTestUtils.ServiceFSWatcher rule =
+ new ServiceTestUtils.ServiceFSWatcher();
+
+ @Test
+ public void testNoFilter() throws Exception {
+ GetCompInstancesRequestProto req = GetCompInstancesRequestProto.newBuilder()
+ .build();
+ List<Container> containers = FilterUtils.filterInstances(
+ new MockRunningServiceContext(rule,
+ TestServiceManager.createBaseDef("service")), req);
+ Assert.assertEquals("num containers", 4, containers.size());
+ }
+
+ @Test
+ public void testFilterWithComp() throws Exception {
+ GetCompInstancesRequestProto req = GetCompInstancesRequestProto.newBuilder()
+ .addAllComponentNames(Lists.newArrayList("compa")).build();
+ List<Container> containers = FilterUtils.filterInstances(
+ new MockRunningServiceContext(rule,
+ TestServiceManager.createBaseDef("service")), req);
+ Assert.assertEquals("num containers", 2, containers.size());
+ }
+
+ @Test
+ public void testFilterWithVersion() throws Exception {
+ ServiceContext sc = new MockRunningServiceContext(rule,
+ TestServiceManager.createBaseDef("service"));
+ GetCompInstancesRequestProto.Builder reqBuilder =
+ GetCompInstancesRequestProto.newBuilder();
+
+ reqBuilder.setVersion("v2");
+ Assert.assertEquals("num containers", 0,
+ FilterUtils.filterInstances(sc, reqBuilder.build()).size());
+
+ reqBuilder.addAllComponentNames(Lists.newArrayList("compa"))
+ .setVersion("v1").build();
+
+ Assert.assertEquals("num containers", 2,
+ FilterUtils.filterInstances(sc, reqBuilder.build()).size());
+
+ reqBuilder.setVersion("v2").build();
+ Assert.assertEquals("num containers", 0,
+ FilterUtils.filterInstances(sc, reqBuilder.build()).size());
+ }
+
+ @Test
+ public void testFilterWithState() throws Exception {
+ ServiceContext sc = new MockRunningServiceContext(rule,
+ TestServiceManager.createBaseDef("service"));
+ GetCompInstancesRequestProto.Builder reqBuilder =
+ GetCompInstancesRequestProto.newBuilder();
+
+ reqBuilder.addAllContainerStates(Lists.newArrayList(
+ ContainerState.READY.toString()));
+ Assert.assertEquals("num containers", 4,
+ FilterUtils.filterInstances(sc, reqBuilder.build()).size());
+
+ reqBuilder.clearContainerStates();
+ reqBuilder.addAllContainerStates(Lists.newArrayList(
+ ContainerState.STOPPED.toString()));
+ Assert.assertEquals("num containers", 0,
+ FilterUtils.filterInstances(sc, reqBuilder.build()).size());
+ }
+
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
index 1d26a96..14710a4 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
@@ -105,6 +105,8 @@ public class ApplicationCLI extends YarnCLI {
public static final String UPGRADE_FINALIZE = "finalize";
public static final String COMPONENT_INSTS = "instances";
public static final String COMPONENTS = "components";
+ public static final String VERSION = "version";
+ public static final String STATES = "states";
private static String firstArg = null;
@@ -294,10 +296,39 @@ public class ApplicationCLI extends YarnCLI {
opts.addOption(STATUS_CMD, true,
"Prints the status of the container.");
opts.addOption(LIST_CMD, true,
- "List containers for application attempt.");
+ "List containers for application attempt when application " +
+ "attempt ID is provided. When application name is provided, " +
+ "then it finds the instances of the application based on app's " +
+ "own implementation, and -appTypes option must be specified " +
+ "unless it is the default yarn-service type. With app name, it " +
+ "supports optional use of -version to filter instances based on " +
+ "app version, -components to filter instances based on component " +
+ "names, -states to filter instances based on instance state.");
opts.addOption(HELP_CMD, false, "Displays help for all commands.");
opts.getOption(STATUS_CMD).setArgName("Container ID");
- opts.getOption(LIST_CMD).setArgName("Application Attempt ID");
+ opts.getOption(LIST_CMD).setArgName("Application Name or Attempt ID");
+ opts.addOption(APP_TYPE_CMD, true, "Works with -list to " +
+ "specify the app type when application name is provided.");
+ opts.getOption(APP_TYPE_CMD).setValueSeparator(',');
+ opts.getOption(APP_TYPE_CMD).setArgs(Option.UNLIMITED_VALUES);
+ opts.getOption(APP_TYPE_CMD).setArgName("Types");
+
+ opts.addOption(VERSION, true, "Works with -list "
+ + "to filter instances based on input application version.");
+ opts.getOption(VERSION).setArgs(1);
+
+ opts.addOption(COMPONENTS, true, "Works with -list to " +
+ "filter instances based on input comma-separated list of " +
+ "component names.");
+ opts.getOption(COMPONENTS).setValueSeparator(',');
+ opts.getOption(COMPONENTS).setArgs(Option.UNLIMITED_VALUES);
+
+ opts.addOption(STATES, true, "Works with -list to " +
+ "filter instances based on input comma-separated list of " +
+ "instance states.");
+ opts.getOption(STATES).setValueSeparator(',');
+ opts.getOption(STATES).setArgs(Option.UNLIMITED_VALUES);
+
opts.addOption(SIGNAL_CMD, true,
"Signal the container. The available signal commands are " +
java.util.Arrays.asList(SignalContainerCommand.values()) +
@@ -426,11 +457,40 @@ public class ApplicationCLI extends YarnCLI {
}
listApplicationAttempts(cliParser.getOptionValue(LIST_CMD));
} else if (title.equalsIgnoreCase(CONTAINER)) {
- if (hasAnyOtherCLIOptions(cliParser, opts, LIST_CMD)) {
+ if (hasAnyOtherCLIOptions(cliParser, opts, LIST_CMD, APP_TYPE_CMD,
+ VERSION, COMPONENTS, STATES)) {
printUsage(title, opts);
return exitCode;
}
- listContainers(cliParser.getOptionValue(LIST_CMD));
+ String appAttemptIdOrName = cliParser.getOptionValue(LIST_CMD);
+ try {
+ // try parsing attempt id, if it succeeds, it means it's appId
+ ApplicationAttemptId.fromString(appAttemptIdOrName);
+ listContainers(appAttemptIdOrName);
+ } catch (IllegalArgumentException e) {
+ // not appAttemptId format, it could be appName. If app-type is not
+ // provided, assume it is yarn-service type.
+ AppAdminClient client = AppAdminClient
+ .createAppAdminClient(getSingleAppTypeFromCLI(cliParser),
+ getConf());
+ String version = cliParser.getOptionValue(VERSION);
+ String[] components = cliParser.getOptionValues(COMPONENTS);
+ String[] instanceStates = cliParser.getOptionValues(STATES);
+ try {
+ sysout.println(client.getInstances(appAttemptIdOrName,
+ components == null ? null : Arrays.asList(components),
+ version, instanceStates == null ? null :
+ Arrays.asList(instanceStates)));
+ return 0;
+ } catch (ApplicationNotFoundException exception) {
+ System.err.println("Application with name '" + appAttemptIdOrName
+ + "' doesn't exist in RM or Timeline Server.");
+ return -1;
+ } catch (Exception ex) {
+ System.err.println(ex.getMessage());
+ return -1;
+ }
+ }
}
} else if (cliParser.hasOption(KILL_CMD)) {
if (hasAnyOtherCLIOptions(cliParser, opts, KILL_CMD)) {
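
With these options in place, a by-name container listing looks like the following (the service name and filter values are illustrative):

  yarn container -list my-service -appTypes yarn-service \
      -version v1 -components compa,compb -states READY

An application attempt ID in place of the name falls through to the existing listContainers path.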
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
index 518cd1c..6b823b2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java
@@ -2280,13 +2280,17 @@ public class TestYarnCLI {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
PrintWriter pw = new PrintWriter(baos);
pw.println("usage: container");
+ pw.println(" -appTypes <Types> Works with -list to specify the app type when application name is provided.");
+ pw.println(" -components <arg> Works with -list to filter instances based on input comma-separated list of component names.");
pw.println(" -help Displays help for all commands.");
- pw.println(" -list <Application Attempt ID> List containers for application attempt.");
+ pw.println(" -list <Application Name or Attempt ID> List containers for application attempt when application attempt ID is provided. When application name is provided, then it finds the instances of the application based on app's own implementation, and -appTypes option must be specified unless it is the default yarn-service type. With app name, it supports optional use of -version to filter instances based on app version, -components to filter instances based on component names, -states to filter instances based on instance state.");
pw.println(" -signal <container ID [signal command]> Signal the container.");
pw.println("The available signal commands are ");
pw.println(java.util.Arrays.asList(SignalContainerCommand.values()));
pw.println(" Default command is OUTPUT_THREAD_DUMP.");
+ pw.println(" -states <arg> Works with -list to filter instances based on input comma-separated list of instance states.");
pw.println(" -status <Container ID> Prints the status of the container.");
+ pw.println(" -version <arg> Works with -list to filter instances based on input application version. ");
pw.close();
try {
return normalize(baos.toString("UTF-8"));
http://git-wip-us.apache.org/repos/asf/hadoop/blob/121865c3/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/AppAdminClient.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/AppAdminClient.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/AppAdminClient.java
index 3cd1a78..3fb4778 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/AppAdminClient.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/AppAdminClient.java
@@ -282,4 +282,10 @@ public abstract class AppAdminClient extends CompositeService {
public abstract int actionCleanUp(String appName, String userName) throws
IOException, YarnException;
+ @Public
+ @Unstable
+ public abstract String getInstances(String appName,
+ List<String> components, String version, List<String> containerStates)
+ throws IOException, YarnException;
+
}
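
A sketch of the call this abstract method enables, assuming a Configuration named conf is in scope and the default yarn-service client type; it returns the matching instances as a JSON string and declares IOException/YarnException:

  AppAdminClient client = AppAdminClient.createAppAdminClient(
      AppAdminClient.DEFAULT_TYPE, conf);
  // null containerStates means "do not filter by state".
  String json = client.getInstances("my-service",
      Arrays.asList("compa"), "v1", null);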
---------------------------------------------------------------------
[30/50] hadoop git commit: YARN-8524. Single parameter Resource /
LightWeightResource constructor looks confusing. (Szilard Nemeth via wangda)
Posted by zh...@apache.org.
YARN-8524. Single parameter Resource / LightWeightResource constructor looks confusing. (Szilard Nemeth via wangda)
Change-Id: I4ae97548b5b8d76a6bcebb2d3d70bf8e0be3c125
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/238ffff9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/238ffff9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/238ffff9
Branch: refs/heads/HDFS-13572
Commit: 238ffff99907290fb2cf791a1ad28ff7f78952f4
Parents: a2e49f4
Author: Wangda Tan <wa...@apache.org>
Authored: Mon Jul 16 10:58:00 2018 -0700
Committer: Wangda Tan <wa...@apache.org>
Committed: Mon Jul 16 10:58:00 2018 -0700
----------------------------------------------------------------------
.../hadoop/yarn/api/records/Resource.java | 11 ------
.../api/records/impl/LightWeightResource.java | 16 ---------
.../hadoop/yarn/util/resource/Resources.java | 23 ++++++++++++-
.../yarn/util/resource/TestResources.java | 36 ++++++++++++++++++++
.../scheduler/fair/ConfigurableResource.java | 8 ++++-
5 files changed, 65 insertions(+), 29 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/238ffff9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
index 173d4c9..3cac1d1 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
@@ -76,17 +76,6 @@ public abstract class Resource implements Comparable<Resource> {
@Private
public static final int VCORES_INDEX = 1;
- /**
- * Return a new {@link Resource} instance with all resource values
- * initialized to {@code value}.
- * @param value the value to use for all resources
- * @return a new {@link Resource} instance
- */
- @Private
- @Unstable
- public static Resource newInstance(long value) {
- return new LightWeightResource(value);
- }
@Public
@Stable
http://git-wip-us.apache.org/repos/asf/hadoop/blob/238ffff9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/LightWeightResource.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/LightWeightResource.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/LightWeightResource.java
index 77f77f3..02afe50 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/LightWeightResource.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/LightWeightResource.java
@@ -64,22 +64,6 @@ public class LightWeightResource extends Resource {
private ResourceInformation memoryResInfo;
private ResourceInformation vcoresResInfo;
- /**
- * Create a new {@link LightWeightResource} instance with all resource values
- * initialized to {@code value}.
- * @param value the value to use for all resources
- */
- public LightWeightResource(long value) {
- ResourceInformation[] types = ResourceUtils.getResourceTypesArray();
- initResourceInformations(value, value, types.length);
-
- for (int i = 2; i < types.length; i++) {
- resources[i] = new ResourceInformation();
- ResourceInformation.copy(types[i], resources[i]);
- resources[i].setValue(value);
- }
- }
-
public LightWeightResource(long memory, int vcores) {
int numberOfKnownResourceTypes = ResourceUtils
.getNumberOfKnownResourceTypes();
http://git-wip-us.apache.org/repos/asf/hadoop/blob/238ffff9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
index 7826f51..ace8b5d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
@@ -21,9 +21,11 @@ package org.apache.hadoop.yarn.util.resource;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceAudience.Private;
import org.apache.hadoop.classification.InterfaceStability.Unstable;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceInformation;
+import org.apache.hadoop.yarn.api.records.impl.LightWeightResource;
import org.apache.hadoop.yarn.exceptions.ResourceNotFoundException;
import org.apache.hadoop.yarn.util.UnitsConversionUtil;
@@ -39,10 +41,29 @@ public class Resources {
LogFactory.getLog(Resources.class);
/**
+ * Return a new {@link Resource} instance with all resource values
+ * initialized to {@code value}.
+ * @param value the value to use for all resources
+ * @return a new {@link Resource} instance
+ */
+ @Private
+ @Unstable
+ public static Resource createResourceWithSameValue(long value) {
+ LightWeightResource res = new LightWeightResource(value,
+ Long.valueOf(value).intValue());
+ int numberOfResources = ResourceUtils.getNumberOfKnownResourceTypes();
+ for (int i = 2; i < numberOfResources; i++) {
+ res.setResourceValue(i, value);
+ }
+
+ return res;
+ }
+
+ /**
* Helper class to create a resource with a fixed value for all resource
* types. For example, a NONE resource which returns 0 for any resource type.
*/
- @InterfaceAudience.Private
+ @Private
@Unstable
static class FixedValueResource extends Resource {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/238ffff9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResources.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResources.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResources.java
index a8404fb..07b24eb 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResources.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/resource/TestResources.java
@@ -263,4 +263,40 @@ public class TestResources {
multiplyAndAddTo(createResource(3, 1, 2), createResource(2, 2, 3),
1.5));
}
+
+ @Test
+ public void testCreateResourceWithSameLongValue() throws Exception {
+ unsetExtraResourceType();
+ setupExtraResourceType();
+
+ Resource res = Resources.createResourceWithSameValue(11L);
+ assertEquals(11L, res.getMemorySize());
+ assertEquals(11, res.getVirtualCores());
+ assertEquals(11L, res.getResourceInformation(EXTRA_RESOURCE_TYPE).getValue());
+ }
+
+ @Test
+ public void testCreateResourceWithSameIntValue() throws Exception {
+ unsetExtraResourceType();
+ setupExtraResourceType();
+
+ Resource res = Resources.createResourceWithSameValue(11);
+ assertEquals(11, res.getMemorySize());
+ assertEquals(11, res.getVirtualCores());
+ assertEquals(11, res.getResourceInformation(EXTRA_RESOURCE_TYPE).getValue());
+ }
+
+ @Test
+ public void testCreateSimpleResourceWithSameLongValue() {
+ Resource res = Resources.createResourceWithSameValue(11L);
+ assertEquals(11L, res.getMemorySize());
+ assertEquals(11, res.getVirtualCores());
+ }
+
+ @Test
+ public void testCreateSimpleResourceWithSameIntValue() {
+ Resource res = Resources.createResourceWithSameValue(11);
+ assertEquals(11, res.getMemorySize());
+ assertEquals(11, res.getVirtualCores());
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/238ffff9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/ConfigurableResource.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/ConfigurableResource.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/ConfigurableResource.java
index 0c3b0dd..f772c4d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/ConfigurableResource.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/ConfigurableResource.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceInformation;
import org.apache.hadoop.yarn.exceptions.ResourceNotFoundException;
import org.apache.hadoop.yarn.util.resource.ResourceUtils;
+import org.apache.hadoop.yarn.util.resource.Resources;
/**
* A {@code ConfigurableResource} object represents an entity that is used to
@@ -46,8 +47,13 @@ public class ConfigurableResource {
this.resource = null;
}
+ /**
+ * Creates a {@link ConfigurableResource} instance with all resource values
+ * initialized to {@code value}.
+ * @param value the value to use for all resources
+ */
ConfigurableResource(long value) {
- this(Resource.newInstance(value));
+ this(Resources.createResourceWithSameValue(value));
}
public ConfigurableResource(Resource resource) {
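
In caller terms the rename reads as follows (a sketch; the extra-resource-type setup matches TestResources above). Note that the vcores value is the long truncated to int via Long.valueOf(value).intValue():

  // Previously Resource.newInstance(11L), easy to misread as memory-only.
  Resource res = Resources.createResourceWithSameValue(11L);
  assert res.getMemorySize() == 11L;
  assert res.getVirtualCores() == 11;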
---------------------------------------------------------------------
[50/50] hadoop git commit: HDFS-13643. Implement basic async rpc
client
Posted by zh...@apache.org.
HDFS-13643. Implement basic async rpc client
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/48c41c1e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/48c41c1e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/48c41c1e
Branch: refs/heads/HDFS-13572
Commit: 48c41c1ea7037a3894a27e78264a1e6a3d7be251
Parents: 7b25fb9
Author: zhangduo <zh...@apache.org>
Authored: Mon Jun 4 21:54:45 2018 +0800
Committer: zhangduo <zh...@apache.org>
Committed: Fri Jul 20 18:31:19 2018 +0800
----------------------------------------------------------------------
.../hadoop-client-minicluster/pom.xml | 4 +
hadoop-hdfs-project/hadoop-hdfs-client/pom.xml | 29 ++-
.../hdfs/ipc/BufferCallBeforeInitHandler.java | 100 +++++++++++
.../java/org/apache/hadoop/hdfs/ipc/Call.java | 132 ++++++++++++++
.../apache/hadoop/hdfs/ipc/ConnectionId.java | 71 ++++++++
.../hadoop/hdfs/ipc/HdfsRpcController.java | 74 ++++++++
.../org/apache/hadoop/hdfs/ipc/IPCUtil.java | 34 ++++
.../org/apache/hadoop/hdfs/ipc/RpcClient.java | 128 ++++++++++++++
.../apache/hadoop/hdfs/ipc/RpcConnection.java | 153 ++++++++++++++++
.../hadoop/hdfs/ipc/RpcDuplexHandler.java | 175 +++++++++++++++++++
.../apache/hadoop/hdfs/ipc/TestAsyncIPC.java | 88 ++++++++++
.../hadoop/hdfs/ipc/TestRpcProtocolPB.java | 27 +++
.../org/apache/hadoop/hdfs/ipc/TestServer.java | 58 ++++++
.../src/test/proto/test_rpc.proto | 35 ++++
14 files changed, 1103 insertions(+), 5 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c41c1e/hadoop-client-modules/hadoop-client-minicluster/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-client-modules/hadoop-client-minicluster/pom.xml b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
index ea8d680..ca14c19 100644
--- a/hadoop-client-modules/hadoop-client-minicluster/pom.xml
+++ b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
@@ -115,6 +115,10 @@
<artifactId>netty</artifactId>
</exclusion>
<exclusion>
+ <groupId>io.netty</groupId>
+ <artifactId>netty-all</artifactId>
+ </exclusion>
+ <exclusion>
<groupId>javax.servlet</groupId>
<artifactId>javax.servlet-api</artifactId>
</exclusion>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c41c1e/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
index a5ed7a3..c7cdf13 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/pom.xml
@@ -39,6 +39,11 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
<artifactId>okhttp</artifactId>
</dependency>
<dependency>
+ <groupId>io.netty</groupId>
+ <artifactId>netty-all</artifactId>
+ <scope>compile</scope>
+ </dependency>
+ <dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<scope>provided</scope>
@@ -64,11 +69,6 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
<scope>test</scope>
</dependency>
<dependency>
- <groupId>io.netty</groupId>
- <artifactId>netty-all</artifactId>
- <scope>test</scope>
- </dependency>
- <dependency>
<groupId>org.mock-server</groupId>
<artifactId>mockserver-netty</artifactId>
<scope>test</scope>
@@ -163,6 +163,25 @@ http://maven.apache.org/xsd/maven-4.0.0.xsd">
</source>
</configuration>
</execution>
+ <execution>
+ <id>compile-test-protoc</id>
+ <goals>
+ <goal>test-protoc</goal>
+ </goals>
+ <configuration>
+ <protocVersion>${protobuf.version}</protocVersion>
+ <protocCommand>${protoc.path}</protocCommand>
+ <imports>
+ <param>${basedir}/src/test/proto</param>
+ </imports>
+ <source>
+ <directory>${basedir}/src/test/proto</directory>
+ <includes>
+ <include>test_rpc.proto</include>
+ </includes>
+ </source>
+ </configuration>
+ </execution>
</executions>
</plugin>
<plugin>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c41c1e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/BufferCallBeforeInitHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/BufferCallBeforeInitHandler.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/BufferCallBeforeInitHandler.java
new file mode 100644
index 0000000..89433e9
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/BufferCallBeforeInitHandler.java
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.ipc;
+
+import io.netty.channel.ChannelDuplexHandler;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.ChannelPromise;
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.hadoop.classification.InterfaceAudience;
+
+@InterfaceAudience.Private
+public class BufferCallBeforeInitHandler extends ChannelDuplexHandler {
+
+ private enum BufferCallAction {
+ FLUSH, FAIL
+ }
+
+ public static final class BufferCallEvent {
+
+ public final BufferCallAction action;
+
+ public final IOException error;
+
+ private BufferCallEvent(BufferCallBeforeInitHandler.BufferCallAction action,
+ IOException error) {
+ this.action = action;
+ this.error = error;
+ }
+
+ public static BufferCallBeforeInitHandler.BufferCallEvent success() {
+ return SUCCESS_EVENT;
+ }
+
+ public static BufferCallBeforeInitHandler.BufferCallEvent fail(
+ IOException error) {
+ return new BufferCallEvent(BufferCallAction.FAIL, error);
+ }
+ }
+
+ private static final BufferCallEvent SUCCESS_EVENT =
+ new BufferCallEvent(BufferCallAction.FLUSH, null);
+
+ private final Map<Integer, Call> id2Call = new HashMap<>();
+
+ @Override
+ public void write(ChannelHandlerContext ctx, Object msg,
+ ChannelPromise promise) {
+ if (msg instanceof Call) {
+ Call call = (Call) msg;
+ id2Call.put(call.getId(), call);
+ // The call is already being tracked, so mark the write operation as
+ // successful here. We will fail the call directly if we cannot
+ // write it out.
+ promise.trySuccess();
+ } else {
+ ctx.write(msg, promise);
+ }
+ }
+
+ @Override
+ public void userEventTriggered(ChannelHandlerContext ctx, Object evt)
+ throws Exception {
+ if (evt instanceof BufferCallEvent) {
+ BufferCallEvent bcEvt = (BufferCallBeforeInitHandler.BufferCallEvent) evt;
+ switch (bcEvt.action) {
+ case FLUSH:
+ for (Call call : id2Call.values()) {
+ ctx.write(call);
+ }
+ break;
+ case FAIL:
+ for (Call call : id2Call.values()) {
+ call.setException(bcEvt.error);
+ }
+ break;
+ }
+ ctx.flush();
+ ctx.pipeline().remove(this);
+ } else {
+ ctx.fireUserEventTriggered(evt);
+ }
+ }
+}
\ No newline at end of file
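A hedged sketch of this handler's life cycle; the pipeline wiring matches RpcConnection below, while the channel variable is illustrative:

    // While the connection handshake is in flight, writes of Call objects
    // are parked in id2Call instead of reaching the socket.
    ChannelPipeline p = channel.pipeline();
    p.addLast(new BufferCallBeforeInitHandler());
    // Once the handshake succeeds, flush the parked calls:
    p.fireUserEventTriggered(BufferCallEvent.success());
    // On failure, fail them instead:
    // p.fireUserEventTriggered(BufferCallEvent.fail(ioe));
    // Either way the handler removes itself from the pipeline.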
http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c41c1e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/Call.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/Call.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/Call.java
new file mode 100644
index 0000000..14a35af
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/Call.java
@@ -0,0 +1,132 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.ipc;
+
+import com.google.protobuf.Message;
+import com.google.protobuf.RpcCallback;
+import java.io.IOException;
+import org.apache.hadoop.classification.InterfaceAudience;
+
+@InterfaceAudience.Private
+class Call {
+ private final int id;
+
+ private final String protocolName;
+
+ private final long protocolVersion;
+
+ private final String methodName;
+
+ private final Message param;
+
+ private final Message responseDefaultType;
+
+ private volatile Message response;
+
+ private volatile IOException error;
+
+ private boolean done;
+
+ private final RpcCallback<Call> callback;
+
+ Call(int id, String protocolName, long protocolVersion, String methodName,
+ Message param, Message responseDefaultType, RpcCallback<Call> callback) {
+ this.id = id;
+ this.protocolName = protocolName;
+ this.protocolVersion = protocolVersion;
+ this.methodName = methodName;
+ this.param = param;
+ this.responseDefaultType = responseDefaultType;
+ this.callback = callback;
+ }
+
+ private void callComplete() {
+ callback.run(this);
+ }
+
+ /**
+ * Set the exception when there is an error. Notify the caller that the
+ * call is done.
+ *
+ * @param error exception thrown by the call; either local or remote
+ */
+ void setException(IOException error) {
+ synchronized (this) {
+ if (done) {
+ return;
+ }
+ this.done = true;
+ this.error = error;
+ }
+ callComplete();
+ }
+
+ /**
+ * Set the return value when there is no error. Notify the caller that the
+ * call is done.
+ *
+ * @param response return value of the call
+ */
+ void setResponse(Message response) {
+ synchronized (this) {
+ if (done) {
+ return;
+ }
+ this.done = true;
+ this.response = response;
+ }
+ callComplete();
+ }
+
+ int getId() {
+ return id;
+ }
+
+ String getProtocolName() {
+ return protocolName;
+ }
+
+ long getProtocolVersion() {
+ return protocolVersion;
+ }
+
+ String getMethodName() {
+ return methodName;
+ }
+
+ Message getParam() {
+ return param;
+ }
+
+ Message getResponseDefaultType() {
+ return responseDefaultType;
+ }
+
+ Message getResponse() {
+ return response;
+ }
+
+ IOException getError() {
+ return error;
+ }
+
+ boolean isDone() {
+ return done;
+ }
+}
\ No newline at end of file
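A small usage sketch of the completion guard; request, response, and handleDone are hypothetical, but the at-most-once behavior follows from the done flag checked under the lock:

    Call call = new Call(1, "protocol", 1L, "echo", request,
        EchoResponseProto.getDefaultInstance(), c -> handleDone(c));
    call.setResponse(response);                  // marks done, runs callback
    call.setException(new IOException("late")); // no-op: call already done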
http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c41c1e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/ConnectionId.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/ConnectionId.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/ConnectionId.java
new file mode 100644
index 0000000..111b925
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/ConnectionId.java
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.ipc;
+
+import java.net.InetSocketAddress;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.security.UserGroupInformation;
+
+@InterfaceAudience.Private
+class ConnectionId {
+
+ private static final int PRIME = 16777619;
+
+ private final UserGroupInformation ticket;
+ private final String protocolName;
+ private final InetSocketAddress address;
+
+ public ConnectionId(UserGroupInformation ticket, String protocolName,
+ InetSocketAddress address) {
+ this.ticket = ticket;
+ this.protocolName = protocolName;
+ this.address = address;
+ }
+
+ UserGroupInformation getTicket() {
+ return ticket;
+ }
+
+ String getProtocolName() {
+ return protocolName;
+ }
+
+ InetSocketAddress getAddress() {
+ return address;
+ }
+
+ @Override
+ public int hashCode() {
+ int h = ticket == null ? 0 : ticket.hashCode();
+ h = PRIME * h + protocolName.hashCode();
+ h = PRIME * h + address.hashCode();
+ return h;
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ if (obj instanceof ConnectionId) {
+ ConnectionId id = (ConnectionId) obj;
+ return address.equals(id.address) &&
+ ((ticket != null && ticket.equals(id.ticket)) ||
+ (ticket == id.ticket)) &&
+ protocolName.equals(id.protocolName);
+ }
+ return false;
+ }
+}
\ No newline at end of file
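The equals/hashCode pair above lets ConnectionId key the connection cache in RpcClient; a hedged sketch, assuming ugi and addr are the same instances:

    ConnectionId a = new ConnectionId(ugi, "protocol", addr);
    ConnectionId b = new ConnectionId(ugi, "protocol", addr);
    // a.equals(b) && a.hashCode() == b.hashCode(), so both resolve to the
    // same RpcConnection in RpcClient's ConcurrentHashMap.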
http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c41c1e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/HdfsRpcController.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/HdfsRpcController.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/HdfsRpcController.java
new file mode 100644
index 0000000..71ac3ef
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/HdfsRpcController.java
@@ -0,0 +1,74 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.ipc;
+
+import com.google.protobuf.RpcCallback;
+import com.google.protobuf.RpcController;
+
+import java.io.IOException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+
+@InterfaceAudience.Private
+public class HdfsRpcController implements RpcController {
+
+ private IOException error;
+
+ @Override
+ public void reset() {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ public boolean failed() {
+ return error != null;
+ }
+
+ @Override
+ public String errorText() {
+ return error != null ? error.getMessage() : null;
+ }
+
+ @Override
+ public void startCancel() {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ public void setFailed(String reason) {
+ this.error = new IOException(reason);
+ }
+
+ public void setException(IOException error) {
+ this.error = error;
+ }
+
+ public IOException getException() {
+ return error;
+ }
+
+ @Override
+ public boolean isCanceled() {
+ return false;
+ }
+
+ @Override
+ public void notifyOnCancel(RpcCallback<Object> callback) {
+ throw new UnsupportedOperationException();
+ }
+}
\ No newline at end of file
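A hedged sketch of how a caller inspects the controller once an async call completes; stub and request are illustrative (see TestAsyncIPC below for the real usage):

    HdfsRpcController hrc = new HdfsRpcController();
    stub.echo(hrc, request, resp -> {
      if (hrc.failed()) {
        IOException error = hrc.getException(); // set by RpcClient on error
      } else {
        // resp carries the parsed response message
      }
    });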
http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c41c1e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/IPCUtil.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/IPCUtil.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/IPCUtil.java
new file mode 100644
index 0000000..db46bdb
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/IPCUtil.java
@@ -0,0 +1,34 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.ipc;
+
+import java.io.IOException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+
+@InterfaceAudience.Private
+class IPCUtil {
+
+ static IOException toIOE(Throwable t) {
+ if (t instanceof IOException) {
+ return (IOException) t;
+ } else {
+ return new IOException(t);
+ }
+ }
+}
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c41c1e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/RpcClient.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/RpcClient.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/RpcClient.java
new file mode 100644
index 0000000..4792173
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/RpcClient.java
@@ -0,0 +1,128 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.ipc;
+
+import com.google.protobuf.Descriptors;
+import com.google.protobuf.Message;
+import com.google.protobuf.RpcCallback;
+import com.google.protobuf.RpcChannel;
+
+import io.netty.channel.Channel;
+import io.netty.channel.EventLoopGroup;
+import io.netty.channel.nio.NioEventLoopGroup;
+import io.netty.channel.socket.nio.NioSocketChannel;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.ipc.ClientId;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.ipc.Server.AuthProtocol;
+import org.apache.hadoop.security.UserGroupInformation;
+
+/**
+ * The protobuf-based RPC client.
+ */
+@InterfaceAudience.Private
+public class RpcClient implements Closeable {
+
+ private final byte[] clientId;
+
+ private final EventLoopGroup group = new NioEventLoopGroup();
+
+ private final Class<? extends Channel> channelClass = NioSocketChannel.class;
+
+ private final AtomicInteger callIdCnt = new AtomicInteger(0);
+
+ private final ConcurrentMap<ConnectionId, RpcConnection> connections =
+ new ConcurrentHashMap<>();
+
+ public RpcClient() {
+ this.clientId = ClientId.getClientId();
+ }
+
+ private int nextCallId() {
+ int id, next;
+ do {
+ id = callIdCnt.get();
+ next = id < Integer.MAX_VALUE ? id + 1 : 0;
+ } while (!callIdCnt.compareAndSet(id, next));
+ return id;
+ }
+
+ private void onCallFinished(Call call, HdfsRpcController hrc,
+ InetSocketAddress addr, RpcCallback<Message> callback) {
+ IOException error = call.getError();
+ if (error != null) {
+ if (error instanceof RemoteException) {
+ error.fillInStackTrace();
+ }
+ hrc.setException(error);
+ callback.run(null);
+ } else {
+ callback.run(call.getResponse());
+ }
+ }
+
+ private void callMethod(String protocolName, long protocolVersion,
+ Descriptors.MethodDescriptor md, HdfsRpcController hrc, Message param,
+ Message returnType, UserGroupInformation ugi, InetSocketAddress addr,
+ RpcCallback<Message> callback) {
+ Call call =
+ new Call(nextCallId(), protocolName, protocolVersion, md.getName(),
+ param, returnType, c -> onCallFinished(c, hrc, addr, callback));
+ ConnectionId remoteId = new ConnectionId(ugi, protocolName, addr);
+ connections
+ .computeIfAbsent(remoteId,
+ k -> new RpcConnection(this, k, AuthProtocol.NONE))
+ .sendRequest(call);
+ }
+
+ public RpcChannel createRpcChannel(Class<?> protocol, InetSocketAddress addr,
+ UserGroupInformation ugi) {
+ String protocolName = RPC.getProtocolName(protocol);
+ long protocolVersion = RPC.getProtocolVersion(protocol);
+ return (method, controller, request, responsePrototype, done) -> callMethod(
+ protocolName, protocolVersion, method, (HdfsRpcController) controller,
+ request, responsePrototype, ugi, addr, done);
+ }
+
+ byte[] getClientId() {
+ return clientId;
+ }
+
+ EventLoopGroup getGroup() {
+ return group;
+ }
+
+ Class<? extends Channel> getChannelClass() {
+ return channelClass;
+ }
+
+ @Override
+ public void close() throws IOException {
+ connections.values().forEach(c -> c.shutdown());
+ connections.clear();
+ }
+}
\ No newline at end of file
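Putting the client together, a minimal hedged sketch that mirrors TestAsyncIPC below; the address and port are illustrative:

    try (RpcClient client = new RpcClient()) {
      RpcChannel channel = client.createRpcChannel(TestRpcProtocolPB.class,
          new InetSocketAddress("localhost", port),
          UserGroupInformation.getCurrentUser());
      // Wrap the channel in a generated non-blocking stub; each call is
      // routed through callMethod() above and completed on the event loop.
    }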
http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c41c1e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/RpcConnection.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/RpcConnection.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/RpcConnection.java
new file mode 100644
index 0000000..5e7b482
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/RpcConnection.java
@@ -0,0 +1,153 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.ipc;
+
+import static org.apache.hadoop.ipc.RpcConstants.CONNECTION_CONTEXT_CALL_ID;
+
+import com.google.protobuf.CodedOutputStream;
+import io.netty.bootstrap.Bootstrap;
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.ByteBufOutputStream;
+import io.netty.channel.Channel;
+import io.netty.channel.ChannelFuture;
+import io.netty.channel.ChannelFutureListener;
+import io.netty.channel.ChannelOption;
+import io.netty.channel.ChannelPipeline;
+import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
+import java.io.IOException;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.hdfs.ipc.BufferCallBeforeInitHandler.BufferCallEvent;
+import org.apache.hadoop.ipc.RPC.RpcKind;
+import org.apache.hadoop.ipc.RpcConstants;
+import org.apache.hadoop.ipc.Server.AuthProtocol;
+import org.apache.hadoop.ipc.protobuf.IpcConnectionContextProtos.IpcConnectionContextProto;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcRequestHeaderProto;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcRequestHeaderProto.OperationProto;
+import org.apache.hadoop.security.SaslRpcServer.AuthMethod;
+import org.apache.hadoop.util.ProtoUtil;
+
+/**
+ * The connection to a remote server.
+ */
+@InterfaceAudience.Private
+class RpcConnection {
+
+ final RpcClient rpcClient;
+
+ final ConnectionId remoteId;
+
+ private final AuthProtocol authProtocol;
+
+ private Channel channel;
+
+ public RpcConnection(RpcClient rpcClient, ConnectionId remoteId,
+ AuthProtocol authProtocol) {
+ this.rpcClient = rpcClient;
+ this.remoteId = remoteId;
+ this.authProtocol = authProtocol;
+ }
+
+ private void writeConnectionHeader(Channel ch) {
+ ByteBuf header = ch.alloc().buffer(7);
+ header.writeBytes(RpcConstants.HEADER.duplicate());
+ header.writeByte(RpcConstants.CURRENT_VERSION);
+ header.writeByte(0); // service class
+ header.writeByte(authProtocol.callId);
+ ch.writeAndFlush(header);
+ }
+
+ private void writeConnectionContext(Channel ch) throws IOException {
+ RpcRequestHeaderProto connectionContextHeader =
+ ProtoUtil.makeRpcRequestHeader(RpcKind.RPC_PROTOCOL_BUFFER,
+ OperationProto.RPC_FINAL_PACKET, CONNECTION_CONTEXT_CALL_ID,
+ RpcConstants.INVALID_RETRY_COUNT, rpcClient.getClientId());
+ int headerSize = connectionContextHeader.getSerializedSize();
+ IpcConnectionContextProto message = ProtoUtil.makeIpcConnectionContext(
+ remoteId.getProtocolName(), remoteId.getTicket(), AuthMethod.SIMPLE);
+ int messageSize = message.getSerializedSize();
+
+ int totalSize =
+ CodedOutputStream.computeRawVarint32Size(headerSize) + headerSize +
+ CodedOutputStream.computeRawVarint32Size(messageSize) + messageSize;
+ ByteBuf buf = ch.alloc().buffer(totalSize + 4);
+ buf.writeInt(totalSize);
+ ByteBufOutputStream out = new ByteBufOutputStream(buf);
+ connectionContextHeader.writeDelimitedTo(out);
+ message.writeDelimitedTo(out);
+ ch.writeAndFlush(buf);
+ }
+
+ private void established(Channel ch) throws IOException {
+ ChannelPipeline p = ch.pipeline();
+ String addBeforeHandler =
+ p.context(BufferCallBeforeInitHandler.class).name();
+ p.addBefore(addBeforeHandler, "frameDecoder",
+ new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4));
+ p.addBefore(addBeforeHandler, "rpcHandler", new RpcDuplexHandler(this));
+ p.fireUserEventTriggered(BufferCallEvent.success());
+ }
+
+ private Channel connect() {
+ if (channel != null) {
+ return channel;
+ }
+ channel = new Bootstrap().group(rpcClient.getGroup())
+ .channel(rpcClient.getChannelClass())
+ .option(ChannelOption.TCP_NODELAY, true)
+ .option(ChannelOption.SO_KEEPALIVE, true)
+ .handler(new BufferCallBeforeInitHandler())
+ .remoteAddress(remoteId.getAddress()).connect()
+ .addListener(new ChannelFutureListener() {
+
+ @Override
+ public void operationComplete(ChannelFuture future) throws Exception {
+ Channel ch = future.channel();
+ if (!future.isSuccess()) {
+ failInit(ch, IPCUtil.toIOE(future.cause()));
+ return;
+ }
+ writeConnectionHeader(ch);
+ writeConnectionContext(ch);
+ established(ch);
+ }
+ }).channel();
+ return channel;
+ }
+
+ private synchronized void failInit(Channel ch, IOException e) {
+ // fail all pending calls
+ ch.pipeline().fireUserEventTriggered(BufferCallEvent.fail(e));
+ shutdown0();
+ }
+
+ private void shutdown0() {
+ if (channel != null) {
+ channel.close();
+ channel = null;
+ }
+ }
+
+ public synchronized void shutdown() {
+ shutdown0();
+ }
+
+ public synchronized void sendRequest(Call call) {
+ Channel channel = connect();
+ channel.eventLoop().execute(() -> channel.writeAndFlush(call));
+ }
+}
\ No newline at end of file
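For reference, the 7-byte connection header written above breaks down as follows (a sketch read off writeConnectionHeader; RpcConstants.HEADER is the 4-byte protocol magic):

    // bytes 0-3 : RpcConstants.HEADER (protocol magic)
    // byte  4   : RpcConstants.CURRENT_VERSION
    // byte  5   : service class (0)
    // byte  6   : authProtocol.callId (AuthProtocol.NONE here)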
http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c41c1e/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/RpcDuplexHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/RpcDuplexHandler.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/RpcDuplexHandler.java
new file mode 100644
index 0000000..3cc5659
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ipc/RpcDuplexHandler.java
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.ipc;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.ipc.RPC.RpcKind;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.ipc.protobuf.ProtobufRpcEngineProtos.RequestHeaderProto;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcRequestHeaderProto;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcRequestHeaderProto.OperationProto;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto.RpcErrorCodeProto;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto.RpcStatusProto;
+import org.apache.hadoop.util.ProtoUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.protobuf.CodedOutputStream;
+import com.google.protobuf.Message;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.ByteBufInputStream;
+import io.netty.buffer.ByteBufOutputStream;
+import io.netty.channel.ChannelDuplexHandler;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.ChannelPromise;
+
+@InterfaceAudience.Private
+class RpcDuplexHandler extends ChannelDuplexHandler {
+
+ private static final Logger LOG =
+ LoggerFactory.getLogger(RpcDuplexHandler.class);
+
+ private final RpcConnection conn;
+
+ private final Map<Integer, Call> id2Call = new HashMap<>();
+
+ public RpcDuplexHandler(RpcConnection conn) {
+ this.conn = conn;
+ }
+
+ private void writeRequest(ChannelHandlerContext ctx, Call call,
+ ChannelPromise promise) throws IOException {
+ id2Call.put(call.getId(), call);
+
+ RpcRequestHeaderProto rpcHeader = ProtoUtil.makeRpcRequestHeader(
+ RpcKind.RPC_PROTOCOL_BUFFER, OperationProto.RPC_FINAL_PACKET,
+ call.getId(), 0, conn.rpcClient.getClientId());
+ int rpcHeaderSize = rpcHeader.getSerializedSize();
+ RequestHeaderProto requestHeader =
+ RequestHeaderProto.newBuilder().setMethodName(call.getMethodName())
+ .setDeclaringClassProtocolName(call.getProtocolName())
+ .setClientProtocolVersion(call.getProtocolVersion()).build();
+ int requestHeaderSize = requestHeader.getSerializedSize();
+ int totalSize = CodedOutputStream.computeRawVarint32Size(rpcHeaderSize) +
+ rpcHeaderSize +
+ CodedOutputStream.computeRawVarint32Size(requestHeaderSize) +
+ requestHeaderSize;
+ Message param = call.getParam();
+ if (param != null) {
+ int paramSize = param.getSerializedSize();
+ totalSize +=
+ CodedOutputStream.computeRawVarint32Size(paramSize) + paramSize;
+ }
+ ByteBufOutputStream out =
+ new ByteBufOutputStream(ctx.alloc().buffer(totalSize + 4));
+ out.writeInt(totalSize);
+ rpcHeader.writeDelimitedTo(out);
+ requestHeader.writeDelimitedTo(out);
+ if (param != null) {
+ param.writeDelimitedTo(out);
+ }
+ ctx.write(out.buffer(), promise);
+ }
+
+ @Override
+ public void write(ChannelHandlerContext ctx, Object msg,
+ ChannelPromise promise) throws Exception {
+ if (msg instanceof Call) {
+ writeRequest(ctx, (Call) msg, promise);
+ } else {
+ ctx.write(msg, promise);
+ }
+ }
+
+ private void readResponse(ChannelHandlerContext ctx, ByteBuf buf)
+ throws Exception {
+ ByteBufInputStream in = new ByteBufInputStream(buf);
+ RpcResponseHeaderProto header =
+ RpcResponseHeaderProto.parseDelimitedFrom(in);
+ int id = header.getCallId();
+ RpcStatusProto status = header.getStatus();
+ if (status != RpcStatusProto.SUCCESS) {
+ String exceptionClassName =
+ header.hasExceptionClassName() ? header.getExceptionClassName()
+ : "ServerDidNotSetExceptionClassName";
+ String errorMsg = header.hasErrorMsg() ? header.getErrorMsg()
+ : "ServerDidNotSetErrorMsg";
+ RpcErrorCodeProto errCode =
+ (header.hasErrorDetail() ? header.getErrorDetail() : null);
+ if (errCode == null) {
+ LOG.warn("Detailed error code not set by server on rpc error");
+ }
+ RemoteException re =
+ new RemoteException(exceptionClassName, errorMsg, errCode);
+ if (status == RpcStatusProto.ERROR) {
+ Call call = id2Call.remove(id);
+ call.setException(re);
+ } else if (status == RpcStatusProto.FATAL) {
+ exceptionCaught(ctx, re);
+ }
+ return;
+ }
+ Call call = id2Call.remove(id);
+ call.setResponse(call.getResponseDefaultType().getParserForType()
+ .parseDelimitedFrom(in));
+ }
+
+ @Override
+ public void channelRead(ChannelHandlerContext ctx, Object msg)
+ throws Exception {
+ if (msg instanceof ByteBuf) {
+ ByteBuf buf = (ByteBuf) msg;
+ try {
+ readResponse(ctx, buf);
+ } finally {
+ buf.release();
+ }
+ }
+ }
+
+ private void cleanupCalls(ChannelHandlerContext ctx, IOException error) {
+ for (Call call : id2Call.values()) {
+ call.setException(error);
+ }
+ id2Call.clear();
+ }
+
+ @Override
+ public void channelInactive(ChannelHandlerContext ctx) throws Exception {
+ if (!id2Call.isEmpty()) {
+ cleanupCalls(ctx, new IOException("Connection closed"));
+ }
+ conn.shutdown();
+ ctx.fireChannelInactive();
+ }
+
+ @Override
+ public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause)
+ throws Exception {
+ if (!id2Call.isEmpty()) {
+ cleanupCalls(ctx, new IOException("Connection closed"));
+ }
+ conn.shutdown();
+ }
+}
\ No newline at end of file
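The request frame assembled in writeRequest has the layout below (a sketch; each protobuf piece is written with writeDelimitedTo, i.e. a varint length followed by the message bytes):

    // int32 totalSize            (plain 4-byte length prefix)
    // varint + RpcRequestHeaderProto
    // varint + RequestHeaderProto
    // varint + param message     (only when the call has a parameter)

On the inbound side, the LengthFieldBasedFrameDecoder installed by RpcConnection strips the server's 4-byte length prefix before readResponse parses the delimited RpcResponseHeaderProto.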
http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c41c1e/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/ipc/TestAsyncIPC.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/ipc/TestAsyncIPC.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/ipc/TestAsyncIPC.java
new file mode 100644
index 0000000..86fde48
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/ipc/TestAsyncIPC.java
@@ -0,0 +1,88 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.ipc;
+
+import static org.junit.Assert.assertEquals;
+
+import com.google.protobuf.RpcChannel;
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.CountDownLatch;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.ipc.protobuf.TestRpcProtos;
+import org.apache.hadoop.hdfs.ipc.protobuf.TestRpcProtos.EchoRequestProto;
+import org.apache.hadoop.ipc.ProtobufRpcEngine;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestAsyncIPC {
+
+ private static Configuration CONF;
+
+ private static TestServer SERVER;
+
+ private static int PORT;
+
+ @BeforeClass
+ public static void setUp() throws IOException {
+ CONF = new Configuration();
+ RPC.setProtocolEngine(CONF, TestRpcProtocolPB.class,
+ ProtobufRpcEngine.class);
+ SERVER = new TestServer(CONF);
+ SERVER.start();
+ PORT = SERVER.port();
+ }
+
+ @AfterClass
+ public static void tearDown() {
+ SERVER.stop();
+ }
+
+ @Test
+ public void test() throws IOException, InterruptedException {
+ try (RpcClient client = new RpcClient()) {
+ RpcChannel channel = client.createRpcChannel(TestRpcProtocolPB.class,
+ new InetSocketAddress("localhost", PORT),
+ UserGroupInformation.getCurrentUser());
+ TestRpcProtos.TestRpcService.Interface stub =
+ TestRpcProtos.TestRpcService.newStub(channel);
+ Map<Integer, String> results = new HashMap<>();
+ int count = 100;
+ CountDownLatch latch = new CountDownLatch(count);
+ for (int i = 0; i < count; i++) {
+ final int index = i;
+ stub.echo(new HdfsRpcController(),
+ EchoRequestProto.newBuilder().setMessage("Echo-" + index).build(),
+ resp -> {
+ results.put(index, resp.getMessage());
+ latch.countDown();
+ });
+ }
+ latch.await();
+ assertEquals(count, results.size());
+ for (int i = 0; i < count; i++) {
+ assertEquals("Echo-" + i, results.get(i));
+ }
+ }
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c41c1e/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/ipc/TestRpcProtocolPB.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/ipc/TestRpcProtocolPB.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/ipc/TestRpcProtocolPB.java
new file mode 100644
index 0000000..c7f7f27
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/ipc/TestRpcProtocolPB.java
@@ -0,0 +1,27 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.ipc;
+
+import org.apache.hadoop.hdfs.ipc.protobuf.TestRpcProtos;
+import org.apache.hadoop.ipc.ProtocolInfo;
+
+@ProtocolInfo(protocolName = "org.apache.hadoop.hdfs.ipc.TestRpcProtocol",
+ protocolVersion = 1)
+public interface TestRpcProtocolPB
+ extends TestRpcProtos.TestRpcService.BlockingInterface {
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c41c1e/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/ipc/TestServer.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/ipc/TestServer.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/ipc/TestServer.java
new file mode 100644
index 0000000..3e06cc8
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/ipc/TestServer.java
@@ -0,0 +1,58 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.ipc;
+
+import com.google.protobuf.RpcController;
+import com.google.protobuf.ServiceException;
+import java.io.IOException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.ipc.protobuf.TestRpcProtos;
+import org.apache.hadoop.hdfs.ipc.protobuf.TestRpcProtos.EchoRequestProto;
+import org.apache.hadoop.hdfs.ipc.protobuf.TestRpcProtos.EchoResponseProto;
+import org.apache.hadoop.ipc.RPC;
+
+public class TestServer implements TestRpcProtocolPB {
+
+ private final RPC.Server server;
+
+ public TestServer(Configuration conf) throws IOException {
+ server = new RPC.Builder(conf).setProtocol(TestRpcProtocolPB.class)
+ .setInstance(
+ TestRpcProtos.TestRpcService.newReflectiveBlockingService(this))
+ .setNumHandlers(10).build();
+ }
+
+ public void start() {
+ server.start();
+ }
+
+ public void stop() {
+ server.stop();
+ }
+
+ public int port() {
+ return server.getPort();
+ }
+
+ @Override
+ public EchoResponseProto echo(RpcController controller,
+ EchoRequestProto request) throws ServiceException {
+ return EchoResponseProto.newBuilder().setMessage(request.getMessage())
+ .build();
+ }
+}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/48c41c1e/hadoop-hdfs-project/hadoop-hdfs-client/src/test/proto/test_rpc.proto
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/test/proto/test_rpc.proto b/hadoop-hdfs-project/hadoop-hdfs-client/src/test/proto/test_rpc.proto
new file mode 100644
index 0000000..0997f56
--- /dev/null
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/test/proto/test_rpc.proto
@@ -0,0 +1,35 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+option java_package = "org.apache.hadoop.hdfs.ipc.protobuf";
+option java_outer_classname = "TestRpcProtos";
+option java_generic_services = true;
+option java_generate_equals_and_hash = true;
+package hadoop.hdfs;
+
+message EchoRequestProto {
+ required string message = 1;
+}
+
+message EchoResponseProto {
+ required string message = 1;
+}
+
+service TestRpcService {
+ rpc echo(EchoRequestProto) returns (EchoResponseProto);
+}
\ No newline at end of file
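With java_generic_services enabled, protoc emits both stub flavors these tests exercise; a sketch of the generated surface as used above:

    // Non-blocking stub, used by TestAsyncIPC:
    TestRpcProtos.TestRpcService.Interface stub =
        TestRpcProtos.TestRpcService.newStub(channel);
    // Blocking side, implemented by TestServer and registered via
    // TestRpcProtos.TestRpcService.newReflectiveBlockingService(this).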
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[28/50] hadoop git commit: YARN-8511. When AM releases a container, RM removes allocation tags before it is released by NM. (Weiwei Yang via wangda)
Posted by zh...@apache.org.
YARN-8511. When AM releases a container, RM removes allocation tags before it is released by NM. (Weiwei Yang via wangda)
Change-Id: I6f9f409f2ef685b405cbff547dea9623bf3322d9
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/752dcce5
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/752dcce5
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/752dcce5
Branch: refs/heads/HDFS-13572
Commit: 752dcce5f4cf0f6ebcb40a61f622f1a885c4bda7
Parents: 88b2794
Author: Wangda Tan <wa...@apache.org>
Authored: Mon Jul 16 10:54:41 2018 -0700
Committer: Wangda Tan <wa...@apache.org>
Committed: Mon Jul 16 10:54:41 2018 -0700
----------------------------------------------------------------------
.../hadoop/yarn/sls/nodemanager/NodeInfo.java | 6 ++
.../yarn/sls/scheduler/RMNodeWrapper.java | 6 ++
.../rmcontainer/RMContainerImpl.java | 5 -
.../server/resourcemanager/rmnode/RMNode.java | 6 ++
.../resourcemanager/rmnode/RMNodeImpl.java | 5 +
.../scheduler/SchedulerNode.java | 15 +++
.../yarn/server/resourcemanager/MockNodes.java | 5 +
.../rmcontainer/TestRMContainerImpl.java | 16 ++-
.../scheduler/TestAbstractYarnScheduler.java | 104 +++++++++++++++++++
9 files changed, 162 insertions(+), 6 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/752dcce5/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
index 0c99139..69946c8 100644
--- a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
+++ b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java
@@ -38,6 +38,7 @@ import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceUtilization;
import org.apache.hadoop.yarn.server.api.protocolrecords.NodeHeartbeatResponse;
import org.apache.hadoop.yarn.server.api.records.OpportunisticContainersStatus;
+import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
import org.apache.hadoop.yarn.server.resourcemanager.rmnode
@@ -219,6 +220,11 @@ public class NodeInfo {
}
@Override
+ public RMContext getRMContext() {
+ return null;
+ }
+
+ @Override
public Resource getPhysicalResource() {
return null;
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/752dcce5/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
index 78645e9..a96b790 100644
--- a/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
+++ b/hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
@@ -30,6 +30,7 @@ import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceUtilization;
import org.apache.hadoop.yarn.server.api.protocolrecords.NodeHeartbeatResponse;
import org.apache.hadoop.yarn.server.api.records.OpportunisticContainersStatus;
+import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
import org.apache.hadoop.yarn.server.resourcemanager.rmnode
@@ -207,6 +208,11 @@ public class RMNodeWrapper implements RMNode {
}
@Override
+ public RMContext getRMContext() {
+ return node.getRMContext();
+ }
+
+ @Override
public Resource getPhysicalResource() {
return null;
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/752dcce5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java
index b5c8e7c..efac666 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/RMContainerImpl.java
@@ -701,11 +701,6 @@ public class RMContainerImpl implements RMContainer {
@Override
public void transition(RMContainerImpl container, RMContainerEvent event) {
- // Notify AllocationTagsManager
- container.rmContext.getAllocationTagsManager().removeContainer(
- container.getNodeId(), container.getContainerId(),
- container.getAllocationTags());
-
RMContainerFinishedEvent finishedEvent = (RMContainerFinishedEvent) event;
container.finishTime = System.currentTimeMillis();
http://git-wip-us.apache.org/repos/asf/hadoop/blob/752dcce5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
index 872f2a6..68a780e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceUtilization;
import org.apache.hadoop.yarn.server.api.protocolrecords.NodeHeartbeatResponse;
import org.apache.hadoop.yarn.server.api.records.OpportunisticContainersStatus;
+import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
/**
* Node managers information on available resources
@@ -189,4 +190,9 @@ public interface RMNode {
* @return a map of each allocation tag and its count.
*/
Map<String, Long> getAllocationTagsWithCount();
+
+ /**
+ * @return the RM context associated with this RM node.
+ */
+ RMContext getRMContext();
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/752dcce5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
index b942afa..dfd93e2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
@@ -1541,4 +1541,9 @@ public class RMNodeImpl implements RMNode, EventHandler<RMNodeEvent> {
return context.getAllocationTagsManager()
.getAllocationTagsWithCount(getNodeID());
}
+
+ @Override
+ public RMContext getRMContext() {
+ return this.context;
+ }
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/752dcce5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
index d5bfc57..59771fd 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerNode.java
@@ -39,6 +39,7 @@ import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceUtilization;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager;
+import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerState;
@@ -74,6 +75,7 @@ public abstract class SchedulerNode {
private final RMNode rmNode;
private final String nodeName;
+ private final RMContext rmContext;
private volatile Set<String> labels = null;
@@ -83,6 +85,7 @@ public abstract class SchedulerNode {
public SchedulerNode(RMNode node, boolean usePortForNodeName,
Set<String> labels) {
this.rmNode = node;
+ this.rmContext = node.getRMContext();
this.unallocatedResource = Resources.clone(node.getTotalCapability());
this.totalResource = Resources.clone(node.getTotalCapability());
if (usePortForNodeName) {
@@ -242,6 +245,18 @@ public abstract class SchedulerNode {
launchedContainers.remove(containerId);
Container container = info.container.getContainer();
+
+ // Remove allocation tags only when the container is actually
+ // released on the NM. The AM may release a container while the NM
+ // takes some time to actually release it; keeping the tags visible
+ // at the RM until then lets the RM respect them when scheduling
+ // new containers.
+ if (rmContext != null && rmContext.getAllocationTagsManager() != null) {
+ rmContext.getAllocationTagsManager()
+ .removeContainer(container.getNodeId(),
+ container.getId(), container.getAllocationTags());
+ }
+
updateResourceForReleasedContainer(container);
if (LOG.isDebugEnabled()) {
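The ordering this patch enforces, sketched as a sequence (an interpretation of the diffs above, not additional code):

    // 1. The AM releases the container; RMContainerImpl no longer removes
    //    tags in its finished transition (see the deletion above).
    // 2. The NM actually releases the container and reports completion.
    // 3. The SchedulerNode release path shown above then calls
    //    rmContext.getAllocationTagsManager().removeContainer(...), so the
    //    tags stay visible to the scheduler until the NM release lands.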
http://git-wip-us.apache.org/repos/asf/hadoop/blob/752dcce5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
index 84105d9..9041132 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
@@ -286,6 +286,11 @@ public class MockNodes {
}
@Override
+ public RMContext getRMContext() {
+ return null;
+ }
+
+ @Override
public Resource getPhysicalResource() {
return this.physicalResource;
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/752dcce5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java
index 7a930cd..1115e8c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmcontainer/TestRMContainerImpl.java
@@ -60,10 +60,14 @@ import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEvent;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEventType;
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.event.RMAppAttemptContainerFinishedEvent;
+import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;
import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType;
+import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerNode;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestUtils;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.AllocationTags;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.AllocationTagsManager;
import org.apache.hadoop.yarn.server.scheduler.SchedulerRequestKey;
@@ -401,6 +405,7 @@ public class TestRMContainerImpl {
Container container = BuilderUtils.newContainer(containerId, nodeId,
"host:3465", resource, priority, null);
+ container.setAllocationTags(ImmutableSet.of("mapper"));
ConcurrentMap<ApplicationId, RMApp> rmApps =
spy(new ConcurrentHashMap<ApplicationId, RMApp>());
RMApp rmApp = mock(RMApp.class);
@@ -423,11 +428,14 @@ public class TestRMContainerImpl {
true);
when(rmContext.getYarnConfiguration()).thenReturn(conf);
+ RMNode rmNode = new RMNodeImpl(nodeId, rmContext,
+ "localhost", 0, 0, null, Resource.newInstance(10240, 10), null);
+ SchedulerNode schedulerNode = new FiCaSchedulerNode(rmNode, false);
+
/* First container: ALLOCATED -> KILLED */
RMContainerImpl rmContainer = new RMContainerImpl(container,
SchedulerRequestKey.extractFrom(container), appAttemptId,
nodeId, "user", rmContext);
- rmContainer.setAllocationTags(ImmutableSet.of("mapper"));
Assert.assertEquals(0,
tagsManager.getNodeCardinalityByOp(nodeId,
@@ -437,6 +445,7 @@ public class TestRMContainerImpl {
rmContainer.handle(new RMContainerEvent(containerId,
RMContainerEventType.START));
+ schedulerNode.allocateContainer(rmContainer);
Assert.assertEquals(1,
tagsManager.getNodeCardinalityByOp(nodeId,
@@ -446,6 +455,7 @@ public class TestRMContainerImpl {
rmContainer.handle(new RMContainerFinishedEvent(containerId, ContainerStatus
.newInstance(containerId, ContainerState.COMPLETE, "", 0),
RMContainerEventType.KILL));
+ schedulerNode.releaseContainer(container.getId(), true);
Assert.assertEquals(0,
tagsManager.getNodeCardinalityByOp(nodeId,
@@ -465,6 +475,7 @@ public class TestRMContainerImpl {
rmContainer.setAllocationTags(ImmutableSet.of("mapper"));
rmContainer.handle(new RMContainerEvent(containerId,
RMContainerEventType.START));
+ schedulerNode.allocateContainer(rmContainer);
Assert.assertEquals(1,
tagsManager.getNodeCardinalityByOp(nodeId,
@@ -477,6 +488,7 @@ public class TestRMContainerImpl {
rmContainer.handle(new RMContainerFinishedEvent(containerId, ContainerStatus
.newInstance(containerId, ContainerState.COMPLETE, "", 0),
RMContainerEventType.FINISHED));
+ schedulerNode.releaseContainer(container.getId(), true);
Assert.assertEquals(0,
tagsManager.getNodeCardinalityByOp(nodeId,
@@ -496,6 +508,7 @@ public class TestRMContainerImpl {
rmContainer.handle(new RMContainerEvent(containerId,
RMContainerEventType.START));
+ schedulerNode.allocateContainer(rmContainer);
Assert.assertEquals(1,
tagsManager.getNodeCardinalityByOp(nodeId,
@@ -511,6 +524,7 @@ public class TestRMContainerImpl {
rmContainer.handle(new RMContainerFinishedEvent(containerId, ContainerStatus
.newInstance(containerId, ContainerState.COMPLETE, "", 0),
RMContainerEventType.FINISHED));
+ schedulerNode.releaseContainer(container.getId(), true);
Assert.assertEquals(0,
tagsManager.getNodeCardinalityByOp(nodeId,
http://git-wip-us.apache.org/repos/asf/hadoop/blob/752dcce5/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestAbstractYarnScheduler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestAbstractYarnScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestAbstractYarnScheduler.java
index c0f8d39..ba409b1 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestAbstractYarnScheduler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestAbstractYarnScheduler.java
@@ -27,9 +27,16 @@ import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Sets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.service.Service;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
+import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
import org.apache.hadoop.yarn.api.records.*;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.event.Dispatcher;
@@ -416,6 +423,103 @@ public class TestAbstractYarnScheduler extends ParameterizedSchedulerTestBase {
}
}
+ @Test(timeout = 30000L)
+ public void testContainerReleaseWithAllocationTags() throws Exception {
+ // Currently this can only be tested against the capacity scheduler.
+ if (getSchedulerType().equals(SchedulerType.CAPACITY)) {
+ final String testTag1 = "some-tag";
+ final String testTag2 = "some-other-tag";
+ YarnConfiguration conf = getConf();
+ conf.set(YarnConfiguration.RM_PLACEMENT_CONSTRAINTS_HANDLER, "scheduler");
+ MockRM rm1 = new MockRM(conf);
+ rm1.start();
+ MockNM nm1 = new MockNM("127.0.0.1:1234",
+ 10240, rm1.getResourceTrackerService());
+ nm1.registerNode();
+ RMApp app1 =
+ rm1.submitApp(200, "name", "user", new HashMap<>(), false, "default",
+ -1, null, "Test", false, true);
+ MockAM am1 = MockRM.launchAndRegisterAM(app1, rm1, nm1);
+
+ // allocate 1 container with tag1
+ SchedulingRequest sr = SchedulingRequest
+ .newInstance(1L, Priority.newInstance(1),
+ ExecutionTypeRequest.newInstance(ExecutionType.GUARANTEED),
+ Sets.newHashSet(testTag1),
+ ResourceSizing.newInstance(1, Resource.newInstance(1024, 1)),
+ null);
+
+ // allocate 3 containers with tag2
+ SchedulingRequest sr1 = SchedulingRequest
+ .newInstance(2L, Priority.newInstance(1),
+ ExecutionTypeRequest.newInstance(ExecutionType.GUARANTEED),
+ Sets.newHashSet(testTag2),
+ ResourceSizing.newInstance(3, Resource.newInstance(1024, 1)),
+ null);
+
+ AllocateRequest ar = AllocateRequest.newBuilder()
+ .schedulingRequests(Lists.newArrayList(sr, sr1)).build();
+ am1.allocate(ar);
+ nm1.nodeHeartbeat(true);
+
+ List<Container> allocated = new ArrayList<>();
+ while (allocated.size() < 4) {
+ AllocateResponse rsp = am1
+ .allocate(new ArrayList<>(), new ArrayList<>());
+ allocated.addAll(rsp.getAllocatedContainers());
+ nm1.nodeHeartbeat(true);
+ Thread.sleep(1000);
+ }
+
+ Assert.assertEquals(4, allocated.size());
+
+ Set<Container> containers = allocated.stream()
+ .filter(container -> container.getAllocationRequestId() == 1L)
+ .collect(Collectors.toSet());
+ Assert.assertNotNull(containers);
+ Assert.assertEquals(1, containers.size());
+ ContainerId cid = containers.iterator().next().getId();
+
+ // mock container start
+ rm1.getRMContext().getScheduler()
+ .getSchedulerNode(nm1.getNodeId()).containerStarted(cid);
+
+ // verifies the allocation is made with correct number of tags
+ Map<String, Long> nodeTags = rm1.getRMContext()
+ .getAllocationTagsManager()
+ .getAllocationTagsWithCount(nm1.getNodeId());
+ Assert.assertNotNull(nodeTags.get(testTag1));
+ Assert.assertEquals(1, nodeTags.get(testTag1).intValue());
+
+ // release a container
+ am1.allocate(new ArrayList<>(), Lists.newArrayList(cid));
+
+ // before NM confirms, the tag should still exist
+ nodeTags = rm1.getRMContext().getAllocationTagsManager()
+ .getAllocationTagsWithCount(nm1.getNodeId());
+ Assert.assertNotNull(nodeTags);
+ Assert.assertNotNull(nodeTags.get(testTag1));
+ Assert.assertEquals(1, nodeTags.get(testTag1).intValue());
+
+ // NM reports back that container is released
+ // RM should cleanup the tag
+ ContainerStatus cs = ContainerStatus.newInstance(cid,
+ ContainerState.COMPLETE, "", 0);
+ nm1.nodeHeartbeat(Lists.newArrayList(cs), true);
+
+ // Wait on condition
+ // 1) tag1 doesn't exist anymore
+ // 2) num of tag2 is still 3
+ GenericTestUtils.waitFor(() -> {
+ Map<String, Long> tags = rm1.getRMContext()
+ .getAllocationTagsManager()
+ .getAllocationTagsWithCount(nm1.getNodeId());
+ return tags.get(testTag1) == null &&
+ tags.get(testTag2).intValue() == 3;
+ }, 500, 3000);
+ }
+ }
+
@Test(timeout=60000)
public void testContainerReleasedByNode() throws Exception {
System.out.println("Starting testContainerReleasedByNode");
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
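The SchedulerNode change above defers allocation-tag removal until the NodeManager confirms that the container is actually gone, and testContainerReleaseWithAllocationTags drives that flow end to end: the tag must survive the AM's release request and disappear only after the NM heartbeat reports COMPLETE. A minimal, self-contained sketch of that bookkeeping (TagLedger is illustrative only, not YARN's actual AllocationTagsManager):

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/**
 * Minimal model of the tag bookkeeping described above: counts go up at
 * allocation time and come down only when the node confirms the release,
 * so a pending release stays visible to the scheduler.
 */
public class TagLedger {
  private final Map<String, Long> tagCounts = new HashMap<>();
  private final Set<String> pendingRelease = new HashSet<>();

  /** Scheduler allocated a container carrying these tags. */
  public synchronized void onAllocate(String containerId, Set<String> tags) {
    for (String tag : tags) {
      tagCounts.merge(tag, 1L, Long::sum);
    }
  }

  /** AM asked to release; the tags deliberately remain visible. */
  public synchronized void onReleaseRequested(String containerId) {
    pendingRelease.add(containerId);
  }

  /** NM reported the container is actually gone; now drop the tags. */
  public synchronized void onNodeConfirmedRelease(String containerId,
      Set<String> tags) {
    pendingRelease.remove(containerId);
    for (String tag : tags) {
      tagCounts.computeIfPresent(tag,
          (k, v) -> v > 1 ? Long.valueOf(v - 1) : null);
    }
  }

  /** Current visible count for one tag, 0 if absent. */
  public synchronized long cardinality(String tag) {
    return tagCounts.getOrDefault(tag, 0L);
  }
}

The key design point is that onReleaseRequested() leaves the counts untouched; only the node-confirmed path decrements them, which is why the test still sees the tag after the AM release and only waits, via GenericTestUtils.waitFor, for it to disappear after the NM heartbeat.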
[21/50] hadoop git commit: HDDS-210. Make "-file" argument optional
for ozone getKey command. Contributed by Lokesh Jain.
Posted by zh...@apache.org.
HDDS-210. Make "-file" argument optional for ozone getKey command. Contributed by Lokesh Jain.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/103f2eeb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/103f2eeb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/103f2eeb
Branch: refs/heads/HDFS-13572
Commit: 103f2eeb57dbadd9abbbc25a05bb7c79b48fdc17
Parents: 88625f5
Author: Xiaoyu Yao <xy...@apache.org>
Authored: Fri Jul 13 11:44:24 2018 -0700
Committer: Xiaoyu Yao <xy...@apache.org>
Committed: Fri Jul 13 11:45:02 2018 -0700
----------------------------------------------------------------------
.../org/apache/hadoop/ozone/ozShell/TestOzoneShell.java | 12 ++++++++++++
.../hadoop/ozone/web/ozShell/keys/GetKeyHandler.java | 9 ++++++---
2 files changed, 18 insertions(+), 3 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/103f2eeb/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
index a4b30f0..000d530 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
@@ -705,6 +705,18 @@ public class TestOzoneShell {
randFile.read(dataBytes);
}
assertEquals(dataStr, DFSUtil.bytes2String(dataBytes));
+
+ tmpPath = baseDir.getAbsolutePath() + File.separatorChar + keyName;
+ args = new String[] {"-getKey",
+ url + "/" + volumeName + "/" + bucketName + "/" + keyName, "-file",
+ baseDir.getAbsolutePath()};
+ assertEquals(0, ToolRunner.run(shell, args));
+
+ dataBytes = new byte[dataStr.length()];
+ try (FileInputStream randFile = new FileInputStream(new File(tmpPath))) {
+ randFile.read(dataBytes);
+ }
+ assertEquals(dataStr, DFSUtil.bytes2String(dataBytes));
}
@Test
http://git-wip-us.apache.org/repos/asf/hadoop/blob/103f2eeb/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/GetKeyHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/GetKeyHandler.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/GetKeyHandler.java
index 34620b4..2d059e0 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/GetKeyHandler.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/keys/GetKeyHandler.java
@@ -98,11 +98,14 @@ public class GetKeyHandler extends Handler {
Path dataFilePath = Paths.get(fileName);
File dataFile = new File(fileName);
+ if (dataFile.exists() && dataFile.isDirectory()) {
+ dataFile = new File(fileName, keyName);
+ }
if (dataFile.exists()) {
- throw new OzoneClientException(fileName +
- "exists. Download will overwrite an " +
- "existing file. Aborting.");
+ throw new OzoneClientException(
+ fileName + "exists. Download will overwrite an "
+ + "existing file. Aborting.");
}
OzoneVolume vol = client.getObjectStore().getVolume(volumeName);
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
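The GetKeyHandler change reduces to a small destination-resolution rule: if the path given to "-file" is an existing directory, the key name becomes the file name inside that directory, and an existing file is never overwritten. A standalone sketch of that rule (GetKeyTarget is a hypothetical helper, and it throws IOException where the real handler throws OzoneClientException):

import java.io.File;
import java.io.IOException;

/**
 * Sketch of the destination resolution added in GetKeyHandler: if the
 * -file argument names an existing directory, download into
 * dir/keyName; never overwrite an existing file.
 */
public final class GetKeyTarget {

  private GetKeyTarget() {
  }

  public static File resolve(String fileName, String keyName)
      throws IOException {
    File dataFile = new File(fileName);
    if (dataFile.exists() && dataFile.isDirectory()) {
      // -file points at a directory: place the key inside it,
      // named after the key itself.
      dataFile = new File(fileName, keyName);
    }
    if (dataFile.exists()) {
      // Match the handler's behavior: refuse to clobber.
      throw new IOException(dataFile.getPath()
          + " exists. Download will overwrite an existing file. Aborting.");
    }
    return dataFile;
  }
}

With this rule in place, the new TestOzoneShell case can pass baseDir itself as the "-file" argument and still find the downloaded key at baseDir/keyName.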
[44/50] hadoop git commit: HADOOP-15547. WASB: improve listStatus
performance. Contributed by Thomas Marquardt.
Posted by zh...@apache.org.
HADOOP-15547. WASB: improve listStatus performance.
Contributed by Thomas Marquardt.
(cherry picked from commit 749fff577ed9afb4ef8a54b8948f74be083cc620)
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/45d9568a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/45d9568a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/45d9568a
Branch: refs/heads/HDFS-13572
Commit: 45d9568aaaf532a6da11bd7c1844ff81bf66bab1
Parents: 5836e0a
Author: Steve Loughran <st...@apache.org>
Authored: Thu Jul 19 12:31:19 2018 -0700
Committer: Steve Loughran <st...@apache.org>
Committed: Thu Jul 19 12:31:19 2018 -0700
----------------------------------------------------------------------
.../dev-support/findbugs-exclude.xml | 10 +
hadoop-tools/hadoop-azure/pom.xml | 12 +
.../fs/azure/AzureNativeFileSystemStore.java | 182 ++++-----
.../apache/hadoop/fs/azure/FileMetadata.java | 77 ++--
.../hadoop/fs/azure/NativeAzureFileSystem.java | 376 ++++++++-----------
.../hadoop/fs/azure/NativeFileSystemStore.java | 15 +-
.../apache/hadoop/fs/azure/PartialListing.java | 61 ---
.../hadoop/fs/azure/ITestListPerformance.java | 196 ++++++++++
8 files changed, 514 insertions(+), 415 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/45d9568a/hadoop-tools/hadoop-azure/dev-support/findbugs-exclude.xml
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure/dev-support/findbugs-exclude.xml b/hadoop-tools/hadoop-azure/dev-support/findbugs-exclude.xml
index cde1734..38de35e 100644
--- a/hadoop-tools/hadoop-azure/dev-support/findbugs-exclude.xml
+++ b/hadoop-tools/hadoop-azure/dev-support/findbugs-exclude.xml
@@ -47,4 +47,14 @@
<Bug pattern="WMI_WRONG_MAP_ITERATOR" />
<Priority value="2" />
</Match>
+
+ <!-- FileMetadata is used internally for storing metadata but also
+ subclasses FileStatus to reduce allocations when listing a large number
+ of files. When it is returned to an external caller as a FileStatus, the
+ extra metadata is no longer useful and we want the equals and hashCode
+ methods of FileStatus to be used. -->
+ <Match>
+ <Class name="org.apache.hadoop.fs.azure.FileMetadata" />
+ <Bug pattern="EQ_DOESNT_OVERRIDE_EQUALS" />
+ </Match>
</FindBugsFilter>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/45d9568a/hadoop-tools/hadoop-azure/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure/pom.xml b/hadoop-tools/hadoop-azure/pom.xml
index 44b67a0..52b5b72 100644
--- a/hadoop-tools/hadoop-azure/pom.xml
+++ b/hadoop-tools/hadoop-azure/pom.xml
@@ -43,6 +43,8 @@
<fs.azure.scale.test.huge.partitionsize>unset</fs.azure.scale.test.huge.partitionsize>
<!-- Timeout in seconds for scale tests.-->
<fs.azure.scale.test.timeout>7200</fs.azure.scale.test.timeout>
+ <fs.azure.scale.test.list.performance.threads>10</fs.azure.scale.test.list.performance.threads>
+ <fs.azure.scale.test.list.performance.files>1000</fs.azure.scale.test.list.performance.files>
</properties>
<build>
@@ -298,6 +300,8 @@
<fs.azure.scale.test.huge.filesize>${fs.azure.scale.test.huge.filesize}</fs.azure.scale.test.huge.filesize>
<fs.azure.scale.test.huge.huge.partitionsize>${fs.azure.scale.test.huge.partitionsize}</fs.azure.scale.test.huge.huge.partitionsize>
<fs.azure.scale.test.timeout>${fs.azure.scale.test.timeout}</fs.azure.scale.test.timeout>
+ <fs.azure.scale.test.list.performance.threads>${fs.azure.scale.test.list.performance.threads}</fs.azure.scale.test.list.performance.threads>
+ <fs.azure.scale.test.list.performance.files>${fs.azure.scale.test.list.performance.files}</fs.azure.scale.test.list.performance.files>
</systemPropertyVariables>
<includes>
<include>**/Test*.java</include>
@@ -326,6 +330,8 @@
<fs.azure.scale.test.huge.filesize>${fs.azure.scale.test.huge.filesize}</fs.azure.scale.test.huge.filesize>
<fs.azure.scale.test.huge.huge.partitionsize>${fs.azure.scale.test.huge.partitionsize}</fs.azure.scale.test.huge.huge.partitionsize>
<fs.azure.scale.test.timeout>${fs.azure.scale.test.timeout}</fs.azure.scale.test.timeout>
+ <fs.azure.scale.test.list.performance.threads>${fs.azure.scale.test.list.performance.threads}</fs.azure.scale.test.list.performance.threads>
+ <fs.azure.scale.test.list.performance.files>${fs.azure.scale.test.list.performance.files}</fs.azure.scale.test.list.performance.files>
</systemPropertyVariables>
<includes>
<include>**/TestRollingWindowAverage*.java</include>
@@ -367,6 +373,8 @@
<fs.azure.scale.test.huge.filesize>${fs.azure.scale.test.huge.filesize}</fs.azure.scale.test.huge.filesize>
<fs.azure.scale.test.huge.huge.partitionsize>${fs.azure.scale.test.huge.partitionsize}</fs.azure.scale.test.huge.huge.partitionsize>
<fs.azure.scale.test.timeout>${fs.azure.scale.test.timeout}</fs.azure.scale.test.timeout>
+ <fs.azure.scale.test.list.performance.threads>${fs.azure.scale.test.list.performance.threads}</fs.azure.scale.test.list.performance.threads>
+ <fs.azure.scale.test.list.performance.files>${fs.azure.scale.test.list.performance.files}</fs.azure.scale.test.list.performance.files>
</systemPropertyVariables>
<!-- Some tests cannot run in parallel. Tests that cover -->
<!-- access to the root directory must run in isolation -->
@@ -412,6 +420,8 @@
<fs.azure.scale.test.huge.filesize>${fs.azure.scale.test.huge.filesize}</fs.azure.scale.test.huge.filesize>
<fs.azure.scale.test.huge.huge.partitionsize>${fs.azure.scale.test.huge.partitionsize}</fs.azure.scale.test.huge.huge.partitionsize>
<fs.azure.scale.test.timeout>${fs.azure.scale.test.timeout}</fs.azure.scale.test.timeout>
+ <fs.azure.scale.test.list.performance.threads>${fs.azure.scale.test.list.performance.threads}</fs.azure.scale.test.list.performance.threads>
+ <fs.azure.scale.test.list.performance.files>${fs.azure.scale.test.list.performance.files}</fs.azure.scale.test.list.performance.files>
</systemPropertyVariables>
<includes>
<include>**/ITestFileSystemOperationsExceptionHandlingMultiThreaded.java</include>
@@ -454,6 +464,8 @@
<fs.azure.scale.test.enabled>${fs.azure.scale.test.enabled}</fs.azure.scale.test.enabled>
<fs.azure.scale.test.huge.filesize>${fs.azure.scale.test.huge.filesize}</fs.azure.scale.test.huge.filesize>
<fs.azure.scale.test.timeout>${fs.azure.scale.test.timeout}</fs.azure.scale.test.timeout>
+ <fs.azure.scale.test.list.performance.threads>${fs.azure.scale.test.list.performance.threads}</fs.azure.scale.test.list.performance.threads>
+ <fs.azure.scale.test.list.performance.files>${fs.azure.scale.test.list.performance.files}</fs.azure.scale.test.list.performance.files>
</systemPropertyVariables>
<forkedProcessTimeoutInSeconds>${fs.azure.scale.test.timeout}</forkedProcessTimeoutInSeconds>
<trimStackTrace>false</trimStackTrace>
http://git-wip-us.apache.org/repos/asf/hadoop/blob/45d9568a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
index 197ab22..d2f9ca6 100644
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
@@ -30,7 +30,6 @@ import java.net.URISyntaxException;
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.security.InvalidKeyException;
-import java.util.ArrayList;
import java.util.Calendar;
import java.util.Date;
import java.util.EnumSet;
@@ -128,6 +127,7 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
// computed as min(2*cpu,8)
private static final String KEY_CONCURRENT_CONNECTION_VALUE_OUT = "fs.azure.concurrentRequestCount.out";
+ private static final String HADOOP_BLOCK_SIZE_PROPERTY_NAME = "fs.azure.block.size";
private static final String KEY_STREAM_MIN_READ_SIZE = "fs.azure.read.request.size";
private static final String KEY_STORAGE_CONNECTION_TIMEOUT = "fs.azure.storage.timeout";
private static final String KEY_WRITE_BLOCK_SIZE = "fs.azure.write.request.size";
@@ -252,6 +252,7 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
// Default block sizes
public static final int DEFAULT_DOWNLOAD_BLOCK_SIZE = 4 * 1024 * 1024;
public static final int DEFAULT_UPLOAD_BLOCK_SIZE = 4 * 1024 * 1024;
+ public static final long DEFAULT_HADOOP_BLOCK_SIZE = 512 * 1024 * 1024L;
private static final int DEFAULT_INPUT_STREAM_VERSION = 2;
@@ -313,6 +314,7 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
private boolean tolerateOobAppends = DEFAULT_READ_TOLERATE_CONCURRENT_APPEND;
+ private long hadoopBlockSize = DEFAULT_HADOOP_BLOCK_SIZE;
private int downloadBlockSizeBytes = DEFAULT_DOWNLOAD_BLOCK_SIZE;
private int uploadBlockSizeBytes = DEFAULT_UPLOAD_BLOCK_SIZE;
private int inputStreamVersion = DEFAULT_INPUT_STREAM_VERSION;
@@ -740,6 +742,8 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
KEY_STREAM_MIN_READ_SIZE, DEFAULT_DOWNLOAD_BLOCK_SIZE);
this.uploadBlockSizeBytes = sessionConfiguration.getInt(
KEY_WRITE_BLOCK_SIZE, DEFAULT_UPLOAD_BLOCK_SIZE);
+ this.hadoopBlockSize = sessionConfiguration.getLong(
+ HADOOP_BLOCK_SIZE_PROPERTY_NAME, DEFAULT_HADOOP_BLOCK_SIZE);
this.inputStreamVersion = sessionConfiguration.getInt(
KEY_INPUT_STREAM_VERSION, DEFAULT_INPUT_STREAM_VERSION);
@@ -1234,7 +1238,14 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
return false;
}
-
+ /**
+ * Returns the file block size. This is a fake value used for integration
+ * of the Azure store with Hadoop.
+ */
+ @Override
+ public long getHadoopBlockSize() {
+ return hadoopBlockSize;
+ }
/**
* This should be called from any method that does any modifications to the
@@ -2066,7 +2077,7 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
// The key refers to root directory of container.
// Set the modification time for root to zero.
return new FileMetadata(key, 0, defaultPermissionNoBlobMetadata(),
- BlobMaterialization.Implicit);
+ BlobMaterialization.Implicit, hadoopBlockSize);
}
CloudBlobWrapper blob = getBlobReference(key);
@@ -2086,7 +2097,7 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
if (retrieveFolderAttribute(blob)) {
LOG.debug("{} is a folder blob.", key);
return new FileMetadata(key, properties.getLastModified().getTime(),
- getPermissionStatus(blob), BlobMaterialization.Explicit);
+ getPermissionStatus(blob), BlobMaterialization.Explicit, hadoopBlockSize);
} else {
LOG.debug("{} is a normal blob.", key);
@@ -2095,7 +2106,7 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
key, // Always return denormalized key with metadata.
getDataLength(blob, properties),
properties.getLastModified().getTime(),
- getPermissionStatus(blob));
+ getPermissionStatus(blob), hadoopBlockSize);
}
} catch(StorageException e){
if (!NativeAzureFileSystemHelper.isFileNotFoundException(e)) {
@@ -2129,7 +2140,7 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
BlobProperties properties = blob.getProperties();
return new FileMetadata(key, properties.getLastModified().getTime(),
- getPermissionStatus(blob), BlobMaterialization.Implicit);
+ getPermissionStatus(blob), BlobMaterialization.Implicit, hadoopBlockSize);
}
}
@@ -2178,46 +2189,13 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
}
@Override
- public PartialListing list(String prefix, final int maxListingCount,
+ public FileMetadata[] list(String prefix, final int maxListingCount,
final int maxListingDepth) throws IOException {
- return list(prefix, maxListingCount, maxListingDepth, null);
- }
-
- @Override
- public PartialListing list(String prefix, final int maxListingCount,
- final int maxListingDepth, String priorLastKey) throws IOException {
- return list(prefix, PATH_DELIMITER, maxListingCount, maxListingDepth,
- priorLastKey);
+ return listInternal(prefix, maxListingCount, maxListingDepth);
}
- @Override
- public PartialListing listAll(String prefix, final int maxListingCount,
- final int maxListingDepth, String priorLastKey) throws IOException {
- return list(prefix, null, maxListingCount, maxListingDepth, priorLastKey);
- }
-
- /**
- * Searches the given list of {@link FileMetadata} objects for a directory
- * with the given key.
- *
- * @param list
- * The list to search.
- * @param key
- * The key to search for.
- * @return The wanted directory, or null if not found.
- */
- private static FileMetadata getFileMetadataInList(
- final Iterable<FileMetadata> list, String key) {
- for (FileMetadata current : list) {
- if (current.getKey().equals(key)) {
- return current;
- }
- }
- return null;
- }
-
- private PartialListing list(String prefix, String delimiter,
- final int maxListingCount, final int maxListingDepth, String priorLastKey)
+ private FileMetadata[] listInternal(String prefix, final int maxListingCount,
+ final int maxListingDepth)
throws IOException {
try {
checkContainer(ContainerAccessType.PureRead);
@@ -2241,7 +2219,8 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
objects = listRootBlobs(prefix, true, enableFlatListing);
}
- ArrayList<FileMetadata> fileMetadata = new ArrayList<FileMetadata>();
+ HashMap<String, FileMetadata> fileMetadata = new HashMap<>(256);
+
for (ListBlobItem blobItem : objects) {
// Check that the maximum listing count is not exhausted.
//
@@ -2261,25 +2240,37 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
FileMetadata metadata;
if (retrieveFolderAttribute(blob)) {
- metadata = new FileMetadata(blobKey,
- properties.getLastModified().getTime(),
- getPermissionStatus(blob),
- BlobMaterialization.Explicit);
+ metadata = new FileMetadata(blobKey,
+ properties.getLastModified().getTime(),
+ getPermissionStatus(blob),
+ BlobMaterialization.Explicit,
+ hadoopBlockSize);
} else {
- metadata = new FileMetadata(
- blobKey,
- getDataLength(blob, properties),
- properties.getLastModified().getTime(),
- getPermissionStatus(blob));
+ metadata = new FileMetadata(
+ blobKey,
+ getDataLength(blob, properties),
+ properties.getLastModified().getTime(),
+ getPermissionStatus(blob),
+ hadoopBlockSize);
}
+ // Add the metadata but remove duplicates. Note that the Azure
+ // storage Java SDK returns two types of entries: CloudBlobWrapper
+ // and CloudDirectoryWrapper. In the case where WASB generated the
+ // data, there will be an empty blob for each "directory", and we will
+ // receive a CloudBlobWrapper. If there are also files within this
+ // "directory", we will also receive a CloudDirectoryWrapper. To
+ // complicate matters, the data may not be generated by WASB, in
+ // which case we may not have an empty blob for each "directory".
+ // So, sometimes we receive both a CloudBlobWrapper and a
+ // CloudDirectoryWrapper for each directory, and sometimes we receive
+ // one or the other but not both. We remove duplicates, but
+ // prefer CloudBlobWrapper over CloudDirectoryWrapper.
+ // Furthermore, it is very unfortunate that the list results are not
+ // ordered, and it is a partial list which uses continuation. So
+ // the HashMap is the best structure to remove the duplicates, despite
+ // its potential large size.
+ fileMetadata.put(blobKey, metadata);
- // Add the metadata to the list, but remove any existing duplicate
- // entries first that we may have added by finding nested files.
- FileMetadata existing = getFileMetadataInList(fileMetadata, blobKey);
- if (existing != null) {
- fileMetadata.remove(existing);
- }
- fileMetadata.add(metadata);
} else if (blobItem instanceof CloudBlobDirectoryWrapper) {
CloudBlobDirectoryWrapper directory = (CloudBlobDirectoryWrapper) blobItem;
// Determine format of directory name depending on whether an absolute
@@ -2298,12 +2289,15 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
// inherit the permissions of the first non-directory blob.
// Also, getting a proper value for last-modified is tricky.
FileMetadata directoryMetadata = new FileMetadata(dirKey, 0,
- defaultPermissionNoBlobMetadata(), BlobMaterialization.Implicit);
+ defaultPermissionNoBlobMetadata(), BlobMaterialization.Implicit,
+ hadoopBlockSize);
// Add the directory metadata to the list only if it's not already
- // there.
- if (getFileMetadataInList(fileMetadata, dirKey) == null) {
- fileMetadata.add(directoryMetadata);
+ // there. See earlier note, we prefer CloudBlobWrapper over
+ // CloudDirectoryWrapper because it may have additional metadata (
+ // properties and ACLs).
+ if (!fileMetadata.containsKey(dirKey)) {
+ fileMetadata.put(dirKey, directoryMetadata);
}
if (!enableFlatListing) {
@@ -2314,13 +2308,7 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
}
}
}
- // Note: Original code indicated that this may be a hack.
- priorLastKey = null;
- PartialListing listing = new PartialListing(priorLastKey,
- fileMetadata.toArray(new FileMetadata[] {}),
- 0 == fileMetadata.size() ? new String[] {}
- : new String[] { prefix });
- return listing;
+ return fileMetadata.values().toArray(new FileMetadata[fileMetadata.size()]);
} catch (Exception e) {
// Re-throw as an Azure storage exception.
//
@@ -2334,13 +2322,13 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
* the sorted order of the blob names.
*
* @param aCloudBlobDirectory Azure blob directory
- * @param aFileMetadataList a list of file metadata objects for each
+ * @param metadataHashMap a map of file metadata objects for each
* non-directory blob.
* @param maxListingCount maximum length of the built up list.
*/
private void buildUpList(CloudBlobDirectoryWrapper aCloudBlobDirectory,
- ArrayList<FileMetadata> aFileMetadataList, final int maxListingCount,
- final int maxListingDepth) throws Exception {
+ HashMap<String, FileMetadata> metadataHashMap, final int maxListingCount,
+ final int maxListingDepth) throws Exception {
// Push the blob directory onto the stack.
//
@@ -2371,12 +2359,12 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
// (2) maxListingCount > 0 implies that the number of items in the
// metadata list is less than the max listing count.
while (null != blobItemIterator
- && (maxListingCount <= 0 || aFileMetadataList.size() < maxListingCount)) {
+ && (maxListingCount <= 0 || metadataHashMap.size() < maxListingCount)) {
while (blobItemIterator.hasNext()) {
// Check if the count of items on the list exhausts the maximum
// listing count.
//
- if (0 < maxListingCount && aFileMetadataList.size() >= maxListingCount) {
+ if (0 < maxListingCount && metadataHashMap.size() >= maxListingCount) {
break;
}
@@ -2399,22 +2387,34 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
metadata = new FileMetadata(blobKey,
properties.getLastModified().getTime(),
getPermissionStatus(blob),
- BlobMaterialization.Explicit);
+ BlobMaterialization.Explicit,
+ hadoopBlockSize);
} else {
metadata = new FileMetadata(
blobKey,
getDataLength(blob, properties),
properties.getLastModified().getTime(),
- getPermissionStatus(blob));
+ getPermissionStatus(blob),
+ hadoopBlockSize);
}
- // Add the directory metadata to the list only if it's not already
- // there.
- FileMetadata existing = getFileMetadataInList(aFileMetadataList, blobKey);
- if (existing != null) {
- aFileMetadataList.remove(existing);
- }
- aFileMetadataList.add(metadata);
+ // Add the metadata but remove duplicates. Note that the Azure
+ // storage Java SDK returns two types of entries: CloudBlobWrapper
+ // and CloudDirectoryWrapper. In the case where WASB generated the
+ // data, there will be an empty blob for each "directory", and we will
+ // receive a CloudBlobWrapper. If there are also files within this
+ // "directory", we will also receive a CloudDirectoryWrapper. To
+ // complicate matters, the data may not be generated by WASB, in
+ // which case we may not have an empty blob for each "directory".
+ // So, sometimes we receive both a CloudBlobWrapper and a
+ // CloudDirectoryWrapper for each directory, and sometimes we receive
+ // one or the other but not both. We remove duplicates, but
+ // prefer CloudBlobWrapper over CloudDirectoryWrapper.
+ // Furthermore, it is very unfortunate that the list results are not
+ // ordered, and it is a partial list which uses continuation. So
+ // the HashMap is the best structure to remove the duplicates, despite
+ // its potential large size.
+ metadataHashMap.put(blobKey, metadata);
} else if (blobItem instanceof CloudBlobDirectoryWrapper) {
CloudBlobDirectoryWrapper directory = (CloudBlobDirectoryWrapper) blobItem;
@@ -2439,7 +2439,12 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
// absolute path is being used or not.
String dirKey = normalizeKey(directory);
- if (getFileMetadataInList(aFileMetadataList, dirKey) == null) {
+ // Add the directory metadata to the list only if it's not already
+ // there. See earlier note, we prefer CloudBlobWrapper over
+ // CloudDirectoryWrapper because it may have additional metadata (
+ // properties and ACLs).
+ if (!metadataHashMap.containsKey(dirKey)) {
+
// Reached the targeted listing depth. Return metadata for the
// directory using default permissions.
//
@@ -2450,10 +2455,11 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
FileMetadata directoryMetadata = new FileMetadata(dirKey,
0,
defaultPermissionNoBlobMetadata(),
- BlobMaterialization.Implicit);
+ BlobMaterialization.Implicit,
+ hadoopBlockSize);
// Add the directory metadata to the list.
- aFileMetadataList.add(directoryMetadata);
+ metadataHashMap.put(dirKey, directoryMetadata);
}
}
}
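The comment added to listInternal() and buildUpList() above describes the de-duplication rule in prose: the unordered, paged listing may contain both a blob entry and a directory entry for the same key, and the blob entry should win because it can carry real properties and ACLs. A minimal sketch of that rule in isolation (Entry and Kind are stand-ins for the SDK wrapper types, not the actual Azure classes):

import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of the de-duplication rule described above. Entry and Kind
 * stand in for CloudBlobWrapper/CloudBlobDirectoryWrapper; the blob
 * form wins because it may carry real properties and ACLs.
 */
public class ListingDedup {

  enum Kind { BLOB, DIRECTORY }

  static final class Entry {
    final String key;
    final Kind kind;

    Entry(String key, Kind kind) {
      this.key = key;
      this.kind = kind;
    }
  }

  public static Map<String, Entry> dedup(Iterable<Entry> rawListing) {
    // The listing is unordered and uses continuation, so a map keyed by
    // blob key is the simplest structure that removes duplicates in one pass.
    Map<String, Entry> byKey = new HashMap<>(256);
    for (Entry e : rawListing) {
      if (e.kind == Kind.BLOB) {
        // A blob entry always replaces a directory placeholder.
        byKey.put(e.key, e);
      } else {
        // A directory entry only fills a gap, never overwrites.
        byKey.putIfAbsent(e.key, e);
      }
    }
    return byKey;
  }
}

The HashMap also replaces the old linear getFileMetadataInList() scan, turning the per-entry duplicate check from O(n) into O(1), which is what matters for the large listings this change targets.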
http://git-wip-us.apache.org/repos/asf/hadoop/blob/45d9568a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/FileMetadata.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/FileMetadata.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/FileMetadata.java
index 5085a0f..cbf3ab9 100644
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/FileMetadata.java
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/FileMetadata.java
@@ -19,6 +19,8 @@
package org.apache.hadoop.fs.azure;
import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.PermissionStatus;
/**
@@ -27,12 +29,9 @@ import org.apache.hadoop.fs.permission.PermissionStatus;
* </p>
*/
@InterfaceAudience.Private
-class FileMetadata {
- private final String key;
- private final long length;
- private final long lastModified;
- private final boolean isDir;
- private final PermissionStatus permissionStatus;
+class FileMetadata extends FileStatus {
+ // this is not final so that it can be cleared to save memory when not needed.
+ private String key;
private final BlobMaterialization blobMaterialization;
/**
@@ -46,16 +45,19 @@ class FileMetadata {
* The last modified date (milliseconds since January 1, 1970 UTC.)
* @param permissionStatus
* The permission for the file.
+ * @param blockSize
+ * The Hadoop file block size.
*/
public FileMetadata(String key, long length, long lastModified,
- PermissionStatus permissionStatus) {
+ PermissionStatus permissionStatus, final long blockSize) {
+ super(length, false, 1, blockSize, lastModified, 0,
+ permissionStatus.getPermission(),
+ permissionStatus.getUserName(),
+ permissionStatus.getGroupName(),
+ null);
this.key = key;
- this.length = length;
- this.lastModified = lastModified;
- this.isDir = false;
- this.permissionStatus = permissionStatus;
- this.blobMaterialization = BlobMaterialization.Explicit; // File are never
- // implicit.
+ // Files are never implicit.
+ this.blobMaterialization = BlobMaterialization.Explicit;
}
/**
@@ -70,37 +72,42 @@ class FileMetadata {
* @param blobMaterialization
* Whether this is an implicit (no real blob backing it) or explicit
* directory.
+ * @param blockSize
+ * The Hadoop file block size.
*/
public FileMetadata(String key, long lastModified,
- PermissionStatus permissionStatus, BlobMaterialization blobMaterialization) {
+ PermissionStatus permissionStatus, BlobMaterialization blobMaterialization,
+ final long blockSize) {
+ super(0, true, 1, blockSize, lastModified, 0,
+ permissionStatus.getPermission(),
+ permissionStatus.getUserName(),
+ permissionStatus.getGroupName(),
+ null);
this.key = key;
- this.isDir = true;
- this.length = 0;
- this.lastModified = lastModified;
- this.permissionStatus = permissionStatus;
this.blobMaterialization = blobMaterialization;
}
- public boolean isDir() {
- return isDir;
+ @Override
+ public Path getPath() {
+ Path p = super.getPath();
+ if (p == null) {
+ // Don't store this yet to reduce memory usage, as it will
+ // stay in the Eden Space and later we will update it
+ // with the full canonicalized path.
+ p = NativeAzureFileSystem.keyToPath(key);
+ }
+ return p;
}
+ /**
+ * Returns the Azure storage key for the file. Used internally by the framework.
+ *
+ * @return The key for the file.
+ */
public String getKey() {
return key;
}
- public long getLength() {
- return length;
- }
-
- public long getLastModified() {
- return lastModified;
- }
-
- public PermissionStatus getPermissionStatus() {
- return permissionStatus;
- }
-
/**
* Indicates whether this is an implicit directory (no real blob backing it)
* or an explicit one.
@@ -112,9 +119,7 @@ class FileMetadata {
return blobMaterialization;
}
- @Override
- public String toString() {
- return "FileMetadata[" + key + ", " + length + ", " + lastModified + ", "
- + permissionStatus + "]";
+ void removeKey() {
+ key = null;
}
}
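Two memory-oriented details in the reworked FileMetadata are worth spelling out: getPath() derives the Path on demand rather than storing one per entry, and removeKey() lets the framework drop the key string once it has served its purpose. A simplified sketch of the same lazy-field pattern (LazyPathHolder is illustrative only, not the Hadoop class):

/**
 * Simplified sketch of the lazy-field pattern used by FileMetadata:
 * derive an expensive value on demand instead of storing it for every
 * entry of a large listing, and allow the source field to be dropped
 * once the canonical value has been recorded.
 */
public class LazyPathHolder {
  // Cheap source value; cleared once it is no longer needed.
  private String key;
  // Canonical derived value, set later by the caller.
  private String canonicalPath;

  public LazyPathHolder(String key) {
    this.key = key;
  }

  public String getPath() {
    if (canonicalPath != null) {
      return canonicalPath;
    }
    // Derive on demand; the short-lived result can die young rather
    // than being retained for every listing entry.
    return "/" + key;
  }

  /** Record the canonical path and release the source key. */
  public void canonicalize(String fullPath) {
    this.canonicalPath = fullPath;
    this.key = null;
  }
}

The payoff, as the getPath() comment in the diff notes, is that the transient derived object stays in the Eden space instead of being retained for every entry of a large listing.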
http://git-wip-us.apache.org/repos/asf/hadoop/blob/45d9568a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
index 5202762..f8962d9 100644
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
@@ -31,9 +31,7 @@ import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.EnumSet;
-import java.util.Set;
import java.util.TimeZone;
-import java.util.TreeSet;
import java.util.UUID;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.regex.Matcher;
@@ -129,20 +127,12 @@ public class NativeAzureFileSystem extends FileSystem {
this.dstKey = dstKey;
this.folderLease = lease;
this.fs = fs;
- ArrayList<FileMetadata> fileMetadataList = new ArrayList<FileMetadata>();
// List all the files in the folder.
long start = Time.monotonicNow();
- String priorLastKey = null;
- do {
- PartialListing listing = fs.getStoreInterface().listAll(srcKey, AZURE_LIST_ALL,
- AZURE_UNBOUNDED_DEPTH, priorLastKey);
- for(FileMetadata file : listing.getFiles()) {
- fileMetadataList.add(file);
- }
- priorLastKey = listing.getPriorLastKey();
- } while (priorLastKey != null);
- fileMetadata = fileMetadataList.toArray(new FileMetadata[fileMetadataList.size()]);
+ fileMetadata = fs.getStoreInterface().list(srcKey, AZURE_LIST_ALL,
+ AZURE_UNBOUNDED_DEPTH);
+
long end = Time.monotonicNow();
LOG.debug("Time taken to list {} blobs for rename operation is: {} ms", fileMetadata.length, (end - start));
@@ -669,7 +659,6 @@ public class NativeAzureFileSystem extends FileSystem {
public static final Logger LOG = LoggerFactory.getLogger(NativeAzureFileSystem.class);
- static final String AZURE_BLOCK_SIZE_PROPERTY_NAME = "fs.azure.block.size";
/**
* The time span in seconds before which we consider a temp blob to be
* dangling (not being actively uploaded to) and up for reclamation.
@@ -685,8 +674,6 @@ public class NativeAzureFileSystem extends FileSystem {
private static final int AZURE_LIST_ALL = -1;
private static final int AZURE_UNBOUNDED_DEPTH = -1;
- private static final long MAX_AZURE_BLOCK_SIZE = 512 * 1024 * 1024L;
-
/**
* The configuration property that determines which group owns files created
* in WASB.
@@ -1196,7 +1183,6 @@ public class NativeAzureFileSystem extends FileSystem {
private NativeFileSystemStore store;
private AzureNativeFileSystemStore actualStore;
private Path workingDir;
- private long blockSize = MAX_AZURE_BLOCK_SIZE;
private AzureFileSystemInstrumentation instrumentation;
private String metricsSourceName;
private boolean isClosed = false;
@@ -1361,13 +1347,10 @@ public class NativeAzureFileSystem extends FileSystem {
this.uri = URI.create(uri.getScheme() + "://" + uri.getAuthority());
this.workingDir = new Path("/user", UserGroupInformation.getCurrentUser()
.getShortUserName()).makeQualified(getUri(), getWorkingDirectory());
- this.blockSize = conf.getLong(AZURE_BLOCK_SIZE_PROPERTY_NAME,
- MAX_AZURE_BLOCK_SIZE);
this.appendSupportEnabled = conf.getBoolean(APPEND_SUPPORT_ENABLE_PROPERTY_NAME, false);
LOG.debug("NativeAzureFileSystem. Initializing.");
- LOG.debug(" blockSize = {}",
- conf.getLong(AZURE_BLOCK_SIZE_PROPERTY_NAME, MAX_AZURE_BLOCK_SIZE));
+ LOG.debug(" blockSize = {}", store.getHadoopBlockSize());
// Initialize thread counts from user configuration
deleteThreadCount = conf.getInt(AZURE_DELETE_THREADS, DEFAULT_AZURE_DELETE_THREADS);
@@ -1491,7 +1474,7 @@ public class NativeAzureFileSystem extends FileSystem {
}
}
- private static Path keyToPath(String key) {
+ static Path keyToPath(String key) {
if (key.equals("/")) {
return new Path("/"); // container
}
@@ -1599,7 +1582,7 @@ public class NativeAzureFileSystem extends FileSystem {
throw new FileNotFoundException(f.toString());
}
- if (meta.isDir()) {
+ if (meta.isDirectory()) {
throw new FileNotFoundException(f.toString()
+ " is a directory not a file.");
}
@@ -1815,7 +1798,7 @@ public class NativeAzureFileSystem extends FileSystem {
FileMetadata existingMetadata = store.retrieveMetadata(key);
if (existingMetadata != null) {
- if (existingMetadata.isDir()) {
+ if (existingMetadata.isDirectory()) {
throw new FileAlreadyExistsException("Cannot create file " + f
+ "; already exists as a directory.");
}
@@ -1833,7 +1816,7 @@ public class NativeAzureFileSystem extends FileSystem {
// already exists.
String parentKey = pathToKey(parentFolder);
FileMetadata parentMetadata = store.retrieveMetadata(parentKey);
- if (parentMetadata != null && parentMetadata.isDir() &&
+ if (parentMetadata != null && parentMetadata.isDirectory() &&
parentMetadata.getBlobMaterialization() == BlobMaterialization.Explicit) {
if (parentFolderLease != null) {
store.updateFolderLastModifiedTime(parentKey, parentFolderLease);
@@ -1850,7 +1833,7 @@ public class NativeAzureFileSystem extends FileSystem {
firstExisting = firstExisting.getParent();
metadata = store.retrieveMetadata(pathToKey(firstExisting));
}
- mkdirs(parentFolder, metadata.getPermissionStatus().getPermission(), true);
+ mkdirs(parentFolder, metadata.getPermission(), true);
}
}
@@ -1988,7 +1971,7 @@ public class NativeAzureFileSystem extends FileSystem {
+ parentPath + " whose metadata cannot be retrieved. Can't resolve");
}
- if (!parentMetadata.isDir()) {
+ if (!parentMetadata.isDirectory()) {
// Invalid state: the parent path is actually a file. Throw.
throw new AzureException("File " + f + " has a parent directory "
+ parentPath + " which is also a file. Can't resolve.");
@@ -1997,7 +1980,7 @@ public class NativeAzureFileSystem extends FileSystem {
// The path exists, determine if it is a folder containing objects,
// an empty folder, or a simple file and take the appropriate actions.
- if (!metaFile.isDir()) {
+ if (!metaFile.isDirectory()) {
// The path specifies a file. We need to check the parent path
// to make sure it's a proper materialized directory before we
// delete the file. Otherwise we may get into a situation where
@@ -2114,9 +2097,9 @@ public class NativeAzureFileSystem extends FileSystem {
AzureFileSystemThreadTask task = new AzureFileSystemThreadTask() {
@Override
public boolean execute(FileMetadata file) throws IOException{
- if (!deleteFile(file.getKey(), file.isDir())) {
+ if (!deleteFile(file.getKey(), file.isDirectory())) {
LOG.warn("Attempt to delete non-existent {} {}",
- file.isDir() ? "directory" : "file",
+ file.isDirectory() ? "directory" : "file",
file.getKey());
}
return true;
@@ -2138,7 +2121,7 @@ public class NativeAzureFileSystem extends FileSystem {
// Delete the current directory if all underlying contents are deleted
if (isPartialDelete || (store.retrieveMetadata(metaFile.getKey()) != null
- && !deleteFile(metaFile.getKey(), metaFile.isDir()))) {
+ && !deleteFile(metaFile.getKey(), metaFile.isDirectory()))) {
LOG.error("Failed delete directory : {}", f);
return false;
}
@@ -2191,7 +2174,7 @@ public class NativeAzureFileSystem extends FileSystem {
// The path exists, determine if it is a folder containing objects,
// an empty folder, or a simple file and take the appropriate actions.
- if (!metaFile.isDir()) {
+ if (!metaFile.isDirectory()) {
// The path specifies a file. We need to check the parent path
// to make sure it's a proper materialized directory before we
// delete the file. Otherwise we may get into a situation where
@@ -2234,7 +2217,7 @@ public class NativeAzureFileSystem extends FileSystem {
+ parentPath + " whose metadata cannot be retrieved. Can't resolve");
}
- if (!parentMetadata.isDir()) {
+ if (!parentMetadata.isDirectory()) {
// Invalid state: the parent path is actually a file. Throw.
throw new AzureException("File " + f + " has a parent directory "
+ parentPath + " which is also a file. Can't resolve.");
@@ -2319,38 +2302,27 @@ public class NativeAzureFileSystem extends FileSystem {
}
}
- // List all the blobs in the current folder.
- String priorLastKey = null;
-
// Start time for list operation
long start = Time.monotonicNow();
- ArrayList<FileMetadata> fileMetadataList = new ArrayList<FileMetadata>();
+ final FileMetadata[] contents;
// List all the files in the folder with AZURE_UNBOUNDED_DEPTH depth.
- do {
- try {
- PartialListing listing = store.listAll(key, AZURE_LIST_ALL,
- AZURE_UNBOUNDED_DEPTH, priorLastKey);
- for(FileMetadata file : listing.getFiles()) {
- fileMetadataList.add(file);
- }
- priorLastKey = listing.getPriorLastKey();
- } catch (IOException e) {
- Throwable innerException = checkForAzureStorageException(e);
-
- if (innerException instanceof StorageException
- && isFileNotFoundException((StorageException) innerException)) {
- return false;
- }
+ try {
+ contents = store.list(key, AZURE_LIST_ALL,
+ AZURE_UNBOUNDED_DEPTH);
+ } catch (IOException e) {
+ Throwable innerException = checkForAzureStorageException(e);
- throw e;
+ if (innerException instanceof StorageException
+ && isFileNotFoundException((StorageException) innerException)) {
+ return false;
}
- } while (priorLastKey != null);
- long end = Time.monotonicNow();
- LOG.debug("Time taken to list {} blobs for delete operation: {} ms", fileMetadataList.size(), (end - start));
+ throw e;
+ }
- final FileMetadata[] contents = fileMetadataList.toArray(new FileMetadata[fileMetadataList.size()]);
+ long end = Time.monotonicNow();
+ LOG.debug("Time taken to list {} blobs for delete operation: {} ms", contents.length, (end - start));
if (contents.length > 0) {
if (!recursive) {
@@ -2365,9 +2337,9 @@ public class NativeAzureFileSystem extends FileSystem {
AzureFileSystemThreadTask task = new AzureFileSystemThreadTask() {
@Override
public boolean execute(FileMetadata file) throws IOException{
- if (!deleteFile(file.getKey(), file.isDir())) {
+ if (!deleteFile(file.getKey(), file.isDirectory())) {
LOG.warn("Attempt to delete non-existent {} {}",
- file.isDir() ? "directory" : "file",
+ file.isDirectory() ? "directory" : "file",
file.getKey());
}
return true;
@@ -2384,7 +2356,7 @@ public class NativeAzureFileSystem extends FileSystem {
// Delete the current directory
if (store.retrieveMetadata(metaFile.getKey()) != null
- && !deleteFile(metaFile.getKey(), metaFile.isDir())) {
+ && !deleteFile(metaFile.getKey(), metaFile.isDirectory())) {
LOG.error("Failed delete directory : {}", f);
return false;
}
@@ -2456,13 +2428,13 @@ public class NativeAzureFileSystem extends FileSystem {
boolean isPartialDelete = false;
- Path pathToDelete = makeAbsolute(keyToPath(folderToDelete.getKey()));
+ Path pathToDelete = makeAbsolute(folderToDelete.getPath());
foldersToProcess.push(folderToDelete);
while (!foldersToProcess.empty()) {
FileMetadata currentFolder = foldersToProcess.pop();
- Path currentPath = makeAbsolute(keyToPath(currentFolder.getKey()));
+ Path currentPath = makeAbsolute(currentFolder.getPath());
boolean canDeleteChildren = true;
// If authorization is enabled, check for 'write' permission on current folder
@@ -2478,8 +2450,8 @@ public class NativeAzureFileSystem extends FileSystem {
if (canDeleteChildren) {
// get immediate children list
- ArrayList<FileMetadata> fileMetadataList = getChildrenMetadata(currentFolder.getKey(),
- maxListingDepth);
+ FileMetadata[] fileMetadataList = store.list(currentFolder.getKey(),
+ AZURE_LIST_ALL, maxListingDepth);
// Process children of currentFolder and add them to list of contents
// that can be deleted. We Perform stickybit check on every file and
@@ -2490,12 +2462,12 @@ public class NativeAzureFileSystem extends FileSystem {
// This file/folder cannot be deleted and neither can the parent paths be deleted.
// Remove parent paths from list of contents that can be deleted.
canDeleteChildren = false;
- Path filePath = makeAbsolute(keyToPath(childItem.getKey()));
+ Path filePath = makeAbsolute(childItem.getPath());
LOG.error("User does not have permissions to delete {}. "
+ "Parent directory has sticky bit set.", filePath);
} else {
// push the child directories to the stack to process their contents
- if (childItem.isDir()) {
+ if (childItem.isDirectory()) {
foldersToProcess.push(childItem);
}
// Add items to list of contents that can be deleted.
@@ -2540,23 +2512,6 @@ public class NativeAzureFileSystem extends FileSystem {
return isPartialDelete;
}
- private ArrayList<FileMetadata> getChildrenMetadata(String key, int maxListingDepth)
- throws IOException {
-
- String priorLastKey = null;
- ArrayList<FileMetadata> fileMetadataList = new ArrayList<FileMetadata>();
- do {
- PartialListing listing = store.listAll(key, AZURE_LIST_ALL,
- maxListingDepth, priorLastKey);
- for (FileMetadata file : listing.getFiles()) {
- fileMetadataList.add(file);
- }
- priorLastKey = listing.getPriorLastKey();
- } while (priorLastKey != null);
-
- return fileMetadataList;
- }
-
private boolean isStickyBitCheckViolated(FileMetadata metaData,
FileMetadata parentMetadata, boolean throwOnException) throws IOException {
try {
@@ -2602,13 +2557,13 @@ public class NativeAzureFileSystem extends FileSystem {
}
// stickybit is not set on parent and hence cannot be violated
- if (!parentMetadata.getPermissionStatus().getPermission().getStickyBit()) {
+ if (!parentMetadata.getPermission().getStickyBit()) {
return false;
}
String currentUser = UserGroupInformation.getCurrentUser().getShortUserName();
- String parentDirectoryOwner = parentMetadata.getPermissionStatus().getUserName();
- String currentFileOwner = metaData.getPermissionStatus().getUserName();
+ String parentDirectoryOwner = parentMetadata.getOwner();
+ String currentFileOwner = metaData.getOwner();
// Files/Folders with no owner set will not pass stickybit check
if ((parentDirectoryOwner.equalsIgnoreCase(currentUser))
@@ -2687,7 +2642,15 @@ public class NativeAzureFileSystem extends FileSystem {
Path absolutePath = makeAbsolute(f);
String key = pathToKey(absolutePath);
if (key.length() == 0) { // root always exists
- return newDirectory(null, absolutePath);
+ return new FileStatus(
+ 0,
+ true,
+ 1,
+ store.getHadoopBlockSize(),
+ 0,
+ 0,
+ FsPermission.getDefault(), "", "",
+ absolutePath.makeQualified(getUri(), getWorkingDirectory()));
}
// The path is either a folder or a file. Retrieve metadata to
@@ -2709,7 +2672,7 @@ public class NativeAzureFileSystem extends FileSystem {
}
if (meta != null) {
- if (meta.isDir()) {
+ if (meta.isDirectory()) {
// The path is a folder with files in it.
//
@@ -2723,14 +2686,14 @@ public class NativeAzureFileSystem extends FileSystem {
}
// Return reference to the directory object.
- return newDirectory(meta, absolutePath);
+ return updateFileStatusPath(meta, absolutePath);
}
// The path is a file.
LOG.debug("Found the path: {} as a file.", f.toString());
// Return with reference to a file object.
- return newFile(meta, absolutePath);
+ return updateFileStatusPath(meta, absolutePath);
}
// File not found. Throw exception no such file or directory.
@@ -2787,7 +2750,7 @@ public class NativeAzureFileSystem extends FileSystem {
performAuthCheck(absolutePath, WasbAuthorizationOperations.READ, "liststatus", absolutePath);
String key = pathToKey(absolutePath);
- Set<FileStatus> status = new TreeSet<FileStatus>();
+
FileMetadata meta = null;
try {
meta = store.retrieveMetadata(key);
@@ -2804,101 +2767,93 @@ public class NativeAzureFileSystem extends FileSystem {
throw ex;
}
- if (meta != null) {
- if (!meta.isDir()) {
-
- LOG.debug("Found path as a file");
-
- return new FileStatus[] { newFile(meta, absolutePath) };
- }
-
- String partialKey = null;
- PartialListing listing = null;
-
- try {
- listing = store.list(key, AZURE_LIST_ALL, 1, partialKey);
- } catch (IOException ex) {
-
- Throwable innerException = NativeAzureFileSystemHelper.checkForAzureStorageException(ex);
-
- if (innerException instanceof StorageException
- && NativeAzureFileSystemHelper.isFileNotFoundException((StorageException) innerException)) {
+ if (meta == null) {
+ // There is no metadata found for the path.
+ LOG.debug("Did not find any metadata for path: {}", key);
+ throw new FileNotFoundException(f + " is not found");
+ }
- throw new FileNotFoundException(String.format("%s is not found", key));
- }
+ if (!meta.isDirectory()) {
+ LOG.debug("Found path as a file");
+ return new FileStatus[] { updateFileStatusPath(meta, absolutePath) };
+ }
- throw ex;
- }
- // NOTE: We don't check for Null condition as the Store API should return
- // an empty list if there are not listing.
+ FileMetadata[] listing;
- // For any -RenamePending.json files in the listing,
- // push the rename forward.
- boolean renamed = conditionalRedoFolderRenames(listing);
+ listing = listWithErrorHandling(key, AZURE_LIST_ALL, 1);
- // If any renames were redone, get another listing,
- // since the current one may have changed due to the redo.
- if (renamed) {
- listing = null;
- try {
- listing = store.list(key, AZURE_LIST_ALL, 1, partialKey);
- } catch (IOException ex) {
- Throwable innerException = NativeAzureFileSystemHelper.checkForAzureStorageException(ex);
+ // NOTE: we don't check for a null condition, as the Store API should
+ // return an empty list if there is no listing.
- if (innerException instanceof StorageException
- && NativeAzureFileSystemHelper.isFileNotFoundException((StorageException) innerException)) {
+ // For any -RenamePending.json files in the listing,
+ // push the rename forward.
+ boolean renamed = conditionalRedoFolderRenames(listing);
- throw new FileNotFoundException(String.format("%s is not found", key));
- }
+ // If any renames were redone, get another listing,
+ // since the current one may have changed due to the redo.
+ if (renamed) {
+ listing = listWithErrorHandling(key, AZURE_LIST_ALL, 1);
+ }
- throw ex;
- }
- }
+ // We only need to check for AZURE_TEMP_FOLDER if the key is the root,
+ // and if it is not the root we also know the exact size of the array
+ // of FileStatus.
- // NOTE: We don't check for Null condition as the Store API should return
- // and empty list if there are not listing.
+ FileMetadata[] result = null;
- for (FileMetadata fileMetadata : listing.getFiles()) {
- Path subpath = keyToPath(fileMetadata.getKey());
+ if (key.equals("/")) {
+ ArrayList<FileMetadata> status = new ArrayList<>(listing.length);
- // Test whether the metadata represents a file or directory and
- // add the appropriate metadata object.
- //
- // Note: There was a very old bug here where directories were added
- // to the status set as files flattening out recursive listings
- // using "-lsr" down the file system hierarchy.
- if (fileMetadata.isDir()) {
+ for (FileMetadata fileMetadata : listing) {
+ if (fileMetadata.isDirectory()) {
// Make sure we hide the temp upload folder
if (fileMetadata.getKey().equals(AZURE_TEMP_FOLDER)) {
// Don't expose that.
continue;
}
- status.add(newDirectory(fileMetadata, subpath));
+ status.add(updateFileStatusPath(fileMetadata, fileMetadata.getPath()));
} else {
- status.add(newFile(fileMetadata, subpath));
+ status.add(updateFileStatusPath(fileMetadata, fileMetadata.getPath()));
}
}
+ result = status.toArray(new FileMetadata[0]);
+ } else {
+ for (int i = 0; i < listing.length; i++) {
+ FileMetadata fileMetadata = listing[i];
+ listing[i] = updateFileStatusPath(fileMetadata, fileMetadata.getPath());
+ }
+ result = listing;
+ }
- LOG.debug("Found path as a directory with {}"
- + " files in it.", status.size());
+ LOG.debug("Found path as a directory with {}"
+ + " files in it.", result.length);
- } else {
- // There is no metadata found for the path.
- LOG.debug("Did not find any metadata for path: {}", key);
+ return result;
+ }
- throw new FileNotFoundException(f + " is not found");
+ private FileMetadata[] listWithErrorHandling(String prefix, final int maxListingCount,
+ final int maxListingDepth) throws IOException {
+ try {
+ return store.list(prefix, maxListingCount, maxListingDepth);
+ } catch (IOException ex) {
+ Throwable innerException
+ = NativeAzureFileSystemHelper.checkForAzureStorageException(ex);
+ if (innerException instanceof StorageException
+ && NativeAzureFileSystemHelper.isFileNotFoundException(
+ (StorageException) innerException)) {
+ throw new FileNotFoundException(String.format("%s is not found", prefix));
+ }
+ throw ex;
}
-
- return status.toArray(new FileStatus[0]);
}
// Redo any folder renames needed if there are rename pending files in the
// directory listing. Return true if one or more redo operations were done.
- private boolean conditionalRedoFolderRenames(PartialListing listing)
+ private boolean conditionalRedoFolderRenames(FileMetadata[] listing)
throws IllegalArgumentException, IOException {
boolean renamed = false;
- for (FileMetadata fileMetadata : listing.getFiles()) {
- Path subpath = keyToPath(fileMetadata.getKey());
+ for (FileMetadata fileMetadata : listing) {
+ Path subpath = fileMetadata.getPath();
if (isRenamePendingFile(subpath)) {
FolderRenamePending pending =
new FolderRenamePending(subpath, this);
@@ -2914,32 +2869,11 @@ public class NativeAzureFileSystem extends FileSystem {
return path.toString().endsWith(FolderRenamePending.SUFFIX);
}
- private FileStatus newFile(FileMetadata meta, Path path) {
- return new FileStatus (
- meta.getLength(),
- false,
- 1,
- blockSize,
- meta.getLastModified(),
- 0,
- meta.getPermissionStatus().getPermission(),
- meta.getPermissionStatus().getUserName(),
- meta.getPermissionStatus().getGroupName(),
- path.makeQualified(getUri(), getWorkingDirectory()));
- }
-
- private FileStatus newDirectory(FileMetadata meta, Path path) {
- return new FileStatus (
- 0,
- true,
- 1,
- blockSize,
- meta == null ? 0 : meta.getLastModified(),
- 0,
- meta == null ? FsPermission.getDefault() : meta.getPermissionStatus().getPermission(),
- meta == null ? "" : meta.getPermissionStatus().getUserName(),
- meta == null ? "" : meta.getPermissionStatus().getGroupName(),
- path.makeQualified(getUri(), getWorkingDirectory()));
+ private FileMetadata updateFileStatusPath(FileMetadata meta, Path path) {
+ meta.setPath(path.makeQualified(getUri(), getWorkingDirectory()));
+ // reduce memory use by setting the internal-only key to null
+ meta.removeKey();
+ return meta;
}
private static enum UMaskApplyMode {
@@ -3000,8 +2934,8 @@ public class NativeAzureFileSystem extends FileSystem {
String currentKey = pathToKey(current);
FileMetadata currentMetadata = store.retrieveMetadata(currentKey);
- if (currentMetadata != null && currentMetadata.isDir()) {
- Path ancestor = keyToPath(currentMetadata.getKey());
+ if (currentMetadata != null && currentMetadata.isDirectory()) {
+ Path ancestor = currentMetadata.getPath();
LOG.debug("Found ancestor {}, for path: {}", ancestor.toString(), f.toString());
return ancestor;
}
@@ -3052,7 +2986,7 @@ public class NativeAzureFileSystem extends FileSystem {
current = parent, parent = current.getParent()) {
String currentKey = pathToKey(current);
FileMetadata currentMetadata = store.retrieveMetadata(currentKey);
- if (currentMetadata != null && !currentMetadata.isDir()) {
+ if (currentMetadata != null && !currentMetadata.isDirectory()) {
throw new FileAlreadyExistsException("Cannot create directory " + f + " because "
+ current + " is an existing file.");
} else if (currentMetadata == null) {
@@ -3099,7 +3033,7 @@ public class NativeAzureFileSystem extends FileSystem {
if (meta == null) {
throw new FileNotFoundException(f.toString());
}
- if (meta.isDir()) {
+ if (meta.isDirectory()) {
throw new FileNotFoundException(f.toString()
+ " is a directory not a file.");
}
@@ -3120,7 +3054,7 @@ public class NativeAzureFileSystem extends FileSystem {
}
return new FSDataInputStream(new BufferedFSInputStream(
- new NativeAzureFsInputStream(inputStream, key, meta.getLength()), bufferSize));
+ new NativeAzureFsInputStream(inputStream, key, meta.getLen()), bufferSize));
}
@Override
@@ -3196,7 +3130,7 @@ public class NativeAzureFileSystem extends FileSystem {
}
}
- if (dstMetadata != null && dstMetadata.isDir()) {
+ if (dstMetadata != null && dstMetadata.isDirectory()) {
// It's an existing directory.
performAuthCheck(absoluteDstPath, WasbAuthorizationOperations.WRITE, "rename",
absoluteDstPath);
@@ -3232,7 +3166,7 @@ public class NativeAzureFileSystem extends FileSystem {
LOG.debug("Parent of the destination {}"
+ " doesn't exist, failing the rename.", dst);
return false;
- } else if (!parentOfDestMetadata.isDir()) {
+ } else if (!parentOfDestMetadata.isDirectory()) {
LOG.debug("Parent of the destination {}"
+ " is a file, failing the rename.", dst);
return false;
@@ -3261,7 +3195,7 @@ public class NativeAzureFileSystem extends FileSystem {
// Source doesn't exist
LOG.debug("Source {} doesn't exist, failing the rename.", src);
return false;
- } else if (!srcMetadata.isDir()) {
+ } else if (!srcMetadata.isDirectory()) {
LOG.debug("Source {} found as a file, renaming.", src);
try {
// HADOOP-15086 - file rename must ensure that the destination does
@@ -3335,7 +3269,7 @@ public class NativeAzureFileSystem extends FileSystem {
// single file. In this case, the parent folder no longer exists if the
// file is renamed; so we can safely ignore the null pointer case.
if (parentMetadata != null) {
- if (parentMetadata.isDir()
+ if (parentMetadata.isDirectory()
&& parentMetadata.getBlobMaterialization() == BlobMaterialization.Implicit) {
store.storeEmptyFolder(parentKey,
createPermissionStatus(FsPermission.getDefault()));
@@ -3511,7 +3445,7 @@ public class NativeAzureFileSystem extends FileSystem {
&& !isAllowedUser(currentUgi.getShortUserName(), daemonUsers)) {
//Check if the user is the owner of the file.
- String owner = metadata.getPermissionStatus().getUserName();
+ String owner = metadata.getOwner();
if (!currentUgi.getShortUserName().equals(owner)) {
throw new WasbAuthorizationException(
String.format("user '%s' does not have the privilege to "
@@ -3522,16 +3456,16 @@ public class NativeAzureFileSystem extends FileSystem {
}
permission = applyUMask(permission,
- metadata.isDir() ? UMaskApplyMode.ChangeExistingDirectory
+ metadata.isDirectory() ? UMaskApplyMode.ChangeExistingDirectory
: UMaskApplyMode.ChangeExistingFile);
if (metadata.getBlobMaterialization() == BlobMaterialization.Implicit) {
// It's an implicit folder, need to materialize it.
store.storeEmptyFolder(key, createPermissionStatus(permission));
- } else if (!metadata.getPermissionStatus().getPermission().
+ } else if (!metadata.getPermission().
equals(permission)) {
store.changePermissionStatus(key, new PermissionStatus(
- metadata.getPermissionStatus().getUserName(),
- metadata.getPermissionStatus().getGroupName(),
+ metadata.getOwner(),
+ metadata.getGroup(),
permission));
}
}
@@ -3579,10 +3513,10 @@ public class NativeAzureFileSystem extends FileSystem {
PermissionStatus newPermissionStatus = new PermissionStatus(
username == null ?
- metadata.getPermissionStatus().getUserName() : username,
+ metadata.getOwner() : username,
groupname == null ?
- metadata.getPermissionStatus().getGroupName() : groupname,
- metadata.getPermissionStatus().getPermission());
+ metadata.getGroup() : groupname,
+ metadata.getPermission());
if (metadata.getBlobMaterialization() == BlobMaterialization.Implicit) {
// It's an implicit folder, need to materialize it.
store.storeEmptyFolder(key, newPermissionStatus);
@@ -3778,30 +3712,26 @@ public class NativeAzureFileSystem extends FileSystem {
AZURE_TEMP_EXPIRY_DEFAULT) * 1000;
// Go over all the blobs under the given root and look for blobs to
// recover.
- String priorLastKey = null;
- do {
- PartialListing listing = store.listAll(pathToKey(root), AZURE_LIST_ALL,
- AZURE_UNBOUNDED_DEPTH, priorLastKey);
-
- for (FileMetadata file : listing.getFiles()) {
- if (!file.isDir()) { // We don't recover directory blobs
- // See if this blob has a link in it (meaning it's a place-holder
- // blob for when the upload to the temp blob is complete).
- String link = store.getLinkInFileMetadata(file.getKey());
- if (link != null) {
- // It has a link, see if the temp blob it is pointing to is
- // existent and old enough to be considered dangling.
- FileMetadata linkMetadata = store.retrieveMetadata(link);
- if (linkMetadata != null
- && linkMetadata.getLastModified() >= cutoffForDangling) {
- // Found one!
- handler.handleFile(file, linkMetadata);
- }
+ FileMetadata[] listing = store.list(pathToKey(root), AZURE_LIST_ALL,
+ AZURE_UNBOUNDED_DEPTH);
+
+ for (FileMetadata file : listing) {
+ if (!file.isDirectory()) { // We don't recover directory blobs
+ // See if this blob has a link in it (meaning it's a place-holder
+ // blob for when the upload to the temp blob is complete).
+ String link = store.getLinkInFileMetadata(file.getKey());
+ if (link != null) {
+ // It has a link; see if the temp blob it points to still exists
+ // and is old enough to be considered dangling.
+ FileMetadata linkMetadata = store.retrieveMetadata(link);
+ if (linkMetadata != null
+ && linkMetadata.getModificationTime() >= cutoffForDangling) {
+ // Found one!
+ handler.handleFile(file, linkMetadata);
}
}
}
- priorLastKey = listing.getPriorLastKey();
- } while (priorLastKey != null);
+ }
}
/**
@@ -3888,7 +3818,7 @@ public class NativeAzureFileSystem extends FileSystem {
meta = store.retrieveMetadata(key);
if (meta != null) {
- owner = meta.getPermissionStatus().getUserName();
+ owner = meta.getOwner();
LOG.debug("Retrieved '{}' as owner for path - {}", owner, absolutePath);
} else {
// meta will be null if the file/folder does not exist
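For context on the hunks above: the listing path moves from the chunked PartialListing protocol to a single array-returning call, and FileMetadata entries are now handed back directly inside the FileStatus[] result (see updateFileStatusPath), so no per-entry FileStatus copies are made. A minimal before/after sketch of the consuming pattern, using the method names from the interface diff below (a sketch only; error handling omitted):

    // Before: page through the store until priorLastKey reports no more chunks.
    String priorLastKey = null;
    List<FileMetadata> all = new ArrayList<>();
    do {
      PartialListing listing = store.listAll(key, AZURE_LIST_ALL,
          maxListingDepth, priorLastKey);
      Collections.addAll(all, listing.getFiles());
      priorLastKey = listing.getPriorLastKey();
    } while (priorLastKey != null);

    // After: one call returns the complete listing as an array.
    FileMetadata[] listing = store.list(key, AZURE_LIST_ALL, maxListingDepth);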
http://git-wip-us.apache.org/repos/asf/hadoop/blob/45d9568a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeFileSystemStore.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeFileSystemStore.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeFileSystemStore.java
index b67ab1b..36e3819 100644
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeFileSystemStore.java
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeFileSystemStore.java
@@ -58,20 +58,21 @@ interface NativeFileSystemStore {
boolean isAtomicRenameKey(String key);
+ /**
+ * Returns the file block size. This is a fake value used for integration
+ * of the Azure store with Hadoop.
+ * @return The file block size.
+ */
+ long getHadoopBlockSize();
+
void storeEmptyLinkFile(String key, String tempBlobKey,
PermissionStatus permissionStatus) throws AzureException;
String getLinkInFileMetadata(String key) throws AzureException;
- PartialListing list(String prefix, final int maxListingCount,
+ FileMetadata[] list(String prefix, final int maxListingCount,
final int maxListingDepth) throws IOException;
- PartialListing list(String prefix, final int maxListingCount,
- final int maxListingDepth, String priorLastKey) throws IOException;
-
- PartialListing listAll(String prefix, final int maxListingCount,
- final int maxListingDepth, String priorLastKey) throws IOException;
-
void changePermissionStatus(String key, PermissionStatus newPermission)
throws AzureException;
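The new getHadoopBlockSize() exists only so the Azure store can report a stable, synthetic block size to Hadoop, since blobs have no real blocks. A sketch of a plausible implementation in the store class, assuming the value is read from configuration once at initialization; the actual field name and configuration key in AzureNativeFileSystemStore may differ:

    // Assumed field, populated from the configured pseudo block size
    // during initialize(); Azure blobs have no native block size.
    private long hadoopBlockSize;

    @Override
    public long getHadoopBlockSize() {
      return hadoopBlockSize;
    }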
http://git-wip-us.apache.org/repos/asf/hadoop/blob/45d9568a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/PartialListing.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/PartialListing.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/PartialListing.java
deleted file mode 100644
index 4a80d2e..0000000
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/PartialListing.java
+++ /dev/null
@@ -1,61 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.fs.azure;
-
-import org.apache.hadoop.classification.InterfaceAudience;
-
-/**
- * <p>
- * Holds information on a directory listing for a {@link NativeFileSystemStore}.
- * This includes the {@link FileMetadata files} and directories (their names)
- * contained in a directory.
- * </p>
- * <p>
- * This listing may be returned in chunks, so a <code>priorLastKey</code> is
- * provided so that the next chunk may be requested.
- * </p>
- *
- * @see NativeFileSystemStore#list(String, int, String)
- */
-@InterfaceAudience.Private
-class PartialListing {
-
- private final String priorLastKey;
- private final FileMetadata[] files;
- private final String[] commonPrefixes;
-
- public PartialListing(String priorLastKey, FileMetadata[] files,
- String[] commonPrefixes) {
- this.priorLastKey = priorLastKey;
- this.files = files;
- this.commonPrefixes = commonPrefixes;
- }
-
- public FileMetadata[] getFiles() {
- return files;
- }
-
- public String[] getCommonPrefixes() {
- return commonPrefixes;
- }
-
- public String getPriorLastKey() {
- return priorLastKey;
- }
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/45d9568a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestListPerformance.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestListPerformance.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestListPerformance.java
new file mode 100644
index 0000000..e7a3fa8
--- /dev/null
+++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestListPerformance.java
@@ -0,0 +1,196 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azure;
+
+import java.util.ArrayList;
+import java.util.EnumSet;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+
+import com.microsoft.azure.storage.blob.CloudBlobContainer;
+import com.microsoft.azure.storage.blob.CloudBlockBlob;
+import org.junit.Assume;
+import org.junit.FixMethodOrder;
+import org.junit.Test;
+import org.junit.runners.MethodSorters;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.azure.integration.AbstractAzureScaleTest;
+import org.apache.hadoop.fs.azure.integration.AzureTestUtils;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+
+/**
+ * Test list performance.
+ */
+@FixMethodOrder(MethodSorters.NAME_ASCENDING)
+
+public class ITestListPerformance extends AbstractAzureScaleTest {
+ private static final Logger LOG = LoggerFactory.getLogger(
+ ITestListPerformance.class);
+
+ private static final Path TEST_DIR_PATH = new Path(
+ "DirectoryWithManyFiles");
+
+ private static final int NUMBER_OF_THREADS = 10;
+ private static final int NUMBER_OF_FILES_PER_THREAD = 1000;
+
+ private int threads;
+
+ private int filesPerThread;
+
+ private int expectedFileCount;
+
+ @Override
+ public void setUp() throws Exception {
+ super.setUp();
+ Configuration conf = getConfiguration();
+ // fail fast
+ threads = AzureTestUtils.getTestPropertyInt(conf,
+ "fs.azure.scale.test.list.performance.threads", NUMBER_OF_THREADS);
+ filesPerThread = AzureTestUtils.getTestPropertyInt(conf,
+ "fs.azure.scale.test.list.performance.files", NUMBER_OF_FILES_PER_THREAD);
+ expectedFileCount = threads * filesPerThread;
+ LOG.info("Thread = {}, Files per Thread = {}, expected files = {}",
+ threads, filesPerThread, expectedFileCount);
+ conf.set("fs.azure.io.retry.max.retries", "1");
+ conf.set("fs.azure.delete.threads", "16");
+ createTestAccount();
+ }
+
+ @Override
+ protected AzureBlobStorageTestAccount createTestAccount() throws Exception {
+ return AzureBlobStorageTestAccount.create(
+ "itestlistperformance",
+ EnumSet.of(AzureBlobStorageTestAccount.CreateOptions.CreateContainer),
+ null,
+ true);
+ }
+
+ @Test
+ public void test_0101_CreateDirectoryWithFiles() throws Exception {
+ Assume.assumeFalse("Test path exists; skipping", fs.exists(TEST_DIR_PATH));
+
+ ExecutorService executorService = Executors.newFixedThreadPool(threads);
+ CloudBlobContainer container = testAccount.getRealContainer();
+
+ final String basePath = (fs.getWorkingDirectory().toUri().getPath() + "/" + TEST_DIR_PATH + "/").substring(1);
+
+ ArrayList<Callable<Integer>> tasks = new ArrayList<>(threads);
+ fs.mkdirs(TEST_DIR_PATH);
+ ContractTestUtils.NanoTimer timer = new ContractTestUtils.NanoTimer();
+ for (int i = 0; i < threads; i++) {
+ tasks.add(
+ new Callable<Integer>() {
+ public Integer call() {
+ int written = 0;
+ for (int j = 0; j < filesPerThread; j++) {
+ String blobName = basePath + UUID.randomUUID().toString();
+ try {
+ CloudBlockBlob blob = container.getBlockBlobReference(
+ blobName);
+ blob.uploadText("");
+ written++;
+ } catch (Exception e) {
+ LOG.error("Filed to write {}", blobName, e);
+ break;
+ }
+ }
+ LOG.info("Thread completed with {} files written", written);
+ return written;
+ }
+ }
+ );
+ }
+
+ List<Future<Integer>> futures = executorService.invokeAll(tasks,
+ getTestTimeoutMillis(), TimeUnit.MILLISECONDS);
+ long elapsedMs = timer.elapsedTimeMs();
+ LOG.info("time to create files: {} millis", elapsedMs);
+
+ for (Future<Integer> future : futures) {
+ assertTrue("Future timed out", future.isDone());
+ assertEquals("Future did not write all files timed out",
+ filesPerThread, future.get().intValue());
+ }
+ }
+
+ @Test
+ public void test_0200_ListStatusPerformance() throws Exception {
+ ContractTestUtils.NanoTimer timer = new ContractTestUtils.NanoTimer();
+ FileStatus[] fileList = fs.listStatus(TEST_DIR_PATH);
+ long elapsedMs = timer.elapsedTimeMs();
+ LOG.info(String.format(
+ "files=%1$d, elapsedMs=%2$d",
+ fileList.length,
+ elapsedMs));
+ Map<Path, FileStatus> foundInList = new HashMap<>(expectedFileCount);
+
+ for (FileStatus fileStatus : fileList) {
+ foundInList.put(fileStatus.getPath(), fileStatus);
+ LOG.info("{}: {}", fileStatus.getPath(),
+ fileStatus.isDirectory() ? "dir" : "file");
+ }
+ assertEquals("Mismatch between expected files and actual",
+ expectedFileCount, fileList.length);
+
+
+ // now do a listFiles() recursive
+ ContractTestUtils.NanoTimer initialStatusCallTimer
+ = new ContractTestUtils.NanoTimer();
+ RemoteIterator<LocatedFileStatus> listing
+ = fs.listFiles(TEST_DIR_PATH, true);
+ long initialListTime = initialStatusCallTimer.elapsedTimeMs();
+ timer = new ContractTestUtils.NanoTimer();
+ while (listing.hasNext()) {
+ FileStatus fileStatus = listing.next();
+ Path path = fileStatus.getPath();
+ FileStatus removed = foundInList.remove(path);
+ assertNotNull("Did not find " + path + "{} in the previous listing",
+ removed);
+ }
+ elapsedMs = timer.elapsedTimeMs();
+ LOG.info("time for listFiles() initial call: {} millis;"
+ + " time to iterate: {} millis", initialListTime, elapsedMs);
+ assertEquals("Not all files from listStatus() were found in listFiles()",
+ 0, foundInList.size());
+
+ }
+
+ @Test
+ public void test_0300_BulkDeletePerformance() throws Exception {
+ ContractTestUtils.NanoTimer timer = new ContractTestUtils.NanoTimer();
+ fs.delete(TEST_DIR_PATH, true);
+ long elapsedMs = timer.elapsedTimeMs();
+ LOG.info("time for delete(): {} millis; {} nanoS per file",
+ elapsedMs, timer.nanosPerOperation(expectedFileCount));
+ }
+}
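The scale of this test is tunable through the two properties read in setUp(). A sketch of overriding the 10 x 1000 defaults, for instance when iterating locally against a slow storage account (how the configuration actually reaches the test, e.g. via Maven system properties, depends on the hadoop-azure test setup):

    Configuration conf = new Configuration();
    // 4 threads x 250 files = 1,000 blobs instead of the 10,000 default
    conf.setInt("fs.azure.scale.test.list.performance.threads", 4);
    conf.setInt("fs.azure.scale.test.list.performance.files", 250);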
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[32/50] hadoop git commit: HDFS-13690. Improve error message when
creating encryption zone while KMS is unreachable. Contributed by Kitti
Nanasi.
Posted by zh...@apache.org.
HDFS-13690. Improve error message when creating encryption zone while KMS is unreachable. Contributed by Kitti Nanasi.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d2874e04
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d2874e04
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d2874e04
Branch: refs/heads/HDFS-13572
Commit: d2874e04173613b1a3d44eabf8d449c8a3920fa4
Parents: 0c7a578
Author: Xiao Chen <xi...@apache.org>
Authored: Mon Jul 16 13:19:24 2018 -0700
Committer: Xiao Chen <xi...@apache.org>
Committed: Mon Jul 16 13:19:53 2018 -0700
----------------------------------------------------------------------
.../apache/hadoop/crypto/key/kms/KMSClientProvider.java | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d2874e04/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
index 11815da..8125510 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
@@ -56,6 +56,7 @@ import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.lang.reflect.UndeclaredThrowableException;
+import java.net.ConnectException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.MalformedURLException;
@@ -478,10 +479,14 @@ public class KMSClientProvider extends KeyProvider implements CryptoExtension,
return authUrl.openConnection(url, authToken, doAsUser);
}
});
+ } catch (ConnectException ex) {
+ String msg = "Failed to connect to: " + url.toString();
+ LOG.warn(msg);
+ throw new IOException(msg, ex);
+ } catch (SocketTimeoutException ex) {
+ LOG.warn("Failed to connect to {}:{}", url.getHost(), url.getPort());
+ throw ex;
} catch (IOException ex) {
- if (ex instanceof SocketTimeoutException) {
- LOG.warn("Failed to connect to {}:{}", url.getHost(), url.getPort());
- }
throw ex;
} catch (UndeclaredThrowableException ex) {
throw new IOException(ex.getUndeclaredThrowable());
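The ordering of the new catch clauses matters: ConnectException and SocketTimeoutException are both IOException subtypes, so they must precede the generic IOException handler for the URL context to be attached. A standalone sketch of the same wrap-with-context idiom (hypothetical host and port, not the KMS code itself; java.net and java.io imports assumed):

    try {
      new Socket().connect(
          new InetSocketAddress("kms.example.com", 9600), 1000);
    } catch (ConnectException ex) {
      // Wrap so callers see which endpoint was unreachable, keeping the cause.
      throw new IOException("Failed to connect to: kms.example.com:9600", ex);
    }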
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[47/50] hadoop git commit: YARN-6995. Improve use of
ResourceNotFoundException in resource types code. (Daniel Templeton and
Szilard Nemeth via Haibo Chen)
Posted by zh...@apache.org.
YARN-6995. Improve use of ResourceNotFoundException in resource types code. (Daniel Templeton and Szilard Nemeth via Haibo Chen)
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/f354f47f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/f354f47f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/f354f47f
Branch: refs/heads/HDFS-13572
Commit: f354f47f9959d8a79baee690858af3e160494c32
Parents: b3b4d4c
Author: Haibo Chen <ha...@apache.org>
Authored: Thu Jul 19 15:34:12 2018 -0700
Committer: Haibo Chen <ha...@apache.org>
Committed: Thu Jul 19 15:35:05 2018 -0700
----------------------------------------------------------------------
.../hadoop/yarn/api/records/Resource.java | 22 ++++-----------
.../exceptions/ResourceNotFoundException.java | 29 +++++++++++++++-----
.../api/records/impl/pb/ResourcePBImpl.java | 10 +++----
.../hadoop/yarn/util/resource/Resources.java | 6 ++--
4 files changed, 34 insertions(+), 33 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f354f47f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
index 3cac1d1..1a7252d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/Resource.java
@@ -257,18 +257,15 @@ public abstract class Resource implements Comparable<Resource> {
*
* @param resource name of the resource
* @return the ResourceInformation object for the resource
- * @throws ResourceNotFoundException if the resource can't be found
*/
@Public
@InterfaceStability.Unstable
- public ResourceInformation getResourceInformation(String resource)
- throws ResourceNotFoundException {
+ public ResourceInformation getResourceInformation(String resource) {
Integer index = ResourceUtils.getResourceTypeIndex().get(resource);
if (index != null) {
return resources[index];
}
- throw new ResourceNotFoundException("Unknown resource '" + resource
- + "'. Known resources are " + Arrays.toString(resources));
+ throw new ResourceNotFoundException(this, resource);
}
/**
@@ -299,12 +296,10 @@ public abstract class Resource implements Comparable<Resource> {
*
* @param resource name of the resource
* @return the value for the resource
- * @throws ResourceNotFoundException if the resource can't be found
*/
@Public
@InterfaceStability.Unstable
- public long getResourceValue(String resource)
- throws ResourceNotFoundException {
+ public long getResourceValue(String resource) {
return getResourceInformation(resource).getValue();
}
@@ -313,13 +308,11 @@ public abstract class Resource implements Comparable<Resource> {
*
* @param resource the resource for which the ResourceInformation is provided
* @param resourceInformation ResourceInformation object
- * @throws ResourceNotFoundException if the resource is not found
*/
@Public
@InterfaceStability.Unstable
public void setResourceInformation(String resource,
- ResourceInformation resourceInformation)
- throws ResourceNotFoundException {
+ ResourceInformation resourceInformation) {
if (resource.equals(ResourceInformation.MEMORY_URI)) {
this.setMemorySize(resourceInformation.getValue());
return;
@@ -348,8 +341,7 @@ public abstract class Resource implements Comparable<Resource> {
ResourceInformation resourceInformation)
throws ResourceNotFoundException {
if (index < 0 || index >= resources.length) {
- throw new ResourceNotFoundException("Unknown resource at index '" + index
- + "'. Valid resources are " + Arrays.toString(resources));
+ throwExceptionWhenArrayOutOfBound(index);
}
ResourceInformation.copy(resourceInformation, resources[index]);
}
@@ -360,12 +352,10 @@ public abstract class Resource implements Comparable<Resource> {
*
* @param resource the resource for which the value is provided.
* @param value the value to set
- * @throws ResourceNotFoundException if the resource is not found
*/
@Public
@InterfaceStability.Unstable
- public void setResourceValue(String resource, long value)
- throws ResourceNotFoundException {
+ public void setResourceValue(String resource, long value) {
if (resource.equals(ResourceInformation.MEMORY_URI)) {
this.setMemorySize(value);
return;
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f354f47f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/exceptions/ResourceNotFoundException.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/exceptions/ResourceNotFoundException.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/exceptions/ResourceNotFoundException.java
index b5fece7..3fddcff 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/exceptions/ResourceNotFoundException.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/exceptions/ResourceNotFoundException.java
@@ -18,8 +18,10 @@
package org.apache.hadoop.yarn.exceptions;
+import org.apache.commons.lang3.exception.ExceptionUtils;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.yarn.api.records.Resource;
/**
* This exception is thrown when details of an unknown resource type
@@ -28,18 +30,31 @@ import org.apache.hadoop.classification.InterfaceStability;
@InterfaceAudience.Public
@InterfaceStability.Unstable
public class ResourceNotFoundException extends YarnRuntimeException {
-
private static final long serialVersionUID = 10081982L;
+ private static final String MESSAGE = "The resource manager encountered a "
+ + "problem that should not occur under normal circumstances. "
+ + "Please report this error to the Hadoop community by opening a "
+ + "JIRA ticket at http://issues.apache.org/jira and including the "
+ + "following information:%n* Resource type requested: %s%n* Resource "
+ + "object: %s%n* The stack trace for this exception: %s%n"
+ + "After encountering this error, the resource manager is "
+ + "in an inconsistent state. It is safe for the resource manager "
+ + "to be restarted as the error encountered should be transitive. "
+ + "If high availability is enabled, failing over to "
+ + "a standby resource manager is also safe.";
- public ResourceNotFoundException(String message) {
- super(message);
+ public ResourceNotFoundException(Resource resource, String type) {
+ this(String.format(MESSAGE, type, resource,
+ ExceptionUtils.getStackTrace(new Exception())));
}
- public ResourceNotFoundException(Throwable cause) {
- super(cause);
+ public ResourceNotFoundException(Resource resource, String type,
+ Throwable cause) {
+ super(String.format(MESSAGE, type, resource,
+ ExceptionUtils.getStackTrace(cause)), cause);
}
- public ResourceNotFoundException(String message, Throwable cause) {
- super(message, cause);
+ public ResourceNotFoundException(String message) {
+ super(message);
}
}
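Dropping the throws clauses is source-compatible because ResourceNotFoundException extends YarnRuntimeException and is therefore unchecked. A sketch of the caller's view, assuming a custom resource type such as yarn.io/gpu is registered:

    Resource res = Resource.newInstance(1024, 1);
    long gpus = res.getResourceValue("yarn.io/gpu"); // no try/catch needed;
    // an unknown type now surfaces as an unchecked ResourceNotFoundException
    // carrying the detailed "report this to the Hadoop community" message.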
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f354f47f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourcePBImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourcePBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourcePBImpl.java
index 6ebed6e..15d2470 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourcePBImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourcePBImpl.java
@@ -193,8 +193,7 @@ public class ResourcePBImpl extends Resource {
}
@Override
- public void setResourceValue(String resource, long value)
- throws ResourceNotFoundException {
+ public void setResourceValue(String resource, long value) {
maybeInitBuilder();
if (resource == null) {
throw new IllegalArgumentException("resource type object cannot be null");
@@ -203,14 +202,13 @@ public class ResourcePBImpl extends Resource {
}
@Override
- public ResourceInformation getResourceInformation(String resource)
- throws ResourceNotFoundException {
+ public ResourceInformation getResourceInformation(String resource) {
+ initResources();
return super.getResourceInformation(resource);
}
@Override
- public long getResourceValue(String resource)
- throws ResourceNotFoundException {
+ public long getResourceValue(String resource) {
return super.getResourceValue(resource);
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/f354f47f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
index ace8b5d..db0f980 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
@@ -128,14 +128,12 @@ public class Resources {
@Override
public void setResourceInformation(String resource,
- ResourceInformation resourceInformation)
- throws ResourceNotFoundException {
+ ResourceInformation resourceInformation) {
throw new RuntimeException(name + " cannot be modified!");
}
@Override
- public void setResourceValue(String resource, long value)
- throws ResourceNotFoundException {
+ public void setResourceValue(String resource, long value) {
throw new RuntimeException(name + " cannot be modified!");
}
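These overrides live on the fixed-value Resource singletons in Resources (for example Resources.none() and Resources.unbounded()), which must never be mutated; with the checked exception gone from the signature, the failure mode is plainly a programming error. A sketch:

    try {
      Resources.none().setResourceValue(ResourceInformation.MEMORY_URI, 42L);
    } catch (RuntimeException e) {
      // expected: "<name> cannot be modified!"
    }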
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[31/50] hadoop git commit: HADOOP-15598. DataChecksum calculate
checksum is contended on hashtable synchronization. Contributed by Prasanth
Jayachandran.
Posted by zh...@apache.org.
HADOOP-15598. DataChecksum calculate checksum is contended on hashtable synchronization. Contributed by Prasanth Jayachandran.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/0c7a5789
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/0c7a5789
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/0c7a5789
Branch: refs/heads/HDFS-13572
Commit: 0c7a578927032d5d1ef3469283d7d1fb7dee2a56
Parents: 238ffff
Author: Wei-Chiu Chuang <we...@apache.org>
Authored: Mon Jul 16 11:32:45 2018 -0700
Committer: Wei-Chiu Chuang <we...@apache.org>
Committed: Mon Jul 16 11:32:45 2018 -0700
----------------------------------------------------------------------
.../src/main/java/org/apache/hadoop/util/NativeCrc32.java | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/0c7a5789/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCrc32.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCrc32.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCrc32.java
index 0669b0a..3142df2 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCrc32.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCrc32.java
@@ -28,12 +28,12 @@ import com.google.common.annotations.VisibleForTesting;
* natively.
*/
class NativeCrc32 {
-
+ private static final boolean isSparc = System.getProperty("os.arch").toLowerCase().startsWith("sparc");
/**
* Return true if the JNI-based native CRC extensions are available.
*/
public static boolean isAvailable() {
- if (System.getProperty("os.arch").toLowerCase().startsWith("sparc")) {
+ if (isSparc) {
return false;
} else {
return NativeCodeLoader.isNativeCodeLoaded();
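The contention came from System.getProperty() itself: system properties live in a java.util.Properties object, a Hashtable subclass whose lookups are synchronized on the JDKs Hadoop targets, so every checksum call was serializing on that monitor. Hoisting the lookup into a static final field pays the cost once at class initialization. The same idiom in isolation (a sketch):

    final class PlatformCheck {
      // Evaluated once when the class is initialized; later reads are
      // plain field reads with no synchronization.
      private static final boolean IS_SPARC =
          System.getProperty("os.arch").toLowerCase().startsWith("sparc");

      static boolean isSparc() {
        return IS_SPARC;
      }
    }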
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[25/50] hadoop git commit: YARN-8421: when moving app,
activeUsers is increased,
even though the app has no outstanding requests. Contributed by Kyungwan Nam
Posted by zh...@apache.org.
YARN-8421: when moving app, activeUsers is increased, even though the app has no outstanding requests. Contributed by Kyungwan Nam
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/937ef39b
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/937ef39b
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/937ef39b
Branch: refs/heads/HDFS-13572
Commit: 937ef39b3ff90f72392b7a319e4346344db34e03
Parents: 5074ca9
Author: Eric E Payne <er...@oath.com>
Authored: Mon Jul 16 16:24:21 2018 +0000
Committer: Eric E Payne <er...@oath.com>
Committed: Mon Jul 16 16:24:21 2018 +0000
----------------------------------------------------------------------
.../scheduler/AppSchedulingInfo.java | 4 +-
.../TestSchedulerApplicationAttempt.java | 44 ++++++++++++++++++++
2 files changed, 47 insertions(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/937ef39b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
index 1efdd8b..8074f06 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
@@ -578,7 +578,9 @@ public class AppSchedulingInfo {
newMetrics.moveAppTo(this);
abstractUsersManager.deactivateApplication(user, applicationId);
abstractUsersManager = newQueue.getAbstractUsersManager();
- abstractUsersManager.activateApplication(user, applicationId);
+ if (!schedulerKeys.isEmpty()) {
+ abstractUsersManager.activateApplication(user, applicationId);
+ }
this.queue = newQueue;
} finally {
this.writeLock.unlock();
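The fix keeps the active-user bookkeeping symmetric across a queue move: the user is always deactivated in the source queue, but is re-activated in the target queue only while the app still has outstanding requests (schedulerKeys non-empty). The simplified shape of the move path after the patch (a sketch, not the full method):

    oldUsersManager.deactivateApplication(user, applicationId); // always
    if (!schedulerKeys.isEmpty()) { // app still has pending requests
      newUsersManager.activateApplication(user, applicationId);
    }
    // otherwise the target queue's activeUsers count would be inflated,
    // which skews per-user limit calculations until the app finishes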
http://git-wip-us.apache.org/repos/asf/hadoop/blob/937ef39b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestSchedulerApplicationAttempt.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestSchedulerApplicationAttempt.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestSchedulerApplicationAttempt.java
index 17f9d23..c110b1c 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestSchedulerApplicationAttempt.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestSchedulerApplicationAttempt.java
@@ -58,6 +58,50 @@ public class TestSchedulerApplicationAttempt {
QueueMetrics.clearQueueMetrics();
DefaultMetricsSystem.shutdown();
}
+
+ @Test
+ public void testActiveUsersWhenMove() {
+ final String user = "user1";
+ Queue parentQueue = createQueue("parent", null);
+ Queue queue1 = createQueue("queue1", parentQueue);
+ Queue queue2 = createQueue("queue2", parentQueue);
+ Queue queue3 = createQueue("queue3", parentQueue);
+
+ ApplicationAttemptId appAttId = createAppAttemptId(0, 0);
+ RMContext rmContext = mock(RMContext.class);
+ when(rmContext.getEpoch()).thenReturn(3L);
+ SchedulerApplicationAttempt app = new SchedulerApplicationAttempt(appAttId,
+ user, queue1, queue1.getAbstractUsersManager(), rmContext);
+
+ // Resource request
+ Resource requestedResource = Resource.newInstance(1536, 2);
+ Priority requestedPriority = Priority.newInstance(2);
+ ResourceRequest request = ResourceRequest.newInstance(requestedPriority,
+ ResourceRequest.ANY, requestedResource, 1);
+ app.updateResourceRequests(Arrays.asList(request));
+
+ assertEquals(1, queue1.getAbstractUsersManager().getNumActiveUsers());
+ // move app from queue1 to queue2
+ app.move(queue2);
+ // Active user count has to decrease from queue1
+ assertEquals(0, queue1.getAbstractUsersManager().getNumActiveUsers());
+ // Increase the active user count in queue2 if the moved app has pending requests
+ assertEquals(1, queue2.getAbstractUsersManager().getNumActiveUsers());
+
+ // Allocated container
+ RMContainer container1 = createRMContainer(appAttId, 1, requestedResource);
+ app.liveContainers.put(container1.getContainerId(), container1);
+ SchedulerNode node = createNode();
+ app.appSchedulingInfo.allocate(NodeType.OFF_SWITCH, node,
+ toSchedulerKey(requestedPriority), container1.getContainer());
+
+ // Active user count has to decrease in queue2 because the app has NO pending requests
+ assertEquals(0, queue2.getAbstractUsersManager().getNumActiveUsers());
+ // move app from queue2 to queue3
+ app.move(queue3);
+ // Active user count in queue3 stays same if the moved app has NO pending requests
+ assertEquals(0, queue3.getAbstractUsersManager().getNumActiveUsers());
+ }
@Test
public void testMove() {
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[43/50] hadoop git commit: YARN-8501. Reduce complexity of
RMWebServices getApps method. Contributed by Szilard Nemeth
Posted by zh...@apache.org.
YARN-8501. Reduce complexity of RMWebServices getApps method.
Contributed by Szilard Nemeth
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5836e0a4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5836e0a4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5836e0a4
Branch: refs/heads/HDFS-13572
Commit: 5836e0a46bf9793e0a61bb8ec46536f4a67d38d7
Parents: ccf2db7
Author: Eric Yang <ey...@apache.org>
Authored: Thu Jul 19 12:30:38 2018 -0400
Committer: Eric Yang <ey...@apache.org>
Committed: Thu Jul 19 12:30:38 2018 -0400
----------------------------------------------------------------------
.../hadoop/yarn/server/webapp/WebServices.java | 2 +-
.../webapp/ApplicationsRequestBuilder.java | 231 ++++++++
.../resourcemanager/webapp/RMWebServices.java | 145 +----
.../webapp/TestApplicationsRequestBuilder.java | 529 +++++++++++++++++++
4 files changed, 777 insertions(+), 130 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5836e0a4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebServices.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebServices.java
index 03b1055..5bb5448 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebServices.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebServices.java
@@ -392,7 +392,7 @@ public class WebServices {
response.setContentType(null);
}
- protected static Set<String>
+ public static Set<String>
parseQueries(Set<String> queries, boolean isState) {
Set<String> params = new HashSet<String>();
if (!queries.isEmpty()) {
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5836e0a4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/ApplicationsRequestBuilder.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/ApplicationsRequestBuilder.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/ApplicationsRequestBuilder.java
new file mode 100644
index 0000000..876d044
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/ApplicationsRequestBuilder.java
@@ -0,0 +1,231 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.webapp;
+
+import com.google.common.collect.Sets;
+import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest;
+import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity
+ .CapacityScheduler;
+import org.apache.hadoop.yarn.webapp.BadRequestException;
+
+import java.io.IOException;
+import java.util.Set;
+
+import static org.apache.hadoop.yarn.server.webapp.WebServices.parseQueries;
+
+public class ApplicationsRequestBuilder {
+
+ private Set<String> statesQuery = Sets.newHashSet();
+ private Set<String> users = Sets.newHashSetWithExpectedSize(1);
+ private Set<String> queues = Sets.newHashSetWithExpectedSize(1);
+ private String limit = null;
+ private Long limitNumber;
+
+ // defaults used when neither begin nor end is specified
+ private long startedTimeBegin = 0;
+ private long startedTimeEnd = Long.MAX_VALUE;
+ private long finishTimeBegin = 0;
+ private long finishTimeEnd = Long.MAX_VALUE;
+ private Set<String> appTypes = Sets.newHashSet();
+ private Set<String> appTags = Sets.newHashSet();
+ private ResourceManager rm;
+
+ private ApplicationsRequestBuilder() {
+ }
+
+ public static ApplicationsRequestBuilder create() {
+ return new ApplicationsRequestBuilder();
+ }
+
+ public ApplicationsRequestBuilder withStateQuery(String stateQuery) {
+ // stateQuery is deprecated.
+ if (stateQuery != null && !stateQuery.isEmpty()) {
+ statesQuery.add(stateQuery);
+ }
+ return this;
+ }
+
+ public ApplicationsRequestBuilder withStatesQuery(
+ Set<String> statesQuery) {
+ if (statesQuery != null) {
+ this.statesQuery.addAll(statesQuery);
+ }
+ return this;
+ }
+
+ public ApplicationsRequestBuilder withUserQuery(String userQuery) {
+ if (userQuery != null && !userQuery.isEmpty()) {
+ users.add(userQuery);
+ }
+ return this;
+ }
+
+ public ApplicationsRequestBuilder withQueueQuery(ResourceManager rm,
+ String queueQuery) {
+ this.rm = rm;
+ if (queueQuery != null && !queueQuery.isEmpty()) {
+ queues.add(queueQuery);
+ }
+ return this;
+ }
+
+ public ApplicationsRequestBuilder withLimit(String limit) {
+ if (limit != null && !limit.isEmpty()) {
+ this.limit = limit;
+ }
+ return this;
+ }
+
+ public ApplicationsRequestBuilder withStartedTimeBegin(
+ String startedBegin) {
+ if (startedBegin != null && !startedBegin.isEmpty()) {
+ startedTimeBegin = parseLongValue(startedBegin, "startedTimeBegin");
+ }
+ return this;
+ }
+
+ public ApplicationsRequestBuilder withStartedTimeEnd(String startedEnd) {
+ if (startedEnd != null && !startedEnd.isEmpty()) {
+ startedTimeEnd = parseLongValue(startedEnd, "startedTimeEnd");
+ }
+ return this;
+ }
+
+ public ApplicationsRequestBuilder withFinishTimeBegin(String finishBegin) {
+ if (finishBegin != null && !finishBegin.isEmpty()) {
+ finishTimeBegin = parseLongValue(finishBegin, "finishedTimeBegin");
+ }
+ return this;
+ }
+
+ public ApplicationsRequestBuilder withFinishTimeEnd(String finishEnd) {
+ if (finishEnd != null && !finishEnd.isEmpty()) {
+ finishTimeEnd = parseLongValue(finishEnd, "finishedTimeEnd");
+ }
+ return this;
+ }
+
+ public ApplicationsRequestBuilder withApplicationTypes(
+ Set<String> applicationTypes) {
+ if (applicationTypes != null) {
+ appTypes = parseQueries(applicationTypes, false);
+ }
+ return this;
+ }
+
+ public ApplicationsRequestBuilder withApplicationTags(
+ Set<String> applicationTags) {
+ if (applicationTags != null) {
+ appTags = parseQueries(applicationTags, false);
+ }
+ return this;
+ }
+
+ private void validate() {
+ queues.forEach(q -> validateQueueExists(rm, q));
+ validateLimit();
+ validateStartTime();
+ validateFinishTime();
+ }
+
+ private void validateQueueExists(ResourceManager rm, String queueQuery) {
+ ResourceScheduler rs = rm.getResourceScheduler();
+ if (rs instanceof CapacityScheduler) {
+ CapacityScheduler cs = (CapacityScheduler) rs;
+ try {
+ cs.getQueueInfo(queueQuery, false, false);
+ } catch (IOException e) {
+ throw new BadRequestException(e.getMessage());
+ }
+ }
+ }
+
+ private void validateLimit() {
+ if (limit != null) {
+ limitNumber = parseLongValue(limit, "limit");
+ if (limitNumber <= 0) {
+ throw new BadRequestException("limit value must be greater then 0");
+ }
+ }
+ }
+
+ private long parseLongValue(String strValue, String queryName) {
+ try {
+ return Long.parseLong(strValue);
+ } catch (NumberFormatException e) {
+ throw new BadRequestException(queryName + " value must be a number!");
+ }
+ }
+
+ private void validateStartTime() {
+ if (startedTimeBegin < 0) {
+ throw new BadRequestException("startedTimeBegin must not be negative");
+ }
+ if (startedTimeEnd < 0) {
+ throw new BadRequestException("startedTimeEnd must not be negative");
+ }
+ if (startedTimeBegin > startedTimeEnd) {
+ throw new BadRequestException(
+ "startedTimeEnd must be greater than or equal to startedTimeBegin");
+ }
+ }
+
+ private void validateFinishTime() {
+ if (finishTimeBegin < 0) {
+ throw new BadRequestException("finishTimeBegin must not be negative");
+ }
+ if (finishTimeEnd < 0) {
+ throw new BadRequestException("finishTimeEnd must not be negative");
+ }
+ if (finishTimeBegin > finishTimeEnd) {
+ throw new BadRequestException(
+ "finishTimeEnd must be greater than or equal to finishTimeBegin");
+ }
+ }
+
+ public GetApplicationsRequest build() {
+ validate();
+ GetApplicationsRequest request = GetApplicationsRequest.newInstance();
+
+ Set<String> appStates = parseQueries(statesQuery, true);
+ if (!appStates.isEmpty()) {
+ request.setApplicationStates(appStates);
+ }
+ if (!users.isEmpty()) {
+ request.setUsers(users);
+ }
+ if (!queues.isEmpty()) {
+ request.setQueues(queues);
+ }
+ if (limitNumber != null) {
+ request.setLimit(limitNumber);
+ }
+ request.setStartRange(startedTimeBegin, startedTimeEnd);
+ request.setFinishRange(finishTimeBegin, finishTimeEnd);
+
+ if (!appTypes.isEmpty()) {
+ request.setApplicationTypes(appTypes);
+ }
+ if (!appTags.isEmpty()) {
+ request.setApplicationTags(appTags);
+ }
+
+ return request;
+ }
+}
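For orientation, a minimal sketch of driving the new builder directly; all query values below are invented for illustration, and rm stands for whatever live ResourceManager handle the caller holds:

    // Illustrative only: filter by user and queue, cap results, bound start time.
    GetApplicationsRequest req = ApplicationsRequestBuilder.create()
        .withUserQuery("alice")        // submitting user
        .withQueueQuery(rm, "dev")     // queue existence checked against rm's scheduler
        .withLimit("50")               // parsed and range-checked in build()
        .withStartedTimeBegin("1000")  // millis, as received on the query string
        .build();                      // runs validate() before assembling the request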
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5836e0a4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
index 15b58d7..4527a02 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
@@ -482,7 +482,7 @@ public class RMWebServices extends WebServices implements RMWebServiceProtocol {
@QueryParam(RMWSConsts.FINAL_STATUS) String finalStatusQuery,
@QueryParam(RMWSConsts.USER) String userQuery,
@QueryParam(RMWSConsts.QUEUE) String queueQuery,
- @QueryParam(RMWSConsts.LIMIT) String count,
+ @QueryParam(RMWSConsts.LIMIT) String limit,
@QueryParam(RMWSConsts.STARTED_TIME_BEGIN) String startedBegin,
@QueryParam(RMWSConsts.STARTED_TIME_END) String startedEnd,
@QueryParam(RMWSConsts.FINISHED_TIME_BEGIN) String finishBegin,
@@ -493,135 +493,22 @@ public class RMWebServices extends WebServices implements RMWebServiceProtocol {
initForReadableEndpoints();
- boolean checkCount = false;
- boolean checkStart = false;
- boolean checkEnd = false;
- boolean checkAppTypes = false;
- boolean checkAppStates = false;
- boolean checkAppTags = false;
- long countNum = 0;
-
- // set values suitable in case both of begin/end not specified
- long sBegin = 0;
- long sEnd = Long.MAX_VALUE;
- long fBegin = 0;
- long fEnd = Long.MAX_VALUE;
-
- if (count != null && !count.isEmpty()) {
- checkCount = true;
- countNum = Long.parseLong(count);
- if (countNum <= 0) {
- throw new BadRequestException("limit value must be greater then 0");
- }
- }
-
- if (startedBegin != null && !startedBegin.isEmpty()) {
- checkStart = true;
- sBegin = Long.parseLong(startedBegin);
- if (sBegin < 0) {
- throw new BadRequestException(
- "startedTimeBegin must be greater than 0");
- }
- }
- if (startedEnd != null && !startedEnd.isEmpty()) {
- checkStart = true;
- sEnd = Long.parseLong(startedEnd);
- if (sEnd < 0) {
- throw new BadRequestException("startedTimeEnd must be greater than 0");
- }
- }
- if (sBegin > sEnd) {
- throw new BadRequestException(
- "startedTimeEnd must be greater than startTimeBegin");
- }
-
- if (finishBegin != null && !finishBegin.isEmpty()) {
- checkEnd = true;
- fBegin = Long.parseLong(finishBegin);
- if (fBegin < 0) {
- throw new BadRequestException("finishTimeBegin must be greater than 0");
- }
- }
- if (finishEnd != null && !finishEnd.isEmpty()) {
- checkEnd = true;
- fEnd = Long.parseLong(finishEnd);
- if (fEnd < 0) {
- throw new BadRequestException("finishTimeEnd must be greater than 0");
- }
- }
- if (fBegin > fEnd) {
- throw new BadRequestException(
- "finishTimeEnd must be greater than finishTimeBegin");
- }
-
- Set<String> appTypes = parseQueries(applicationTypes, false);
- if (!appTypes.isEmpty()) {
- checkAppTypes = true;
- }
-
- Set<String> appTags = parseQueries(applicationTags, false);
- if (!appTags.isEmpty()) {
- checkAppTags = true;
- }
-
- // stateQuery is deprecated.
- if (stateQuery != null && !stateQuery.isEmpty()) {
- statesQuery.add(stateQuery);
- }
- Set<String> appStates = parseQueries(statesQuery, true);
- if (!appStates.isEmpty()) {
- checkAppStates = true;
- }
-
- GetApplicationsRequest request = GetApplicationsRequest.newInstance();
-
- if (checkStart) {
- request.setStartRange(sBegin, sEnd);
- }
-
- if (checkEnd) {
- request.setFinishRange(fBegin, fEnd);
- }
-
- if (checkCount) {
- request.setLimit(countNum);
- }
-
- if (checkAppTypes) {
- request.setApplicationTypes(appTypes);
- }
-
- if (checkAppTags) {
- request.setApplicationTags(appTags);
- }
-
- if (checkAppStates) {
- request.setApplicationStates(appStates);
- }
-
- if (queueQuery != null && !queueQuery.isEmpty()) {
- ResourceScheduler rs = rm.getResourceScheduler();
- if (rs instanceof CapacityScheduler) {
- CapacityScheduler cs = (CapacityScheduler) rs;
- // validate queue exists
- try {
- cs.getQueueInfo(queueQuery, false, false);
- } catch (IOException e) {
- throw new BadRequestException(e.getMessage());
- }
- }
- Set<String> queues = new HashSet<String>(1);
- queues.add(queueQuery);
- request.setQueues(queues);
- }
-
- if (userQuery != null && !userQuery.isEmpty()) {
- Set<String> users = new HashSet<String>(1);
- users.add(userQuery);
- request.setUsers(users);
- }
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create()
+ .withStateQuery(stateQuery)
+ .withStatesQuery(statesQuery)
+ .withUserQuery(userQuery)
+ .withQueueQuery(rm, queueQuery)
+ .withLimit(limit)
+ .withStartedTimeBegin(startedBegin)
+ .withStartedTimeEnd(startedEnd)
+ .withFinishTimeBegin(finishBegin)
+ .withFinishTimeEnd(finishEnd)
+ .withApplicationTypes(applicationTypes)
+ .withApplicationTags(applicationTags)
+ .build();
- List<ApplicationReport> appReports = null;
+ List<ApplicationReport> appReports;
try {
appReports = rm.getClientRMService().getApplications(request)
.getApplicationList();
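The query string the endpoint accepts is unchanged by this refactor; for reference, a request exercising several of the parameters the builder now parses might look like this (host, port, and parameter values are placeholders):

    curl "http://rm-host:8088/ws/v1/cluster/apps?user=alice&queue=dev&limit=50&startedTimeBegin=1000"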
http://git-wip-us.apache.org/repos/asf/hadoop/blob/5836e0a4/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestApplicationsRequestBuilder.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestApplicationsRequestBuilder.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestApplicationsRequestBuilder.java
new file mode 100644
index 0000000..7c9b711
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestApplicationsRequestBuilder.java
@@ -0,0 +1,529 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.webapp;
+
+import com.google.common.collect.Sets;
+import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest;
+import org.apache.hadoop.yarn.api.records.YarnApplicationState;
+import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
+import org.apache.hadoop.yarn.webapp.BadRequestException;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.Set;
+
+import static org.apache.hadoop.yarn.server.webapp.WebServices.parseQueries;
+import static org.junit.Assert.assertEquals;
+import static org.mockito.Matchers.anyBoolean;
+import static org.mockito.Matchers.eq;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+public class TestApplicationsRequestBuilder {
+
+ private GetApplicationsRequest getDefaultRequest() {
+ GetApplicationsRequest req = GetApplicationsRequest.newInstance();
+ req.setStartRange(0, Long.MAX_VALUE);
+ req.setFinishRange(0, Long.MAX_VALUE);
+ return req;
+ }
+
+ @Test
+ public void testDefaultRequest() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithNullStateQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withStateQuery(null).build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithEmptyStateQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withStateQuery("").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test(expected = BadRequestException.class)
+ public void testRequestWithInvalidStateQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withStateQuery("invalidState").build();
+ }
+
+ @Test
+ public void testRequestWithValidStateQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withStateQuery(YarnApplicationState.NEW_SAVING.toString()).build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ Set<String> appStates =
+ Sets.newHashSet(YarnApplicationState.NEW_SAVING.toString());
+ Set<String> appStatesLowerCase = parseQueries(appStates, true);
+ expectedRequest.setApplicationStates(appStatesLowerCase);
+
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithEmptyStateQueries() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withStatesQuery(Sets.newHashSet()).build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test(expected = BadRequestException.class)
+ public void testRequestWithInvalidStateQueries() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withStatesQuery(Sets.newHashSet("a1", "a2", "")).build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithNullStateQueries() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withStatesQuery(null).build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithValidStateQueries() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withStatesQuery(
+ Sets.newHashSet(YarnApplicationState.NEW_SAVING.toString(),
+ YarnApplicationState.NEW.toString()))
+ .build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ Set<String> appStates =
+ Sets.newHashSet(YarnApplicationState.NEW_SAVING.toString(),
+ YarnApplicationState.NEW.toString());
+ Set<String> appStatesLowerCase = parseQueries(appStates, true);
+ expectedRequest.setApplicationStates(appStatesLowerCase);
+
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithNullUserQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withUserQuery(null).build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithEmptyUserQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withUserQuery("").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithUserQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withUserQuery("user1").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ expectedRequest.setUsers(Sets.newHashSet("user1"));
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithNullQueueQuery() {
+ ResourceManager rm = mock(ResourceManager.class);
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withQueueQuery(rm, null).build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithEmptyQueueQuery() {
+ ResourceManager rm = mock(ResourceManager.class);
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withQueueQuery(rm, "").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithQueueQueryExistingQueue() {
+ ResourceManager rm = mock(ResourceManager.class);
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withQueueQuery(rm, "queue1").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ expectedRequest.setQueues(Sets.newHashSet("queue1"));
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test(expected = BadRequestException.class)
+ public void testRequestWithQueueQueryNotExistingQueue() throws IOException {
+ CapacityScheduler cs = mock(CapacityScheduler.class);
+ when(cs.getQueueInfo(eq("queue1"), anyBoolean(), anyBoolean()))
+ .thenThrow(new IOException());
+ ResourceManager rm = mock(ResourceManager.class);
+ when(rm.getResourceScheduler()).thenReturn(cs);
+
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withQueueQuery(rm, "queue1").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ expectedRequest.setQueues(Sets.newHashSet("queue1"));
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithNullLimitQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withLimit(null).build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithEmptyLimitQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withLimit("").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test(expected = BadRequestException.class)
+ public void testRequestWithInvalidLimitQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withLimit("bla").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test(expected = BadRequestException.class)
+ public void testRequestWithInvalidNegativeLimitQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withLimit("-10").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithValidLimitQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withLimit("999").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ expectedRequest.setLimit(999L);
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithNullStartedTimeBeginQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withStartedTimeBegin(null).build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithEmptyStartedTimeBeginQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withStartedTimeBegin("").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test(expected = BadRequestException.class)
+ public void testRequestWithInvalidStartedTimeBeginQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withStartedTimeBegin("bla").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test(expected = BadRequestException.class)
+ public void testRequestWithInvalidNegativeStartedTimeBeginQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withStartedTimeBegin("-1").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithValidStartedTimeBeginQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withStartedTimeBegin("999").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ expectedRequest.setStartRange(999L, Long.MAX_VALUE);
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithNullStartedTimeEndQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withStartedTimeEnd(null).build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithEmptyStartedTimeEndQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withStartedTimeEnd("").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test(expected = BadRequestException.class)
+ public void testRequestWithInvalidStartedTimeEndQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withStartedTimeEnd("bla").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test(expected = BadRequestException.class)
+ public void testRequestWithInvalidNegativeStartedTimeEndQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withStartedTimeEnd("-1").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithValidStartedTimeEndQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withStartedTimeEnd("999").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ expectedRequest.setStartRange(0L, 999L);
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithNullFinishedTimeBeginQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withFinishTimeBegin(null).build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithEmptyFinishedTimeBeginQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withFinishTimeBegin("").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test(expected = BadRequestException.class)
+ public void testRequestWithInvalidFinishedTimeBeginQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withFinishTimeBegin("bla").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test(expected = BadRequestException.class)
+ public void testRequestWithInvalidNegativeFinishedTimeBeginQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withFinishTimeBegin("-1").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithValidFinishedTimeBeginQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withFinishTimeBegin("999").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ expectedRequest.setFinishRange(999L, Long.MAX_VALUE);
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithNullFinishedTimeEndQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withFinishTimeEnd(null).build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithEmptyFinishTimeEndQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withFinishTimeEnd("").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test(expected = BadRequestException.class)
+ public void testRequestWithInvalidFinishTimeEndQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withFinishTimeEnd("bla").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test(expected = BadRequestException.class)
+ public void testRequestWithInvalidNegativeFinishedTimeEndQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withFinishTimeEnd("-1").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithValidFinishTimeEndQuery() {
+ GetApplicationsRequest request =
+ ApplicationsRequestBuilder.create().withFinishTimeEnd("999").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ expectedRequest.setFinishRange(0L, 999L);
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithValidStartTimeRangeQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withStartedTimeBegin("1000").withStartedTimeEnd("2000").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ expectedRequest.setStartRange(1000L, 2000L);
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test(expected = BadRequestException.class)
+ public void testRequestWithInvalidStartTimeRangeQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withStartedTimeBegin("2000").withStartedTimeEnd("1000").build();
+ }
+
+ @Test
+ public void testRequestWithValidFinishTimeRangeQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withFinishTimeBegin("1000").withFinishTimeEnd("2000").build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ expectedRequest.setFinishRange(1000L, 2000L);
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test(expected = BadRequestException.class)
+ public void testRequestWithInvalidFinishTimeRangeQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withFinishTimeBegin("2000").withFinishTimeEnd("1000").build();
+ }
+
+ @Test
+ public void testRequestWithNullApplicationTypesQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withApplicationTypes(null).build();
+ }
+
+ @Test
+ public void testRequestWithEmptyApplicationTypesQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withApplicationTypes(Sets.newHashSet()).build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ expectedRequest.setApplicationTypes(Sets.newHashSet());
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithValidApplicationTypesQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withApplicationTypes(Sets.newHashSet("type1")).build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ expectedRequest.setApplicationTypes(Sets.newHashSet("type1"));
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithNullApplicationTagsQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withApplicationTags(null).build();
+ }
+
+ @Test
+ public void testRequestWithEmptyApplicationTagsQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withApplicationTags(Sets.newHashSet()).build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ expectedRequest.setApplicationTags(Sets.newHashSet());
+ assertEquals(expectedRequest, request);
+ }
+
+ @Test
+ public void testRequestWithValidApplicationTagsQuery() {
+ GetApplicationsRequest request = ApplicationsRequestBuilder.create()
+ .withApplicationTags(Sets.newHashSet("tag1")).build();
+
+ GetApplicationsRequest expectedRequest = getDefaultRequest();
+ expectedRequest.setApplicationTags(Sets.newHashSet("tag1"));
+ assertEquals(expectedRequest, request);
+ }
+}
\ No newline at end of file
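To run just this suite, a standard Surefire invocation from the source root should work; the module path is copied from the diff header, and the command itself is a sketch, not part of the patch:

    mvn test -Dtest=TestApplicationsRequestBuilder \
        -pl hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager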
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[03/50] hadoop git commit: HADOOP-15594. Exclude commons-lang3 from
hadoop-client-minicluster. Contributed by Takanobu Asanuma.
Posted by zh...@apache.org.
HADOOP-15594. Exclude commons-lang3 from hadoop-client-minicluster. Contributed by Takanobu Asanuma.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d36ed94e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d36ed94e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d36ed94e
Branch: refs/heads/HDFS-13572
Commit: d36ed94ee06945fe9122970b196968fd1c997dcc
Parents: 2ae13d4
Author: Akira Ajisaka <aa...@apache.org>
Authored: Wed Jul 11 10:53:08 2018 -0400
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Wed Jul 11 10:53:08 2018 -0400
----------------------------------------------------------------------
hadoop-client-modules/hadoop-client-minicluster/pom.xml | 8 ++++++++
1 file changed, 8 insertions(+)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d36ed94e/hadoop-client-modules/hadoop-client-minicluster/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-client-modules/hadoop-client-minicluster/pom.xml b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
index 6fa24b4..490281a 100644
--- a/hadoop-client-modules/hadoop-client-minicluster/pom.xml
+++ b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
@@ -167,6 +167,10 @@
<artifactId>commons-io</artifactId>
</exclusion>
<exclusion>
+ <groupId>org.apache.commons</groupId>
+ <artifactId>commons-lang3</artifactId>
+ </exclusion>
+ <exclusion>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
</exclusion>
@@ -492,6 +496,10 @@
<artifactId>commons-codec</artifactId>
</exclusion>
<exclusion>
+ <groupId>org.apache.commons</groupId>
+ <artifactId>commons-lang3</artifactId>
+ </exclusion>
+ <exclusion>
<groupId>commons-logging</groupId>
<artifactId>commons-logging</artifactId>
</exclusion>
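One quick way to confirm the exclusions took effect is to re-resolve the module's dependency tree and check that commons-lang3 no longer appears (run from the source root):

    # Expect no org.apache.commons:commons-lang3 entries after this patch.
    mvn dependency:tree -pl hadoop-client-modules/hadoop-client-minicluster | grep commons-lang3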
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[29/50] hadoop git commit: YARN-8361. Change App Name Placement Rule
to use App Name instead of App Id for configuration. (Zian Chen via wangda)
Posted by zh...@apache.org.
YARN-8361. Change App Name Placement Rule to use App Name instead of App Id for configuration. (Zian Chen via wangda)
Change-Id: I17e5021f8f611a9c5e3bd4b38f25e08585afc6b1
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/a2e49f41
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/a2e49f41
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/a2e49f41
Branch: refs/heads/HDFS-13572
Commit: a2e49f41a8bcc03ce0a85b294d0b86fee7e86f31
Parents: 752dcce
Author: Wangda Tan <wa...@apache.org>
Authored: Mon Jul 16 10:57:37 2018 -0700
Committer: Wangda Tan <wa...@apache.org>
Committed: Mon Jul 16 10:57:37 2018 -0700
----------------------------------------------------------------------
.../placement/AppNameMappingPlacementRule.java | 18 ++++----
.../TestAppNameMappingPlacementRule.java | 43 ++++++++++----------
.../placement/TestPlacementManager.java | 7 ++--
.../src/site/markdown/CapacityScheduler.md | 6 +--
4 files changed, 38 insertions(+), 36 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a2e49f41/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
index c1264e9..2debade 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/AppNameMappingPlacementRule.java
@@ -66,7 +66,7 @@ public class AppNameMappingPlacementRule extends PlacementRule {
CapacitySchedulerConfiguration conf = schedulerContext.getConfiguration();
boolean overrideWithQueueMappings = conf.getOverrideWithQueueMappings();
LOG.info(
- "Initialized queue mappings, override: " + overrideWithQueueMappings);
+ "Initialized App Name queue mappings, override: " + overrideWithQueueMappings);
List<QueueMappingEntity> queueMappings =
conf.getQueueMappingEntity(QUEUE_MAPPING_NAME);
@@ -139,6 +139,8 @@ public class AppNameMappingPlacementRule extends PlacementRule {
if (newMappings.size() > 0) {
this.mappings = newMappings;
this.overrideWithQueueMappings = overrideWithQueueMappings;
+ LOG.info("get valid queue mapping from app name config: " +
+ newMappings.toString() + ", override: " + overrideWithQueueMappings);
return true;
}
return false;
@@ -149,16 +151,16 @@ public class AppNameMappingPlacementRule extends PlacementRule {
}
private ApplicationPlacementContext getAppPlacementContext(String user,
- ApplicationId applicationId) throws IOException {
+ String applicationName) throws IOException {
for (QueueMappingEntity mapping : mappings) {
if (mapping.getSource().equals(CURRENT_APP_MAPPING)) {
if (mapping.getQueue().equals(CURRENT_APP_MAPPING)) {
- return getPlacementContext(mapping, String.valueOf(applicationId));
+ return getPlacementContext(mapping, applicationName);
} else {
return getPlacementContext(mapping);
}
}
- if (mapping.getSource().equals(applicationId.toString())) {
+ if (mapping.getSource().equals(applicationName)) {
return getPlacementContext(mapping);
}
}
@@ -169,25 +171,25 @@ public class AppNameMappingPlacementRule extends PlacementRule {
public ApplicationPlacementContext getPlacementForApp(
ApplicationSubmissionContext asc, String user) throws YarnException {
String queueName = asc.getQueue();
- ApplicationId applicationId = asc.getApplicationId();
+ String applicationName = asc.getApplicationName();
if (mappings != null && mappings.size() > 0) {
try {
ApplicationPlacementContext mappedQueue = getAppPlacementContext(user,
- applicationId);
+ applicationName);
if (mappedQueue != null) {
// We have a mapping, should we use it?
if (queueName.equals(YarnConfiguration.DEFAULT_QUEUE_NAME)
//queueName will be same as mapped queue name in case of recovery
|| queueName.equals(mappedQueue.getQueue())
|| overrideWithQueueMappings) {
- LOG.info("Application " + applicationId
+ LOG.info("Application " + applicationName
+ " mapping [" + queueName + "] to [" + mappedQueue
+ "] override " + overrideWithQueueMappings);
return mappedQueue;
}
}
} catch (IOException ioex) {
- String message = "Failed to submit application " + applicationId +
+ String message = "Failed to submit application " + applicationName +
" reason: " + ioex.getMessage();
throw new YarnException(message);
}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a2e49f41/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestAppNameMappingPlacementRule.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestAppNameMappingPlacementRule.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestAppNameMappingPlacementRule.java
index 0542633..88b7e68 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestAppNameMappingPlacementRule.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestAppNameMappingPlacementRule.java
@@ -24,6 +24,7 @@ import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.activities.ActivitiesLogger;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.SimpleGroupsMapping;
import org.apache.hadoop.yarn.util.Records;
import org.junit.Assert;
@@ -33,13 +34,7 @@ import org.junit.Test;
import java.util.Arrays;
public class TestAppNameMappingPlacementRule {
-
- private static final long CLUSTER_TIMESTAMP = System.currentTimeMillis();
- public static final String APPIDSTRPREFIX = "application";
- private static final String APPLICATION_ID_PREFIX = APPIDSTRPREFIX + '_';
- private static final String APPLICATION_ID_SUFFIX = '_' + "0001";
- private static final String CLUSTER_APP_ID = APPLICATION_ID_PREFIX +
- CLUSTER_TIMESTAMP + APPLICATION_ID_SUFFIX;
+ private static final String APP_NAME = "DistributedShell";
private YarnConfiguration conf = new YarnConfiguration();
@@ -50,24 +45,29 @@ public class TestAppNameMappingPlacementRule {
}
private void verifyQueueMapping(QueueMappingEntity queueMapping,
- String inputAppId, String expectedQueue) throws YarnException {
- verifyQueueMapping(queueMapping, inputAppId,
- YarnConfiguration.DEFAULT_QUEUE_NAME, expectedQueue, false);
+ String user, String expectedQueue) throws YarnException {
+ verifyQueueMapping(queueMapping, user,
+ queueMapping.getQueue(), expectedQueue, false);
}
private void verifyQueueMapping(QueueMappingEntity queueMapping,
- String inputAppId, String inputQueue, String expectedQueue,
+ String user, String inputQueue, String expectedQueue,
boolean overwrite) throws YarnException {
AppNameMappingPlacementRule rule = new AppNameMappingPlacementRule(
overwrite, Arrays.asList(queueMapping));
ApplicationSubmissionContext asc = Records.newRecord(
ApplicationSubmissionContext.class);
+ if (inputQueue.equals("%application")) {
+ inputQueue = APP_NAME;
+ }
asc.setQueue(inputQueue);
- ApplicationId appId = ApplicationId.newInstance(CLUSTER_TIMESTAMP,
- Integer.parseInt(inputAppId));
- asc.setApplicationId(appId);
+ String appName = queueMapping.getSource();
+ if (appName.equals("%application")) {
+ appName = APP_NAME;
+ }
+ asc.setApplicationName(appName);
ApplicationPlacementContext ctx = rule.getPlacementForApp(asc,
- queueMapping.getSource());
+ user);
Assert.assertEquals(expectedQueue,
ctx != null ? ctx.getQueue() : inputQueue);
}
@@ -75,19 +75,20 @@ public class TestAppNameMappingPlacementRule {
@Test
public void testMapping() throws YarnException {
// simple base case for mapping user to queue
- verifyQueueMapping(new QueueMappingEntity(CLUSTER_APP_ID,
- "q1"), "1", "q1");
- verifyQueueMapping(new QueueMappingEntity("%application", "q2"), "1", "q2");
+ verifyQueueMapping(new QueueMappingEntity(APP_NAME,
+ "q1"), "user_1", "q1");
+ verifyQueueMapping(new QueueMappingEntity("%application", "q2"), "user_1",
+ "q2");
verifyQueueMapping(new QueueMappingEntity("%application", "%application"),
- "1", CLUSTER_APP_ID);
+ "user_1", APP_NAME);
// specify overwritten, and see if user specified a queue, and it will be
// overridden
- verifyQueueMapping(new QueueMappingEntity(CLUSTER_APP_ID,
+ verifyQueueMapping(new QueueMappingEntity(APP_NAME,
"q1"), "1", "q2", "q1", true);
// if overwritten not specified, it should be which user specified
- verifyQueueMapping(new QueueMappingEntity(CLUSTER_APP_ID,
+ verifyQueueMapping(new QueueMappingEntity(APP_NAME,
"q1"), "1", "q2", "q2", false);
}
}
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a2e49f41/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestPlacementManager.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestPlacementManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestPlacementManager.java
index 7776ec3..13111be 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestPlacementManager.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/placement/TestPlacementManager.java
@@ -39,6 +39,7 @@ import static org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.T
public class TestPlacementManager {
public static final String USER = "user_";
+ public static final String APP_NAME = "DistributedShell";
public static final String APP_ID1 = "1";
public static final String USER1 = USER + APP_ID1;
public static final String APP_ID2 = "2";
@@ -82,9 +83,7 @@ public class TestPlacementManager {
ApplicationSubmissionContext asc = Records.newRecord(
ApplicationSubmissionContext.class);
- ApplicationId appId = ApplicationId.newInstance(CLUSTER_TIMESTAMP,
- Integer.parseInt(APP_ID1));
- asc.setApplicationId(appId);
+ asc.setApplicationName(APP_NAME);
boolean caughtException = false;
try{
@@ -94,7 +93,7 @@ public class TestPlacementManager {
}
Assert.assertTrue(caughtException);
- QueueMappingEntity queueMappingEntity = new QueueMappingEntity(APP_ID1,
+ QueueMappingEntity queueMappingEntity = new QueueMappingEntity(APP_NAME,
USER1, PARENT_QUEUE);
AppNameMappingPlacementRule anRule = new AppNameMappingPlacementRule(false,
http://git-wip-us.apache.org/repos/asf/hadoop/blob/a2e49f41/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
index 5be32d4..5ac1d0a 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
@@ -162,7 +162,7 @@ Configuration
| Property | Description |
|:---- |:---- |
| `yarn.scheduler.capacity.queue-mappings` | This configuration specifies the mapping of user or group to a specific queue. You can map a single user or a list of users to queues. Syntax: `[u or g]:[name]:[queue_name][,next_mapping]*`. Here, *u or g* indicates whether the mapping is for a user or group. The value is *u* for user and *g* for group. *name* indicates the user name or group name. To specify the user who has submitted the application, %user can be used. *queue_name* indicates the queue name for which the application has to be mapped. To specify queue name same as user name, *%user* can be used. To specify queue name same as the name of the primary group for which the user belongs to, *%primary_group* can be used.|
-| `yarn.scheduler.queue-placement-rules.app-name` | This configuration specifies the mapping of application_id to a specific queue. You can map a single application or a list of applications to queues. Syntax: `[app_id]:[queue_name][,next_mapping]*`. Here, *app_id* indicates the application id you want to do the mapping. To specify the current application's id as the app_id, %application can be used. *queue_name* indicates the queue name for which the application has to be mapped. To specify queue name same as application id, *%application* can be used.|
+| `yarn.scheduler.queue-placement-rules.app-name` | This configuration specifies the mapping of application_name to a specific queue. You can map a single application or a list of applications to queues. Syntax: `[app_name]:[queue_name][,next_mapping]*`. Here, *app_name* indicates the application name you want to do the mapping. *queue_name* indicates the queue name for which the application has to be mapped. To specify the current application's name as the app_name, %application can be used.|
| `yarn.scheduler.capacity.queue-mappings-override.enable` | This function is used to specify whether the user specified queues can be overridden. This is a Boolean value and the default value is *false*. |
Example:
@@ -181,9 +181,9 @@ Example:
<property>
<name>yarn.scheduler.queue-placement-rules.app-name</name>
- <value>appId1:queue1,%application:%application</value>
+ <value>appName1:queue1,%application:%application</value>
<description>
- Here, <appId1> is mapped to <queue1>, maps applications to queues with
+ Here, <appName1> is mapped to <queue1>, maps applications to queues with
the same name as application respectively. The mappings will be
evaluated from left to right, and the first valid mapping will be used.
</description>
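Assuming the stock DistributedShell client (its flag names are not part of this patch and are an assumption here), a submission that would hit the first mapping above could look like:

    # Hypothetical: the app is named appName1, so the rule places it in queue1.
    yarn org.apache.hadoop.yarn.applications.distributedshell.Client \
        -jar hadoop-yarn-applications-distributedshell-*.jar \
        -appname appName1 -shell_command "sleep 60"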
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[39/50] hadoop git commit: HDDS-207. ozone listVolume command accepts
random values as argument. Contributed by Lokesh Jain.
Posted by zh...@apache.org.
HDDS-207. ozone listVolume command accepts random values as argument. Contributed by Lokesh Jain.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/129269f9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/129269f9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/129269f9
Branch: refs/heads/HDFS-13572
Commit: 129269f98926775ccb5046d9dd41b58f1013211d
Parents: d5d4447
Author: Xiaoyu Yao <xy...@apache.org>
Authored: Wed Jul 18 11:05:42 2018 -0700
Committer: Xiaoyu Yao <xy...@apache.org>
Committed: Wed Jul 18 11:05:42 2018 -0700
----------------------------------------------------------------------
.../src/test/acceptance/basic/ozone-shell.robot | 8 +++++---
.../apache/hadoop/ozone/ozShell/TestOzoneShell.java | 12 ++++++++++--
.../org/apache/hadoop/ozone/web/ozShell/Shell.java | 1 +
.../ozone/web/ozShell/volume/ListVolumeHandler.java | 13 ++++++++++++-
4 files changed, 28 insertions(+), 6 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/129269f9/hadoop-ozone/acceptance-test/src/test/acceptance/basic/ozone-shell.robot
----------------------------------------------------------------------
diff --git a/hadoop-ozone/acceptance-test/src/test/acceptance/basic/ozone-shell.robot b/hadoop-ozone/acceptance-test/src/test/acceptance/basic/ozone-shell.robot
index f4be3e0..cc4b035 100644
--- a/hadoop-ozone/acceptance-test/src/test/acceptance/basic/ozone-shell.robot
+++ b/hadoop-ozone/acceptance-test/src/test/acceptance/basic/ozone-shell.robot
@@ -52,7 +52,9 @@ Test ozone shell
${result} = Execute on datanode ozone oz -createVolume ${protocol}${server}/${volume} -user bilbo -quota 100TB -root
Should not contain ${result} Failed
Should contain ${result} Creating Volume: ${volume}
- ${result} = Execute on datanode ozone oz -listVolume o3://ozoneManager -user bilbo | grep -Ev 'Removed|WARN|DEBUG|ERROR|INFO|TRACE' | jq -r '.[] | select(.volumeName=="${volume}")'
+ ${result} = Execute on datanode ozone oz -listVolume ${protocol}${server}/ -user bilbo | grep -Ev 'Removed|WARN|DEBUG|ERROR|INFO|TRACE' | jq -r '.[] | select(.volumeName=="${volume}")'
+ Should contain ${result} createdOn
+ ${result} = Execute on datanode ozone oz -listVolume -user bilbo | grep -Ev 'Removed|DEBUG|ERROR|INFO|TRACE|WARN' | jq -r '.[] | select(.volumeName=="${volume}")'
Should contain ${result} createdOn
Execute on datanode ozone oz -updateVolume ${protocol}${server}/${volume} -user bill -quota 10TB
${result} = Execute on datanode ozone oz -infoVolume ${protocol}${server}/${volume} | grep -Ev 'Removed|WARN|DEBUG|ERROR|INFO|TRACE' | jq -r '. | select(.volumeName=="${volume}") | .owner | .name'
@@ -66,7 +68,7 @@ Test ozone shell
Should Be Equal ${result} GROUP
${result} = Execute on datanode ozone oz -updateBucket ${protocol}${server}/${volume}/bb1 -removeAcl group:samwise:r | grep -Ev 'Removed|WARN|DEBUG|ERROR|INFO|TRACE' | jq -r '. | select(.bucketName=="bb1") | .acls | .[] | select(.name=="frodo") | .type'
Should Be Equal ${result} USER
- ${result} = Execute on datanode ozone oz -listBucket o3://ozoneManager/${volume}/ | grep -Ev 'Removed|WARN|DEBUG|ERROR|INFO|TRACE' | jq -r '.[] | select(.bucketName=="bb1") | .volumeName'
+ ${result} = Execute on datanode ozone oz -listBucket ${protocol}${server}/${volume}/ | grep -Ev 'Removed|WARN|DEBUG|ERROR|INFO|TRACE' | jq -r '.[] | select(.bucketName=="bb1") | .volumeName'
Should Be Equal ${result} ${volume}
Run Keyword and Return If ${withkeytest} Test key handling ${protocol} ${server} ${volume}
Execute on datanode ozone oz -deleteBucket ${protocol}${server}/${volume}/bb1
@@ -80,6 +82,6 @@ Test key handling
Execute on datanode ls -l NOTICE.txt.1
${result} = Execute on datanode ozone oz -infoKey ${protocol}${server}/${volume}/bb1/key1 | grep -Ev 'Removed|WARN|DEBUG|ERROR|INFO|TRACE' | jq -r '. | select(.keyName=="key1")'
Should contain ${result} createdOn
- ${result} = Execute on datanode ozone oz -listKey o3://ozoneManager/${volume}/bb1 | grep -Ev 'Removed|WARN|DEBUG|ERROR|INFO|TRACE' | jq -r '.[] | select(.keyName=="key1") | .keyName'
+ ${result} = Execute on datanode ozone oz -listKey ${protocol}${server}/${volume}/bb1 | grep -Ev 'Removed|WARN|DEBUG|ERROR|INFO|TRACE' | jq -r '.[] | select(.keyName=="key1") | .keyName'
Should Be Equal ${result} key1
Execute on datanode ozone oz -deleteKey ${protocol}${server}/${volume}/bb1/key1 -v
http://git-wip-us.apache.org/repos/asf/hadoop/blob/129269f9/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
index 000d530..8f53049 100644
--- a/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
+++ b/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java
@@ -71,6 +71,7 @@ import org.apache.hadoop.test.GenericTestUtils;
import org.apache.hadoop.util.ToolRunner;
import org.junit.After;
import org.junit.AfterClass;
+import org.junit.Assert;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Rule;
@@ -332,7 +333,7 @@ public class TestOzoneShell {
public void testListVolume() throws Exception {
LOG.info("Running testListVolume");
String protocol = clientProtocol.getName().toLowerCase();
- String commandOutput;
+ String commandOutput, commandError;
List<VolumeInfo> volumes;
final int volCount = 20;
final String user1 = "test-user-a-" + protocol;
@@ -361,8 +362,15 @@ public class TestOzoneShell {
assertNotNull(vol);
}
+ String[] args = new String[] {"-listVolume", url + "/abcde", "-user",
+ user1, "-length", "100"};
+ assertEquals(1, ToolRunner.run(shell, args));
+ commandError = err.toString();
+ Assert.assertTrue(commandError.contains("Invalid URI:"));
+
+ err.reset();
// test -length option
- String[] args = new String[] {"-listVolume", url + "/", "-user",
+ args = new String[] {"-listVolume", url + "/", "-user",
user1, "-length", "100"};
assertEquals(0, ToolRunner.run(shell, args));
commandOutput = out.toString();
http://git-wip-us.apache.org/repos/asf/hadoop/blob/129269f9/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java
index 2aec0fc..726f4ca 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/Shell.java
@@ -207,6 +207,7 @@ public class Shell extends Configured implements Tool {
"For example : ozone oz -listVolume <ozoneURI>" +
"-user <username> -root or ozone oz " +
"-listVolume");
+ listVolume.setOptionalArg(true);
options.addOption(listVolume);
Option updateVolume =
http://git-wip-us.apache.org/repos/asf/hadoop/blob/129269f9/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/ListVolumeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/ListVolumeHandler.java b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/ListVolumeHandler.java
index 3749df4..85b7b2b 100644
--- a/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/ListVolumeHandler.java
+++ b/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/web/ozShell/volume/ListVolumeHandler.java
@@ -18,6 +18,7 @@
package org.apache.hadoop.ozone.web.ozShell.volume;
+import com.google.common.base.Strings;
import org.apache.commons.cli.CommandLine;
import org.apache.hadoop.ozone.client.OzoneClientUtils;
import org.apache.hadoop.ozone.client.OzoneVolume;
@@ -30,6 +31,7 @@ import org.apache.hadoop.ozone.web.utils.JsonUtils;
import org.apache.hadoop.ozone.web.utils.OzoneUtils;
import java.io.IOException;
+import java.net.URI;
import java.net.URISyntaxException;
import java.util.ArrayList;
import java.util.Iterator;
@@ -77,7 +79,16 @@ public class ListVolumeHandler extends Handler {
}
String ozoneURIString = cmd.getOptionValue(Shell.LIST_VOLUME);
- verifyURI(ozoneURIString);
+ if (Strings.isNullOrEmpty(ozoneURIString)) {
+ ozoneURIString = "/";
+ }
+ URI ozoneURI = verifyURI(ozoneURIString);
+ if (!Strings.isNullOrEmpty(ozoneURI.getPath()) && !ozoneURI.getPath()
+ .equals("/")) {
+ throw new OzoneClientException(
+ "Invalid URI: " + ozoneURI + " . Specified path not used." + ozoneURI
+ .getPath());
+ }
if (cmd.hasOption(Shell.USER)) {
userName = cmd.getOptionValue(Shell.USER);
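Putting the new validation together, the accepted and rejected command shapes look roughly like this (volume and user names are placeholders taken from the tests above):

    # Accepted: no URI at all, or a URI whose path is empty or "/"
    ozone oz -listVolume -user bilbo
    ozone oz -listVolume o3://ozoneManager/ -user bilbo

    # Rejected after HDDS-207: a non-root path now fails with "Invalid URI: ..."
    ozone oz -listVolume o3://ozoneManager/abcde -user bilbo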
---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org
[02/50] hadoop git commit: HDFS-13726. RBF: Fix RBF configuration
links. Contributed by Takanobu Asanuma.
Posted by zh...@apache.org.
HDFS-13726. RBF: Fix RBF configuration links. Contributed by Takanobu Asanuma.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2ae13d41
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2ae13d41
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2ae13d41
Branch: refs/heads/HDFS-13572
Commit: 2ae13d41dcd4f49e6b4ebc099e5f8bb8280b9872
Parents: 52e1bc8
Author: Yiqun Lin <yq...@apache.org>
Authored: Wed Jul 11 22:11:59 2018 +0800
Committer: Yiqun Lin <yq...@apache.org>
Committed: Wed Jul 11 22:11:59 2018 +0800
----------------------------------------------------------------------
.../hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2ae13d41/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
index 70c6226..73e0f4a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
+++ b/hadoop-hdfs-project/hadoop-hdfs-rbf/src/site/markdown/HDFSRouterFederation.md
@@ -175,7 +175,7 @@ Deployment
By default, the Router is ready to take requests and monitor the NameNode in the local machine.
It needs to know the State Store endpoint by setting `dfs.federation.router.store.driver.class`.
-The rest of the options are documented in [hdfs-default.xml](../hadoop-hdfs/hdfs-default.xml).
+The rest of the options are documented in [hdfs-rbf-default.xml](../hadoop-hdfs-rbf/hdfs-rbf-default.xml).
Once the Router is configured, it can be started:
@@ -290,7 +290,7 @@ Router configuration
--------------------
One can add the configurations for Router-based federation to **hdfs-site.xml**.
-The main options are documented in [hdfs-default.xml](../hadoop-hdfs/hdfs-default.xml).
+The main options are documented in [hdfs-rbf-default.xml](../hadoop-hdfs-rbf/hdfs-rbf-default.xml).
The configuration values are described in this section.
### RPC server
---------------------------------------------------------------------
[27/50] hadoop git commit: HDFS-13524. Occasional "All datanodes are
bad" error in TestLargeBlock#testLargeBlockSize. Contributed by Siyao Meng.
Posted by zh...@apache.org.
HDFS-13524. Occasional "All datanodes are bad" error in TestLargeBlock#testLargeBlockSize. Contributed by Siyao Meng.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/88b27942
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/88b27942
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/88b27942
Branch: refs/heads/HDFS-13572
Commit: 88b2794244d19b6432253eb649a375e5bcdcf964
Parents: 359ea4e
Author: Wei-Chiu Chuang <we...@apache.org>
Authored: Mon Jul 16 10:51:23 2018 -0700
Committer: Wei-Chiu Chuang <we...@apache.org>
Committed: Mon Jul 16 10:51:23 2018 -0700
----------------------------------------------------------------------
.../src/test/java/org/apache/hadoop/hdfs/TestLargeBlock.java | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/88b27942/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLargeBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLargeBlock.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLargeBlock.java
index a37da35..ec7a077 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLargeBlock.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLargeBlock.java
@@ -50,6 +50,7 @@ public class TestLargeBlock {
// should we verify the data read back from the file? (slow)
static final boolean verifyData = true;
static final byte[] pattern = { 'D', 'E', 'A', 'D', 'B', 'E', 'E', 'F'};
+ static final int numDatanodes = 3;
// creates a file
static FSDataOutputStream createFile(FileSystem fileSys, Path name, int repl,
@@ -158,7 +159,7 @@ public class TestLargeBlock {
* timeout here.
* @throws IOException in case of errors
*/
- @Test (timeout = 900000)
+ @Test (timeout = 1800000)
public void testLargeBlockSize() throws IOException {
final long blockSize = 2L * 1024L * 1024L * 1024L + 512L; // 2GB + 512B
runTest(blockSize);
@@ -175,7 +176,8 @@ public class TestLargeBlock {
final long fileSize = blockSize + 1L;
Configuration conf = new Configuration();
- MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
+ MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+ .numDataNodes(numDatanodes).build();
FileSystem fs = cluster.getFileSystem();
try {
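The root cause of the flaky "All datanodes are bad" error is that the default MiniDFSCluster runs a single datanode, so when the write pipeline marks that node bad there is no replacement and the client aborts. Running with several datanodes gives pipeline recovery somewhere to fail over. A minimal sketch of the adjusted setup (the write-and-verify body is elided; waitActive() and shutdown() are the standard MiniDFSCluster lifecycle calls):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class LargeBlockClusterSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Three datanodes instead of the default one, so a pipeline member
        // that is marked bad can be replaced rather than failing the write.
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
            .numDataNodes(3)
            .build();
        try {
          cluster.waitActive();
          FileSystem fs = cluster.getFileSystem();
          // ... create the file, write blockSize + 1 bytes, verify ...
        } finally {
          cluster.shutdown();
        }
      }
    }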
---------------------------------------------------------------------
[06/50] hadoop git commit: Revert "HDDS-242. Introduce NEW_NODE,
STALE_NODE and DEAD_NODE event" This reverts commit
a47ec5dac4a1cdfec788ce3296b4f610411911ea. There was a spurious file in this
commit. Revert to clean it.
Posted by zh...@apache.org.
Revert "HDDS-242. Introduce NEW_NODE, STALE_NODE and DEAD_NODE event"
This reverts commit a47ec5dac4a1cdfec788ce3296b4f610411911ea.
There was a spurious file in this commit. Revert to clean it.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b5678587
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b5678587
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b5678587
Branch: refs/heads/HDFS-13572
Commit: b56785873a4ec9f6f5617e4252888b23837604e2
Parents: 418cc7f
Author: Anu Engineer <ae...@apache.org>
Authored: Wed Jul 11 12:03:42 2018 -0700
Committer: Anu Engineer <ae...@apache.org>
Committed: Wed Jul 11 12:03:42 2018 -0700
----------------------------------------------------------------------
.../scm/container/ContainerReportHandler.java | 47 ------------------
.../hadoop/hdds/scm/node/DeadNodeHandler.java | 42 ----------------
.../hadoop/hdds/scm/node/NewNodeHandler.java | 50 -------------------
.../hadoop/hdds/scm/node/NodeReportHandler.java | 42 ----------------
.../hadoop/hdds/scm/node/StaleNodeHandler.java | 42 ----------------
.../common/src/main/bin/ozone-config.sh | 51 --------------------
6 files changed, 274 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5678587/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
deleted file mode 100644
index 486162e..0000000
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java
+++ /dev/null
@@ -1,47 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hdds.scm.container;
-
-import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
-import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
- .ContainerReportFromDatanode;
-import org.apache.hadoop.hdds.server.events.EventHandler;
-import org.apache.hadoop.hdds.server.events.EventPublisher;
-
-/**
- * Handles container reports from datanode.
- */
-public class ContainerReportHandler implements
- EventHandler<ContainerReportFromDatanode> {
-
- private final Mapping containerMapping;
- private final Node2ContainerMap node2ContainerMap;
-
- public ContainerReportHandler(Mapping containerMapping,
- Node2ContainerMap node2ContainerMap) {
- this.containerMapping = containerMapping;
- this.node2ContainerMap = node2ContainerMap;
- }
-
- @Override
- public void onMessage(ContainerReportFromDatanode containerReportFromDatanode,
- EventPublisher publisher) {
- // TODO: process container report.
- }
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5678587/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
deleted file mode 100644
index 427aef8..0000000
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
+++ /dev/null
@@ -1,42 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hdds.scm.node;
-
-import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
-import org.apache.hadoop.hdds.server.events.EventHandler;
-import org.apache.hadoop.hdds.server.events.EventPublisher;
-
-/**
- * Handles Dead Node event.
- */
-public class DeadNodeHandler implements EventHandler<DatanodeDetails> {
-
- private final Node2ContainerMap node2ContainerMap;
-
- public DeadNodeHandler(Node2ContainerMap node2ContainerMap) {
- this.node2ContainerMap = node2ContainerMap;
- }
-
- @Override
- public void onMessage(DatanodeDetails datanodeDetails,
- EventPublisher publisher) {
- //TODO: add logic to handle dead node.
- }
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5678587/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
deleted file mode 100644
index 79b75a5..0000000
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NewNodeHandler.java
+++ /dev/null
@@ -1,50 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hdds.scm.node;
-
-import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.scm.exceptions.SCMException;
-import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
-import org.apache.hadoop.hdds.server.events.EventHandler;
-import org.apache.hadoop.hdds.server.events.EventPublisher;
-
-import java.util.Collections;
-
-/**
- * Handles New Node event.
- */
-public class NewNodeHandler implements EventHandler<DatanodeDetails> {
-
- private final Node2ContainerMap node2ContainerMap;
-
- public NewNodeHandler(Node2ContainerMap node2ContainerMap) {
- this.node2ContainerMap = node2ContainerMap;
- }
-
- @Override
- public void onMessage(DatanodeDetails datanodeDetails,
- EventPublisher publisher) {
- try {
- node2ContainerMap.insertNewDatanode(datanodeDetails.getUuid(),
- Collections.emptySet());
- } catch (SCMException e) {
- // TODO: log exception message.
- }
- }
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5678587/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
deleted file mode 100644
index aa78d53..0000000
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/NodeReportHandler.java
+++ /dev/null
@@ -1,42 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hdds.scm.node;
-
-import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher
- .NodeReportFromDatanode;
-import org.apache.hadoop.hdds.server.events.EventHandler;
-import org.apache.hadoop.hdds.server.events.EventPublisher;
-
-/**
- * Handles Node Reports from datanode.
- */
-public class NodeReportHandler implements EventHandler<NodeReportFromDatanode> {
-
- private final NodeManager nodeManager;
-
- public NodeReportHandler(NodeManager nodeManager) {
- this.nodeManager = nodeManager;
- }
-
- @Override
- public void onMessage(NodeReportFromDatanode nodeReportFromDatanode,
- EventPublisher publisher) {
- //TODO: process node report.
- }
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5678587/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java b/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java
deleted file mode 100644
index b37dd93..0000000
--- a/hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/StaleNodeHandler.java
+++ /dev/null
@@ -1,42 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hdds.scm.node;
-
-import org.apache.hadoop.hdds.protocol.DatanodeDetails;
-import org.apache.hadoop.hdds.scm.node.states.Node2ContainerMap;
-import org.apache.hadoop.hdds.server.events.EventHandler;
-import org.apache.hadoop.hdds.server.events.EventPublisher;
-
-/**
- * Handles Stale node event.
- */
-public class StaleNodeHandler implements EventHandler<DatanodeDetails> {
-
- private final Node2ContainerMap node2ContainerMap;
-
- public StaleNodeHandler(Node2ContainerMap node2ContainerMap) {
- this.node2ContainerMap = node2ContainerMap;
- }
-
- @Override
- public void onMessage(DatanodeDetails datanodeDetails,
- EventPublisher publisher) {
- //TODO: logic to handle stale node.
- }
-}
http://git-wip-us.apache.org/repos/asf/hadoop/blob/b5678587/hadoop-ozone/common/src/main/bin/ozone-config.sh
----------------------------------------------------------------------
diff --git a/hadoop-ozone/common/src/main/bin/ozone-config.sh b/hadoop-ozone/common/src/main/bin/ozone-config.sh
deleted file mode 100755
index 83f30ce..0000000
--- a/hadoop-ozone/common/src/main/bin/ozone-config.sh
+++ /dev/null
@@ -1,51 +0,0 @@
-#!/usr/bin/env bash
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# included in all the ozone scripts with source command
-# should not be executed directly
-
-function hadoop_subproject_init
-{
- if [[ -z "${HADOOP_OZONE_ENV_PROCESSED}" ]]; then
- if [[ -e "${HADOOP_CONF_DIR}/hdfs-env.sh" ]]; then
- . "${HADOOP_CONF_DIR}/hdfs-env.sh"
- export HADOOP_OZONES_ENV_PROCESSED=true
- fi
- fi
- HADOOP_OZONE_HOME="${HADOOP_OZONE_HOME:-$HADOOP_HOME}"
-
-}
-
-if [[ -z "${HADOOP_LIBEXEC_DIR}" ]]; then
- _hd_this="${BASH_SOURCE-$0}"
- HADOOP_LIBEXEC_DIR=$(cd -P -- "$(dirname -- "${_hd_this}")" >/dev/null && pwd -P)
-fi
-
-# shellcheck source=./hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
-
-if [[ -n "${HADOOP_COMMON_HOME}" ]] &&
- [[ -e "${HADOOP_COMMON_HOME}/libexec/hadoop-config.sh" ]]; then
- . "${HADOOP_COMMON_HOME}/libexec/hadoop-config.sh"
-elif [[ -e "${HADOOP_LIBEXEC_DIR}/hadoop-config.sh" ]]; then
- . "${HADOOP_LIBEXEC_DIR}/hadoop-config.sh"
-elif [ -e "${HADOOP_HOME}/libexec/hadoop-config.sh" ]; then
- . "${HADOOP_HOME}/libexec/hadoop-config.sh"
-else
- echo "ERROR: Hadoop common not found." 2>&1
- exit 1
-fi
-
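All four reverted handlers share one shape: a small class implementing EventHandler&lt;PAYLOAD&gt; whose onMessage receives the event payload plus an EventPublisher for firing follow-up events, with shared state (Node2ContainerMap, NodeManager) injected through the constructor. A self-contained sketch of that pattern (these interface and class names are illustrative stand-ins, not the HDDS API):

    import java.util.Set;

    // Minimal stand-ins for the HDDS event interfaces.
    interface EventPublisher {
      void fireEvent(String eventType, Object payload);
    }

    interface EventHandler<P> {
      void onMessage(P payload, EventPublisher publisher);
    }

    // A handler in the same style as the reverted DeadNodeHandler: state
    // arrives via the constructor, work happens in onMessage.
    final class DeadNodeLogger implements EventHandler<String> {
      private final Set<String> knownNodes;

      DeadNodeLogger(Set<String> knownNodes) {
        this.knownNodes = knownNodes;
      }

      @Override
      public void onMessage(String datanodeUuid, EventPublisher publisher) {
        if (knownNodes.remove(datanodeUuid)) {
          // A real handler would kick off re-replication here.
          publisher.fireEvent("DEAD_NODE_PROCESSED", datanodeUuid);
        }
      }
    }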
---------------------------------------------------------------------
[14/50] hadoop git commit: HDFS-13663. Should throw exception when
incorrect block size is set. Contributed by Shweta.
Posted by zh...@apache.org.
HDFS-13663. Should throw exception when incorrect block size is set. Contributed by Shweta.
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/87eeb26e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/87eeb26e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/87eeb26e
Branch: refs/heads/HDFS-13572
Commit: 87eeb26e7200fa3be0ca62ebf163985b58ad309e
Parents: 1bc106a
Author: Xiao Chen <xi...@apache.org>
Authored: Thu Jul 12 20:19:14 2018 -0700
Committer: Xiao Chen <xi...@apache.org>
Committed: Thu Jul 12 20:24:11 2018 -0700
----------------------------------------------------------------------
.../apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/87eeb26e/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
index 94835e2..34f6c33 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
@@ -275,7 +275,9 @@ public class BlockRecoveryWorker {
}
// recover() guarantees syncList will have at least one replica with RWR
// or better state.
- assert minLength != Long.MAX_VALUE : "wrong minLength";
+ if (minLength == Long.MAX_VALUE) {
+ throw new IOException("Incorrect block size");
+ }
newBlock.setNumBytes(minLength);
break;
case RUR:
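The substance of this change is that assert statements are skipped at runtime unless the JVM is started with -ea, so in production the sentinel Long.MAX_VALUE would have flowed straight into setNumBytes. An explicit check fails loudly under any JVM settings. A minimal sketch of the pattern (the class and method names here are illustrative):

    import java.io.IOException;

    final class SyncLengthCheck {
      // Guard invariants that depend on runtime data with real exceptions;
      // asserts vanish unless the JVM runs with -ea.
      static long checkedSyncLength(long minLength) throws IOException {
        if (minLength == Long.MAX_VALUE) {
          // No replica in the sync list reported a usable length.
          throw new IOException("Incorrect block size");
        }
        return minLength;
      }
    }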
---------------------------------------------------------------------
[41/50] hadoop git commit: HADOOP-15610. Fixed pylint version for
Hadoop docker image. Contributed by Jack Bearden
Posted by zh...@apache.org.
HADOOP-15610. Fixed pylint version for Hadoop docker image.
Contributed by Jack Bearden
Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ba1ab08f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ba1ab08f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ba1ab08f
Branch: refs/heads/HDFS-13572
Commit: ba1ab08fdae96ad7c9c4f4bf8672abd741b7f758
Parents: c492eac
Author: Eric Yang <ey...@apache.org>
Authored: Wed Jul 18 20:09:43 2018 -0400
Committer: Eric Yang <ey...@apache.org>
Committed: Wed Jul 18 20:09:43 2018 -0400
----------------------------------------------------------------------
dev-support/docker/Dockerfile | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/hadoop/blob/ba1ab08f/dev-support/docker/Dockerfile
----------------------------------------------------------------------
diff --git a/dev-support/docker/Dockerfile b/dev-support/docker/Dockerfile
index 369c606..a8c5c12 100644
--- a/dev-support/docker/Dockerfile
+++ b/dev-support/docker/Dockerfile
@@ -154,9 +154,10 @@ RUN apt-get -q update && apt-get -q install -y shellcheck
RUN apt-get -q update && apt-get -q install -y bats
####
-# Install pylint (always want latest)
+# Install pylint at fixed version (2.0.0 removed python2 support)
+# https://github.com/PyCQA/pylint/issues/2294
####
-RUN pip2 install pylint
+RUN pip2 install pylint==1.9.2
####
# Install dateutil.parser
---------------------------------------------------------------------